I originally wrote about this topic in 2023, but recent developments—particularly the insights shared in AI Is Coming for Culture—have made me want to revisit these ideas. The acceleration of AI’s impact on our cultural and information ecosystems has only reinforced the urgency of building robust trust infrastructure for the digital world.
A Technology Solution for the Digital Trust Crisis
Digital journalism, in particular local news, is quietly undergoing a renaissance. While many observers focus on the struggle between mainstream media and social media platforms, scrappy techno/journo projects are underway. They're experimenting with form and content in ways that point to a genuine evolution in our information ecosystems.
404 Media is a great example of the transformation underway. The company describes itself as a "journalist-founded digital media company exploring the ways technology is shaping, and is shaped by, our world." Somewhat ironically, their greatest existential threat is the way that technology turns out to be shaping our world, at least our digital media, for the worse.
The founders of 404 Media recently explained their predicament in a post, AI Spam Is Eating the Internet, Stealing Our Work, and Destroying Discoverability.
As they say, “We are realizing that in order to combat the fracturing of social media platforms, a Google discoverability crisis fueled by AI generated spam and AI-fueled SEO, and a media business environment that is in utter freefall, we need to be able to reach our readers directly using a platform that we own and control.”
The problem? AI tools capable of mass-producing and syndicating thousands of variations on an original, authentic article. That's to say nothing of content that isn't closely related to any single source, but is instead a mashup of hundreds or thousands of sources.
A thin information gruel, if you will. An infinite information sausage.
Reality simulation isn’t only a problem for publishers and media companies, who are losing readers and revenue. It’s also a huge problem for companies that rely on advertising revenue, even behemoths like Google. As AI-generated synthetic content proliferates, it pollutes search results and degrades trust. Advertisers spend less as audiences turn to alternative sources for their information, like Perplexity or other AI-generated summaries.
A synthetic information Ouroboros.
The problem, of course, is not limited to scrappy startups like 404 Media and advertisers. It’s an existential problem for the Internet as we know it. Sam Harris, the philosopher, neuroscientist, podcast host and author, recently proclaimed in a podcast interview that “the internet will die because of AI.”
But is there a positive side? According to Andrew Golis, "There will be silver linings to The Great Robot Spam Flood of 2024." Will the robots make us more human? "This flood of authorless 'content' will help truly authored creativity shine in contrast."
Perhaps. But how can humans judge what is authentic? In the absence of an unhackable and ungameable system of identity and authenticity, how can we distinguish between the authentic and the synthetic? In other words, how can the internet know what you trust?
What’s to be done? Can trust technology preserve the epistemic commons and free speech?
In October 2023, the Biden administration published an Executive Order that aims to "Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content."
The (re)-fragmentation of trust: Charlie Beckett, in his predictions for journalism in 2024, states that trust will be recognized as a useless metric. "Of course I don't trust the news media collectively, or automatically. I trust some brands some of the time."
Academics are confirming this trend. The Swiss-based International Panel on the Information Environment (IPIE) is an independent, global scientific organization whose goal is to study threats to the world's information environment.
In their 2023 survey of academics in the field, Trends in the Global Information Environment, the IPIE found that participants in the study “perceive serious issues with artificial intelligence and online content moderation, which they attribute to a lack of accountability when content moderation is badly done (66%) and poorly designed AI-powered content moderation systems (55%).”
Digital Trust is Critical to Digital Discernment
All this brings us to the current state of the internet’s trust infrastructure. The fact is that few people know that the internet even has a trust infrastructure. This only serves to illustrate the failure of the tech industry as a whole.
The bedrock of Internet trust, Web PKI, is inadequate, outdated, and poorly governed. It has done little to prevent the crisis of trust that afflicts the Internet as a whole, but especially social media platforms. In our emerging AI-generated digital reality, the crisis of authenticity threatens to become even more acute.
Even industry stalwarts whose businesses are built on selling PKI to large enterprises are acknowledging the gap in their existing products, as identity and provenance become the foundation of content authenticity.
This gap, now exacerbated, has long been recognized. As far back as 2011, Moxie Marlinspike pointed out the need for an alternative to Web PKI in his seminal talk, SSL and the Future of Authenticity.
The problem is that you can’t have trust without identity and authenticity. As an industry we’ve invested huge amounts of time and effort into authentication and access control, and so-called identity and access management, for humans. What we have completely ignored is authentication for the “content” produced by those humans.
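To make the gap concrete, here is a minimal, hypothetical sketch of what content-level authentication could look like: a publisher signs the canonical bytes of an article so any reader can verify it hasn't been altered or mass-rewritten. Python's standard library has no public-key signatures, so this sketch substitutes HMAC over a shared secret; a real provenance system would use asymmetric signatures (e.g. Ed25519) bound to a verifiable public identity. All names and key material here are illustrative, not any actual standard.

```python
import hmac
import hashlib
import json

def sign_article(secret: bytes, article: dict) -> str:
    # Canonicalize the article so the signature is byte-stable
    # regardless of key ordering or whitespace.
    payload = json.dumps(article, sort_keys=True, separators=(",", ":")).encode()
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify_article(secret: bytes, article: dict, signature: str) -> bool:
    # Recompute and compare in constant time.
    expected = sign_article(secret, article)
    return hmac.compare_digest(expected, signature)

# Hypothetical publisher key and article.
secret = b"publisher-signing-key"
article = {"author": "404 Media", "title": "An original article", "body": "..."}

sig = sign_article(secret, article)
print(verify_article(secret, article, sig))    # authentic copy -> True

tampered = dict(article, body="AI-rewritten mashup")
print(verify_article(secret, tampered, sig))   # synthetic variant -> False
```

The point of the sketch is the asymmetry it creates: a reader needs only the publisher's verification key to distinguish the authentic article from any of the thousands of AI-generated variations, none of which can produce a valid signature.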
Noosphere's goal is to protect against the flood of AI-generated content and safeguard our human-to-human future. We're building API-first services that make trust relationships transparent and manageable for non-technical people. To accomplish this, we're building trust infrastructure, trust protocols, and trust services for the Internet, enabling trust agility for the future of digital authenticity.
References
- FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence | The White House
- Trends in the Global Information Environment: 2023 Expert Survey Results | IPIE
- SSL And the Future Of Authenticity | Moxie Marlinspike
- Why a ‘perfect storm’ of misinformation may loom in 2024 | Washington Post
- Disinformation Experts Are Changing Tactics For 2024 After GOP Attacks | HuffPost
- YouTube launches new watch page that only shows videos from “authoritative” news sources | Nieman Lab
- Robert F. Kennedy Jr: Government Secrets, Censorship, & How To End Chronic Disease | YouTube
- Junk websites filled with AI-generated text are pulling in money from programmatic ads | MIT Technology Review
- Biden Camp Forms ‘MISINFORMATION’ Group, CLEAR Ploy To Enact MORE Gov’t Censorship: Shellenberger
- Europe’s Largest News Aggregator Orders Editors to Play Down Palestinian Deaths
- Widespread Fake News About Israel-Palestine May Be Driven By Musk’s Monetization Scheme
- Opinion | Hyperpartisan ‘local news’ sites are dangerous to democracy
- Plagiarism-Bot? How Low-Quality Websites Are Using AI to Deceptively Rewrite Content from Mainstream News Outlets
- Labeling Misinformation Isn't Enough. Here's What Platforms Need to Do Next. | The Partnership on AI