Why 78% of News Audiences Still Trust Humans Over AI

A profound transformation is underway in how people access, interpret, and trust the news, as artificial intelligence reshapes the everyday information habits of millions. Despite the rapid uptake of AI tools for news, public confidence in them continues to decline. A landmark survey conducted by the Reuters Institute for the Study of Journalism across six countries revealed that weekly use of generative AI systems has nearly doubled over the past year.

While individuals increasingly rely on these tools for tasks like researching topics and consuming news, trust has not kept pace. Popular AI tools such as ChatGPT, Gemini, and Copilot lag significantly behind traditional news organizations in public confidence. Users who encounter AI-generated summaries through search engines often do not click through to read the original reporting, marking a fundamental shift in how journalism reaches its audience.

This shift is already visible. More than half of survey respondents reported seeing AI-generated answers in search results within the past week, a higher share than reported using any standalone AI system. In other words, many individuals are exposed to AI-generated interpretations of journalism without even realizing it.

đź“° Table of Contents
  1. The Risks of AI-Generated Information
  2. Addressing the Trust Deficit

The Risks of AI-Generated Information

Concerns about the accuracy of AI-generated content are significant. The largest international study to date on AI assistants and news, coordinated by the European Broadcasting Union (EBU) and led by the BBC, found that 45 percent of all AI-generated answers contained at least one significant error. Across 3,000 outputs tested in 18 countries, journalists identified systemic problems, such as sourcing failures, hallucinated details, and outdated or misleading information. Notably, Gemini performed the worst, with significant issues in more than three-quarters of its responses.

These errors are concerning not only because audiences often assume AI summaries are accurate but also because they tend to blame both the AI tool and the cited news outlet for inaccuracies, even when the mistakes have nothing to do with the publishers. This presents an existential risk for public service media already grappling with disinformation and skepticism. As the EBU warns, when people cannot discern what information is reliable, “they end up trusting nothing at all.”

Meanwhile, the volume of AI-generated written content online has overtaken human-written material. An analysis by Graphite using a dataset of 65,000 articles reports that AI-generated content surpassed human writing on the open web in late 2024. However, most of this content never reaches readers; Graphite’s parallel study indicates that AI-generated articles rarely appear in Google Search or ChatGPT results, creating a hidden layer of mass-produced, low-quality material that remains largely invisible.

Even if this “AI sludge” stays out of sight, the AI outputs audiences do see, such as AI-curated and AI-framed summaries, are reshaping public understanding in consequential ways.

Adding to these concerns, a recent survey by the Pew Research Center reveals that half of Americans anticipate a negative impact of AI on news within the next 20 years. Nearly six in ten believe it will lead to fewer journalism jobs. Even among those who are optimistic about AI’s broader societal benefits, skepticism about its effects on news remains high. Two-thirds of respondents expressed strong concern about AI spreading inaccurate information, a worry shared across political lines and a rare point of bipartisan alignment.

However, educational levels reveal a divide: individuals with more formal education tend to be more pessimistic about AI’s impact on journalism and more doubtful of AI’s ability to produce accurate news.

Addressing the Trust Deficit

A clear mismatch emerges from these studies. AI tools are increasingly acting as de facto news editors, summarizing articles, selecting sources, and influencing what millions of users see, yet they operate outside the transparency and accountability obligations that traditional news publishers must adhere to. This regulatory vacuum persists despite frameworks like the Digital Services Act, the AI Act, and the European Media Freedom Act, each of which regulates part of the digital ecosystem but none of which adequately addresses the new reality that AI tools now select, reshape, and interpret news on behalf of millions of citizens.

Despite these uncertainties, one encouraging conclusion for newsrooms is that audiences still trust human journalism more than AI. They express a clear preference for news produced and edited by people, believing that human-led reporting is more credible, transparent, and accountable. This belief represents a competitive advantage for newsrooms, but only if they can communicate it effectively and use AI responsibly behind the scenes without compromising their editorial integrity.

As AI becomes a central gateway to information, the challenge lies in ensuring that innovation does not come at the expense of trust and that journalism remains a reliable anchor in an increasingly automated news environment.
