
Photo by Roman Kraft on Unsplash

BBC Study Reveals AI Assistants Struggle With News Content Accuracy

  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

The BBC has published a new study revealing that publicly available AI assistants struggle to provide accurate responses to questions from experts about news published on its website.

In a Rush? Here are the Quick Facts!

  • BBC journalists and experts revealed that 51% of the answers provided by popular AI chatbots contained significant issues.
  • The study shows that 13% of the quotes provided by the AI models were altered or made up.
  • 19% of the information provided by the AI tools—from BBC’s News content—was inaccurate.

According to the study shared by the BBC, the researchers considered popular generative AI chatbots such as ChatGPT, Perplexity, Copilot, and Gemini. The study was performed over a month—during which the chatbots were given access to the BBC’s website—and it showed that 51% of the answers given by the AI tools had significant issues.

Journalists and experts assessed the accuracy and impartiality of the responses and analyzed how the AI technologies represented the content provided. Other findings in the study showed that 13% of the quotes provided by the AI models were either altered from the original source or made up, and that 19% of the responses citing BBC’s content contained factual errors or inaccuracies—related to numbers, dates, or statements.

“We’re excited about the future of AI and the value it can bring audiences,” said Pete Archer, Programme Director for Generative AI at the BBC, in a public announcement. “But AI is also bringing significant challenges for audiences. People may think they can trust what they’re reading from these AI assistants, but this research shows they can produce responses to questions about key news events that are distorted, factually incorrect, or misleading.”

The experts also noted that the information provided by ChatGPT and Copilot was not up to date. As an example, they said that both chatbots stated that Rishi Sunak and Nicola Sturgeon were still serving as Prime Minister and First Minister, respectively, after both had left office.

The BBC has also complained about other technologies providing inaccurate information, such as Apple Intelligence’s summarized news alerts delivering false statements. The tech giant recently announced an update to the AI news summary feature after the broadcaster filed a complaint.


Image generated with DALL·E through ChatGPT

Opinion: How Can Workers Keep Up With AI Updates Without Burning Out?

  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

In 2025, we have seen the arrival of DeepSeek, OpenAI’s new o3-mini model and its integration into Perplexity, and Microsoft adopting R1, while scientists have developed new reasoning AI models for less than $50. It’s hard to keep up with everything, but workers must learn to ask the right questions and find a balance between AI adoption and AI literacy.

Is there a new ultra-powerful AI model being released every week? I’ve been under the impression that a brand-new reasoning model, an updated version, or a new chatbot comes out every week, before we’ve fully learned to use the previous one.

“I’ve heard about that DeepSeek chatbot, but I just started to use ChatGPT,” said my friend Sebastian, from Canada, in a recent voice message after I told him about the latest updates in the AI world with DeepSeek’s revolution.

He’s not alone. Learning about AI, writing prompts, and making the most of these advanced models is not a top priority for many workers at the moment, especially those juggling life, parenting, and multiple tasks, or dealing with the underlying fear that AI will replace them.

These past few weeks we’ve seen impressive developments, and the popular and powerful ChatGPT even got dethroned—at least temporarily—after that tsunami in the AI industry produced by a small-yet-striking whale called DeepSeek.

i cant believe ChatGPT lost its job to AI — terminally onλine εngineer 🇺🇦 (@tekbog) January 27, 2025

If this superintelligence, with the potential to replace humans in multiple tasks, was itself replaced by another technology that is cheaper and more efficient, what does that mean for the average worker? AI adoption and AI literacy are both crucial for the changing landscape of work, but it seems like an uphill battle, with so many updates, changes, and new AI models launched every week.

AI Adoption vs. AI Literacy

Once you use a chatbot a couple of times, there’s no going back. At first, users might feel that it is not “necessary,” but after they learn how to use it, it becomes addictive. That’s pretty much how AI gets embedded in our day-to-day lives. Some people even think of it as “magical” and—spoiler alert—this is actually a sign of low AI literacy.

AI adoption is increasing significantly. According to a study performed by Microsoft and LinkedIn last May, the 2024 Work Trend Index Annual Report, 75% of workers across 31 countries were using AI at work, and 46% had started doing so only a few months before. These figures will likely be even higher this year.

McKinsey’s recent survey, Superagency in the workplace: Empowering people to unlock AI’s full potential, gives us clues about the current landscape. The document states that 94% of employees—and 99% of C-suite leaders—are familiar with AI tools, but that the biggest barrier is actually leaders not moving fast enough.

However, AI literacy—an area that requires deeper understanding and critical thinking to evaluate and reflect on AI technologies—isn’t advancing as quickly.

“People with less knowledge about AI are actually more open to using the technology,” states an article published in The Conversation. “We call this difference in adoption propensity the ‘lower literacy-higher receptivity’ link.”

The problem with lagging AI literacy is that people who only adopt AI tools without understanding them tend to be less critical, believe in their “supremacy,” and are more likely to fall for misinformation. This is especially concerning now that big tech companies like Meta are eliminating fact-checking programs , while powerful AI models like DeepSeek are censoring information and making up content.

Self-learning: An Essential Skill For Workers in the AI Era

In a recent interview with journalist Cleo Abram for her show, Huge Conversations, Nvidia’s CEO Jensen Huang explained his vision for the future with AI and shared a few recommendations on what workers should do.

“The effort of drudgery will go to zero,” Huang told Abram, expanding on his optimistic outlook. “You probably use ChatGPT and AI. I feel more empowered today, and more confident to learn something today. The knowledge of almost any particular field and the barriers to that understanding have been reduced. Now I have a personal tutor with me all of the time.”

But, most importantly, he shared one question he believed every worker should be asking themselves right now:

“How can I use AI to do my job better?” And the fascinating thing about this new technology, as explained by Huang, is that you can use it to help you answer this question.

A Small Effort in Adaptation Could Be the Key to Survival

So far in 2025, we have witnessed significant advancements in the artificial intelligence industry. The arrival of the cheap yet powerful DeepSeek prompted OpenAI to release its latest model, o3-mini, even for free. Meanwhile, scientists have discovered new strategies to develop advanced reasoning models for less than $50, and now European AI companies like Mistral are positioning themselves in the race to offer the best AI for users today.

AI adoption has been massive. While some are experimenting with Claude for creative text generation, others are trying to make the most of DeepSeek’s R1—especially now that Microsoft and Perplexity have integrated it into their platforms. Meanwhile, some users are just getting the hang of Copilot coding—and let’s not even dive into the latest AI video-generation tools.

New AI technologies seem endless, but starting with just one and making good use of it is already a big step. AI adoption is easier and more common than it seems, but those who achieve the best long-term results will be the ones who combine their learning with AI literacy—asking the right questions about how to optimize their work with AI and how their field might evolve, taking a critical and forward-thinking approach.