AI can’t accurately summarize news: BBC


Summary

  • The BBC asked four top AI platforms to summarize its news stories. The British public broadcaster found that 51% of their responses had “significant issues,” including factual errors and altered quotes.
  • Researchers are concerned about the potential harm caused by inaccuracies and distortions in AI-generated news summaries. They fear it could undermine trust in news.
  • Nineteen percent of AI answers that cited BBC content introduced factual errors, and 13% of quotes sourced from BBC stories were altered or missing from the original articles.

Full Story

Despite billions of dollars invested in artificial intelligence research and technology, the programs still struggle to give users an accurate executive summary of the news. That’s according to testing done by the BBC.

The British public broadcaster ran 100 of its stories through four of the most advanced AI platforms: OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini and Perplexity AI. It asked each platform to summarize the stories it had entered.
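
The BBC has not published the exact prompts or tooling behind the test, but an experiment of this shape is straightforward to script. Below is a minimal sketch of one way to do it, using OpenAI’s Python SDK; the model name, prompt wording and review step are illustrative assumptions, not the BBC’s methodology.

```python
# Illustrative sketch only: the BBC has not published its prompts or tooling.
# The model name, prompt wording and review step here are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Summarize the following news article accurately. "
    "Quote the article only verbatim and do not add outside information."
)

def summarize(article_text: str) -> str:
    """Ask one model for a short summary of a single article."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the BBC tested four consumer assistants
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": article_text},
        ],
    )
    return response.choices[0].message.content

# In a study like the BBC's, each of the 100 articles would be run through
# every platform with the same prompt, and human reviewers would then score
# each summary for factual errors, altered quotes and missing context.
```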

The results left researchers unimpressed at best and, at worst, worried about the potential for harm.

The study found that 51% of all AI answers to questions about the news had “significant issues.” Nineteen percent of the answers that cited BBC content introduced factual errors, including incorrect statements, numbers and dates. And 13% of the quotes the platforms sourced from BBC stories had been altered from the original or were not present in the article at all.
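
The BBC’s reviewers scored these issues by hand. As an illustration of what an automated first pass at the altered-quote check could look like, here is a minimal sketch that flags quoted spans in a summary that never appear verbatim in the source article; the helper function and example text are invented for demonstration.

```python
import re

def flag_altered_quotes(summary: str, source_article: str) -> list[str]:
    """Return quoted spans from the summary that are absent from the source.

    A rough heuristic only: it catches paraphrased or invented quotes,
    but not factual errors stated outside quotation marks.
    """
    quotes = re.findall(r'"([^"]+)"', summary)                  # straight quotes
    quotes += re.findall(r'\u201c([^\u201d]+)\u201d', summary)  # curly quotes
    return [q for q in quotes if q not in source_article]

# Invented example: the summary silently changes "fully funded" to "fully costed".
article = 'The minister said the plan was "fully funded" and on schedule.'
summary = 'Officials insisted the plan is "fully costed" and on schedule.'
print(flag_altered_quotes(summary, article))  # ['fully costed']
```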

In one test, Google’s Gemini produced a summary that included: “The [National Health Service] advises people not to start vaping, and recommends that smokers who want to quit should use other methods.”

In fact, the UK’s NHS says vaping is one of the most effective tools for quitting smoking.

“This matters because it is essential that audiences can trust the news to be accurate, whether on TV, radio, digital platforms, or via an AI assistant,” the researchers wrote. “It matters because society functions on a shared understanding of facts, and inaccuracy and distortion can lead to real harm.”

The report notes how much damage an incorrect or misleading headline could cause should it go viral on social media.

In response to the study, a spokesperson for OpenAI told the BBC, “We support publishers and creators by helping 300 million weekly ChatGPT users discover quality content through summaries, quotes, clear links, and attribution.”

The BBC didn’t test DeepSeek. However, news rating agency NewsGuard found that DeepSeek provided accurate responses to news and information prompts only 17% of the time. In 30% of cases, it repeated false claims, and 52% of the time it gave “vague or unhelpful responses” to news-related questions.

Bias comparison

  • Media outlets on the left emphasize the severity of misinformation by highlighting BBC News CEO Deborah Turness' warning about tech companies "playing with fire."
  • Media outlets in the center present a statistic that nine out of 10 AI responses contained issues, offering a broad view of inaccuracies with less critical framing.
  • Media outlets on the right call for collaboration among tech companies, showcasing a slightly more solution-oriented tone, which is not emphasized in the center.

Media landscape

19 total sources

Key points from the Left

  • A BBC study found that four major AI chatbots, including OpenAI's ChatGPT and Google's Gemini, provided inaccurate news summaries, with over half of their responses containing significant errors.
  • Deborah Turness, CEO of BBC News, expressed concerns about AI misinformation and stated that tech companies are "playing with fire" regarding this issue.
  • The study revealed that 51% of AI-generated answers had flaws, and 19% of responses citing BBC content introduced errors, including incorrect facts and dates.
  • Apple paused its AI news alerts after they generated several inaccurate headlines, a move Turness praised as a "bold and responsible decision."

Key points from the Center

  • Four major artificial intelligence chatbots inaccurately summarize news stories, according to research carried out by the BBC.
  • The BBC report stated that the chatbots' summaries contained "significant inaccuracies" and distortions.
  • Nine out of 10 AI chatbot responses about news queries contained at least some issues, BBC research has claimed.
  • The research indicated that the "companies developing Gen AI tools are playing with fire."

Key points from the Right

  • A BBC study found that over half of the news summaries produced by four AI chatbots, including ChatGPT and Google's Gemini, contained significant inaccuracies and distortions.
  • The study reported that 51% of AI-generated answers contained flaws, and 19% of responses citing BBC content introduced factual errors.
  • Deborah Turness, CEO of BBC News, emphasized the importance of accurate news and called for tech companies to address these issues together.
  • Apple paused its AI news alerts after being notified of significant inaccuracies, highlighting the need for responsible AI use in news reporting.

Powered by Ground News™
