![AI programs struggle to give users an executive summary of the news, according to testing done by the BBC.](https://straightarrownews-preprod.go-vip.net/wp-content/uploads/2025/01/Omaha-34-AI-Usage-Featured-Image-Getty.jpg?w=1000)
- The BBC asked four top AI platforms to summarize news stories. The British public broadcaster found that 51% of their responses had “significant issues,” including factual errors and altered quotes.
- Researchers are concerned about the potential harm caused by inaccuracies and distortions in AI-generated news summaries. They fear it could undermine trust in news.
- Nineteen percent of AI-generated summaries introduced factual errors, and 13% altered quotes from BBC stories, highlighting specific areas of concern.
Full Story
Despite billions of dollars in research and technology invested in artificial intelligence, the programs still struggle to give users an executive summary of the news. That’s according to testing done by the BBC.
The British public broadcaster ran 100 of its stories through four of the most advanced AI platforms: OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini and Perplexity AI. It asked each platform to summarize the stories it had entered.
The result left researchers unimpressed at best and, at worst, worried about the potential for harm.
It found that 51% of all AI answers to questions about the news had “significant issues.” Nineteen percent of the AI answers that cited BBC content introduced factual errors, including incorrect statements, numbers and dates. Thirteen percent of the quotes the platforms attributed to BBC stories had been altered from the original or did not appear in the cited article at all.
In one test, Google’s Gemini produced a summary that included: “The [National Health Service] advises people not to start vaping, and recommends that smokers who want to quit should use other methods.”
In fact, the UK’s NHS says vaping is one of the most effective tools to quit smoking.
“This matters because it is essential that audiences can trust the news to be accurate, whether on TV, radio, digital platforms, or via an AI assistant,” the researchers wrote. “It matters because society functions on a shared understanding of facts, and inaccuracy and distortion can lead to real harm.”
The report notes how much damage an incorrect or misleading headline could cause should it go viral via social media.
In response to the study, a spokesperson for OpenAI told the BBC, “We support publishers and creators by helping 300 million weekly ChatGPT users discover quality content through summaries, quotes, clear links, and attribution.”
The BBC didn’t test DeepSeek, which news rating agency NewsGuard found provided accurate responses to news and information prompts only 17% of the time. In 30% of cases, it repeated false claims, and 52% of the time it gave “vague or unhelpful responses” to news-related questions.