Get your news from AI? Watch out - it's wrong almost half the time
New research from the European Broadcasting Union and the BBC has found that four leading chatbots routinely generate flawed summaries of news stories.

ZDNET's key takeaways
- New research shows that AI chatbots often distort news stories.
- 45% of the AI responses analyzed were found to be problematic.
- The authors warn of serious political and social consequences.
A new study conducted by the European Broadcasting Union (EBU) and the BBC has found that leading AI chatbots routinely distort and misrepresent news stories. The consequence, the organizations warn, could be a large-scale erosion of public trust in news organizations and a threat to the stability of democracy itself.
Spanning 18 countries and 14 languages, the study involved professional journalists evaluating thousands of responses from ChatGPT, Copilot, Gemini, and Perplexity about recent news stories based on criteria like accuracy, sourcing ...