A new investigation reveals that the most popular news app in the U.S. published over three dozen inaccurate, AI-lifted, or AI-bylined stories in the past three years — with real-world effects.
NewsBreak, the most popular news app in the U.S., advertises itself as a local news source. It tops the Google Play store, with over 50 million downloads, and dominates the Apple App Store news charts, outperformed only by X and Reddit.
The app only operates in the U.S. and works as an aggregator, pooling news from different outlets, like Fox, Reuters, and CNN, onto one platform.
A Wednesday Reuters report found that NewsBreak used AI at least 40 times since 2021 to publish inaccurate stories, post stories from other sources under fake bylines, and take content from competitors.
For example, two AI-based stories on NewsBreak incorrectly stated that Pennsylvania-based charity Harvest912 was hosting a 24-hour health clinic for the homeless.
“You are doing HARM by publishing this misinformation – homeless people will walk to these venues to attend a clinic that is not happening,” Harvest912 wrote in a January email to NewsBreak.
Another email to NewsBreak, from Colorado-based food bank Food to Power, detailed how the app misstated food distribution times on three separate occasions — in January, February, and March.
The food bank had to explain the issue to people who showed up in response to the NewsBreak articles, and send them home without the food they expected.
NewsBreak told Reuters that it took down the five articles with inaccurate information.
NewsBreak also appears to have published AI-generated repostings of stories from other sites under at least five fake bylines.
Former NewsBreak consultant and former Wall Street Journal executive editor Norm Pearlstine flagged the issue in a May 2022 company memo to NewsBreak CEO Jeff Zheng, writing, “I cannot think of a faster way to destroy the NewsBreak brand.”
Zheng responded to the memo, acknowledging the problem and asking the team to fix it.
NewsBreak isn’t the only news outlet facing scrutiny over AI content. Bloomberg reported earlier this month that local San Francisco newspaper Hoodline was relying on AI to churn out stories — and, at one point, attributing those stories to unique AI personas complete with their own bios.
AI has also been known to generate inaccurate content elsewhere. News outlet CNET used AI to write more than 70 articles last year and had to issue corrections to many of them because of factual errors.
Meanwhile, last week, Google announced “more than a dozen technical improvements” after users found that AI overviews in its search engine gave some inaccurate answers.