More than two years after the launch of ChatGPT, newsrooms are still grappling with the dual nature of artificial intelligence: a powerful tool for innovation and a potential source of high-profile errors. Recent incidents, such as a summer reading list published by two newspapers in which most of the recommended books turned out to be AI fabrications, highlight the ongoing challenges and the undiminished need for human judgment in journalism.
The list ran in a syndicated insert from King Features (a Hearst Newspapers subsidiary) and was reportedly created by a human writer who used ChatGPT but failed to fact-check its output. The episode exemplifies how easily AI-generated inaccuracies, or "AI slop," can reach readers when not properly vetted.
"This time, I did not [check the material] and I can't believe I missed it because it's so obvious. No excuses." - Writer of the AI-assisted reading list, via 404 Media.
The AI Adoption Dilemma
News organizations have approached AI tools like ChatGPT with a mix of enthusiasm for their potential and trepidation about their pitfalls. AI can help comb through large datasets, generate story ideas, and make complex topics easier for readers to understand. But the risk that chatbots will produce incorrect or speculative answers remains a significant concern, compounded by fears of job losses and damage to revenue streams.
Although many major newsrooms have established AI guidelines, large staffs and multiple external partnerships mean that AI-related errors can still slip through the cracks. Past incidents at publications such as Sports Illustrated and CNET, both of which published AI-assisted articles containing inaccuracies, serve as cautionary tales.
Key Challenges for AI in Newsrooms:
- Ensuring accuracy and avoiding AI "hallucinations."
- Maintaining editorial standards with AI-assisted content.
- Managing external partnerships and syndicated content that may use AI.
- Addressing fears of job displacement among journalists.
- Communicating the limitations of AI to users and staff.
The Path Forward: Human Oversight is Key
Experts like Felix Simon from the University of Oxford's Reuters Institute note that truly egregious cases of AI errors in news have been relatively few. Technological advancements have also helped reduce AI hallucinations. However, as Chris Callison-Burch, a professor at the University of Pennsylvania, points out, these systems are not infallible, and AI companies must better communicate the potential for errors.
The Chicago Sun-Times, one of the papers that published the erroneous reading list, stated that all its editorial content is produced by humans and that it will ensure editorial partners uphold the same standards. This highlights a growing consensus: human oversight is indispensable.
The "real takeaway," as emphasized by one commentator, isn't just that humans are needed to clean up after AI, but "to do the things AI fundamentally can't... make moral calls, challenge power, understand nuance and decide what actually matters."
As newsrooms continue to integrate AI, the focus remains on leveraging its strengths while safeguarding journalistic integrity through rigorous fact-checking, clear ethical guidelines, and an unwavering commitment to the human element in storytelling and news dissemination.