
Shelly Palmer - The LA Times’ AI experiment sparks backlash

Think about this: Will readers trust AI-generated analysis?

The Los Angeles Times has introduced an AI-driven labeling system to flag articles that take a stance or are written from a personal perspective. Announced in a letter from billionaire owner Patrick Soon-Shiong, the “Voices” label applies not only to opinion pieces but also to news commentary, criticism and reviews. Some articles will also include AI-generated “Insights,” which summarize key points and present alternative viewpoints.

“We don’t think this approach — AI-generated analysis unvetted by editorial staff — will do much to enhance trust in the media,” said Matt Hamilton, vice chair of the LA Times Guild, in a statement to The Hollywood Reporter.

Early results have raised concerns. The Guardian highlighted an LA Times opinion piece about AI-generated historical documentaries, where the AI tool claimed the article had a “Center Left” bias and suggested that AI “democratizes historical storytelling.” Another flagged article covered California cities that elected Ku Klux Klan members in the 1920s. The AI-generated “counterpoint” stated that some historical accounts framed the Klan as a cultural response to societal change rather than a hate-driven movement — an accurate historical note, but awkwardly positioned as an opposing view.

Handing editorial tasks to AI has led to similar missteps elsewhere. Microsoft’s AI-powered news aggregator once recommended an Ottawa food bank as a tourist lunch spot. Gizmodo’s AI-produced article listing Star Wars films in “chronological order” failed to follow chronology. Apple recently adjusted its Apple Intelligence summaries after a garbled AI-generated notification falsely suggested a CEO shooting suspect had shot himself.

Major outlets like Bloomberg, USA Today, The Wall Street Journal, The New York Times, and The Washington Post use AI in their operations, but few trust it with editorial judgments. The LA Times’ rollout should have remained an experiment. Regardless of AI, no business leader should allow unvetted content to be published under their brand — especially a news organization.

As always, your thoughts are welcome. Just reply to this email. -s

P.S. AI transformation is not about technology; it's about leadership. We'll explore this and other AI challenges at the MMA CMO AI Transformation Summit (March 18, 2025 | NYC). I'm facilitating and co-producing this half-day, invitation-only event, which will cover the strategies, technologies, and leadership practices CMOs are using to drive successful AI transformations at the world’s best marketing organizations. Request your invitation.

Shelly Palmer is the Professor of Advanced Media in Residence at Syracuse University’s S.I. Newhouse School of Public Communications and CEO of The Palmer Group, a consulting practice that helps Fortune 500 companies with technology, media and marketing. Named LinkedIn’s “Top Voice in Technology,” he covers tech and business for Good Day New York, is a regular commentator on CNN and writes a popular daily business blog. He's a bestselling author and the creator of the popular, free online course, Generative AI for Execs. Follow @shellypalmer or visit shellypalmer.com.
