Gannett Puts AI Experiment on Hold After Botched High School Sports Articles
Gannett, a major newspaper chain, has paused its use of an artificial intelligence tool that generated high school sports reports. The decision came after the technology produced a string of glaring errors, most visibly in one of its papers.
Recent high school sports dispatches written by an AI service called LedeAI and published by the Columbus Dispatch quickly drew attention on social media – for all the wrong reasons.
One particularly memorable example, captured by the Internet Archive's Wayback Machine, began: "The Worthington Christian [[WINNING_TEAM_MASCOT]] defeated the Westerville North [[LOSING_TEAM_MASCOT]] 2-1 in an Ohio boys soccer game on Saturday." The page has since been revised.
The reports drew ridicule on social media for their repetitiveness, missing details, odd phrasing, and an overall machine-generated tone devoid of genuine sports knowledge.
They repeatedly described "high school football action," noted that one team "snatched victory from" another, and characterized wins as "cruise-control." Several stories mentioned a game's date multiple times within the span of a few paragraphs. Gannett has paused its trial of LedeAI in every local market where the service was used, a suspension first reported by Axios.
A Gannett spokesperson stated, "In addition to adding hundreds of reporting jobs across the country, we are experimenting with automation and AI to build tools for our journalists and add content for our readers. We are continually evaluating vendors as we refine processes to ensure all the news and information we provide meets the highest journalistic standards."
As of Wednesday, numerous AI-generated sports stories at the Dispatch had been updated and appended with a note: "This AI-generated story has been updated to correct errors in coding, programming, or style."
The incident follows Gannett's December decision to lay off 6% of its news division, cutting hundreds of jobs, and comes as news outlets across the industry grapple with the rapid evolution of AI technology.
CNET paused its own AI-generated story experiment earlier this year after issuing multiple corrections for inaccuracies. Some outlets have also moved to block OpenAI's ChatGPT software from accessing their sites to keep their content from being used to train AI models.