Media Organizations Call for Enhanced Regulations in AI Training Data
Numerous media outlets have issued a collective call for stringent copyright protections governing the use of data in training generative AI models. In an open letter addressed to lawmakers worldwide, the organizations advocate regulatory measures that enforce transparency about training datasets and require the consent of rights holders before their data is incorporated into AI training.
Additionally, they have urged provisions that would allow media entities to negotiate with AI model operators, facilitate the identification of AI-generated content, and require AI companies to eliminate bias and misinformation from their services.
Signatories of the letter include Agence France-Presse, the European Pressphoto Agency, the European Publishers’ Council, Gannett, Getty Images, the National Press Photographers Association, the National Writers Union, News Media Alliance, The Associated Press, and The Authors Guild.
The signatories underscore that foundation models trained on media content often disseminate information "without any consideration of, remuneration to, or attribution to the original creators." They argue that such practices undermine the media industry's core business models, which rely on revenue from subscriptions, licensing, and advertising.
The open letter contends that these practices not only potentially violate copyright law, but also threaten media diversity and erode media companies' financial capacity to invest in quality journalism, ultimately diminishing public access to reliable and trustworthy information.
This collective stance from media organizations comes in the wake of reports that Google showcased its generative AI news writing tool, Genesis, to leading publications such as The New York Times, The Washington Post, and News Corp, owner of The Wall Street Journal. Subsequently, various news outlets have identified multiple inaccuracies in articles generated by AI systems.
Concerns about AI models training on copyrighted material extend beyond media organizations, however. The legal status of the practice remains untested: the Senate has held multiple hearings on the matter, and a lawsuit alleging copyright infringement by the generative AI art platforms Midjourney and Stability AI, maker of Stable Diffusion, is progressing through the courts. Comedian Sarah Silverman and two authors have also sued OpenAI for alleged copyright infringement.
The signatories say they believe generative AI can benefit both their organizations and the public, and they are seeking a seat at the table in discussions about protecting media companies' rights and the responsible use of AI technology.
Reports suggest that some of the signatory organizations have already struck deals allowing AI companies to use their content for training. The Associated Press, for instance, has agreed to license a portion of its archive to OpenAI and to explore the use of generative AI in news writing.