Google and YouTube to Require Disclosure for AI-Modified Political Ads
As election season approaches, Google and YouTube are taking steps to address concerns about AI-altered political advertisements. Under a recent update to Google's political content policy, any ad featuring "synthetic" or artificially manipulated elements, such as people, voices, or events, must now prominently disclose its use of AI alterations within the advertisement itself.
While Google already prohibits deepfake content in advertising, the enhanced disclosure rules expand coverage to any AI modification beyond minor edits. The policy makes exceptions for synthetic content altered or generated in ways deemed "inconsequential to the ad's claims," and AI can still be used for routine video and photo editing tasks such as resizing, cropping, color correction, defect correction, and background edits.
How technology platforms handle political ads has become a significant issue ahead of the 2024 election. Google's policy update follows Elon Musk's recent announcement that X (formerly Twitter) will once again allow political ads from candidates and political parties, reversing a four-year-old ban on all political ads. Meanwhile, reports have emerged of unlabeled political advertisements appearing in users' feeds.
A September report by Media Matters for America found that Meta was not adequately enforcing its own political ads policy, citing unlabeled right-wing advertisements on Facebook and Instagram. Google's updated policy is scheduled to take effect in November and will apply to election-related ads across Google's platforms, including YouTube and third-party sites within the company's ad network.