Google is preparing a significant change to its advertising policies. Starting in November, political ads on Google platforms will be required to explicitly disclose when images and audio have been created using artificial intelligence (AI). The move is a direct response to the growing use of tools that produce synthetic content.
The new rules arrive about a year ahead of the next US presidential election, amid concern that AI could significantly amplify disinformation.
Google’s existing ad policies already prohibit manipulating digital media to deceive or mislead people, particularly on politics, social issues, or matters of public concern.
However, this latest update will specifically require election-related ads to clearly and prominently indicate if they contain “synthetic content” depicting real or realistic-looking individuals or events.
Google has recommended labels such as “this image does not depict real events” or “this video content was synthetically generated” to serve as noticeable flags.
In addition, Google’s ad policy strictly forbids demonstrably false claims that could undermine trust in the electoral process. Political ads are also required to disclose who paid for them, and information about those messages is made available in an online ads library.
Any disclosures related to digitally altered content in election ads must be “clear and conspicuous” and placed where users are likely to notice them.
Examples of content that would necessitate a label include synthetic imagery or audio showing a person saying or doing something they did not actually do, or depicting an event that never took place.
Instances of AI-generated content causing concern have emerged recently. These include a fabricated image appearing to show former US President Donald Trump being arrested, and a deepfake video in which Ukrainian President Volodymyr Zelensky appeared to discuss surrendering to Russia.
In June, a campaign video for Ron DeSantis featured images that showed signs of having been created using AI.
While fake imagery is not a new phenomenon, the rapid progress in the field of generative AI and its potential for misuse have raised valid concerns.
Google has emphasized its ongoing investment in technology to identify and remove such content, underlining its commitment to combat misinformation and maintain the integrity of its platforms.