Google To Identify AI-Generated Images In Search And Ads

Google plans to update its search engine and advertising products later this year to identify images that were generated or edited with AI tools.

The company announced the change as California state lawmakers approved SB 1047, a bill that would hold artificial intelligence companies legally liable if they don’t take required safety measures and their technology later causes major harm.

AI-generated creative is pushing the boundaries of reality, with many types of synthetic images being used to catch and keep the attention of viewers and consumers.

The goal is to flag AI-related images in the “About this image” window across Search, Google Lens and the Circle to Search feature on Android, as well as in advertising.

In the future, Google expects similar disclosures to make their way to other Google properties such as YouTube, for content captured with a camera. The company did not disclose a timeline.

Earlier this year, the company joined the Coalition for Content Provenance and Authenticity (C2PA) as a steering committee member. Other members include Adobe, BBC, Intel, Microsoft, Publicis Groupe, Sony and Truepic. LinkedIn began using C2PA to label AI-generated content around May.

Google said it will further the adoption of Content Credentials, the C2PA’s technical standard for tamper-resistant metadata that can be attached to digital content, showing how and when it was created or modified.

In Search, if an image contains C2PA metadata, people will be able to use the “About this image” feature to see whether it was created or edited with AI tools. “About this image” helps provide context about the images people see online and is accessible in Google Images, Lens and Circle to Search.
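To make the idea concrete, here is a minimal Python sketch of how a platform might turn an image's Content Credentials into an "About this image"-style label. It is an illustration only: the manifest layout, field names and the `describe_provenance` helper are simplified assumptions, not Google's implementation or the exact C2PA schema, and the IPTC "trainedAlgorithmicMedia" source type is used here as the marker for generative-AI output.

```python
# Illustrative sketch only: the manifest layout and field names below are
# simplified assumptions, not the exact C2PA schema or any Google API.

AI_GENERATED = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)  # IPTC digital source type commonly used to mark generative-AI output


def describe_provenance(manifest):
    """Return a label suitable for an "About this image"-style panel.

    `manifest` is a parsed Content Credentials manifest reduced to a plain
    dict; None means the image carried no C2PA metadata at all, which only
    signals "no information", not "not AI".
    """
    if manifest is None:
        return "No Content Credentials found"
    for action in manifest.get("actions", []):
        if action.get("digitalSourceType") == AI_GENERATED:
            return "Made with an AI tool"
        if action.get("action") == "c2pa.edited":
            return "Edited (see Content Credentials for details)"
    return "No AI use recorded in Content Credentials"


if __name__ == "__main__":
    sample = {
        "claim_generator": "ExampleImageGenerator/1.0",  # hypothetical tool
        "actions": [
            {"action": "c2pa.created", "digitalSourceType": AI_GENERATED}
        ],
    }
    print(describe_provenance(sample))  # -> "Made with an AI tool"
    print(describe_provenance(None))    # -> "No Content Credentials found"
```

Note the asymmetry the sketch makes explicit: an image without C2PA metadata yields no signal either way, which is why coverage of the standard matters so much.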

Google's ad systems have begun to integrate C2PA metadata, with the goal of ramping up the use of C2PA signals in advertising to inform how the company enforces key policies.

Only images containing C2PA metadata will be flagged in Search as AI-generated or AI-edited, and the coalition’s standards have not yet seen widespread adoption.

The process will validate content against the C2PA Trust List, which is forthcoming. The list allows platforms to confirm the content’s origin: for example, if the metadata shows an image was taken by a specific camera model, the Trust List helps validate that the claim is accurate.
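Conceptually, the Trust List check is a lookup of the certificate that signed a manifest against a list of known, accountable signers. The sketch below shows that idea in Python; because the actual Trust List has not been published, the signer names, fingerprints and the `verify_signer` helper are entirely made up for illustration.

```python
# Illustrative sketch only: the C2PA Trust List is not yet published, so the
# signer names and certificate fingerprints below are made up.

TRUST_LIST = {
    # hypothetical SHA-256 fingerprints of certificates belonging to
    # camera makers and editing tools that sign Content Credentials
    "3f2a...cafe": "ExampleCameraCo",
    "9b10...beef": "ExamplePhotoEditor",
}


def verify_signer(manifest_signature):
    """Check a manifest's signing certificate against the trust list.

    If the certificate is on the list, a claim such as "taken with camera
    model X" can be attributed to a known signer; otherwise the platform
    can only treat the claim as unverified.
    """
    fingerprint = manifest_signature.get("cert_fingerprint")
    signer = TRUST_LIST.get(fingerprint)
    if signer is None:
        return "Unverified signer: provenance claims cannot be confirmed"
    return f"Signed by {signer}: provenance claims can be attributed to this signer"


if __name__ == "__main__":
    print(verify_signer({"cert_fingerprint": "3f2a...cafe"}))
    print(verify_signer({"cert_fingerprint": "0000...0000"}))
```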

Laurie Richardson, Google's vice president of trust and safety, explained that establishing and signaling content provenance remains a complex challenge, with a range of considerations based on the product or service. "We’re also encouraging more services and hardware providers to consider adopting the C2PA’s Content Credentials," she wrote.

Google's work with the C2PA continues to expand transparency around AI-created images. The company is also bringing SynthID, an embedded watermarking technology created by Google DeepMind, to additional generative AI tools and more forms of media, including text, audio, images and video.
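SynthID's actual technique is not public, so the sketch below only illustrates the general idea of an imperceptible embedded watermark, using a naive least-significant-bit (LSB) scheme on a grayscale image array with NumPy. Production watermarks are far more robust to cropping, compression and editing; treat this purely as a conceptual toy, with a made-up payload and hypothetical `embed`/`detect` helpers.

```python
# Toy LSB watermark: NOT SynthID, just an illustration of embedding an
# imperceptible signal in pixel data and checking for it later.
import numpy as np

WATERMARK_BITS = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # toy payload


def embed(image, bits=WATERMARK_BITS):
    """Hide `bits` in the least significant bit of the first pixels."""
    out = image.copy()
    flat = out.reshape(-1)
    flat[: bits.size] = (flat[: bits.size] & ~np.uint8(1)) | bits
    return out


def detect(image, bits=WATERMARK_BITS):
    """Return True if the expected payload is present in the LSBs."""
    flat = image.reshape(-1)
    return bool(np.array_equal(flat[: bits.size] & 1, bits))


if __name__ == "__main__":
    img = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
    marked = embed(img)
    print(detect(marked))  # True
    print(detect(img))     # almost certainly False for an unmarked image
```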
