Meta Expands Labeling Of Misleading Images Created By AI Tools

Meta announced this week that it will expand its labeling of AI-generated imagery across its family of apps, including Facebook, Instagram and its new microblogging platform Threads, in a bid to combat the spread of misinformation ahead of major upcoming elections.

The expanded labeling will focus on synthetic images created by competitors’ generative AI tools.

“As the difference between human and synthetic content gets blurred, people want to know where the boundary lies,” said Nick Clegg, Meta’s president of global affairs, in a recent statement.

To help social-media users detect which images have been created with AI, Meta currently applies “Imagined with AI” labels to “photorealistic images” created using the company’s own Meta AI creation feature.

Now, however, the company has committed to labeling potentially misleading images created with other companies’ tools as well, including those from Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock.

“We’re building this capability now, and in the coming months we’ll start applying labels in all languages supported by each app,” Clegg added. “We’re taking this approach through the next year, during which a number of important elections are taking place around the world.”

According to Clegg, Meta expects to learn more about the creation and sharing tendencies around AI content, including “what sort of transparency people find most valuable, and how these technologies evolve.”

Meta’s own AI feature applies visible markers to images, along with invisible watermarks and metadata embedded within the image files. According to Clegg, this combination improves the efficacy of the invisible watermarks and helps other platforms identify AI-generated images when they are shared.
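For a sense of what “metadata embedded within the image files” can look like in practice, here is a minimal, hypothetical sketch in Python using the Pillow library. It writes and reads a provenance tag stored in a PNG’s text chunks; the field name `digital_source_type` and its value are illustrative assumptions, not Meta’s actual scheme, which also relies on invisible watermarking that this sketch does not attempt to reproduce.

```python
# Minimal sketch: embedding and reading a provenance tag in PNG metadata.
# The field name and value below are illustrative assumptions, not Meta's
# actual labeling scheme (which also uses invisible watermarks).
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def tag_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Copy an image, adding a text chunk that marks it as AI-generated."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("digital_source_type", "trainedAlgorithmicMedia")  # hypothetical field
    img.save(dst_path, pnginfo=meta)


def read_provenance(path: str) -> str | None:
    """Return the provenance tag if present, else None."""
    img = Image.open(path)
    # PNG text chunks are exposed on the .text attribute for PNG images.
    return getattr(img, "text", {}).get("digital_source_type")


if __name__ == "__main__":
    tag_as_ai_generated("generated.png", "generated_tagged.png")
    print(read_provenance("generated_tagged.png"))  # -> "trainedAlgorithmicMedia"
```

A plain metadata tag like this is easy to strip or lose when an image is re-encoded, which is part of why the announcement pairs metadata with invisible watermarks and, as noted below, classifiers that try to detect unmarked AI-generated content.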

These watermarks, according to Clegg, are in line with guidance from the Partnership on AI (PAI), a coalition of AI experts and organizations.

In addition, Meta is adding a feature that lets users disclose when they share AI-generated video or audio, so the company can label it.

“We'll require people to use this disclosure and label tool when they post organic content with a photorealistic video or realistic-sounding audio that was digitally created or altered, and we may apply penalties if they fail to do so,” Clegg said. “If we determine that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label if appropriate, so people have more information and context.”

Finally, the company is working to develop classifiers that will automatically detect unmarked AI-generated content, while making it more difficult for users to remove watermarks.

Meta will be rolling out expanded labeling “in the coming months” and applying labels in “all languages supported by each app,” but did not provide more specific details about its timeline.
