Meta has updated its approach to handling manipulated media, requiring labels for artificial intelligence (AI)-generated content and only removing AI content that violates its policies, the company said on Friday.
The tech giant said it will start labeling video, audio, and image content as “Made with AI” when it detects industry-standard AI image indicators or when users disclose that they are uploading AI-generated content.
Monika Bickert, Meta’s vice president of content policy, said the timeline aims to give people time “to understand the self-disclosure process” before Meta stops removing “the smaller subset of manipulated media.”
Meta said that AI-generated content posted on Facebook, Instagram, and Threads will be kept when no other policy violation is present to avoid “the risk of unnecessarily restricting freedom of speech.”
The platforms will add informational labels and context to any content generated with AI. The company already applies an “Imagined with AI” label to photorealistic images created using its own AI tools.
“If we determine that digitally-created or altered images, video or audio create a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label so people have more information and context.
“This overall approach gives people more information about the content so they can better assess it and so they will have context if they see the same content elsewhere,” Ms. Bickert stated.
She added that Meta will remove content that violates its policies against voter interference, bullying and harassment, violence and incitement, or any other policy in its Community Standards.
The company has a network of nearly 100 independent fact-checkers tasked with reviewing false and misleading AI-generated content.
“When fact-checkers rate content as False or Altered, we show it lower in Feed so fewer people see it, and add an overlay label with additional information,” Ms. Bickert said.
Meta’s Existing Approach Is ‘Too Narrow’
This update follows Meta’s announcement in February that it was working with industry partners to identify AI content, as well as the Oversight Board’s recommendation that it revise its Manipulated Media policy. The Oversight Board, a quasi-independent body that reviews Meta’s content moderation decisions, had previously urged the company to promptly reconsider the policy as the U.S. elections approach.
“Meta should extend the policy to cover audio as well as to content that shows people doing things they did not do,” the board stated.
The board also advised Meta to stop removing manipulated media when no other policy violation is present and instead apply a label indicating the content is significantly altered.
Meta said it agreed with the board’s argument that its policy was too narrow.
Ms. Bickert stated in a blog post that Meta’s manipulated media policy was established in 2020, “when realistic AI-generated content was rare and the overarching concern was about videos.”
“In the last four years, and particularly in the last year, people have developed other kinds of realistic AI-generated content like audio and photos, and this technology is quickly evolving.
“As the Board noted, it’s equally important to address manipulation that shows a person doing something they didn’t do,” she added.