News

Meta expands AI content labeling requirements on Instagram, Facebook, and Threads

Meta is expanding its AI content labeling requirements across Instagram, Facebook, and Threads. Following feedback from its Oversight Board, the company will begin labeling AI-generated content in May 2024.

The new policy is initially limited to images but will eventually expand to cover video and audio. Meta will use two methods to identify AI-generated content:

Users can self-disclose that they used AI tools to create the content.
Meta will use a “technical standards industry consortium” to detect AI-generated images.
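
Meta has not published the technical details of this detection. Industry metadata standards, for example IPTC’s Digital Source Type vocabulary and the C2PA provenance specification, give image generators a standard way to mark synthetic content, though whether the consortium Meta refers to relies on exactly these signals is an assumption here. The sketch below is illustrative only, not Meta’s pipeline: it checks a file’s raw bytes for the IPTC “trainedAlgorithmicMedia” marker, and the function name and byte-search shortcut are simplifications made for the example.

```python
# Illustrative only: look for the IPTC "Digital Source Type" value that
# many generative-AI tools embed in an image's XMP metadata on export.
# This is NOT Meta's detection pipeline, just an example of the kind of
# metadata signal such industry standards define.
from pathlib import Path

# IPTC NewsCodes URI meaning "created by a trained algorithmic model".
TRAINED_ALGORITHMIC_MEDIA = (
    b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)


def looks_ai_generated(image_path: str) -> bool:
    """Return True if the file's raw bytes contain the IPTC marker.

    XMP metadata is stored as plain XML inside JPEG/PNG files, so a byte
    search is enough for a rough check; a production system would parse
    the XMP packet and verify C2PA provenance signatures instead.
    """
    data = Path(image_path).read_bytes()
    return TRAINED_ALGORITHMIC_MEDIA in data


if __name__ == "__main__":
    import sys

    for path in sys.argv[1:]:
        verdict = "AI marker found" if looks_ai_generated(path) else "no marker"
        print(f"{path}: {verdict}")
```
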

Meta’s previous policy prohibited only videos manipulated to make it appear that someone said something they did not actually say. The new policy covers a wider range of AI-generated content.

Meta says the proliferation of AI tools in recent years has made it easier to create realistic-looking images and videos, and that identifying this kind of content matters because it can be used to spread misinformation.

Starting in July 2024, Meta will no longer remove AI-generated content that does not violate its Community Standards. It will, however, continue to remove content that bullies or harasses, regardless of whether it was created with AI.

Additional Information:

Meta’s Oversight Board is an independent body that reviews Meta’s content moderation decisions.

The “technical standards industry consortium” that Meta will use to detect AI-generated images is still under development.

Meta’s new policy is part of a broader effort by the company to combat misinformation.
