Social media giant Meta, the parent company of Facebook and Instagram, announced on Tuesday that it plans to expand its labeling of images generated with artificial intelligence (AI) ahead of the November election. While Meta acknowledges that AI-generated audio and video remain harder to detect, it aims to give users transparency about when generative AI technology has been used.

According to a blog post by Nick Clegg, Meta’s President for Global Affairs, the company will introduce a label stating “Imagined with AI” whenever feasible. Clegg emphasizes the importance of informing users when they come across photorealistic content produced with AI.

The implementation of these labels will commence in the coming months across Facebook, Instagram, and Threads. Meta will ensure that the labels appear in all supported languages for each app. The timing of this initiative aligns with the U.S. presidential race in November, as well as elections taking place in more than 50 other countries, including India and Mexico.

Clegg states, “We’re taking this approach through the next year, during which a number of important elections are taking place around the world. During this time, we expect to learn much more about how people are creating and sharing AI content, what sort of transparency people find most valuable, and how these technologies evolve.”

Experts warn that disinformation, including AI-generated audio, video, and images, poses an unprecedented threat in 2024. In one recent incident, a robocall impersonating President Joe Biden discouraged New Hampshire residents from voting.

Meta remains optimistic about detecting AI-generated images, even those produced using software developed by other companies. Clegg mentions that industrywide technical standards will enable Meta to identify images created using AI software from prominent companies such as Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock. However, the same level of detection for AI-generated video and audio is not yet attainable.
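The cross-company detection Clegg describes relies on provenance signals that image generators embed in the files they produce. Below is a minimal, illustrative sketch of what checking for one such signal could look like, assuming the generator has written the IPTC "trainedAlgorithmicMedia" digital-source-type value into the image's embedded metadata; real detection systems are far more robust and also rely on invisible watermarks, which a simple byte scan like this cannot see.

```python
# Illustrative sketch only: scan an image file's raw bytes for the IPTC
# "trainedAlgorithmicMedia" digital-source-type marker, one provenance
# signal that AI image generators can embed in XMP/IPTC metadata.
# This does not detect invisible watermarks or stripped metadata.

AI_MARKER = "trainedAlgorithmicMedia"  # IPTC value denoting AI-generated media


def has_ai_provenance_tag(path: str) -> bool:
    """Return True if the file's bytes contain the IPTC AI-media marker."""
    with open(path, "rb") as f:
        data = f.read()
    return AI_MARKER.encode("utf-8") in data


if __name__ == "__main__":
    import sys

    for image_path in sys.argv[1:]:
        tagged = has_ai_provenance_tag(image_path)
        print(f"{image_path}: {'AI-provenance tag found' if tagged else 'no tag found'}")
```

A check like this only works when the generating tool writes the metadata and the file keeps it, which is why Clegg notes the approach does not yet extend to audio and video.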

Clegg explains, “While companies are starting to include signals in their image generators, they haven’t started including them in AI tools that generate audio and video at the same scale, so we can’t yet detect those signals and label this content from other companies.” Nevertheless, Meta plans to address this issue by asking users to disclose when sharing AI-generated videos or audio, allowing Meta to add appropriate labels. Noncompliance may result in penalties.

As an industry leader, Meta sets an example for other platforms with its commitment to labeling AI-generated images. By providing this transparency, Meta aims to help users distinguish between authentic and AI-created content. Stay informed and mindful as the election season approaches.

Source: F5 Magazine

