Meta Warns Users: Disclose AI Videos Or Be Punished, In Effort To Tame Misinformation In Election Year

Zinger Key Points
  • Meta has rolled out "Imagined with AI" labels to distinguish AI-created images from human-generated content.
  • Feature launch follows concerns over AI's role in misinformation and its impact on future elections.

With 2024 being one of the biggest election years in history, Meta Platforms Inc. META has announced plans to introduce labels for AI-generated images across its platforms, including Facebook, Instagram and Threads.

What Happened: Meta will apply “Imagined with AI” labels to photorealistic images created using its AI features. The goal is to help users distinguish between human and AI-generated content.

The company also intends to extend this labeling to content created with other companies’ tools.

Working with industry partners, Meta is also developing the ability to label AI-generated images that users post to its platforms.

Meta’s plan is to start applying these labels in all supported languages in the coming months.

See Also: Tesla Driver Pulled Over For Wearing Apple Vision Pro While Driving

Additionally, Meta is developing tools that can identify invisible markers at scale. These tools will be able to label images created with Google's Gemini and with tools from OpenAI, Microsoft Corp., Adobe, Midjourney and Shutterstock, as those companies implement their plans to add metadata to the images their tools generate.
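For illustration only, here is a minimal Python sketch of the kind of metadata signal such labeling could rely on; it assumes an image's embedded metadata carries the IPTC "trainedAlgorithmicMedia" digital-source-type marker, and it is not Meta's actual detection tooling, which the company says also looks for invisible watermarks that a plain metadata scan cannot see.

```python
# Illustrative sketch only -- not Meta's detection pipeline.
# Crude check: scan an image file's raw bytes for the IPTC
# "trainedAlgorithmicMedia" digital-source-type marker, one of the
# metadata values AI image tools are expected to embed.
import sys

AI_MARKER = b"trainedAlgorithmicMedia"

def appears_ai_generated(path: str) -> bool:
    """Return True if the file's bytes contain the IPTC AI-provenance marker."""
    with open(path, "rb") as f:
        return AI_MARKER in f.read()

if __name__ == "__main__":
    for image_path in sys.argv[1:]:
        status = "AI metadata found" if appears_ai_generated(image_path) else "no AI metadata"
        print(f"{image_path}: {status}")
```

Note that metadata like this can be stripped or forged, which is why invisible watermarking is part of the industry plans the article describes.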

Meta is also introducing a feature that allows users to disclose when they share AI-generated video or audio, so a label can be added to it.

Users will be required to use this disclosure-and-label tool when they post organic content containing a photorealistic video or realistic-sounding audio that was digitally created or altered. If users fail to disclose such content appropriately, Meta plans to "punish" them, with penalties that can include removing the content from its platforms.

Why It Matters: The rise of AI-generated content has sparked concerns about the spread of misinformation and propaganda.

In September, a developer created an AI-powered disinformation machine named CounterCloud, highlighting the ease of crafting convincing fake news. This, along with concerns that AI could spread misinformation in the 2024 elections, has led to increased scrutiny of AI-generated content.

Earlier this year, an AI-generated robocall impersonating President Joe Biden caused a scandal during the New Hampshire primary. Amid growing concerns over AI's risks, Vice President Kamala Harris had announced the establishment of the United States AI Safety Institute in November.

Check out more of Benzinga's Consumer Tech coverage by following this link.

Read Next: From Silicon Valley To Sandy Beaches: Google Cofounder Larry Page Reportedly Makes Silent $32M Purchase

Disclaimer: This content was partially produced with the help of Benzinga Neuro and was reviewed and published by Benzinga editors.

Photo courtesy: Pixabay
