Facebook and Instagram users will soon notice labels on AI-generated images appearing in their social media feeds, as part of a broader effort within the tech industry to distinguish between real and fabricated content.
Meta (NASDAQ:META), the parent company of Facebook and Instagram, revealed on Tuesday its collaboration with industry partners to establish technical standards for identifying AI-generated images, with plans to extend this initiative to include video and audio content produced by artificial intelligence tools.
Whether the measure will prove effective remains to be seen amid growing concern over the proliferation of AI-generated content, which poses risks such as spreading election misinformation and creating nonconsensual fake images of public figures.
Gili Vidan, an assistant professor of information science at Cornell University, said that while Meta’s move signals the company recognizes the problem of fake content online, it is unlikely to catch every instance of AI-generated material, particularly content created with tools outside the participating providers.
Nick Clegg, Meta’s president of global affairs, mentioned in a blog post that the labeling of AI-generated images will roll out “in the coming months” across different languages, emphasizing the importance of transparency as the distinction between human-generated and synthetic content becomes increasingly blurred.
Although Meta already applies an “Imagined with AI” label to photorealistic images generated by its own tool, the majority of AI-generated content on its platforms originates elsewhere. The collaboration with industry leaders aims to establish consistent standards for identifying such content.
Various tech industry initiatives, including the Content Authenticity Initiative led by Adobe, have been advocating for digital watermarking and labeling of AI-generated content. This effort has also received support from the U.S. government, as evident in an executive order signed by President Joe Biden in October.
Meta’s commitment extends beyond its own platforms, as it plans to label AI-generated images from major providers such as Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock as they implement metadata standards for their tools.
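The detection described above relies on provenance metadata that generators embed in their output. Meta has not published its detection pipeline, so the following is only an illustrative sketch of how a platform might check extracted image metadata for an AI-provenance signal; the key names and the presence check are assumptions, though `trainedAlgorithmicMedia` is a real value in the IPTC digital source type vocabulary.

```python
# Illustrative sketch only: Meta's actual detection logic is not public.
# The metadata key names used here ("c2pa.claim", "iptc.DigitalSourceType")
# are assumptions standing in for whatever a real extractor would return.

# IPTC digital source type values that indicate synthetic imagery
# ("trainedAlgorithmicMedia" is a real IPTC vocabulary term).
AI_SOURCE_TYPES = {"trainedAlgorithmicMedia"}

def looks_ai_generated(metadata: dict) -> bool:
    """Return True if the extracted metadata carries an AI-provenance signal."""
    # A C2PA content-credentials manifest attached by the generator
    if "c2pa.claim" in metadata:
        return True
    # An IPTC field that some generators set on synthetic output
    return metadata.get("iptc.DigitalSourceType") in AI_SOURCE_TYPES

# A generator that tags its output would be flagged; untagged images would not.
print(looks_ai_generated({"iptc.DigitalSourceType": "trainedAlgorithmicMedia"}))
print(looks_ai_generated({}))
```

This also illustrates the limitation Vidan raises: the check only works when a tool cooperates by embedding the metadata in the first place, and stripping or omitting it defeats detection.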
Similarly, YouTube CEO Neal Mohan announced plans to introduce labels on synthetic content across YouTube and other Google platforms in the coming months, aiming to provide users with transparency regarding the nature of the content they consume.
While these labeling initiatives are a step forward in combating the spread of fake content, they may not catch material produced by lesser-known tools that do not embed the agreed-upon metadata. That gap makes it important for platforms to communicate clearly what the labels do and do not guarantee.
Featured Image: Unsplash