Meta Working with Tech Firms to Identify and Label AI-Generated Images on Social Media
Category: Business · Monday, February 12, 2024, 07:30 UTC
Meta is collaborating with other tech companies to develop standards for identifying and labeling AI-generated images on social media, in an effort to increase transparency and address concerns over disinformation. The company already has systems in place for tagging images created with its own AI tools and hopes to expand this to audio and video content in the future. The rise of generative AI has raised fears of political chaos driven by disinformation campaigns and the spread of fake images and videos. While labeling will not fully eliminate the risk, it is a step toward minimizing its impact and promoting critical assessment of online content.
On Tuesday, Meta (formerly known as Facebook) announced that it is teaming up with other tech firms to develop standards for identifying and labeling AI-generated images on social media platforms. The company expects to have these systems in place in the coming months, in an effort to increase transparency and give users a better understanding of where the content they see comes from. The initiative arrives at a critical time, as concerns over the spread of disinformation continue to grow, particularly in the lead-up to major elections in several countries this year.
Meta has already implemented systems to tag images created with its own AI tools since December, but recognizes the need to work with other companies to ensure consistency and thoroughness. Its goal is a unified approach to labeling AI-generated content as the technology evolves and becomes more prevalent. Beyond its current partners, which include OpenAI, Google, and Microsoft, Meta hopes to bring in other firms competing to lead the nascent AI industry.
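The article does not spell out the technical mechanism, but the cross-industry approach Meta described is generally understood to rely on metadata embedded in image files, such as the IPTC "DigitalSourceType" value "trainedAlgorithmicMedia" used to mark AI-generated media. The sketch below is a deliberately naive heuristic check under that assumption; a real verifier would parse the XMP/C2PA structures properly and validate provenance signatures, rather than scanning raw bytes.

```python
# Minimal sketch: look for the IPTC "DigitalSourceType" value that
# signals AI-generated media inside an image file's embedded metadata.
# Assumption (not stated in the article): the label is carried as the
# string "trainedAlgorithmicMedia" inside an embedded XMP packet.

AI_SOURCE_MARKER = b"trainedAlgorithmicMedia"

def looks_ai_labeled(path: str) -> bool:
    """Naive heuristic: scan the file's raw bytes for the marker.

    This only illustrates the idea of metadata-based labeling; it does
    not parse XMP or verify any cryptographic provenance information.
    """
    with open(path, "rb") as f:
        data = f.read()
    return AI_SOURCE_MARKER in data

if __name__ == "__main__":
    import sys
    for image_path in sys.argv[1:]:
        status = "AI label found" if looks_ai_labeled(image_path) else "no label found"
        print(f"{image_path}: {status}")
```

Note that metadata of this kind is easy to strip or forge, which is one reason the industry effort also discusses invisible watermarks alongside embedded labels.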
While progress has been made in labeling AI-generated images, the industry has been slower to address audio and video content. Nick Clegg, Meta's head of global affairs, concedes that this is a challenge but says the company is committed to finding solutions. In the meantime, Meta recommends that users critically assess online content, checking its provenance and looking for details that seem unnatural or out of place.
Deepfake content, particularly content targeting politicians and women, has been a major concern in recent years, with AI-created nudes of Taylor Swift going viral on social media. The rise of generative AI has raised fears that it could be used to sow political chaos through disinformation or AI-generated impersonations. In response, OpenAI has prohibited any use of its platform by political organizations or individuals.
Meta's Oversight Board, which reviews the company's content moderation decisions, recently warned that its policy on deepfakes is in urgent need of updating. In a decision concerning a manipulated video of US President Joe Biden that was not created with AI, the Board criticized the current policy as incoherent, lacking justification, and too focused on how content is created rather than on the harm it can cause. While there is no perfect solution, labeling AI-generated content is a step toward limiting the spread of disinformation and false information online. Going forward, companies and platforms must continue to work together to adapt and improve their policies as AI technology advances rapidly.