The Rise Of C2PA: A Protocol For Identifying AI Generated Content

tldr #

The White House wants major AI companies to disclose when content has been created with artificial intelligence, and the EU will soon require some tech platforms to label their AI-generated images, audio, and video with "prominent markings" disclosing their synthetic origins. To make that possible, the Coalition for Content Provenance and Authenticity has developed an open protocol, C2PA, that encodes details about the origins of a piece of content. Major tech companies such as Microsoft, Intel, Adobe, and Shutterstock are backing the project and integrating C2PA into their products. C2PA attaches labels to content in a way often compared to nutrition labels, giving users information about where the content came from and who, or what, created it.


content #

The White House wants big AI companies to disclose when content has been created using artificial intelligence, and very soon the EU will require some tech platforms to label their AI-generated images, audio, and video with "prominent markings" disclosing their synthetic origins. There’s a big problem, though: identifying material that was created by artificial intelligence is a massive technical challenge. The best options currently available—detection tools powered by AI, and watermarking—are inconsistent, impermanent, and sometimes inaccurate. (In fact, just this week OpenAI shuttered its own AI-detecting tool because of high error rates.)

Photoshop uses C2PA to label content that has been created or edited with AI

But another approach has been attracting attention lately: C2PA. Launched two years ago, it’s an open-source internet protocol that relies on cryptography to encode details about the origins of a piece of content, or what technologists refer to as "provenance" information. The developers of C2PA often compare the protocol to a nutrition label, but one that says where content came from and who—or what—created it. The project, part of the nonprofit Joint Development Foundation, was started by Adobe, Arm, Intel, Microsoft, and Truepic, which together formed the Coalition for Content Provenance and Authenticity (from which C2PA gets its name). More than 1,500 companies are now involved in the project through the closely affiliated open-source community, the Content Authenticity Initiative (CAI), including ones as varied and prominent as Nikon, the BBC, and Sony.
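
To make the "nutrition label" analogy concrete, the sketch below shows, in simplified Python, the kind of provenance record such a protocol carries: assertions about who or what produced the content, a cryptographic hash binding the record to the exact bytes of the file, and a signature over the whole bundle. This is an illustration only; the field names and structure are assumptions, and the real C2PA specification uses standardized assertion schemas and certificate-based signatures rather than the stand-in HMAC used here.

```python
import hashlib
import hmac
import json

# Stand-in signing key for the sketch; real C2PA manifests are signed with
# an X.509 certificate held by the tool or device, not a shared secret.
SIGNING_KEY = b"demo-key"

def build_manifest(asset_bytes: bytes, generator: str, ai_generated: bool) -> dict:
    """Build a simplified, illustrative provenance record for an asset."""
    assertions = {
        "created_by": generator,        # who or what produced the content
        "ai_generated": ai_generated,   # the disclosure a viewer would see
        # Hash of the asset binds the record to these exact bytes.
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
    }
    payload = json.dumps(assertions, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"assertions": assertions, "signature": signature}

def verify_manifest(asset_bytes: bytes, manifest: dict) -> bool:
    """Check that the record is untampered and still matches the asset."""
    payload = json.dumps(manifest["assertions"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    untampered = hmac.compare_digest(expected, manifest["signature"])
    matches_asset = (
        manifest["assertions"]["asset_sha256"]
        == hashlib.sha256(asset_bytes).hexdigest()
    )
    return untampered and matches_asset

if __name__ == "__main__":
    image = b"...raw image bytes..."
    record = build_manifest(image, generator="Example image generator", ai_generated=True)
    print(json.dumps(record, indent=2))
    print("valid:", verify_manifest(image, record))            # True
    print("edited:", verify_manifest(image + b"!", record))    # False: the bytes changed
```

The point of the hash-plus-signature pattern is that an edit to either the content or the label invalidates the check, which is what lets a platform trust a disclosure like "AI-generated" that travels with the file.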

The open-source code for C2PA is accessible and free for anyone to use

Recently, as interest in AI detection and regulation has intensified, the project has been gaining steam; Andrew Jenks, the chair of C2PA, says that membership has increased 56% in the past six months. The major media platform Shutterstock has joined as a member and announced its intention to use the protocol to label all of its AI-generated content, including the output of its DALL-E-powered AI image generator. Sejal Amin, chief technology officer at Shutterstock, told MIT Technology Review in an email that the company is protecting artists and users by "supporting the development of systems and infrastructure that create greater transparency to easily identify what is an artist’s creation versus AI-generated or modified art."

The Coalition for Content Provenance and Authenticity (C2PA) was started by Adobe, Arm, Intel, Microsoft, and Truepic

Microsoft, Intel, Adobe, and other major tech companies started working on C2PA in February 2021, hoping to create a universal internet protocol that would allow content creators to opt in to labeling their visual and audio content with information about where it came from. (At least for the moment, this does not apply to text-based posts.) Crucially, the project is designed to be adaptable and functional across the internet, and the base computer code is accessible and free to anyone.

The Content Authenticity Initiative (CAI) is an open-source community affiliated with C2PA, with more than 1,500 member companies

Truepic, which sells content verification products, has teamed up with Revel.ai to demonstrate how the protocol works on a deepfake video. When a viewer hovers over a little icon in the top right corner of the screen, a box of information about the video appears that includes the disclosure that it "contains AI-generated content." Adobe has also already integrated C2PA, which it calls Content Credentials, into several of its products, including Photoshop, in effect attaching a label to every piece of content that was created or edited with AI.
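
As a rough illustration of that viewing experience, the sketch below (simplified Python again, reusing the hypothetical manifest format and the verify_manifest function from the earlier example) shows how a viewer or player might turn an embedded record into the short disclosure shown to the user, falling back to a neutral message when no credentials are present or the check fails. The wording and function names are assumptions, not part of the actual C2PA or Content Credentials interfaces.

```python
from typing import Optional

def credentials_summary(asset_bytes: bytes, manifest: Optional[dict]) -> str:
    """Produce the short text a viewer might show when hovering over the credentials icon."""
    if manifest is None:
        return "No content credentials found."
    # verify_manifest is defined in the earlier sketch.
    if not verify_manifest(asset_bytes, manifest):
        return "Content credentials could not be verified."
    info = manifest["assertions"]
    label = "Contains AI-generated content. " if info["ai_generated"] else ""
    return f"{label}Produced by: {info['created_by']}."
```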

OpenAI recently shut down its own AI-detecting tool due to high error rates
