C2PA: The Coalition For Content Provenance And Authenticity
Category: Technology · Thursday, August 3, 2023, 09:40 UTC

C2PA is a freely available protocol that securely labels content with information about where it came from, based on an opt-in approach. Its backers are racing to expand it before the 2024 US election season to help stem machine-generated muck and misinformation. Companies like Truepic, Revel AI, and Adobe are founding members of the project.
I recently wrote a story about a project backed by some major tech and media companies trying to help identify content made or altered by AI. As my colleague Melissa Heikkilä has written, most of the current technical solutions "don’t stand a chance against the latest generation of AI language models." Nevertheless, the race to label and detect AI-generated content is on. That’s where this protocol comes in.
Started in 2021, C2PA (named for the group that created it, the Coalition for Content Provenance and Authenticity) is a set of new technical standards and freely available code that securely labels content with information clarifying where it came from. This means that an image, for example, is marked with information by the device it originated from (like a phone camera), by any editing tools (such as Photoshop), and ultimately by the social media platform it gets uploaded to.
Over time, this information creates a sort of history, all of which is logged. The tech itself, and the ways in which C2PA is more secure than other AI-labeling alternatives, is pretty cool, though a bit complicated. I get more into it in my piece, but it’s perhaps easiest to think of it like a nutrition label (the preferred analogy of most people I spoke with). You can see an example of a deepfake video here with the label created by Truepic, a founding C2PA member, with Revel AI.
"The idea of provenance is marking the content in an interoperable and tamper-evident way so it can travel through the internet with that transparency, with that nutrition label," says Mounir Ibrahim, the vice president of public affairs at Truepic. It’s based on an opt-in approach, so groups that want to verify and disclose where content came from, like a newspaper or an advertiser, will choose to add the credentials to a piece of media.
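To give a rough intuition for what "tamper-evident" provenance means here, the sketch below builds a simplified chain of signed provenance entries, where each entry records an action, a hash of the content, and a signature linking it to the previous entry. This is only an illustrative toy: real C2PA manifests use X.509 certificates, standardized assertion formats, and embedded metadata rather than the HMAC key and JSON layout assumed here.

```python
import hashlib
import hmac
import json

# Hypothetical stand-in for a real signing certificate held by a device or tool.
SECRET = b"demo-signing-key"

def add_assertion(manifest: list, content: bytes, action: str, tool: str) -> list:
    """Append a signed provenance entry (action + content hash) to the chain."""
    entry = {
        "action": action,        # e.g. "captured", "edited"
        "tool": tool,            # e.g. "phone camera", "Photoshop"
        "content_hash": hashlib.sha256(content).hexdigest(),
        # Link to the previous entry so history cannot be silently rewritten.
        "prev_signature": manifest[-1]["signature"] if manifest else None,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return manifest + [entry]

def verify(manifest: list) -> bool:
    """Re-derive each signature; any edit to any entry breaks the chain."""
    prev = None
    for entry in manifest:
        unsigned = {k: v for k, v in entry.items() if k != "signature"}
        if unsigned["prev_signature"] != prev:
            return False
        payload = json.dumps(unsigned, sort_keys=True).encode()
        expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(entry["signature"], expected):
            return False
        prev = entry["signature"]
    return True
```

A camera would add the first entry, an editor the next, and so on; anyone holding the verification key can then check the whole history, while any retroactive change to an earlier entry invalidates every signature after it.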
One of the project’s leads, Andy Parsons, who works for Adobe, attributes the new interest in and urgency around C2PA to the proliferation of generative AI and the expectation of legislation, both in the US and the EU, that will mandate new levels of transparency. The vision is grand; people involved admitted to me that real success here depends on widespread, if not universal, adoption. They said they hope all major content companies adopt the standard.
For that, Ibrahim says, usability is key: "You wanna make sure no matter where it goes on the internet, it’ll be read and ingested in the same way, much like SSL encryption. That’s how you scale a more transparent ecosystem online." This could be a critical development as we enter the 2024 US election season, when all eyes will be watching for AI-generated misinformation. Researchers on the project say they are racing to release new functionality and court more social media platforms before the expected onslaught.
Currently, C2PA works primarily on images and video, though members say that they are working on ways to handle text-based content. I get into some of the other shortcomings of the protocol in the piece, but what’s really important to understand is that even when the use of AI is disclosed, it might not stem the harm of machine-generated muck.