Watermarking and Content Authentication: The White House Makes a Big Bet
Technology · Tuesday, November 7, 2023, 13:02 UTC

The White House's executive order is a big bet on watermarking and content authentication as ways to fight AI-generated misinformation. The Department of Commerce will establish standards and best practices for detecting AI-generated content and authenticating official content. Researchers have recently published a paper outlining principles for deploying AI content authentication. The order doesn't require companies to adopt these technologies, but it could have a big impact on how they are used in the future.
For me, one of the most interesting parts of the executive order was the emphasis on watermarking and content authentication. I’ve previously written a bit about these technologies, which aim to label content to determine whether it was made by a machine or a human.
The order says that the government will be promoting these tools, the Department of Commerce will establish guidelines for them, and federal agencies will use such techniques in the future. In short, the White House is making a big bet on these methods as a way to fight AI-generated misinformation.
Watermarking and content authentication, also called provenance technologies, operate on an opt-in model: content creators can append information up front about the origins of a piece of content and how it may have changed as it travels online. The hope is that this increases viewers' trust in that information.
Most current watermarking technologies embed an invisible mark in a piece of content to signal that the material was made by an AI. Then a watermark detector identifies that mark. Content authentication is a broader methodology that entails logging information about where content came from in a way that is visible to the viewer, sort of like metadata.
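As a toy illustration of that embed-then-detect loop (not any deployed scheme; the zero-width-character trick, the bit encoding, and the function names here are all invented for this sketch), a hidden mark can be appended to text and later recovered by a matching detector:

```python
# Illustrative only: a toy "invisible" text watermark using zero-width
# Unicode characters. Real AI watermarks (e.g., statistical token-level
# schemes) are far more subtle and robust; this just demonstrates the
# embed/detect idea. Assumes the input text contains no zero-width chars.

ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner

def embed(text: str, mark: str = "AI") -> str:
    # Encode the mark's bits as invisible characters appended to the text.
    bits = "".join(f"{ord(c):08b}" for c in mark)
    return text + "".join(ZW0 if b == "0" else ZW1 for b in bits)

def detect(text: str, mark: str = "AI") -> bool:
    # The detector looks for the exact hidden bit pattern the embedder left.
    found = "".join("0" if c == ZW0 else "1" for c in text if c in (ZW0, ZW1))
    expected = "".join(f"{ord(c):08b}" for c in mark)
    return found == expected

stamped = embed("This caption was generated by a model.")
print(detect(stamped))                        # True: hidden mark found
print(detect("An ordinary human sentence."))  # False: no mark present
```

A scheme this fragile is trivially stripped by copy-pasting through a filter that drops non-printing characters, which hints at why researchers question how robust real watermarks can be.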
The Coalition for Content Provenance and Authenticity (C2PA) focuses primarily on content authentication through a protocol it calls Content Credentials, though the group says its technology can be coupled with watermarking. It is "an open-source protocol that relies on cryptography to encode details about the origins of a piece of content," as I wrote back in July. "This means that an image, for example, is marked with information by the device it originated from (like a phone camera), by any editing tools (such as Photoshop), and ultimately by the social media platform that it gets uploaded to. Over time, this information creates a sort of history, all of which is logged."
The result is verifiable information, collected in what C2PA proponents compare to a "nutrition label," about where a piece of content came from, whether it was machine generated or not. The initiative and its affiliated open-source community have been growing rapidly in recent months as companies rush to verify their content.
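A minimal sketch of the provenance-chain idea behind that "nutrition label": each tool in the content's life appends a claim that is hash-linked to the previous one, so later tampering is detectable. This is not the C2PA format; real Content Credentials use cryptographic signatures and standardized manifests, and the field names below are invented for illustration.

```python
# Toy provenance chain: each step appends a claim whose hash covers both
# the claim itself and the hash of the step before it, like a tiny ledger.
import hashlib
import json

def add_claim(history: list, actor: str, action: str) -> list:
    prev_hash = history[-1]["hash"] if history else ""
    claim = {"actor": actor, "action": action, "prev": prev_hash}
    claim["hash"] = hashlib.sha256(
        json.dumps(claim, sort_keys=True).encode()
    ).hexdigest()
    return history + [claim]

def verify(history: list) -> bool:
    # Editing any earlier claim breaks every later hash link.
    prev = ""
    for claim in history:
        body = {k: v for k, v in claim.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if claim["prev"] != prev or claim["hash"] != expected:
            return False
        prev = claim["hash"]
    return True

h = []
h = add_claim(h, "phone-camera", "captured")
h = add_claim(h, "Photoshop", "edited")
h = add_claim(h, "social-platform", "uploaded")
print(verify(h))  # True: the history is internally consistent
```

The chained-hash design is what makes the resulting history verifiable rather than just asserted: a viewer can recompute the links instead of taking the label on faith.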
The key part of the EO notes that the Department of Commerce will be "establishing standards and best practices for detecting AI-generated content and authenticating official content" and notes that "federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world."
Crucially, as Melissa and I reported in our story, the executive order falls short of requiring industry players or government agencies to use this technology.
Soheil Feizi, at the University of Maryland, has conducted two studies of watermarking technologies and found them "unreliable." He says the risk of false positives and negatives is so extensive that watermarks provide "basically zero information."
"Imagine if there is a tweet or a text with a hidden official White House watermark, but that twee or text doesn't say anything related to the White House,"said Feizi.
In response to the executive order, Feizi and several other researchers recently published a joint paper outlining principles for deploying content authentication technologies. First and foremost, government agencies must lead by example, and companies must be transparent about how their technology works. The paper also advocates for government transparency around implementing content authentication, including honest assessments of how it works and what kind of impact it might have on viewers. Finally, it suggests that companies and government agencies entering into partnerships to deploy content authentication and other related AI technologies should aim for equitable outcomes.
The executive order is the latest in a string of high-profile actions by the White House to put down digital markers on AI-generated content. President Biden, who has emphasized the importance of digitally verified content in the past, has previously argued that such verification is a crucial tool in the effort to combat online misinformation. The order doesn't require anyone to adopt these technologies, but it could have a big impact on how content authentication and watermarking are used in the future. It will be interesting to see how this issue plays out in the coming months and years.