The Pioneers of AI and Consent - The Inevitable Tension of Personal Autonomy

Category Artificial Intelligence

tldr #

This week's big news is that Geoffrey Hinton is leaving Google after 10 years and OpenAI is launching an 'incognito' mode for its AI chatbot. Meanwhile, Spawning has been developing an opt-out feature for images from the LAION data set. These developments show the importance of AI consent and the need for more control over data in an increasingly automated world.


content #

This week's big news is that Geoffrey Hinton, a VP and Engineering Fellow at Google, and a pioneer of deep learning who developed some of the most important techniques at the heart of modern AI, is leaving the company after 10 years.

But first, we need to talk about consent in AI. Last week, OpenAI announced it is launching an "incognito" mode that does not save users’ conversation history or use it to improve its AI language model, ChatGPT. The new feature lets users switch off chat history and training and allows them to export their data. This is a welcome move in giving people more control over how their data is used by a technology company.

In an interview last week with my colleague Will Douglas Heaven, OpenAI’s chief technology officer, Mira Murati, said the incognito mode was something the company had been "taking steps toward iteratively" for a couple of months and had been requested by ChatGPT users. OpenAI told Reuters its new privacy features were not related to the EU’s GDPR investigations.

Geoffrey Hinton received the Turing Award in 2018, shared with Yoshua Bengio and Yann LeCun, for his contributions to deep learning over a 40-year career.

"We want to put the users in the driver’s seat when it comes to how their data is used," says Murati. OpenAI says it will still store user data for 30 days to monitor for misuse and abuse.

But despite what OpenAI says, Daniel Leufer, a senior policy analyst at the digital rights group Access Now, reckons that GDPR—and the EU’s pressure—has played a role in forcing the firm to comply with the law. In the process, it has made the product better for everyone around the world.

ChatGPT's incognito mode works by ensuring that conversation history is neither stored nor used to train the underlying AI language model.

"Good data protection practices make products safer [and] better [and] give users real agency over their data," he said on Twitter.

A lot of people dunk on the GDPR as an innovation-stifling bore. But as Leufer points out, the law shows companies how they can do things better when they are forced to do so. It’s also the only tool we have right now that gives people some control over their digital existence in an increasingly automated world.

Hinton graduated from Cambridge University and has held positions at Carnegie Mellon, the University of Toronto, and Google.

Since late last year, people and companies have been able to opt out of having their images included in the open-source LAION data set that has been used to train the image-generating AI model Stable Diffusion. Since December, around 5,000 people and several large online art and image platforms, such as Art Station and Shutterstock, have asked to have over 80 million images removed from the data set, says Mat Dryhurst, who cofounded an organization called Spawning that is developing the opt-out feature. This means that their images are not going to be used in the next version of Stable Diffusion.

Stable Diffusion AI uses images from the open-source LAION data set to train its model.

Dryhurst thinks people should have the right to know whether or not their work has been used to train AI models, and that they should be able to say whether they want to be part of the system to begin with.

"Our ultimate goal is to build a consent layer for AI, because it just doesn’t exist," he says.

Now we turn to Geoffrey Hinton. MIT Technology Review's senior AI editor Will Douglas Heaven met Hinton at his house in north London just four days before the bombshell announcement that he is quitting Google. Hinton is a pioneer of deep learning who helped develop some of the most important techniques at the heart of modern artificial intelligence, but over the last year had begun to express doubts about his work and its wider implications, leading him to leave the company.

As of May 2023, an estimated 5,000 people and several art platforms, such as Art Station, had asked to have their images removed from the LAION data set.

The developments in AI consent and opt-out from OpenAI and Spawning are a sign of the times. People are gaining more control over their data: how it is used and who has access to it. New laws and regulations are pushing companies to treat their customers better and giving people the freedom to exercise autonomy over their online presence. This shift brings its own risks, but one thing is certain: AI is changing quickly, and it is imperative that we keep up with these changes and understand how to interact with it safely.

The team at Spawning is committed to building a 'consent layer' for AI to give users more agency over their data.
