Geoffrey Hinton's Grand Retirement from Google to Gauge AI Dangers

Category Science

tldr #

Geoffrey Hinton, the so-called 'Godfather of AI', has recently quit his role at Google in order to speak more directly about the potential harms of the AI technology he helped create. Hinton and other AI researchers have long expressed concern about the limited public understanding of AI's capabilities, and fear that AI systems could surpass human intelligence as they become commonplace in society.

content #

Sounding alarms about artificial intelligence has become a popular pastime in the ChatGPT era, taken up by high-profile figures as varied as industrialist Elon Musk, leftist intellectual Noam Chomsky and the 99-year-old retired statesman Henry Kissinger.

But it's the concerns of insiders in the AI research community that are attracting particular attention. Geoffrey Hinton, a pioneering researcher and the so-called "Godfather of AI," quit his role at Google so he could speak more freely about the dangers of the technology he helped create.

Geoffrey Hinton is the co-inventor of the backpropagation algorithm, which is one of the foundations of modern AI

Over his decades-long career, Hinton's pioneering work on deep learning and neural networks helped lay the foundation for much of the AI technology we see today. Deep learning techniques outperform manually engineered approaches when handling large and varied data sets.
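The backpropagation algorithm Hinton co-developed is what makes training deep networks possible: gradients of the loss are passed backwards through each layer via the chain rule. The sketch below illustrates the idea on a tiny two-layer network; the XOR task, network size, learning rate and all variable names are illustrative assumptions, not details from the article.

```python
import numpy as np

# Illustrative sketch of backpropagation: a two-layer network
# trained on XOR with manually derived gradients.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for step in range(2000):
    # forward pass
    h = np.tanh(X @ W1 + b1)        # hidden activations
    p = sigmoid(h @ W2 + b2)        # output probabilities
    loss = np.mean((p - y) ** 2)    # mean squared error
    losses.append(loss)

    # backward pass: chain rule, layer by layer
    dp = 2 * (p - y) / len(X)       # dL/dp
    dz2 = dp * p * (1 - p)          # back through sigmoid
    dW2 = h.T @ dz2; db2 = dz2.sum(axis=0)
    dh = dz2 @ W2.T                 # propagate into hidden layer
    dz1 = dh * (1 - h ** 2)         # back through tanh
    dW1 = X.T @ dz1; db1 = dz1.sum(axis=0)

    # gradient descent update
    lr = 0.5
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

After training, the loss has dropped from its initial value: the backward pass supplies exactly the gradients needed to improve the weights, which is the insight that underpins deep learning at scale.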

There has been a spasm of AI introductions in recent months. San Francisco-based startup OpenAI, the Microsoft-backed company behind ChatGPT, rolled out its latest artificial intelligence model, GPT-4, in March. Other tech giants have invested in competing tools—including Google's "Bard".

OpenAI, the company behind ChatGPT, was founded in 2015 and has been backed by Microsoft since 2019

Some of the dangers of AI chatbots are "quite scary," Hinton told the BBC. "Right now, they're not more intelligent than us, as far as I can tell. But I think they soon may be."

In an interview with MIT Technology Review, Hinton also pointed to "bad actors" that may use AI in ways that could have detrimental impacts on society—such as manipulating elections or instigating violence. Hinton, 75, says he retired from Google so that he could speak openly about the potential risks as someone who no longer works for the tech giant.

Much of the public fear over AI stems from a lack of information about, and understanding of, the technology and its capabilities

"I want to talk about AI safety issues without having to worry about how it interacts with Google's business," he told MIT Technology Review. "As long as I'm paid by Google, I can't do that."

Since announcing his departure, Hinton has maintained that Google has "acted very responsibly" regarding AI. He told MIT Technology Review that there's also "a lot of good things about Google" that he would want to talk about—but those comments would be "much more credible if I'm not at Google anymore."


Google confirmed that Hinton had retired from his role after 10 years overseeing the Google Research team in Toronto.

Hinton declined further comment Tuesday but said he would talk more about it at a conference Wednesday.

At the heart of the debate on the state of AI is whether the primary dangers are in the future or present. On one side are hypothetical scenarios of existential risk caused by computers that supersede human intelligence. On the other are concerns about automated technology that's already getting widely deployed by businesses and governments and can cause real-world harms.

Hinton responded to public fears of 'super artificial intelligence' by emphasizing that AI capabilities would need to improve further before reaching a superhuman level of intelligence

"For good or for not, what the chatbot moment has done is made AI a national conversation and an international conversation that doesn't only include AI experts and developers," said Alondra Nelson, who until February led the White House Office of Science and Technology Policy and its push to craft guidelines around the responsible use of AI tools.

"AI is no longer abstract, and we have this kind of opening, I think, to have a new conversation about what we want a democratic future and a non-exploitative future with technology to look like," Nelson said in an interview last month.

Hinton has always argued that AI research should prioritize safety from the start

A number of AI researchers have long expressed concerns about racial, gender and other forms of bias in AI systems, which "mirror" the bias found in existing data sets. Hinton has also called for greater research on potential harms from AI before it becomes commonplace in societies.
