Geoffrey Hinton: Stepping down from Google, but the deep learning pioneer is still 'scared' of AI's potential

Category Artificial Intelligence

tldr #

Geoffrey Hinton, a pioneer of deep learning and joint recipient of the 2018 Turing Award, is stepping down from Google to focus on more philosophical work related to AI's potential. Hinton's work on backpropagation and his collaboration with graduate student Ilya Sutskever and a team at Google were key to the development of large language models such as GPT-4. These developments, combined with Google's immense computing resources and teams of researchers, have made AI far more powerful than Hinton expected, and that power now scares him.


content #

I met Geoffrey Hinton at his house on a pretty street in north London just four days before the bombshell announcement that he is quitting Google. Hinton is a pioneer of deep learning who helped develop some of the most important techniques at the heart of modern artificial intelligence, but after a decade at Google, he is stepping down to focus on new concerns he now has about AI.

At the start of our conversation, I took a seat at the kitchen table, and Hinton started pacing. Plagued for years by chronic back pain, Hinton almost never sits down. For the next hour I watched him walk from one end of the room to the other, my head swiveling as he spoke. And he had plenty to say.

Geoffrey Hinton received the 2018 Turing Award with Yann LeCun and Yoshua Bengio

The 75-year-old computer scientist, who was a joint recipient with Yann LeCun and Yoshua Bengio of the 2018 Turing Award for his work on deep learning, says he is ready to shift gears. "I'm getting too old to do technical work that requires remembering lots of details," he told me. "I'm still okay, but I'm not nearly as good as I was, and that's annoying." But that's not the only reason he's leaving Google. Hinton wants to spend his time on what he describes as "more philosophical work." And that will focus on the small but, to him, very real danger that AI will turn out to be a disaster.

Geoffrey Hinton's most renowned work is on backpropagation

Leaving Google will let him speak his mind, without the self-censorship a Google executive must engage in. "I want to talk about AI safety issues without having to worry about how it interacts with Google's business," he says. "As long as I'm paid by Google, I can't do that." That doesn't mean Hinton is unhappy with Google by any means. "It may surprise you," he says. "There's a lot of good things about Google that I want to say, and they're much more credible if I'm not at Google anymore."

Hinton was one of Ilya Sutskever's supervisors and was key for the development of ChatGPT

Hinton says that the new generation of large language models—especially GPT-4, which OpenAI released in March—has made him realize that machines are on track to be a lot smarter than he thought they’d be. And he’s scared about how that might play out.

"These things are totally different from us," he says. "Sometimes I think it's as if aliens had landed and people haven't realized because they speak very good English."

Backpropagation allows machines to learn and is fundamental for modern AI

--- Foundations ---

Hinton is best known for his work on a technique called backpropagation, which he proposed (with a pair of colleagues) in the 1980s. In a nutshell, this is the algorithm that allows machines to learn. It underpins almost all neural networks today, from computer vision systems to large language models.
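The idea can be sketched in a few lines. The following is a minimal illustrative example (my own toy code, not Hinton's): a tiny two-layer network learns XOR by propagating the output error backwards through the chain rule and nudging every weight downhill.

```python
# Minimal backpropagation sketch: a two-layer sigmoid network learns XOR.
# Illustrative only; real systems use autodiff libraries for this.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1 / (1 + np.exp(-z))
lr = 1.0

for _ in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: the error signal flows output -> hidden (chain rule)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient descent step on every weight
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

print(np.round(out.ravel(), 2))  # predictions should approach [0, 1, 1, 0]
```

The same recipe, forward pass, error signal, gradient step, scales from this four-example toy all the way up to networks with billions of weights.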

It took until the 2010s for the power of neural networks trained via backpropagation to truly make an impact. Working with a couple of graduate students, Hinton showed that his technique was better than any others at getting a computer to identify objects in images. They also trained a neural network to predict the next letters in a sentence, a precursor to today’s large language models.
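That next-letter objective is easy to demonstrate at toy scale. Below is a hypothetical sketch of my own (not Hinton's actual model): a single softmax layer trained by gradient descent to predict the next character of a tiny corpus, the same objective that today's large language models scale up massively.

```python
# Toy next-character predictor: one softmax layer over the previous character,
# trained with the cross-entropy gradient. Illustrative only.
import numpy as np

text = "hello hello hello "
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}
V = len(chars)

W = np.zeros((V, V))  # row = current char, columns = logits for next char
pairs = [(idx[a], idx[b]) for a, b in zip(text, text[1:])]

for _ in range(500):
    for a, b in pairs:
        logits = W[a]
        p = np.exp(logits - logits.max()); p /= p.sum()  # softmax
        p[b] -= 1.0            # gradient of cross-entropy w.r.t. logits
        W[a] -= 0.5 * p        # gradient descent step

pred = lambda c: chars[int(np.argmax(W[idx[c]]))]
print(pred("h"))  # 'e' -- the character that always follows 'h' here
```

Replace the single linear layer with a deep network and the eighteen-character corpus with trillions of tokens, and this objective is essentially what GPT-style models optimize.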

Large language models such as GPT-4 are the development that has made Hinton worry about the potential of AI

One of these graduate students was Ilya Sutskever, who went on to cofound OpenAI and lead the development of ChatGPT. "We got the first inklings that this stuff could be amazing," says Hinton. "But it's taken a long time to sink in that it needs to be done at a huge scale to be good." Back in the 1980s, neural networks were a joke. The dominant idea at the time, known as symbolic AI, held that intelligence required manipulating explicit symbols and hand-written rules, not statistical learning by machines.

Symbolic AI, the dominant idea of the 1980s, rejected the idea of learning through backpropagation

But in the 2000s, Hinton's backpropagation algorithms started to make a comeback. Researchers, including a team at Google, showed that enormous datasets and powerful computers let learning algorithms far outperform hand-written code. Those algorithms, based on the same backpropagation idea Hinton had proposed 20 years earlier, turned out to be incredibly powerful for many challenging problems in speech, language, and vision.

A decade of work followed, as Hinton helped develop new techniques, such as capsule networks, to get better results from deep learning models. This period also saw breakthroughs at Google, including the Transformer architecture and the BERT language model. Those technical advances, together with Google's soaring computing resources and its thousands of researchers, made AI more powerful than ever before.

