The Weird and Wild World of AI Risk Discourse

Category Artificial Intelligence

tldr #

In the past six months, the public discourse around AI has shifted toward the risk of AI-caused human extinction. My colleague Will Douglas Heaven asked AI experts why, and why now. Meredith Whittaker of the Signal Foundation summed up one common answer: it is exciting and stimulating to be afraid of AI. While Big Tech giants such as Google and Microsoft have gone all in on warning about extreme risks, Meta has taken the opposite side, and its chief AI scientist, Yann LeCun, calls the idea of a superintelligence taking over the world 'preposterously ridiculous'. Joelle Pineau, Meta’s vice president of AI research, says the extreme focus on future risks leaves little room for discussion of current AI harms. LeCun suggests the doom talk suits current leaders such as OpenAI, who are ahead and for whom 'the right thing to do is to slam the door behind you'.


content #

It’s a really weird time in AI. In just six months, the public discourse around the technology has gone from "Chatbots generate funny sea shanties" to "AI systems could cause human extinction." Who else is feeling whiplash? My colleague Will Douglas Heaven asked AI experts why exactly people are talking about existential risk, and why now. Meredith Whittaker, president of the Signal Foundation (which is behind the private messaging app Signal) and a former Google researcher, sums it up nicely: "Ghost stories are contagious. It’s really exciting and stimulating to be afraid."

In the past 25 years, AI has been hyped time and time again, and recently there has been a shift in discourse toward the risks of AI

We’ve been here before, of course: AI doom follows AI hype. But this time feels different. The Overton window has shifted in discussions around AI risks and policy. What was once an extreme view is now a mainstream talking point, grabbing not only headlines but the attention of world leaders.

Whittaker is not the only one who thinks this. While influential people in Big Tech companies such as Google and Microsoft, and AI startups like OpenAI, have gone all in on warning people about extreme AI risks and closing up their AI models from public scrutiny, Meta is going the other way.

Meredith Whittaker, president of the Signal Foundation and former Google researcher, believes 'Ghost stories are contagious' when it comes to AI risks

Last week, on one of the hottest days of the year so far, I went to Meta’s Paris HQ to hear about the company’s recent AI work. As we sipped champagne on a rooftop with views of the Eiffel Tower, Meta’s chief AI scientist, Yann LeCun, a Turing Award winner, told us about his hobbies, which include building electronic wind instruments. But he was really there to talk about why he thinks the idea that a superintelligent AI system will take over the world is "preposterously ridiculous." People are worried about AI systems that "are going to be able to recruit all the resources in the world to transform the universe into paper clips," LeCun said. "That’s just insane." (He was referring to the "paper clip maximizer problem," a thought experiment in which an AI asked to make as many paper clips as possible does so in ways that ultimately harm humans, while still fulfilling its main objective.)

The Overton window has shifted in discussions around AI risks and policy

He is in stark opposition to Geoffrey Hinton and Yoshua Bengio, two pioneering AI researchers (and the two other "godfathers of AI"), who shared the Turing Award with LeCun. Both have recently become outspoken about existential AI risk.

Joelle Pineau, Meta’s vice president of AI research, agrees with LeCun. She calls the conversation "unhinged." The extreme focus on future risks does not leave much bandwidth to talk about current AI harms, she says.

Meta's chief AI scientist, Yann LeCun, and its vice president of AI research, Joelle Pineau, both believe the extreme focus on future risks does not leave much bandwidth to talk about current AI harms

"When you start looking at ways to have a rational discussion about risk, you usually look at the probability of an outcome and you multiply it by the cost of that outcome. [The existential-risk crowd] have essentially put an infinite cost on that outcome," says Pineau. "When you put an infinite cost, you can’t have any rational discussions about any other outcomes. And that takes the oxygen out of the room for any other discussion, which I think is too bad." .

Geoffrey Hinton, Yoshua Bengio and Yann LeCun are the three Turing Award-winning “godfathers of AI”, and while two have become outspoken about existential AI risk, LeCun is vocal on the opposite side

"At the moment, OpenAI is in a position where they are ahead, so the right thing to do is to slam the door behind you," says LeCun. "Do we want a future in which AI systems are further ahead than they are today, or a future in which they are less advanced, because all players decide to close ranks and bridge the gap between them?" .


