Bill Gates on Artificial Intelligence: The Risk of Artificial Intelligence

Category Artificial Intelligence

tldr #

Bill Gates has weighed in on the debate concerning risk around artificial intelligence. He's chosen to focus on the risks that are present or soon will be, rather than on the possibility of a super AI rising in the future. Gates argues that AI is already a threat in many areas of society, and although it is understandable to have fears, steps can be taken to protect ourselves.


content #

Bill Gates has joined the chorus of big names in tech who have weighed in on the question of risk around artificial intelligence. The TL;DR? He’s not too worried; we’ve been here before.

The optimism is refreshing after weeks of doomsaying—but it comes with few fresh ideas.

According to Gates, AI is "the most transformative technology any of us will see in our lifetimes." That puts it above the internet, smartphones, and personal computers, the technology he did more than most to bring into the world. (It also suggests that nothing to rival it will be invented in the next few decades.)

But there’s no fearmongering in the blog post. In fact, existential risk doesn’t get a look-in. Instead, Gates frames the debate as one pitting "longer-term" against "immediate" risk, and chooses to focus on "the risks that are already present, or soon will be."

Gates has been discussing the potential risks of artificial intelligence (AI) for over 10 years

"Gates has been plucking on the same string for quite a while," says David Leslie, director of ethics and responsible innovation research at the Alan Turing Institute in the UK. Gates was one of several public figures who talked about the existential risk of AI a decade ago, when deep learning first took off, says Leslie: "He used to be more concerned about superintelligence way back when. It seems like that might have been watered down a bit." .

The effects of AI are already being seen in various areas of society like elections, education, and employment

Gates doesn’t dismiss existential risk entirely. He wonders what may happen "when"—not if—"we develop an AI that can learn any subject or task," often referred to as artificial general intelligence, or AGI.

He writes: "Whether we reach that point in a decade or a century, society will need to reckon with profound questions. What if a super AI establishes its own goals? What if they conflict with humanity’s? Should we even make a super AI at all? But thinking about these longer-term risks should not come at the expense of the more immediate ones." .

One of the main risks of AI is the potential development of artificial general intelligence (AGI)

Gates has staked out a kind of middle ground between deep-learning pioneer Geoffrey Hinton, who quit Google and went public with his fears about AI in May, and others like Yann LeCun and Joelle Pineau at Meta AI (who think talk of existential risk is "preposterously ridiculous" and "unhinged") or Meredith Whittaker at Signal (who thinks the fears shared by Hinton and others are "ghost stories").

It’s interesting to ask what contribution Gates makes by weighing in now, says Leslie: "With everybody talking about it, we’re kind of saturated."

AI applications are currently being used for wide-ranging projects extending from medicine to data analysis

Like Gates, Leslie doesn’t dismiss doomer scenarios outright. "Bad actors can take advantage of these technologies and cause catastrophic harms," he says. "You don't need to buy into superintelligence, apocalyptic robots, or AGI speculation to understand that."

"But I agree that our immediate concerns should be in addressing the existing risks that derive from the rapid commercialization of generative AI," says Leslie. "It serves a positive purpose to sort of zoom our lens in and say, ‘Okay, well, what are the immediate concerns?’" .

Deep-learning pioneer Geoffrey Hinton quit Google and went public with his fears about AI in May

In his post, Gates notes that AI is already a threat in many fundamental areas of society, from elections to education to employment. Of course, such concerns aren’t news. What Gates wants to tell us is that although fear of AI is understandable, there's plenty that can be done to protect ourselves.

