The Unsettling Dangers of AI: Pioneers Issue Warnings on Artificial Intelligence
Category: Machine Learning | Saturday, May 6, 2023, 21:49 UTC
Computer scientists who helped build the foundations of today's artificial intelligence technology are warning of its dangers, though AI pioneers Geoffrey Hinton and Yoshua Bengio differ on how grave the risks are and what should be done about them. With little regulation in place, some are concerned that talk of potential future dangers is distracting from the harms already being caused by largely unregulated tech products.
Computer scientists who helped build the foundations of today's artificial intelligence technology are warning of its dangers, but that doesn't mean they agree on what those dangers are or how to prevent them. Humanity's survival is threatened when "smart things can outsmart us," so-called Godfather of AI Geoffrey Hinton said at a conference Wednesday at the Massachusetts Institute of Technology.
"It may keep us around for a while to keep the power stations running," Hinton said. "But after that, maybe not."
After retiring from Google so he could speak more freely, the 75-year-old Hinton said he's recently changed his views about the reasoning capabilities of the computer systems he's spent a lifetime researching.
"These things will have learned from us, by reading all the novels that ever were and everything Machiavelli ever wrote, how to manipulate people," Hinton said, addressing the crowd attending MIT Technology Review's EmTech Digital conference from his home via video. "Even if they can't directly pull levers, they can certainly get us to pull levers."
"I wish I had a nice simple solution I could push, but I don't," he added. "I'm not sure there is a solution."
Fellow AI pioneer Yoshua Bengio, co-winner with Hinton of the Turing Award, computer science's top prize, told The Associated Press on Wednesday that he's "pretty much aligned" with Hinton's concerns brought on by chatbots such as ChatGPT and related technology, but worries that simply saying "We're doomed" is not going to help.
"The main difference, I would say, is he's kind of a pessimistic person, and I'm more on the optimistic side," said Bengio, a professor at the University of Montreal. "I do think that the dangers—the short-term ones, the long-term ones—are very serious and need to be taken seriously by not just a few researchers but governments and the population."
There are plenty of signs that governments are listening. The White House has called in the CEOs of Google, Microsoft and ChatGPT-maker OpenAI to meet Thursday with Vice President Kamala Harris in what's being described by officials as a frank discussion on how to mitigate both the near-term and long-term risks of their technology. European lawmakers are also accelerating negotiations to pass sweeping new AI rules.
But all the talk of the most dire future dangers has some worried that hype around superhuman machines—which don't exist—is distracting from attempts to set practical safeguards on current AI products that are largely unregulated and have been shown to cause real-world harms.
Margaret Mitchell, a former leader on Google's AI ethics team, said she's upset that Hinton didn't speak out during his decade in a position of power at Google, especially after the 2020 ouster of prominent Black scientist Timnit Gebru, who had studied the harms of large language models before they were widely commercialized into products such as ChatGPT and Google's Bard.
"It's a privilege that he gets to jump from the realities of the propagation of discrimination now, the propagation of hate language, the toxicity and nonconsensual pornography of women, all of these issues that are actively harming people who are marginalized in tech," said Mitchell, who was also forced out of Google in the aftermath of Gebru's departure. "He's skipping over all of those thorns and talking about the nightmares of the future."