AI and the American Public: Perceptions, Polls, and the Promise of Medicine
Category: Artificial Intelligence | May 22, 2023, 03:07 UTC

A recent Reuters poll revealed that more than half of Americans see AI as a potential future threat. Concerns about AI vary, from a lack of understanding of the technology to worries about the potential negative impacts of its use. AI encompasses a wide range of capabilities, from recommendation algorithms to modeling millions of proteins in medicine. People's fear and distrust of AI comes partly from a lack of understanding of it, which makes an accurate assessment of the technology and its implications essential for its responsible use.
AI is the talk of the town these days. But despite the technology's impressive accomplishments, or perhaps because of them, not all of that talk is positive. In February, a New York Times tech columnist wrote about his unsettling interaction with ChatGPT; in March, an open letter called for a moratorium on AI research; "godfather of AI" Geoffrey Hinton dramatically resigned from Google and warned about the dangers of AI; and just this week, OpenAI CEO Sam Altman testified before Congress, saying his "worst fear is we cause significant harm to the world" and encouraging legislation around the technology (though he also argued that generative AI should be treated differently, which would be convenient for his company).
It seems these warnings, along with all the other media coverage of the topic, have reached the American public loud and clear, and people don't quite know what to think, but many are getting nervous. A poll carried out last week by Reuters revealed that more than half of Americans believe AI poses a threat to humanity's future. The poll was conducted online between May 9 and May 15 with 4,415 adults participating, and the results were published yesterday.
More than two-thirds of respondents expressed concern about the possible negative impacts of AI, while 61 percent believe it could be a threat to civilization. "It's telling such a broad swath of Americans worry about the negative effects of AI," said Landon Klein, director of US policy at the Future of Life Institute, the organization behind the previously mentioned open letter. "We view the current moment similar to the beginning of the nuclear era, and we have the benefit of public perception that is consistent with the need to take action."
"One nebulous aspect of the poll, and of many of the headlines about AI we see on a daily basis, is how the technology is defined. What are we referring to when we say "AI"? The term encompasses everything from recommendation algorithms that serve up content on YouTube and Netflix, to large language models like ChatGPT, to models that can design incredibly complex protein architectures, to the Siri assistant built into many iPhones .
IBM's definition is simple: "a field which combines computer science and robust datasets to enable problem-solving." Google, meanwhile, defines it as "a set of technologies that enable computers to perform a variety of advanced functions, including the ability to see, understand and translate spoken and written language, analyze data, make recommendations, and more." It could be that people's fear and distrust of AI comes partly from a lack of understanding of it, and from a stronger focus on unsettling examples than positive ones.
The AI that can design complex proteins may help scientists discover stronger vaccines and other drugs, and could do so on a vastly accelerated timeline. In fact, biotechnology and medicine are two fields for which AI holds enormous promise, be it by modeling millions of proteins, coming up with artificial enzymes, powering brain implants that help disabled people communicate, or helping diagnose conditions like Alzheimer's.
Seba Alvarado, a cognitive engineer, described the current mood around AI as a "moral panic," noting that concerns and fears may be outstripping actual progress. An accurate assessment of the technology and its implications matters, because public perception shapes policy decisions around regulation, though it is of course not the only factor that does. Overall, AI's promise in the right settings and environments can be extraordinary, particularly in healthcare.
Its importance cannot be overstated, and promoting open and transparent conversations about the responsible use of AI will be essential in the coming years.