The Morality of AI: Debating Machine Consciousness and Its Impending Impact
Category: Artificial Intelligence | Tuesday, July 18, 2023, 22:10 UTC

The debate over machine consciousness and AI sentience has taken on new urgency amid the rise of AI technology, posing a dilemma of whether human-AI interactions should be included in social ethical considerations. A solution might be found by better understanding the sentience of animals, a field of research which the European Union is currently funding.
Artificial intelligence has progressed so rapidly that even some of the scientists responsible for many key developments are troubled by the pace of change. Earlier this year, more than 300 professionals working in AI and other concerned public figures issued a blunt warning about the danger the technology poses, comparing the risk to that of pandemics or nuclear war.
Lurking just below the surface of these concerns is the question of machine consciousness. Even if there is "nobody home" inside today’s AIs, some researchers wonder if they may one day exhibit a glimmer of consciousness—or more. If that happens, it will raise a slew of moral and ethical concerns, says Jonathan Birch, a professor of philosophy at the London School of Economics and Political Science.
As AI technology leaps forward, ethical questions sparked by human-AI interactions have taken on new urgency. "We don’t know whether to bring them into our moral circle, or exclude them," said Birch. "We don’t know what the consequences will be. And I take that seriously as a genuine risk that we should start talking about. Not really because I think ChatGPT is in that category, but because I don’t know what’s going to happen in the next 10 or 20 years."
In the meantime, he says, we might do well to study other non-human minds—like those of animals. Birch leads the university’s Foundations of Animal Sentience project, a European Union-funded effort that "aims to try to make some progress on the big questions of animal sentience," as Birch put it. "How do we develop better methods for studying the conscious experiences of animals scientifically? And how can we put the emerging science of animal sentience to work, to design better policies, laws, and ways of caring for animals?"
Our interview was conducted over Zoom and by email, and has been edited for length and clarity.
(This article was originally published on Undark. Read the original article.)
Undark: There’s been ongoing debate over whether AI can be conscious, or sentient. And there seems to be a parallel question of whether AI can seem to be sentient. Why is that distinction so important?
Jonathan Birch: I think it’s a huge problem, and something that should make us quite afraid, actually. Even now, AI systems are quite capable of convincing their users of their sentience. We saw that last year with the case of Blake Lemoine, the Google engineer who became convinced that the system he was working on was sentient—and that’s just when the output is purely text, and when the user is a highly skilled AI expert.
So just imagine a situation where AI is able to control a human face and a human voice and the user is inexperienced. I think AI is already in the position where it can convince large numbers of people that it is a sentient being quite easily. And it’s a big problem, because I think we will start to see people campaigning for AI welfare, AI rights, and things like that.
And we won’t know what to do about this. Because what we’d like is a really strong knockdown argument that proves that the AI systems they’re talking about are not conscious. And we don’t have that. Our theoretical ideas are still quite weak at the moment. We don’t have a good handle on what sentience is.