Can We Detect Consciousness in Artificial Intelligence?

Category Neuroscience

tldr #

A preprint paper written by neuroscientists, philosophers, and computer scientists proposed a list of indicator properties of consciousness that could be used to determine whether an AI agent exhibits sentience. Assessing current AI systems against these indicators, which are drawn from leading neuroscientific theories of human consciousness, the authors concluded that no current AI system is conscious, although artificial consciousness might be achievable in the near future.


content #

Recently I had what amounted to a therapy session with ChatGPT. We talked about a recurring topic that I’ve obsessively inundated my friends with, so I thought I’d spare them the déjà vu. As expected, the AI’s responses were on point, sympathetic, and felt so utterly human.

As a tech writer, I know what’s happening under the hood: a swarm of digital synapses is trained on an internet’s worth of human-generated text to spit out plausible responses. Yet the interaction felt so real that I had to constantly remind myself I was chatting with code, not a conscious, empathetic being on the other end.

The authors of the preprint paper analysed different AI systems with the aim of gauging how likely it is that consciousness could arise in them artificially.

Or was I? With generative AI increasingly delivering seemingly human-like responses, it’s easy to emotionally assign a sort of "sentience" to the algorithm (and no, ChatGPT isn’t conscious). In 2022, Blake Lemoine at Google stirred up a media firestorm by proclaiming that one of the chatbots he worked on, LaMDA, was sentient; he was subsequently fired.

But most deep learning models are loosely based on the brain’s inner workings. AI agents are increasingly endowed with human-like decision-making algorithms. The idea that machine intelligence could become sentient one day no longer seems like science fiction.

Alan Turing devised the Turing test to probe a machine’s intelligence by whether it can hold a conversation that a human judge cannot distinguish from a human’s.

How could we tell if machine brains one day gained sentience? The answer may lie in our own brains.

A preprint paper authored by 19 neuroscientists, philosophers, and computer scientists, including Dr. Robert Long from the Center for AI Safety and Dr. Yoshua Bengio from the University of Montreal, argues that the neurobiology of consciousness may be our best bet. Rather than simply studying an AI agent’s behavior or responses—for example, during a chat—matching its responses to theories of human consciousness could provide a more objective ruler.

Multiple theories of the neurobiology of consciousness have been developed to better understand the properties that accompany its emergence.

It’s an out-of-the-box proposal, but one that makes sense. We know we are conscious regardless of the word’s definition, which is still unsettled. Theories of how consciousness emerges in the brain are plentiful, with multiple leading candidates still being tested in global head-to-head trials.

The authors didn’t subscribe to any single neurobiological theory of consciousness. Instead, they derived a checklist of "indicator properties" of consciousness based on multiple leading ideas. There isn’t a strict cutoff—say, meeting X number of criteria means an AI agent is conscious. Rather, the indicators make up a moving scale: the more criteria met, the more likely a sentient machine mind is.
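
The paper itself contains no scoring code, but the moving-scale idea can be sketched as a simple checklist tally. The indicator names and the consciousness_evidence helper below are illustrative stand-ins I’ve invented, not the authors’ actual rubric:

```python
# Hypothetical sketch of the "moving scale" of indicator properties.
# The indicator names and the scoring rule are illustrative stand-ins,
# not the rubric used by the preprint's authors.
from dataclasses import dataclass


@dataclass
class Indicator:
    name: str        # shorthand for a theory-derived property
    satisfied: bool  # does the AI system exhibit this property?


def consciousness_evidence(indicators: list[Indicator]) -> float:
    """Return the fraction of indicator properties the system satisfies.

    There is no strict cutoff; a higher fraction only means stronger
    evidence on the moving scale, not a verdict of sentience.
    """
    if not indicators:
        return 0.0
    met = sum(1 for ind in indicators if ind.satisfied)
    return met / len(indicators)


# Made-up assessment of a hypothetical chatbot:
checklist = [
    Indicator("recurrent processing", False),
    Indicator("global workspace broadcast", False),
    Indicator("unified agency", False),
]
print(f"evidence score: {consciousness_evidence(checklist):.2f}")  # 0.00
```

The point of the fraction is only to express "more criteria met, more likely"; the authors emphasize that no single threshold turns the score into a verdict.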

The indicators proposed by the authors of the preprint paper form a moving scale against which the likelihood of sentience in an AI agent can be judged.

Using the guidelines to test several recent AI systems, including ChatGPT and other chatbots, the team concluded that, for now, "no current AI systems are conscious."

However, "there are no obvious technical barriers to building AI systems that satisfy these indicators," they said. It’s possible that "conscious AI systems could realistically be built in the near future." .

Listening to an Artificial Brain

The results of the tests conducted by the authors of the preprint paper indicate that current AI systems do not meet the criteria for sentience.

Since Alan Turing’s famous imitation game in the 1950s, scientists have pondered how to prove whether a machine exhibits intelligence like a human’s.

Better known as the Turing test, the setup has a human judge converse with a machine and another human through a text interface and try to determine which of the two participants is the machine. The test has been only partially successful, and its efficacy is still questioned, because AI agents are often trained to pass as "human" while having little understanding of the conversation beyond sustaining the deception.
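
To make the setup concrete, here is a minimal sketch of one round of the imitation game. The human_reply and machine_reply functions are placeholders I made up; a real test would route the question to a person and to the AI system under evaluation:

```python
# Minimal sketch of one round of the imitation game. human_reply and
# machine_reply are placeholder stand-ins, not a real participant setup.
import random


def human_reply(question: str) -> str:
    return "I'd have to think about that for a moment."   # stand-in


def machine_reply(question: str) -> str:
    return "That's an interesting question to consider."  # stand-in


def imitation_game(question: str) -> bool:
    """The judge sees two anonymous text replies and guesses which one
    came from the machine; returns True if the guess is correct."""
    replies = [("human", human_reply(question)),
               ("machine", machine_reply(question))]
    random.shuffle(replies)  # hide which channel is which
    print(f"Judge asks: {question}")
    for label, (_, text) in zip("AB", replies):
        print(f"  Participant {label}: {text}")
    guess = input("Which participant is the machine (A/B)? ").strip().upper()
    machine_label = "A" if replies[0][0] == "machine" else "B"
    return guess == machine_label


# Example: imitation_game("What does rain smell like after a hot day?")
# The machine "passes" when judges do no better than chance over many rounds.
```

The weakness the preprint responds to is visible here: the judge only ever sees text, so a system optimized to imitate human replies can succeed without anything resembling understanding.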

The results of the tests showed that, although no current AI system is conscious, there are no obvious technical barriers to artificial consciousness, so it might be achievable in the near future.


