The Emerging Reality of Machine Consciousness

Category: Artificial Intelligence

tldr #

David Chalmers, a leading authority on consciousness, was invited to the Conference on Neural Information Processing Systems (NeurIPS) to speak about AI consciousness. He argued that large language models lack too many of the potential requisites for consciousness for us to believe they actually experience the world. Even so, he estimated the chances of developing conscious AI within the next 10 years to be above one in five. AI consciousness would mean nothing less than the potential for moral responsibility and sentience on the part of machines.


content #

David Chalmers was not expecting the invitation he received in September of last year. As a leading authority on consciousness, Chalmers regularly circles the world delivering talks at universities and academic meetings to rapt audiences of philosophers—the sort of people who might spend hours debating whether the world outside their own heads is real and then go blithely about the rest of their day. This latest request, though, came from a surprising source: the organizers of the Conference on Neural Information Processing Systems (NeurIPS), a yearly gathering of the brightest minds in artificial intelligence. Less than six months before the conference, an engineer named Blake Lemoine, then at Google, had gone public with his contention that LaMDA, one of the company’s AI systems, had achieved consciousness. Lemoine’s claims were quickly dismissed in the press, and he was summarily fired, but the genie would not return to the bottle quite so easily—especially after the release of ChatGPT in November 2022. Suddenly it was possible for anyone to carry on a sophisticated conversation with a polite, creative artificial agent.

At NeurIPS, Chalmers estimated the chance of developing conscious AI within a decade to be above one in five.

Chalmers was an eminently sensible choice to speak about AI consciousness. He’d earned his PhD in philosophy at an Indiana University AI lab, where he and his computer scientist colleagues spent their breaks debating whether machines might one day have minds. In his 1996 book, The Conscious Mind, he spent an entire chapter arguing that artificial consciousness was possible.

If he had been able to interact with systems like LaMDA and ChatGPT back in the ’90s, before anyone knew how such a thing might work, he would have thought there was a good chance they were conscious, Chalmers says. But when he stood before a crowd of NeurIPS attendees in a cavernous New Orleans convention hall, clad in his trademark leather jacket, he offered a different assessment. Yes, large language models—systems that have been trained on enormous corpora of text in order to mimic human writing as accurately as possible—are impressive. But, he said, they lack too many of the potential requisites for consciousness for us to believe that they actually experience the world.

At the breakneck pace of AI development, however, things can shift suddenly. For his mathematically minded audience, Chalmers got concrete: the chances of developing any conscious AI in the next 10 years were, he estimated, above one in five.

Many scientists argue that AI consciousness means the potential for moral responsibility and sentience on the part of machines.

Not many people dismissed his proposal as ridiculous, Chalmers says: "I mean, I’m sure some people had that reaction, but they weren’t the ones talking to me." Instead, he spent the next several days in conversation after conversation with AI experts who took the possibilities he’d described very seriously. Some came to Chalmers effervescent with enthusiasm at the concept of conscious machines. Others, though, were horrified at what he had described. If an AI were conscious, they argued—if it could look out at the world from its own personal perspective, not simply processing inputs but also experiencing them—then, perhaps, it could suffer.


AI consciousness isn’t just a devilishly tricky intellectual puzzle for computer scientists. It means nothing less than the potential for moral responsibility on the part of machines.

