Can AI Really Think and Understand?
Category Artificial Intelligence Tuesday - May 23 2023, 15:35 UTC
AI systems can produce content that seems as though it was written by a person, but they cannot think or understand. Alan Turing's test measures only behavior, not conscious understanding, and neuroscientist Christof Koch's long-standing bet shows that scientists have yet to pin down the "neural correlates of consciousness," let alone connect them to AI's capabilities. Therefore, AI cannot think or understand.
There has been shock around the world at the rapid rate of progress with ChatGPT and other artificial intelligence systems built on what are known as large language models (LLMs). These systems can produce text that seems to display thought, understanding, and even creativity. But can they really think and understand? This is not a question that can be answered by technological advance alone; careful philosophical analysis and argument tell us the answer is no. And without working through these philosophical issues, we will never fully comprehend the dangers and benefits of the AI revolution.
In 1950, the father of modern computing, Alan Turing, published a paper that laid out a way of determining whether a computer thinks. This is now called "the Turing test." Turing imagined a human being engaged in conversation with two interlocutors hidden from view: one another human being, the other a computer. The game is to work out which is which. If the computer can fool enough judges in a five-minute conversation - Turing's benchmark was that an average interrogator would have no more than a 70 percent chance of making the right identification - the computer passes the test. Would passing the Turing test - something that now seems imminent - show that an AI has achieved thought and understanding?
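As a rough illustration only, the pass criterion described above can be written in a few lines of Python. The `Verdict` record, the judge names, and the example numbers are hypothetical; the 70 percent ceiling is taken from Turing's 1950 prediction about an average interrogator after five minutes of questioning.

```python
# Minimal sketch of a Turing-style pass criterion, under the assumptions above.
from dataclasses import dataclass


@dataclass
class Verdict:
    judge: str
    identified_machine_correctly: bool  # True if the judge spotted the computer


def passes_turing_benchmark(verdicts: list[Verdict], max_correct_rate: float = 0.70) -> bool:
    """Return True if judges identify the machine no more often than the benchmark allows."""
    if not verdicts:
        return False
    correct = sum(v.identified_machine_correctly for v in verdicts)
    return correct / len(verdicts) <= max_correct_rate


# Example: 10 judges, 6 of whom correctly spot the machine -> 60% correct, under the 70% ceiling.
verdicts = [Verdict(f"judge{i}", i < 6) for i in range(10)]
print(passes_turing_benchmark(verdicts))  # True
```

Note that this captures only the behavioral benchmark; as the article goes on to argue, meeting it says nothing by itself about thought or understanding.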
Turing dismissed this question as hopelessly vague, and replaced it with a pragmatic definition of "thought," whereby to think just means passing the test. Turing was wrong, however, when he said the only clear notion of "understanding" is the purely behavioral one of passing his test. Although this way of thinking now dominates cognitive science, there is also a clear, everyday notion of "understanding" that's tied to consciousness. To understand in this sense is to consciously grasp some truth about reality.
In 1997, the Deep Blue AI beat chess grandmaster Garry Kasparov. On a purely behavioral conception of understanding, Deep Blue had knowledge of chess strategy that surpassed any human being's. But it was not conscious: it didn't have any feelings or experiences.
Humans consciously understand the rules of chess and the rationale of a strategy. Deep Blue, in contrast, was an unfeeling mechanism that had been trained to perform well at the game. Likewise, ChatGPT is an unfeeling mechanism that has been trained on huge amounts of human-made data to generate content that seems like it was written by a person. It doesn't consciously understand the meaning of the words it's spitting out. If "thought" means the act of conscious reflection, then ChatGPT has no thoughts about anything.
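To make that point concrete, the sketch below shows the bare statistical step an LLM repeats when generating text: scoring candidate next tokens and sampling one, with no model of what the words refer to. The tiny vocabulary, the invented logits, and the `sample_next_token` helper are hypothetical simplifications, not ChatGPT's actual architecture, which derives its scores from billions of learned parameters.

```python
# Toy sketch of next-token sampling, the core loop behind LLM text generation.
# The vocabulary and logits are invented for illustration; the step itself is
# just turning scores into probabilities and picking a token.
import math
import random


def softmax(logits: list[float]) -> list[float]:
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]


def sample_next_token(vocab: list[str], logits: list[float]) -> str:
    """Sample one token according to the probabilities implied by the logits."""
    return random.choices(vocab, weights=softmax(logits), k=1)[0]


vocab = ["chess", "strategy", "feeling", "understands"]
logits = [2.1, 1.3, 0.2, -0.5]  # hypothetical scores for the next word
print(sample_next_token(vocab, logits))
```

However fluent the output of a real system, nothing in this loop involves consciously grasping a truth about reality; it is pattern continuation over tokens.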
How can I be so sure that ChatGPT isn't conscious? In the 1990s, neuroscientist Christof Koch bet philosopher David Chalmers a case of fine wine that scientists would have entirely pinned down the "neural correlates of consciousness" in 25 years. By this, he meant they would have identified the forms of brain activity necessary and sufficient for conscious experience. It's about time Koch paid up, as there is zero consensus that this has happened. This is because consciousness can't be observed by looking inside your head. In their attempts to find a connection between brain activity and experience, neuroscientists must rely on indirect methods, such as looking for patterns in behavior that might suggest the presence of consciousness.