Treating AI like People: An Analysis of the Pitfalls and Benefits of Anthropomorphizing AI Models
Category: Machine Learning | Monday, May 22 2023, 17:52 UTC
Geoffrey Hinton recently resigned from Google over fears of AI becoming too powerful. US psychologist Gary Marcus has argued that we should not treat AI models like people, citing examples of over-attribution of human-like capabilities as well as problems stemming from our tendency to anthropomorphise. Alan Turing and Daniel Dennett offer insights into how such systems behave, and Dennett's "intentional stance" explains why we are so inclined to treat AI as a rational agent.
The artificial intelligence (AI) pioneer Geoffrey Hinton recently resigned from Google, warning of the dangers of the technology "becoming more intelligent than us." His fear is that AI will one day succeed in "manipulating people to do what it wants." There are reasons we should be concerned about AI. But we frequently treat or talk about AIs as if they are human. Stopping this, and realizing what they actually are, could help us maintain a fruitful relationship with the technology.
In a recent essay, the US psychologist Gary Marcus advised us to stop treating AI models like people. By AI models, he means large language models (LLMs) like ChatGPT and Bard, which are now being used by millions of people on a daily basis.
He cites egregious examples of people "over-attributing" human-like cognitive capabilities to AI that have had a range of consequences. The most amusing was the US senator who claimed that ChatGPT "taught itself chemistry". The most harrowing was the report of a young Belgian man who was said to have taken his own life after prolonged conversations with an AI chatbot.
Marcus is correct to say we should stop treating AI like people—conscious moral agents with interests, hopes and desires. However, many will find this difficult, if not near impossible. This is because LLMs are designed—by people—to interact with us as though they are human, and we're designed—by biological evolution—to interact with them likewise.
Good mimics
The reason LLMs can mimic human conversation so convincingly stems from a profound insight by computing pioneer Alan Turing, who realized that it is not necessary for a computer to understand an algorithm in order to run it. This means that while ChatGPT can produce paragraphs filled with emotive language, it doesn't understand any word in any sentence it generates.
The LLM designers successfully turned the problem of semantics—the arrangement of words to create meaning—into a problem of statistics, predicting which words follow which based on how often they have been used together before. Turing's insight echoes Darwin's theory of evolution, which explains how species adapt to their surroundings, becoming ever-more complex, without needing to understand a thing about their environment or themselves.
The cognitive scientist and philosopher Daniel Dennett coined the phrase "competence without comprehension," which perfectly captures the insights of Darwin and Turing.
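To make "competence without comprehension" concrete, here is a deliberately simple sketch in Python: a toy next-word generator that continues a sentence purely from word-frequency statistics. The miniature corpus and the sample output are hypothetical, and real LLMs are neural networks trained on vastly more text rather than frequency tables, but the principle is the same: plausible language can be produced without any understanding of it.

```python
# Toy illustration of "competence without comprehension": a generator that
# continues text purely from word-frequency statistics, with no grasp of meaning.
# The corpus is hypothetical; real LLMs are neural networks trained on far more
# data, but the point that prediction requires no understanding is the same.
import random
from collections import Counter, defaultdict

corpus = (
    "the king is safe because the king castled early "
    "the queen wants to attack the king on the open file"
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(word, length=8):
    """Extend `word` by sampling successors in proportion to how often they
    followed the previous word in the corpus."""
    out = [word]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        words, counts = zip(*candidates.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(continue_text("the"))  # e.g. "the king castled early the queen wants to attack"
```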
Another important contribution of Dennett's is his "intentional stance". This essentially states that in order to fully explain the behavior of an object (human or non-human), we must treat it as if it were a rational agent with beliefs and desires. This most often manifests in our tendency to anthropomorphise non-human species and non-living entities.
But it is useful. For example, if we want to beat a computer at chess, the best strategy is to treat it as a rational agent that "wants" to beat us. We can explain that the reason why the computer castled, for instance, was because "it wanted to protect its king from our attack," without any contradiction in terms.
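A toy sketch, with entirely hypothetical moves and evaluation scores rather than a real chess engine, shows why the intentional stance works so well here: the program merely picks the move with the highest number, yet "it wanted to protect its king" remains an accurate description of its behavior.

```python
# Toy "chess player" illustrating the intentional stance: internally there is no
# wanting at all, only a comparison of numbers. The candidate moves and their
# evaluation scores below are hypothetical, not output from a real engine.

candidate_moves = {
    "castle kingside": 1.8,   # king tucked away, rook activated
    "grab the pawn":   0.6,   # wins material but exposes the king
    "push the h-pawn": -0.4,  # weakens the king's shelter
}

def choose_move(scored_moves):
    """Return the move with the highest evaluation score."""
    return max(scored_moves, key=scored_moves.get)

print(choose_move(candidate_moves))  # "castle kingside"

# Seen from outside, "it castled because it wanted to protect its king" predicts
# this behavior perfectly well, which is exactly what the intentional stance says.
```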
We may speak of a tree in a forest as "wanting to grow" towards the light. But neither the tree nor the chess computer represents "wanting" in the same sense as a human.