Decoding the Misconceptions Around AI Chatbots
Technology · Saturday, Dec. 30, 2023, 08:43 UTC

Silicon Valley's latest AI chatbots have been surrounded by speculation about what they are and are not capable of. Multiple studies, however, have debunked common misperceptions about the chatbots' abilities and knowledge. AI chatbots possess no continuous memory, bodily experience or genuine understanding of the world; they merely interpret context through language models, and they cannot weigh the stakes of a decision the way humans do. They are far better at producing summaries and answering factual questions than at formulating solutions to open-ended problems.
Within four months of ChatGPT's launch on Nov. 30, 2022, most Americans had heard of the AI chatbot. Hype about – and fear of – the technology was at a fever pitch for much of 2023. OpenAI's ChatGPT, Google's Bard, Anthropic's Claude and Microsoft's Copilot are among the chatbots that use large language models to hold uncannily humanlike conversations. The experience of interacting with one of these chatbots, combined with Silicon Valley spin, can leave the impression that these technical marvels are conscious entities.
But the reality is considerably less magical or glamorous. The Conversation published several articles in 2023 that dispel key misperceptions about this latest generation of AI chatbots: that they know something about the world, that they can make decisions, that they are a replacement for search engines and that they operate independently of humans.

Bodiless know-nothings

Large-language-model-based chatbots seem to know a lot. You can ask them questions, and more often than not they answer correctly. Despite the occasional comically incorrect answer, the chatbots can interact with you much as people – who share your experience of being a living, breathing human – do. But these chatbots are sophisticated statistical machines that are extremely good at predicting the best sequence of words to respond with. Their "knowledge" of the world is actually human knowledge as reflected in the massive amount of human-generated text their underlying models are trained on.

Arizona State psychology researcher Arthur Glenberg and University of California, San Diego cognitive scientist Cameron Robert Jones explain how people's knowledge of the world depends as much on their bodies as on their brains. "People's understanding of a term like 'paper sandwich wrapper,' for example, includes the wrapper's appearance, its feel, its weight and, consequently, how we can use it: for wrapping a sandwich," they explained. This knowledge means people also intuitively know other ways of making use of a sandwich wrapper, such as an improvised means of covering your head in the rain. Not so with AI chatbots. "People understand how to make use of stuff in ways that are not captured in language-use statistics," they wrote.

Lack of judgment

ChatGPT and its cousins can also give the impression of having cognitive abilities – like understanding the concept of negation or making rational decisions – thanks to all the human language they've ingested. This impression has led cognitive scientists to test these AI chatbots to assess how they compare with humans in various ways. University of Southern California AI researcher Mayank Kejriwal tested large language models' understanding of expected gain, a measure of how well someone understands the stakes in a betting scenario. He found that the models bet randomly. "This is the case even when we give it a trick question like: If you toss a coin and it comes up heads, you win a diamond; if it comes up tails, you lose a car. Which would you take? The correct answer is heads, but the AI models chose tails about half the time," he wrote.

Summaries, not riddles

Chatbots can seemingly read your mind and help you with complex decisions – but only if you ask them in the right way. This is deeply unsatisfying for people hoping for an AI that can answer an open-ended query such as, "Should I move to Texas?" That's because the AI is really only capable of outputting the product of a calculation based on the inputs it receives. For instance, ask the AI, "What is the population of Texas?" and it can answer by drawing on Google's knowledge graph. But ask, "Should I move to Texas?" and the AI will be at a loss. The short code sketches below illustrate each of these points in turn.
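To make the "Bodiless know-nothings" point concrete, here is a toy sketch of the statistical prediction described above. It is a deliberately tiny bigram model over a made-up corpus – nothing like how a production chatbot works, since real models predict over far richer contexts – but the principle is the same: the output is statistics over training text, with no bodily experience grounding it.

```python
# A minimal sketch, not any production model: a bigram "language model"
# built from raw word-pair counts. It illustrates the claim above that a
# chatbot's "knowledge" is statistics over human-written text - the model
# predicts whichever word most often followed the previous one in its
# training data, with no body or world experience to ground it.
from collections import Counter, defaultdict

corpus = (
    "the wrapper covers the sandwich . "
    "the wrapper keeps the sandwich fresh . "
    "people wrap the sandwich in paper ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))       # 'sandwich' - the most frequent successor
print(predict_next("wrapper"))   # 'covers' (ties keep first-seen order)
print(predict_next("umbrella"))  # None - no statistics, nothing to say
```

Ask it about a word it has never seen and it has nothing to say; behind the counts there are only frequencies, not understanding.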
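The expected-gain question Kejriwal describes is simple arithmetic. The sketch below works it out with assumed dollar values – the study did not specify any, and the argument only needs the diamond to be a gain and the car a much larger loss.

```python
# A minimal sketch of the expected-gain question quoted above. The dollar
# values are illustrative assumptions, not figures from the study; the
# argument only needs the diamond to be a gain and the car a larger loss.
P_HEADS = P_TAILS = 0.5

outcomes = {
    "heads": 5_000,    # assumed value of winning the diamond
    "tails": -30_000,  # assumed cost of losing the car
}

# Expected gain of the gamble: each outcome weighted by its probability.
expected = P_HEADS * outcomes["heads"] + P_TAILS * outcomes["tails"]
print(f"expected gain of the bet: {expected:+,.0f}")  # -12,500 here

# "Which would you take?" A rational bettor prefers the higher-payoff
# outcome - heads, since any gain beats any loss. A model answering at
# random, as the tested models did, picks tails about half the time.
print("rational choice:", max(outcomes, key=outcomes.get))  # heads
```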
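Finally, the "Summaries, not riddles" distinction can be phrased as lookup versus open-ended judgment. The following sketch is purely hypothetical – KNOWLEDGE_GRAPH and answer() are illustrative names, not Google's actual knowledge graph API – but it shows why a factual query succeeds where "Should I move to Texas?" leaves nothing to calculate from.

```python
# A hypothetical sketch - KNOWLEDGE_GRAPH and answer() are illustrative
# names, not Google's actual knowledge graph API. A factual query maps
# onto a stored fact; an open-ended question leaves nothing to compute.
KNOWLEDGE_GRAPH = {
    # (entity, attribute) -> fact. The Texas figure is the U.S. Census
    # Bureau's 2022 estimate, included here purely for illustration.
    ("Texas", "population"): "about 30 million (2022 estimate)",
}

def answer(entity, attribute):
    """Return a stored fact if one exists; open-ended questions have no
    entry, so there are no inputs to calculate an answer from."""
    return KNOWLEDGE_GRAPH.get((entity, attribute))

print(answer("Texas", "population"))            # a retrievable fact
print(answer("Texas", "should I move there"))   # None - no fact to draw on
```

The gap is not one of phrasing: "Should I move?" depends on values and circumstances no stored fact captures, which is why such questions leave the AI at a loss.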