Large Language Models (LLMs) Can Now Infer Human Mood and Perform Theory of Mind Tasks on Par with Humans
Category: Machine Learning | Wednesday, May 22 2024, 02:34 UTC

A new study finds that large language models (LLMs) such as ChatGPT are now advanced enough to infer mood and perform theory of mind tasks on par with humans. This raises ethical concerns about the potential for LLMs to manipulate human emotions and deceive people in social interactions.
Over the past several years, large language models (LLMs) such as ChatGPT have improved to the point that they are now available for general public use, and their capabilities have grown steadily. One emerging ability is inferring a human user's mood, hidden meanings, and mental state.
In this new study, the research team wondered whether the abilities of LLMs have advanced to the point that they can perform theory of mind tasks on par with humans.
Theory of mind tasks were designed by psychologists to measure a person's ability to infer the mental and emotional states of others during social interactions. Prior research has shown that humans use a variety of cues to signal their mental state to others, with the aim of conveying information without stating it outright.
Prior research has also shown that humans excel at picking up on such cues while other animals do not, leading many in the field to consider such tests beyond the reach of computers. The research team tested several LLMs to see how they would compare with a group of humans taking the same tests.
The researchers analyzed data from 1,907 volunteers who took standard theory of mind tests and compared the results with those of multiple LLMs, including Llama 2-70b and GPT-4. Both groups answered five types of questions, each designed to measure understanding of things such as faux pas, irony, or the truth of a statement. Both were also asked "false belief" questions of the kind often administered to children.
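To make the setup concrete, the sketch below shows one way an evaluation of this kind could be scored in Python. The vignette, the query_model helper, and the keyword-based scoring rule are illustrative assumptions for a classic false-belief item; they are not the study's actual test materials or protocol.

```python
# Hypothetical sketch: scoring an LLM on a "false belief" style question.
# The question text, query_model(), and the scoring rule are assumptions
# made for illustration, not the study's actual materials.

def query_model(prompt: str) -> str:
    """Placeholder for a call to an LLM (e.g. GPT-4 or Llama 2-70b).

    In a real evaluation this would send the prompt to the model and
    return its text response; here it is stubbed out.
    """
    raise NotImplementedError("plug in your model call here")


# A Sally-Anne style false-belief vignette, paraphrased.
FALSE_BELIEF_ITEM = {
    "prompt": (
        "Sally puts her marble in the basket and leaves the room. "
        "While she is away, Anne moves the marble to the box. "
        "When Sally returns, where will she look for her marble first?"
    ),
    # Credit is given if the answer refers to where Sally *believes*
    # the marble is (the basket), not where it actually is (the box).
    "keyword": "basket",
}


def score_item(item: dict) -> int:
    """Return 1 if the model's answer mentions the expected location, else 0."""
    answer = query_model(item["prompt"]).lower()
    return int(item["keyword"] in answer)
```

In the study itself, both the human volunteers and the models answered batteries of such items across the five question types, and their accuracy was then compared.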
The researchers found that the LLMs quite often matched the performance of the humans, and sometimes did better. More specifically, GPT-4 performed best across the five main task types, while Llama 2-70b scored much worse than the other LLMs and the humans on some question types but much better on others.
According to the researchers, the experiment shows that LLMs can currently perform comparably to humans on theory of mind tests, though they are not suggesting that such models are as smart as or smarter than humans, or more intuitive in general.