Can Artificial Intelligence Think Like Humans?
Category: Computer Science · Monday, November 6, 2023, 23:19 UTC

Human brains have two processing modes: System 1 (fast) and System 2 (slow) thinking. Researchers gave OpenAI LLMs a battery of tests designed to prompt quick, System 1 responses and to reveal cognitive biases. Later versions, such as GPT-3, proved capable of more strategic, careful problem-solving, suggesting an ability to "slow down" in much the way System 2 operates in the human brain.
When presented with a problem, your brain has two ways to proceed: quickly and intuitively or slowly and methodically. These two types of processing are known as System 1 and System 2, or, as the Nobel Prize-winning psychologist Daniel Kahneman memorably described them, "fast" and "slow" thinking.

Large language models like ChatGPT move fast by default. Ask them a question and they will spit out an answer, not necessarily the correct one, suggesting that they are capable of fast, System 1-type processing. Yet as these models evolve, can they slow down and approach problems in steps, avoiding the inaccuracies that come with rapid responses?
In a new paper published in Nature Computational Science, Michal Kosinski, a professor of organizational behavior at Stanford Graduate School of Business, finds that they can—and that they can outperform humans in basic tests of reasoning and decision-making.
Kosinski and his two co-authors, philosopher Thilo Hagendorff and psychologist Sarah Fabi, presented 10 generations of OpenAI LLMs with a battery of tasks designed to prompt quick System 1 responses. The team was initially interested in whether the LLMs would exhibit cognitive biases like those that trip up people when they rely on automatic thinking.
They observed that early models like GPT-1 and GPT-2 "couldn't really understand what was going on," Kosinski says. As the tests increased in complexity, their responses "were very System 1-like" and "very similar to responses that humans would have," he says.
It wasn't unexpected that LLMs, which are designed to predict strings of text, could not reason on their own. "Those models do not have internal reasoning loops," Kosinski says. "They cannot just internally slow down themselves and say, 'Let me think about this problem; let me analyze assumptions.' The only thing they can do is intuit the next word in a sentence."
However, the researchers found that later versions of GPT and ChatGPT could engage in more strategic, careful problem-solving in response to prompts. Kosinski says he was surprised by the emergence of this System 2-like processing. "Suddenly, GPT-3 becomes able, from one second to another, without any retraining, without growing any new neural connections, to solve this task," he says. "It shows that those models can learn immediately, like humans."
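To make "in response to prompts" concrete, here is a minimal sketch of the two prompting styles, assuming the current OpenAI Python SDK; the model name, the prompt wording, and the classic bat-and-ball question are illustrative stand-ins, not the paper's actual materials.

```python
# Illustrative sketch: the same question asked two ways with the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more than "
    "the ball. How much does the ball cost?"
)

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# System 1-style: invite a fast, direct answer.
print(ask(question + " Answer with just the amount."))

# System 2-style: invite the model to slow down and reason in steps.
print(ask(question + " Work through the problem step by step before answering."))
```

The intuitive answer is 10 cents; the correct answer is 5 cents. The second prompt is one simple way of giving a model the opportunity to slow down.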
One of the problems the researchers gave to the GPT models was a cognitive reflection test: "Every day, the number of lilies growing in a lake doubles. If it takes 10 days for the lake to be completely covered, how many days does it take for half of the lake to be covered?" This kind of problem requires reasoning rather than intuition. Getting the correct answer, nine days in this case, takes conscious consideration: since coverage doubles each day, the lake must be half covered on the day before it is fully covered.
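As a quick check of that answer (an illustrative calculation, not something from the paper), the sketch below works backward from the fully covered lake: if coverage doubles every day and the lake is full on day 10, it must have been half covered the day before.

```python
# Illustrative check: fraction of the lake covered on each day, assuming
# coverage doubles daily and reaches 100% on day 10.
def coverage_on_day(day: int, full_day: int = 10) -> float:
    """Fraction of the lake covered on `day`."""
    return 0.5 ** (full_day - day)

for day in range(7, 11):
    print(f"Day {day}: {coverage_on_day(day):.1%} covered")
# Day 7: 12.5% covered
# Day 8: 25.0% covered
# Day 9: 50.0% covered   <- half covered on day nine
# Day 10: 100.0% covered
```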
Humans and artificial intelligences can both suffer from cognitive biases that lead to incorrect responses, including the temptation to offer a rapid, intuitive answer when a more deliberate, reasoned approach is called for. In tests of basic reasoning and decision-making, artificial intelligence has shown that it can succeed, even outperforming humans, when given the opportunity to slow down and ponder the problem, much as System 2 operates in the human brain.