AI Models Mimicking Humans in Language Tasks
Category: Artificial Intelligence · Thursday, November 2, 2023, 18:34 UTC
Prairie dogs are anything but dogs. With a body resembling a Hershey’s Kiss and a highly sophisticated chirp for communication, they’re more hamster than golden retriever.
Humans immediately get that prairie dogs aren’t dogs in the usual sense. AI struggles.
Even as toddlers, we have an uncanny ability to turn what we learn about the world into concepts. With just a few examples, we form an idea of what makes a "dog" or what it means to "jump" or "skip." These concepts are effortlessly mixed and matched inside our heads, resulting in a toddler pointing at a prairie dog and screaming, "But that’s not a dog!"
Last week, a team from New York University created an AI model that mimics a toddler’s ability to generalize language learning. In a nutshell, generalization is a sort of flexible thinking that lets us use newly learned words in new contexts—like an older millennial struggling to catch up with Gen Z lingo.
When pitted against adult humans in a language generalization task, the model matched their performance. It also beat GPT-4, the AI model behind ChatGPT.
The secret sauce was surprisingly human. The new neural network was trained to reproduce errors from human test results and learn from them.
"For 35 years, researchers in cognitive science, artificial intelligence, linguistics, and philosophy have been debating whether neural networks can achieve human-like systematic generalization," said study author Dr. Brenden Lake. "We have shown, for the first time, that a generic neural network can mimic or exceed human systematic generalization in a head-to-head comparison." .
A Brainy Feud
Most AI models rely on deep learning, a method loosely based on the brain.
The idea is simple. Artificial neurons interconnect to form neural networks. By changing the strengths of connections between artificial neurons, neural networks can learn many tasks, such as driving autonomous taxis or screening chemicals for drug discovery.
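To make that mechanism concrete, here is a minimal sketch (plain NumPy, not code from the study) of what "changing the strengths of connections" means in practice: a tiny network whose connection strengths, the weight matrices W1 and W2, are nudged by gradient descent until it solves a toy task that a single neuron cannot.

```python
# A toy network learning XOR by adjusting its connection strengths.
# Illustrative sketch only; not the study's code.
import numpy as np

rng = np.random.default_rng(0)

# Inputs and targets for XOR, a task that needs a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# The "connections": weights and biases between layers of artificial neurons.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass: signals flow through the connections.
    h = sigmoid(X @ W1 + b1)      # hidden-layer activations
    out = sigmoid(h @ W2 + b2)    # network's prediction

    # Backward pass: nudge every connection to shrink the error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # converges toward [0, 1, 1, 0]
```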
The brain’s neural networks, however, are far more powerful than their artificial counterparts. Their connections rapidly adapt to ever-changing environments and stitch together concepts from individual experiences and memories. We can, for example, easily identify a wild donkey crossing the road and know when to hit the brakes; a robot car may falter without wild-donkey-specific training.
The pain point is generalization. For example: What is a road? Is it a paved highway, a rugged dirt path, or a hiking trail surrounded by shrubbery?
Back in the 1980s, cognitive scientists Jerry Fodor and Zenon Pylyshyn famously proposed that artificial neural networks aren’t capable of understanding concepts—such as a "road"—much less flexibly using them to navigate new scenarios.
The scientists behind the new study took the challenge head-on. Their solution? An artificial neural network that’s fine-tuned on human reactions.
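The study’s actual model is far more sophisticated, but the training signal can be sketched in a few lines. Assuming a simple classifier as a stand-in (with synthetic data standing in for real trials), the twist is that the loss is computed against what humans actually answered, mistakes included, rather than against the correct labels, so the network inherits human-like error patterns.

```python
# Minimal sketch of fine-tuning on human responses (stand-in model and
# synthetic data; the study's architecture and dataset differ).
import numpy as np

rng = np.random.default_rng(1)

n_trials, n_features, n_answers = 200, 16, 4
X = rng.normal(size=(n_trials, n_features))        # encodings of each trial
human = rng.integers(0, n_answers, size=n_trials)  # humans' answers, errors and all

W = np.zeros((n_features, n_answers))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr = 0.1
for _ in range(500):
    p = softmax(X @ W)
    # Cross-entropy gradient toward the *human* response, not a gold label.
    grad = p.copy()
    grad[np.arange(n_trials), human] -= 1.0
    W -= lr * (X.T @ grad) / n_trials
```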
Man With Machine
As a baseline, the team first asked 25 people to learn a new made-up language. Unlike an existing language, an invented one carries no prior associations that could bias the participants.
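For flavor, here is a hypothetical miniature of such a language: a few nonsense words that name colors, plus one function word that operates on them. The vocabulary and rule below are illustrative inventions, not the study’s actual grammar.

```python
# A toy made-up language (illustrative words and rules, not the study's).
PRIMITIVES = {"dax": "RED", "wif": "GREEN", "lug": "BLUE"}

def interpret(utterance):
    """Translate a pseudo-word sequence into a sequence of color tokens."""
    out = []
    for word in utterance.split():
        if word in PRIMITIVES:
            out.append(PRIMITIVES[word])
        elif word == "fep":          # function word: triple the previous output
            out.extend([out[-1]] * 2)
        else:
            raise ValueError(f"unknown word: {word}")
    return out

# Grasping the rule lets a learner generalize to unseen combinations:
print(interpret("dax fep"))  # ['RED', 'RED', 'RED']
print(interpret("lug fep"))  # ['BLUE', 'BLUE', 'BLUE'], even if never seen before
```

A participant who infers the rule behind "fep" can apply it to any primitive, which is exactly the kind of systematic generalization being tested.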
The research went "beyond classic work that relied primarily on thought experiments" to tap into human linguistic expertise: instead of asking participants which answers are correct, the team asked which of two answers is better.