The Emergence of Large Language Models: Unpredictable or Mirage?

Category Science

tldr #

Large language models have shown apparent breakthrough behavior on certain tasks, leading to comparisons with phase transitions in physics. However, researchers at Stanford University argue that this behavior is neither unpredictable nor emergent, but rather an artifact of the metric chosen to measure performance. Rapid growth in parameter counts has brought real gains in capability, yet whether those gains appear smooth or sudden may depend on how they are scored rather than on the models' inner workings.


content #

Two years ago, in a project called the Beyond the Imitation Game benchmark, or BIG-bench, 450 researchers compiled a list of 204 tasks designed to test the capabilities of large language models, which power chatbots like ChatGPT. On most tasks, performance improved predictably and smoothly as the models scaled up — the larger the model, the better it got. But on other tasks, the jump in ability wasn't smooth: performance remained near zero for a while, then leaped. Other studies found similar leaps in ability.

In the BIG-bench study, the GPT-3 model had 175 billion parameters, while the LaMDA model had 137 billion.

The authors described this as "breakthrough" behavior; other researchers have likened it to a phase transition in physics, like when liquid water freezes into ice. In a paper published in August 2022, researchers noted that these behaviors are not only surprising but unpredictable, and that they should inform the evolving conversations around AI safety, potential and risk. They called the abilities "emergent," a word that describes collective behaviors that only appear once a system reaches a high level of complexity.

But things may not be so simple. A new paper by a trio of researchers at Stanford University posits that the sudden appearance of these abilities is just a consequence of the way researchers measure the LLM’s performance. The abilities, they argue, are neither unpredictable nor sudden. "The transition is much more predictable than people give it credit for," said Sanmi Koyejo, a computer scientist at Stanford and the paper’s senior author. "Strong claims of emergence have as much to do with the way we choose to measure as they do with what the models are doing."

We’re only now seeing and studying this behavior because of how large these models have become. Large language models train by analyzing enormous datasets of text — words from online sources including books, web searches and Wikipedia — and finding links between words that often appear together. The size is measured in terms of parameters, roughly analogous to all the ways that words can be connected. The more parameters, the more connections an LLM can find. GPT-2 had 1.5 billion parameters, while GPT-3.5, the LLM that powers ChatGPT, uses 350 billion. GPT-4, which debuted in March 2023 and now underlies Microsoft Copilot, reportedly uses 1.75 trillion.
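
As a rough intuition for what "finding links between words that often appear together" means, here is a toy sketch that counts bigram co-occurrences in a tiny corpus. This is only a conceptual simplification: real LLMs learn billions of neural-network weights rather than explicit counts, and the corpus below is invented for illustration.

```python
from collections import Counter

# Toy illustration of "finding links between words that often appear
# together": count how often each word follows another in a corpus.
# Invented corpus; real LLMs learn these relationships implicitly in
# billions of parameters rather than in an explicit count table.
corpus = "the cat sat on the mat the cat ate".split()

bigrams = Counter(zip(corpus, corpus[1:]))

# Estimated probability that "cat" follows "the".
total_after_the = sum(count for (first, _), count in bigrams.items() if first == "the")
print(bigrams[("the", "cat")] / total_after_the)  # 0.67: "the cat" in 2 of 3 cases
```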

That rapid growth has brought an astonishing surge in performance and efficacy, and no one is disputing that large enough LLMs can complete tasks that smaller models can’t, including ones for which they weren’t trained. The trio at Stanford who cast emergence as a "mirage" recognize that LLMs become more effective as they scale up; in fact, the added complexity of larger models should make it possible to get better at more difficult and diverse problems. But they argue that whether this improvement looks smooth and predictable or jagged and sharp results from the choice of metric — or even a paucity of test examples — rather than the model’s inner workings.
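
A minimal numerical sketch makes that argument concrete. Suppose a model's per-token accuracy improves smoothly as parameters grow; the sigmoid curve below is an assumption chosen purely for illustration, not data from any real model family. Under an all-or-nothing exact-match metric, where every token of a multi-token answer must be correct at once, the same smooth improvement looks like a sudden jump.

```python
import numpy as np

# Hypothetical scaling curve: per-token accuracy rises smoothly with
# model size (a sigmoid in log-parameters, assumed for illustration).
params = np.logspace(8, 12, 41)                              # 1e8 to 1e12
per_token = 1 / (1 + np.exp(-2 * (np.log10(params) - 10)))   # smooth

# Exact match on a 10-token answer: every token must be right at once.
answer_len = 10
exact_match = per_token ** answer_len                        # sharp-looking

for p, smooth, sharp in zip(params[::10], per_token[::10], exact_match[::10]):
    print(f"{p:.0e} params: per-token {smooth:.3f}, exact-match {sharp:.3f}")
```

In this simulation the per-token column climbs steadily from 0.018 to 0.982, while the exact-match column sits near zero for most of the scan and then shoots upward, even though nothing discontinuous happened to the underlying model.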

Three-digit addition offers an example. In the 2022 BIG-bench study, researchers reported that with fewer parameters, both GPT-3 and another LLM named LaMDA failed to accurately complete addition problems. Even though bigger models held the promise of improved performance, models below a certain size appeared blocked by an inability to cleanly solve a problem with a known solution.
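
To see how the scoring choice plays out on this exact task, consider a hypothetical grading sketch: an all-or-nothing exact-match score gives zero credit for an answer with two of three digits correct, while a per-digit score records the partial progress. The model outputs below are invented for illustration, not actual GPT-3 or LaMDA responses.

```python
# Hypothetical three-digit addition answers, scored two ways.
problems = [(217, 548), (603, 399), (125, 250)]
predictions = ["764", "1002", "385"]      # invented model outputs

for (a, b), pred in zip(problems, predictions):
    truth = str(a + b)
    exact = int(pred == truth)            # all-or-nothing exact match
    digits = sum(x == y for x, y in zip(pred, truth))
    partial = digits / max(len(truth), len(pred))
    print(f"{a}+{b}={truth}, model said {pred}: exact={exact}, per-digit={partial:.2f}")
```

Under the exact-match column this model scores 0, 1, 0; under the per-digit column it scores 0.67, 1.00, 0.67. Averaged across many problems, the first metric can hover near zero while the second climbs steadily.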

