The Neuronal Revolution in AI: Where We Are Now
Science Saturday · May 13, 2023, 09:55 UTC

AI has come a long way since the 19th century and is now capable of doing things that we don't know how to order it to do. This is because our machines now learn from experience and rely on data-driven decision-making. We can reduce our anxiety by understanding that intelligence is not exclusively a human ability and that AI will not necessarily evolve towards some form of consciousness.
A machine can only "do whatever we know how to order it to perform," wrote the 19th-century computing pioneer Ada Lovelace. This reassuring statement was made in relation to Charles Babbage's description of the first mechanical computer. Lady Lovelace could not have known that in 2016 a program called AlphaGo, designed to play and improve at the board game Go, would not only defeat all of its creators, but do so in ways they could not explain.
In 2023, the AI chatbot ChatGPT is taking this to another level, holding conversations in multiple languages, solving riddles and even passing legal and medical exams. Our machines are now able to do things that we, their makers, do not know "how to order them to do".
This has provoked both excitement and concern about the potential of this technology. Our anxiety comes from not knowing what to expect from these new machines, both in terms of their immediate behavior and of their future evolution.
We can make some sense of these machines, and of the risks they pose, if we consider that all their successes, and most of their problems, come directly from the particular recipe we are following to create them.
The reason why machines are now able to do things that we, their makers, do not fully understand is because they have become capable of learning from experience. AlphaGo became so good by playing more games of Go than a human could fit into a lifetime. Likewise, no human could read as many books as ChatGPT has absorbed.
--- Reducing anxiety ---
It's important to understand that machines have become intelligent without thinking in a human way. This realization alone can greatly reduce confusion, and therefore anxiety.
Intelligence is not exclusively a human ability, as any biologist will tell you, and our specific brand of it is neither its pinnacle nor its destination. It may be difficult to accept for some, but intelligence has more to do with chickens crossing the road safely than with writing poetry.
In other words, we should not necessarily expect machine intelligence to evolve towards some form of consciousness. Intelligence is the ability to do the right thing in unfamiliar situations, and this can be found in machines, for example those that recommend a new book to a user.
If we want to understand how to handle AI, we can return to a crisis that hit the industry in the late 1980s, when many researchers were still trying to mimic what we assumed humans do. For example, they were trying to work out the rules of language or of human reasoning, so that these could be programmed into machines.
That didn't work, so they ended up taking some shortcuts. This move might well turn out to be one of the most consequential decisions in our history.
--- Fork in the road ---
The first shortcut was to rely on making decisions based on statistical patterns found in data. This removed the need to actually understand the complex phenomena that we wanted the machines to emulate, such as language. The auto-complete feature in your messaging app can guess the next word without understanding your goals.
While others had similar ideas before, the first to make this method really work, and stick, was probably Frederick Jelinek at IBM, who invented "statistical language models", the ancestors of all GPTs, while working on machine translation.
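The idea can be made concrete with a toy example. The sketch below is not Jelinek's actual system or any real auto-complete feature; it is a minimal, made-up illustration (a three-sentence corpus and a bigram count) of how a machine can "guess" the next word from statistical patterns alone, with no grasp of meaning.

```python
# A minimal sketch of a statistical language model: a bigram model that
# predicts the next word purely from co-occurrence counts in a tiny,
# made-up corpus. Purely illustrative, not a real system.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the cat chased the mouse . "
    "the dog sat on the rug ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the continuation of `word` seen most often in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # 'cat' -- the most frequent word after 'the'
print(predict_next("sat"))  # 'on'
```

Real language models work over vastly larger corpora and much longer contexts, but the principle is the same: prediction from statistical patterns in data, without understanding.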
In the early 2000s, another significant contribution to the state of AI was made by Geoffrey Hinton, Yann LeCun and Yoshua Bengio, all part of what would eventually become known as the "neuronal revolution". These three pioneers developed and pushed the technique of "deep learning", based on layers of artificial neurons that react to subtly different inputs in a way similar to biological neurons.
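In very rough outline, a "deep" network is just such layers stacked on top of one another. The sketch below is a purely illustrative toy, with arbitrary sizes and random, untrained weights: each layer computes weighted sums of its inputs and passes them through a simple threshold-like function, and learning would consist of adjusting those weights from data.

```python
# A toy sketch of layers of artificial neurons in the spirit of deep learning.
# Weights are random and untrained; sizes are arbitrary examples.
import numpy as np

rng = np.random.default_rng(0)

def dense_layer(inputs, weights, biases):
    """One layer of neurons: weighted sums followed by a ReLU nonlinearity."""
    return np.maximum(0.0, inputs @ weights + biases)

# Three stacked layers turn a 4-number input into a single output score.
x = rng.normal(size=4)                        # an arbitrary input vector
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
w3, b3 = rng.normal(size=(8, 1)), np.zeros(1)

h1 = dense_layer(x, w1, b1)   # first layer of neurons reacts to the input
h2 = dense_layer(h1, w2, b2)  # second layer reacts to the first layer's output
y = h2 @ w3 + b3              # final score; training would adjust all weights

print(y)
```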
This led AI development firmly in the direction of data-driven systems, and away from the strategy of "understanding" the data being analysed.