A New Approach to Computation Reimagines Artificial Intelligence

Category Neuroscience

tldr #

Artificial neural networks (ANNs) have been remarkably successful, but they come with drawbacks, such as a lack of transparency and a heavy processing and storage burden. Hyperdimensional computing could be a radically different approach, built on fixed-length vectors of random numbers and algebraic operations on them. Such systems can be more efficient and robust, and could even help explain how our own brains work.

content #

Despite the wild success of ChatGPT and other large language models, the artificial neural networks (ANNs) that underpin these systems might be on the wrong track. For one, ANNs are "super power-hungry," said Cornelia Fermüller, a computer scientist at the University of Maryland. "And the other issue is [their] lack of transparency." Such systems are so complicated that no one truly understands what they’re doing, or why they work so well.

The current structure of ANNs is based on individual artificial neurons

This, in turn, makes it almost impossible to get them to reason by analogy, which is what humans do — using symbols for objects, ideas and the relationships between them. Such shortcomings likely stem from the current structure of ANNs and their building blocks: individual artificial neurons. Each neuron receives inputs, performs computations and produces outputs. Modern ANNs are elaborate networks of these computational units, trained to do specific tasks.

This new approach to computation is known as hyperdimensional computing

Yet the limitations of ANNs have long been obvious. Consider, for example, an ANN that tells circles and squares apart. One way to do it is to have two neurons in its output layer, one that indicates a circle and one that indicates a square. If you want your ANN to also discern the shape’s color — blue or red — you’ll need four output neurons: one each for blue circle, blue square, red circle and red square.

Hypervectors make possible a wide variety of computing tasks with better efficiency and robustness

More features mean even more neurons. This can’t be how our brains perceive the natural world, with all its variations. "You have to propose that, well, you have a neuron for all combinations," said Bruno Olshausen, a neuroscientist at the University of California, Berkeley. "So, you’d have in your brain, [say,] a purple Volkswagen detector." Instead, Olshausen and others argue that information in the brain is represented by the activity of numerous neurons.

Hyperdimensional computing can enable AI machines to make decisions that are transparent to humans

So the perception of a purple Volkswagen is not encoded as a single neuron’s actions, but as those of thousands of neurons. The same set of neurons, firing differently, could represent an entirely different concept (a pink Cadillac, perhaps). This is the starting point for a radically different approach to computation known as hyperdimensional computing. The key is that each piece of information, such as the notion of a car, or its make, model or color, or all of it together, is represented as a single entity: a hyperdimensional vector.
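The idea of one vector standing for a whole composite concept can be sketched with role–filler binding, a common construction in the hyperdimensional computing literature. The sketch below uses bipolar (±1) hypervectors, elementwise multiplication as binding, and addition as bundling; the names (`COLOR`, `PURPLE`, etc.) are purely illustrative, not any specific group's scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # dimensionality of each hypervector


def hv():
    """A random bipolar (+1/-1) hypervector."""
    return rng.choice([-1, 1], size=D)


COLOR, MAKE = hv(), hv()         # role vectors
PURPLE, VOLKSWAGEN = hv(), hv()  # filler vectors

# Binding = elementwise multiplication; bundling = elementwise addition.
# The whole concept "purple Volkswagen" becomes a single hypervector.
purple_vw = COLOR * PURPLE + MAKE * VOLKSWAGEN


def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))


# Unbinding with the COLOR role (multiplying again, since c*c = 1)
# recovers something much closer to PURPLE than to any other vector.
recovered = purple_vw * COLOR
print(cosine(recovered, PURPLE))      # high, roughly 0.7
print(cosine(recovered, VOLKSWAGEN))  # near 0
```

Because binding with a role is its own inverse for bipolar vectors, the composite vector can later be queried for any of its parts — the algebraic property behind "all of it together" living in one entity.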

240-dimensional space is enough to represent images with red circles and blue squares

A vector is simply an ordered array of numbers. A 3D vector, for example, comprises three numbers: the x, y and z coordinates of a point in 3D space. A hyperdimensional vector, or hypervector, could be an array of 10,000 numbers, say, representing a point in 10,000-dimensional space. These mathematical objects and the algebra to manipulate them are flexible and powerful enough to take modern computing beyond some of its current limitations and foster a new approach to artificial intelligence.
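One property that makes such high-dimensional spaces useful: two independently chosen random hypervectors are almost certainly nearly orthogonal, so each can serve as a distinct, non-interfering symbol. A minimal check, assuming bipolar (±1) components:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000  # dimensionality

a = rng.choice([-1, 1], size=D)
b = rng.choice([-1, 1], size=D)

# Both norms equal sqrt(D), so cosine similarity is just the dot product / D.
cos = (a @ b) / D
print(round(cos, 3))  # close to 0: random hypervectors are nearly orthogonal
```

The expected deviation from zero scales like 1/sqrt(D), which is why thousands of dimensions give so much "room" for distinct concepts.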

Hyperdimensional vector representations are rich enough to capture the natural world in all its variations

"This is the thing that I’ve been most excited about, practically in my entire career," Olshausen said. To him and many others, hyperdimensional computing promises a new world in which computing is efficient and robust, and machine-made decisions are entirely transparent.

Enter High-Dimensional Spaces #

To understand how hypervectors make computing possible, let’s return to images with red circles and blue squares.

First we need vector representations of these shapes, which requires a high-dimensional space. Ten thousand dimensions is much more than is necessary; 240 or so should do, said Fermüller. It's theorized that our brains use hyperdimensional vectors to represent concepts, though no studies have yet verified this. To enable this type of representation, hyperdimensional computing calls for the creation of short, fixed-length vectors of random numbers, known as "hash tags." The main motivation behind these tags is to avoid storing all information explicitly in the vectors. After all, 250 bits cannot hold many details. But these tags share a key property with a hash, which is used to assign files and even digital objects such as photos and music to a corresponding label for data retrieval. This also explains why hyperdimensional vectors in computing are also known as "sparse distributed representations."

Moreover, hyperdimensional vectors, like any other vectors, admit the operations of vector algebra: adding, subtracting, comparing and measuring similarity, to mention a few. These operations make it possible to build systems that can recall data, classify patterns and recognize objects, among other tasks. Though these operations resemble what ANNs can do, hyperdimensional computing could be more efficient, as well as faster and more robust.
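The robustness claim follows from the same geometry: a stored hypervector stays recognizable even when a large fraction of its components are corrupted. A minimal sketch, again assuming bipolar (±1) hypervectors and cosine-style similarity (one common setup, not the only one):

```python
import numpy as np

rng = np.random.default_rng(2)
D = 10_000  # dimensionality

# Prototype hypervectors for two classes.
circle = rng.choice([-1, 1], size=D)
square = rng.choice([-1, 1], size=D)

# Corrupt a "circle" reading by flipping 30% of its components.
noisy = circle.copy()
flip = rng.choice(D, size=int(0.3 * D), replace=False)
noisy[flip] *= -1

# Normalized similarity (dot product / D, since all norms are sqrt(D)).
sim_circle = (noisy @ circle) / D  # about 1 - 2*0.3 = 0.4
sim_square = (noisy @ square) / D  # about 0
print(sim_circle > sim_square)  # True: still classified as a circle
```

Even after heavy corruption, the noisy vector remains far more similar to its own prototype than to an unrelated one, so nearest-prototype classification keeps working.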

Unlike ANNs, which learn by trial and error, hyperdimensional computing is equipped with a set of equations, so you can calculate — and often prove — the exact output of a computation. If a hypervector system is trained to recognize a dog, you can explain why it classified a particular image correctly or incorrectly. In other words, the actions of hyperdimensional computing systems are transparent.

Theoretically, hyperdimensional computing could also be orders of magnitude more efficient than ANNs: smarter use of processing and memory resources compounds into large savings. Much of the burden of AI today falls on the processors and storage of conventional computing machines, not to mention the energy required to run them. In comparison, hyperdimensional systems could use significantly less memory, making them well suited to edge devices such as sensors or nanorobots.

This could revolutionize the way we interact with computers in the years to come. Already, researchers have developed programs to control robots, drive cars, play language games and transcribe music and text. But perhaps more importantly, hyperdimensional computing may help us understand how — and why — our own brains work so well.
