Generative Artificial Intelligence: An Overview of the Technology and Its Progress

Category Science

tldr #

Generative AI has surged in popularity in recent years, with applications ranging from natural language generation to music and visual art. Unlike conventional machine learning, which makes predictions from existing data, it is a type of machine learning trained to create new data that resembles what it was trained on. The approach builds on decades-old techniques such as Markov chains, now scaled up with far more complex models and massive datasets. OpenAI's ChatGPT has led the way, showcasing impressive results in natural language generation.


content #

A quick scan of the headlines makes it seem like generative artificial intelligence is everywhere these days. In fact, some of those headlines may actually have been written by generative AI, like OpenAI's ChatGPT, a chatbot that has demonstrated an uncanny ability to produce text that seems to have been written by a human. But what do people really mean when they say "generative AI"?

Before the generative AI boom of the past few years, when people talked about AI, typically they were talking about machine-learning models that can learn to make a prediction based on data. For instance, such models are trained, using millions of examples, to predict whether a certain X-ray shows signs of a tumor or if a particular borrower is likely to default on a loan.
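To make that predictive setup concrete, here is a minimal sketch (not from the article) of a classifier trained on synthetic, invented loan features and then asked for a default probability; the feature names and data are made up purely for illustration:

```python
# A minimal sketch of conventional predictive machine learning:
# train a classifier on labeled examples, then predict on new data.
# The features and data here are synthetic, purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Each row: [income, debt_ratio]; label 1 = borrower defaulted, 0 = repaid.
X = rng.normal(size=(1000, 2))
y = (X[:, 1] - X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Predict the default risk of a new, unseen borrower.
new_borrower = np.array([[0.2, 1.5]])
print(model.predict_proba(new_borrower)[0, 1])  # probability of default
```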

Generative AI is becoming increasingly prevalent in creative applications like music and visual art generation

Generative AI can be thought of as a machine-learning model that is trained to create new data, rather than making a prediction about a specific dataset. A generative AI system is one that learns to generate more objects that look like the data it was trained on.
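As a toy illustration of that learn-then-generate idea (a sketch of my own, not part of the original explanation), one could fit a simple distribution to some training data and then sample new points from it; real generative models are vastly richer, but the pattern is the same:

```python
# A toy "generative model": estimate the distribution of the training
# data, then sample new points that look like it. Real generative AI
# uses far richer models, but the learn-then-sample idea is the same.
import numpy as np

rng = np.random.default_rng(1)
training_data = rng.normal(loc=5.0, scale=2.0, size=10_000)  # observed examples

# "Training": fit a simple Gaussian to the observed data.
mu, sigma = training_data.mean(), training_data.std()

# "Generation": draw brand-new samples from the learned distribution.
new_samples = rng.normal(loc=mu, scale=sigma, size=5)
print(new_samples)
```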

"When it comes to the actual machinery underlying generative AI and other types of AI, the distinctions can be a little bit blurry. Oftentimes, the same algorithms can be used for both," says Phillip Isola, an associate professor of electrical engineering and computer science at MIT, and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

Whereas conventional machine learning focuses on classifying or predicting from existing data, generative AI synthesizes new data modeled on existing sets

And despite the hype that came with the release of ChatGPT and its counterparts, the technology itself isn't brand new. These powerful machine-learning models draw on research and computational advances that go back more than 50 years.

An increase in complexity

An early example of generative AI is a much simpler model known as a Markov chain. The technique is named for Andrey Markov, a Russian mathematician who in 1906 introduced this statistical method to model the behavior of random processes. In machine learning, Markov models have long been used for next-word prediction tasks, like the autocomplete function in an email program.

OpenAI's GPT-2 model was a major breakthrough in the generative AI field, demonstrating a far more capable text-generation system

In text prediction, a Markov model generates the next word in a sentence by looking at the previous word or a few previous words. But because these simple models can only look back that far, they aren't good at generating plausible text, says Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science at MIT, who is also a member of CSAIL and the Institute for Data, Systems, and Society (IDSS).
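For a concrete sense of how such a model works, here is a minimal sketch of a first-order Markov chain text generator; the tiny corpus is invented for illustration, and the model conditions on only the single previous word, which is exactly the limitation Jaakkola describes:

```python
# A minimal first-order Markov chain for next-word prediction, in the
# spirit of the autocomplete example above. It only conditions on the
# single previous word, which is why its output quickly loses coherence.
import random
from collections import defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which words follow each word in the training text.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, length=8, seed=0):
    random.seed(seed)
    words = [start]
    for _ in range(length):
        candidates = transitions.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
```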

The use of generative AI to produce synthetic data is often seen as a more ethical alternative to harvesting data sets from unsuspecting individuals

"We were generating things way before the last decade, but the major distinction here is in terms of the complexity of objects we can generate and the scale at which we can train these models," he explains.

Just a few years ago, researchers tended to focus on finding a machine-learning algorithm that makes the best use of a specific dataset. But that focus has shifted a bit, and many researchers are now using larger datasets, perhaps with hundreds of millions or even billions of data points, to train models that can achieve impressive results.

Although Generative AI systems have been around for over half a century, drastic improvements have been made in accuracy and scale due to the development and use of larger datasets and more complex algorithms

The base models underlying ChatGPT and similar systems work in much the same way as a Markov model. But one big difference is that ChatGPT is far larger and more complex. Trained on enormous amounts of publicly available text, it takes the idea of a Markov model much further: rather than conditioning on just the previous word or two, it draws on a long span of preceding text to make an educated guess about the next word or phrase in a sentence.
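As a loose illustration of what conditioning on more context buys (a simple lookup-table sketch with an invented corpus, not how ChatGPT's neural network actually works), one can extend the Markov idea to look at a longer window of preceding words:

```python
# Sketch: the same next-word idea, but conditioning on a longer window
# of preceding words instead of just one. Models like ChatGPT push this
# much further, using neural networks that attend to thousands of
# preceding tokens rather than a simple lookup table like this one.
from collections import defaultdict, Counter

corpus = "the cat sat on the mat and the cat ate the fish".split()
CONTEXT = 3  # number of preceding words used to predict the next one

counts = defaultdict(Counter)
for i in range(len(corpus) - CONTEXT):
    context = tuple(corpus[i : i + CONTEXT])
    counts[context][corpus[i + CONTEXT]] += 1

def predict_next(context_words):
    """Return the most likely next word given the last CONTEXT words."""
    context = tuple(context_words[-CONTEXT:])
    options = counts.get(context)
    return options.most_common(1)[0][0] if options else None

print(predict_next(["and", "the", "cat"]))  # -> "ate"
```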

