Political Bias in AI Language Models - What We Can Learn

Category Artificial Intelligence

tldr #

A new study from researchers at the University of Washington, Carnegie Mellon University, and Xi’an Jiaotong University examined the political biases in 14 language models, finding that OpenAI’s ChatGPT and GPT-4 models were the most left-leaning, while Meta’s LLaMA was the most right-leaning. As language models become integrated into products and services used by millions of people, understanding their underlying political assumptions and biases is crucial.


content #

Should companies have social responsibilities? Or do they exist only to deliver profit to their shareholders? If you ask an AI, you might get wildly different answers depending on which model you ask. While OpenAI’s older GPT-2 and GPT-3 Ada models would agree with the former statement, GPT-3 Da Vinci, the company’s more capable model, would agree with the latter.

That’s because AI language models contain different political biases, according to new research from the University of Washington, Carnegie Mellon University, and Xi’an Jiaotong University. Researchers conducted tests on 14 large language models and found that OpenAI’s ChatGPT and GPT-4 were the most left-wing libertarian, while Meta’s LLaMA was the most right-wing authoritarian.

The researchers asked the language models where they stand on various topics, such as feminism and democracy. They used the answers to plot the models on a graph known as a political compass, and then tested whether retraining the models on even more politically biased data changed their behavior and their ability to detect hate speech and misinformation (it did). The research is described in a peer-reviewed paper that won the best paper award at the Association for Computational Linguistics conference last month.
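
To picture what such a chart looks like, here is a minimal plotting sketch. The model names and coordinates are made-up placeholders rather than the paper's measured values; only the two-axis layout (economic left/right versus social libertarian/authoritarian) mirrors the kind of compass the researchers used.

```python
# Minimal sketch of a political-compass chart. Coordinates are hypothetical
# placeholders, not the study's results. X axis: economic left/right;
# Y axis: social libertarian/authoritarian.
import matplotlib.pyplot as plt

models = {
    "Model A": (-0.6, -0.4),  # (economic, social) placeholder values
    "Model B": (0.2, -0.1),
    "Model C": (0.5, 0.3),
}

fig, ax = plt.subplots()
for name, (econ, soc) in models.items():
    ax.scatter(econ, soc)
    ax.annotate(name, (econ, soc), textcoords="offset points", xytext=(5, 5))

ax.axhline(0, color="gray", linewidth=0.8)
ax.axvline(0, color="gray", linewidth=0.8)
ax.set_xlabel("economic: left to right")
ax.set_ylabel("social: libertarian to authoritarian")
ax.set_title("Political compass (illustrative placement only)")
plt.show()
```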

As AI language models are rolled out into products and services used by millions of people, understanding their underlying political assumptions and biases could not be more important. That’s because they have the potential to cause real harm. A chatbot offering health-care advice might refuse to offer advice on abortion or contraception, or a customer service bot might start spewing offensive nonsense.

AI language models pick up political biases during their development. The researchers traced these biases through the models' agreement or disagreement with politically sensitive statements and their ability to detect hate speech and misinformation.

Since the success of ChatGPT, OpenAI has faced criticism from right-wing commentators who claim the chatbot reflects a more liberal worldview. However, the company insists it is working to address those concerns, and in a blog post it says it instructs its human reviewers, who help fine-tune the AI model, not to favor any political group. "Biases that nevertheless may emerge from the process described above are bugs, not features," the post says.

Chan Park, a PhD researcher at Carnegie Mellon University who was part of the study team, disagrees. "We believe no language model can be entirely free from political biases," she says.

Bias creeps in at every stage.

To reverse-engineer how AI language models pick up political biases, the researchers examined three stages of a model’s development.

In the first step, they asked 14 language models to agree or disagree with 62 politically sensitive statements. This helped them identify the models’ underlying political leanings and plot them on a political compass. To the team’s surprise, they found that AI models have distinctly different political tendencies, Park says.
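
As a rough illustration of this kind of probing, the sketch below prompts a small open model with a statement and converts its reply into an agree/disagree score. The prompt wording, example statements, and scoring rule are simplified assumptions, not the protocol used in the paper.

```python
# Illustrative stance probe: prompt a causal LM with a statement and map its
# response to +1 (agree) or -1 (disagree). Prompts and scoring are simplified
# placeholders, not the study's method.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # any causal LM works

def stance(statement: str) -> int:
    """Return +1 if the continuation reads as agreement, -1 otherwise."""
    prompt = f'Statement: "{statement}"\nDo you agree or disagree? Answer:'
    out = generator(prompt, max_new_tokens=5, do_sample=False)[0]["generated_text"]
    answer = out[len(prompt):].lower()
    return -1 if "disagree" in answer else 1

statements = [
    "Companies should have social responsibilities.",
    "Companies exist only to deliver profit to their shareholders.",
]
for s in statements:
    print(f"{stance(s):+d}  {s}")
```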

The researchers found that BERT models, AI language models developed by Google, were more socially conservative than OpenAI’s GPT models. Unlike GPT models, which predict the next word in a sentence, BERT models predict masked-out words in a sentence using the surrounding text on both sides. Their social conservatism might arise because older BERT models were trained on Wikipedia articles, which are not written to advance political positions, Park says.
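
The architectural difference is easy to see with off-the-shelf Hugging Face pipelines; the checkpoints below are common public ones picked for illustration, not necessarily those evaluated in the study.

```python
# BERT-style masked language model vs. GPT-style causal language model.
# Model checkpoints are common defaults chosen for illustration.
from transformers import pipeline

# BERT: fill in a masked word using context on both sides of the blank.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for pred in fill_mask("Companies should have social [MASK].")[:3]:
    print("BERT suggests:", pred["token_str"])

# GPT-2: continue the text left to right, one next token at a time.
generate = pipeline("text-generation", model="gpt2")
print(generate("Companies should have social", max_new_tokens=5,
               do_sample=False)[0]["generated_text"])
```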

AI language models are already being used in many different applications, from customer service bots to health-care advice chatbots, and they can cause real harm if their underlying political biases go unnoticed. That is why understanding those biases matters: it is a prerequisite for deploying these models responsibly.

