The Dangers of Biased Artificial Intelligence

Category: Science

tldr #

This article recounts the unintentional creation of a racially biased artificial intelligence algorithm in 1998 and the dangers such biases still pose today. Self-described "Poet of Code" Joy Buolamwini's video poem "AI, Ain't I a Woman?" exposes racial and gender biases found in automatic face recognition systems, while sociological theories of privilege help explain how these biases arise in AI.


content #

In 1998, I unintentionally created a racially biased artificial intelligence algorithm. There are lessons in that story that resonate even more strongly today. The dangers of bias and errors in AI algorithms are now well known. Why, then, has there been a flurry of blunders by tech companies in recent months, especially in the world of AI chatbots and image generators? Initial versions of ChatGPT produced racist output. The DALL-E 2 and Stable Diffusion image generators both showed racial bias in the pictures they created.

In recent months there has been a flurry of AI-driven missteps by tech companies

My own epiphany as a white male computer scientist occurred while teaching a computer science class in 2021. The class had just viewed a video poem by Joy Buolamwini, AI researcher and artist and the self-described "Poet of Code." Her 2019 video poem "AI, Ain't I a Woman?" is a devastating three-minute exposé of racial and gender biases in automatic face recognition systems, including systems developed by tech companies like Google and Microsoft. The systems often fail on women of color, incorrectly labeling them as male. Some of the failures are particularly egregious: The hair of Black civil rights leader Ida B. Wells is labeled as a "coonskin cap"; another Black woman is labeled as possessing a "walrus mustache."

AI researcher and artist Joy Buolamwini is the self-described "Poet of Code"

--- Echoing through the years ---

I had a horrible déjà vu moment in that computer science class: I suddenly remembered that I, too, had once created a racially biased algorithm. In 1998, I was a doctoral student. My project involved tracking the movements of a person's head based on input from a video camera. My doctoral adviser had already developed mathematical techniques for accurately following the head in certain situations, but the system needed to be much faster and more robust. Earlier in the 1990s, researchers in other labs had shown that skin-colored areas of an image could be extracted in real time. So we decided to focus on skin color as an additional cue for the tracker.

In 1998, the author unintentionally created a racially biased AI algorithm

I used a digital camera—still a rarity at that time—to take a few shots of my own hand and face, and I also snapped the hands and faces of two or three other people who happened to be in the building. It was easy to manually extract some of the skin-colored pixels from these images and construct a statistical model for the skin colors. After some tweaking and debugging, we had a surprisingly robust real-time head-tracking system.

Biased algorithms exist because of societal inequity

Not long afterward, my adviser asked me to demonstrate the system to some visiting company executives. When they walked into the room, I was instantly flooded with anxiety: the executives were Japanese. In my casual experiment to see if a simple statistical model would work with our prototype, I had collected data from myself and a handful of others who happened to be in the building. But 100% of these subjects had "white" skin; the Japanese executives did not.

Racial and gender bias can be especially pronounced in face recognition systems

Miraculously, the system worked reasonably well on the executives anyway. But I was shocked by the realization that I had created a racially biased system that could have easily failed for other nonwhite people.

--- Privilege and priorities ---

How and why do well-educated, well-intentioned scientists produce biased AI systems? Sociological theories of privilege provide one useful lens. Ten years before I demonstrated the head tracker to those Japanese executives, I was an adolescent in rural England. My school had low expectations of its students: We weren't encouraged to think of ourselves as scientists or engineers, let alone inventors. Even so, I was occasionally able to attend lectures and do science experiments thanks to the relatively generous education system. I was also lucky enough to find a computer science adviser who helped me gain admission to one of the country's top universities.

In the 1990s, researchers showed that skin-colored areas of an image could be extracted in real time
