AI and the Ethics of Data Colonialism
Category: Artificial Intelligence · Tuesday, October 31, 2023, 07:22 UTC

AI researcher and activist Joy Buolamwini has spent much of her career at the forefront of exposing bias in AI systems. Now she is calling for a radical rethink of how AI systems are built, and urging AI developers to ask essential questions about their products and the products' implications before bringing them to market.
Joy Buolamwini, the renowned AI researcher and activist, appears on the Zoom screen from her home in Boston, wearing her signature thick-rimmed glasses. As an MIT grad, she seems genuinely interested in seeing the old covers of MIT Technology Review that hang in our London office. An edition of the magazine from 1961 asks: "Will your son get into college?"
Buolamwini is best known for a pioneering paper she co-wrote with AI researcher Timnit Gebru in 2018, called "Gender Shades," which exposed how commercial facial recognition systems often failed to recognize the faces of Black and brown people, especially Black women. Her research and advocacy led companies such as Google, IBM, and Microsoft to improve their software so it would be less biased, and to back away from selling their technology to law enforcement.

Now, Buolamwini has a new target in sight. She is calling for a radical rethink of how AI systems are built. Buolamwini tells MIT Technology Review that, amid the current AI hype cycle, she sees a very real risk of letting technology companies pen the rules that apply to them—repeating the very mistake, she argues, that has previously allowed biased and oppressive technology to thrive.
"What concerns me is we're giving so many companies a free pass, or we're applauding the innovation while turning our head [away from the harms]," Buolamwini says.A particular concern, says Buolamwini, is the basis upon which we are building today’s sparkliest AI toys, so-called foundation models. Technologists envision these multifunctional models serving as a springboard for many other AI applications, from chatbots to automated movie-making. They are built by scraping masses of data from the internet, inevitably including copyrighted content and personal information. Many AI companies are now being sued by artists, music companies, and writers, who claim their intellectual property was taken without consent.
The current modus operandi of today's AI companies is unethical—a form of "data colonialism," Buolamwini says, with a "full disregard for consent."
"What’s out there for the taking, if there aren’t laws—it’s just pillaged," she says. As an author, Buolamwini says, she fully expects her book, her poems, her voice, and her op-eds—even her PhD dissertation—to be scraped into AI models.
"Should I find that any of my work has been used in these systems, I will definitely speak up. That’s what we do," she says.
Buolamwini says a real innovation worth boasting about would be models that companies could show have legitimately sourced data and a positive climate impact, for example.
"I see an opportunity to learn from so many mistakes of the past when it comes to oppressive systems powering advanced technologies. My hope is we go in a different direction," she says.
I ask what aspect of AI systems she would audit today if she could repeat the success of "Gender Shades." Without missing a beat, Buolamwini says that instead of one single audit, she would like AI to have an audit culture, where systems get rigorously tested before they are deployed in the real world. She’d like to see AI developers ask essential questions about a product’s underlying data, and the society-wide implications of the tool, before bringing it to market.