AI Regulation and Responsible AI Teams: Three Things You Need to Know

Category: Artificial Intelligence

tldr #

This week has seen a lot of movement in the world of AI regulation. The G7 has agreed to a voluntary code of conduct for AI companies, and the UK is hosting an AI Safety Summit. AI researcher Joy Buolamwini has highlighted the dangers of AI systems, particularly the potential for bias in facial recognition. AI scientist Ilya Sutskever is now prioritizing preventing AI from going rogue. Making AI systems 'responsible' is no easy task, and those who point out potential AI-related risks can be subject to aggressive criticism online.


content #

This week everyone is talking about AI. The White House just unveiled a new executive order that aims to promote safe, secure, and trustworthy AI systems. It’s the most far-reaching piece of AI regulation the US has produced yet, and my colleague Tate Ryan-Mosley and I have highlighted three things you need to know about it. The G7 has also just agreed on a (voluntary) code of conduct that AI companies should abide by, as governments seek to minimize the harms and risks created by AI systems.

The UK's government-led AI Safety Summit is being held this week

And later this week, the UK will be full of AI movers and shakers attending the government’s AI Safety Summit, an effort to come up with global rules on AI safety. In all, these events suggest that the narrative pushed by Silicon Valley about the "existential risk" posed by AI is increasingly dominant in public discourse. I had the pleasure of talking with Joy Buolamwini about her life story and what concerns her in AI today.

The G7 agreed to a voluntary code of conduct for AI companies

Buolamwini is an influential voice in the field. Her research on bias in facial recognition systems prompted companies such as IBM, Google, and Microsoft to change their systems and back away from selling their technology to law enforcement. Partly thanks to researchers like Buolamwini, tech companies now face more public scrutiny over their AI systems. Companies realized they needed responsible AI teams to ensure that their products are developed in ways that mitigate potential harm.

Most tech companies now have responsible AI teams in place to mitigate potential harm from their AI systems

These teams evaluate how our lives, societies, and political systems are affected by the way AI systems are designed, developed, and deployed. But people who point out problems caused by AI systems often face aggressive criticism online, as well as pushback from their employers. Buolamwini described having to fend off public attacks on her research from one of the most powerful technology companies in the world: Amazon.

AI researcher Buolamwini faced pushback from Amazon over her research showing bias in facial recognition systems

When Buolamwini was first starting out, she had to convince people that AI was worth worrying about. Now, people are more aware that AI systems can be biased and harmful. That’s the good news. The bad news is that speaking up against powerful technology companies still carries risks, and that is a shame. The voices trying to shift the Overton window on what kinds of risks are discussed and regulated are growing louder than ever and have captured the attention of lawmakers such as the UK’s prime minister, Rishi Sunak.

AI is no longer a distant possibility but a current reality

If the culture around AI actively silences other voices, that comes at a price to us all.

Deeper Learning

Instead of building the hottest new AI models, Ilya Sutskever tells Will Douglas Heaven in an exclusive interview, his new priority is to figure out how to stop an artificial superintelligence (a hypothetical future technology he sees coming with the certainty of a true believer) from going rogue.

Bits and Bytes

Making AI systems “responsible” is no easy task.

AI scientist Sutskever's main focus is now on preventing a future superintelligence from going rogue

As John Giannandrea, who formerly led Google’s AI efforts, once said, “AI is a tool, and the danger is not with the tool, it’s with the user.”

