Empathy for Artificial Intelligence

Category Artificial Intelligence

tldr #

To help ensure AI agents act in the interest of humanity, experts have proposed building 'artificial empathy' into AI-powered robots and algorithms. This empathy would be grounded in a self-perception of physical pain, allowing AI agents to experience the ramifications of their decisions before taking action. Realizing the idea would require a multi-disciplinary approach that draws insights from neuroscience, psychology, and robotics.


content #

In the movie M3GAN, a toy developer gives her recently orphaned niece, Cady, a child-sized AI-powered robot with one goal: to protect Cady. The robot M3GAN sympathizes with Cady’s trauma. But things soon go south, with the pint-sized robot attacking anything and anyone it perceives to be a threat to Cady.

M3GAN wasn’t malicious. It followed its programming, but without any care or respect for other beings—ultimately including Cady. In a sense, as it engaged with the physical world, M3GAN became an AI sociopath.

A recent surge in AI-powered applications has raised ethical and safety concerns among experts, particularly because AI-powered agents often interact with people without oversight.

Sociopathic AI isn’t just a topic explored in Hollywood. To Dr. Leonardo Christov-Moore at the University of Southern California and colleagues, it’s high time we build artificial empathy into AI—and nip any antisocial behaviors in the bud. In an essay published last week in Science Robotics, the team argued for a neuroscience perspective to embed empathy into lines of code. The key is to add "gut instincts" for survival—for example, the need to avoid physical pain. With a sense of how it may be "hurt," an AI agent could then map that knowledge onto others. It’s similar to the way humans gauge each other’s feelings: I understand and feel your pain because I’ve been there before.

AI empathy could potentially help guide autonomous weapons to be less harmful to innocent people in war zones and other dangerous situations.

Empathy-based AI agents add another layer of guardrails that "prevents irreversible grave harm," said Christov-Moore. It’s very difficult to do harm to others if you’re digitally mimicking—and thus "experiencing"—the consequences.
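To make that guardrail idea concrete, here is a minimal, purely illustrative sketch in Python: a made-up agent keeps a simulated "pain" variable for itself and projects the same model onto the people an action would affect, vetoing any action whose imagined harm crosses a threshold. Every name, class, and number below is an assumption for the sake of the sketch, not anything from the Science Robotics essay.

```python
# Purely illustrative: a made-up agent with a simulated "pain" signal that it
# projects onto the people an action would affect, vetoing actions whose
# imagined harm is too great. Names, classes, and thresholds are assumptions,
# not details from the published essay.

from dataclasses import dataclass

@dataclass
class Body:
    """Crude internal state: 0.0 = unharmed, 1.0 = maximal simulated pain."""
    pain: float = 0.0

def simulated_pain(body: Body, impact: float) -> float:
    """Predict how much 'pain' a given physical impact would add to a body."""
    return min(1.0, body.pain + impact)

def empathic_veto(impacts_on_others, threshold=0.5):
    """Map the agent's own pain model onto others; veto if predicted harm is too high."""
    predicted = [simulated_pain(Body(), impact) for impact in impacts_on_others]
    return max(predicted, default=0.0) >= threshold

# A forceful shove gets vetoed; a gentle nudge does not.
print(empathic_veto([0.8]))  # True  -> action blocked
print(empathic_veto([0.1]))  # False -> action allowed
```

The point of the toy example is only that the harm prediction is grounded in the agent's own model of being hurt, echoing the "I've been there before" intuition above.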

Digital da Vinci

The rapid rise of ChatGPT and other large generative models took everyone by surprise, immediately raising questions about how they can integrate into our world. Some countries are already banning the technology over cybersecurity and privacy concerns. AI experts also raised alarm bells in an open letter earlier this year that warned of the technology’s "profound risks to society."

Empathy is a unique response that includes both cognitive and emotional components, and in some cases even sharing the same physical sensations as the person being empathized with.

We are still adapting to an AI-powered world. But as these algorithms increasingly weave their way into the fabric of society, it’s high time to look ahead to their potential consequences. How do we guide AI agents to do no harm, and instead to work with humanity and benefit society?

It’s a tough problem. Most AI algorithms remain a black box. We don’t know how or why many algorithms generate decisions. Yet the agents have an uncanny ability to come up with "amazing and also mysterious" solutions that are counter-intuitive to humans, said Christov-Moore. Give them a challenge—say, finding ways to build as many therapeutic proteins as possible—and they’ll often imagine solutions that humans haven’t even considered.

Empathy is not to be confused with sympathy, where instead of feeling what the other person feels, you merely acknowledge and show understanding of their struggles and hardships.

Untethered creativity comes at a cost. "The problem is it’s possible they could pick a solution that might result in catastrophic irreversible harm to living beings, and humans in particular," said Christov-Moore.

Adding a dose of artificial empathy to AI may be the strongest guardrail we have at this point.

Let’s Talk Feelings

Empathy isn’t sympathy.

As an example: I recently poured hydrogen peroxide onto a fresh three-inch-wide wound. Sympathy is when you understand it was painful and show care and compassion. Empathy is when you vividly imagine how the pain would feel on your own body, said Christov-Moore.

Empathetic AI would require agents to have their own emotional mechanisms while interacting with humans, allowing them to respond autonomously with appropriate emotions.

The team advocates that AI agents learn the same way, by building in a biological drive to protect their own bodies and simulated inner organs, said Christov-Moore. As the agents learn from their environment, they could draw on that embodied knowledge to simulate what humans experience and build trust as they work alongside people.

For example, Christov-Moore envisions an AI flight assistant that could intervene on behalf of passengers in the face of any emergency. The agent wouldn’t just recognize a bad situation involving certain parameters—it would respond with an “empathic” understanding of the humans in the situation, and use its own motivations for self-preservation to take appropriate action.
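As a rough, invented illustration of that flight-assistant scenario: the agent could score each candidate action by how well it resolves the emergency minus the distress it predicts passengers would feel, with the distress estimate bootstrapped from its own self-preservation model. The actions, numbers, and weighting below are hypothetical and not drawn from the essay.

```python
# Invented flight-assistant example: score candidate actions by task value
# minus predicted passenger distress. All names and values are illustrative.

def predicted_distress(action: str) -> float:
    """Stand-in for a learned model mapping actions to predicted human distress (0-1)."""
    return {"rapid_descent": 0.9, "gradual_descent": 0.3, "maintain_course": 0.7}[action]

def task_value(action: str) -> float:
    """Stand-in for how well the action resolves the emergency (0-1)."""
    return {"rapid_descent": 0.95, "gradual_descent": 0.8, "maintain_course": 0.2}[action]

def choose_action(actions, empathy_weight=1.0):
    """Pick the best trade-off between handling the emergency and the distress inflicted."""
    return max(actions, key=lambda a: task_value(a) - empathy_weight * predicted_distress(a))

print(choose_action(["rapid_descent", "gradual_descent", "maintain_course"]))
# -> 'gradual_descent' under this weighting
```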

AI systems capable of exhibiting emotion can serve multiple purposes, from aiding clinical care to providing emotional companionship for the elderly or for children coping with emotional trauma.

But teaching robots and AI agents to empathize isn't an easy task. They need to learn about their bodies, how they interact with the world, and the consequences of their actions. They also need to learn about other bodies, and how to decode moods and more subtle behaviors in humans.

It’s a tricky problem to solve. The team proposes a multi-disciplinary approach that draws insights from neuroscience, psychology, and robotics and incorporates them into these algorithms. Although the team is still refining the idea, they’ve already seen promising results in some of their early prototypes.

At the end of the day, experts hope to integrate empathy into these agents to simulate morality. It’s the same way that toddlers learn concepts like fairness and reciprocity. We don’t need robots with a perfect moral code—just robots that understand our feelings and our intentions, and can act accordingly.

