The Downside of Human-Robot Teams: Social Loafing in the Workplace

Category Artificial Intelligence

tldr #

A study in Frontiers in Robotics and AI suggests there could be downsides when robots and humans form a team. The study found that when humans were asked to spot defects, they did worse if they thought a robot had already checked the same images - an effect known among humans as social loafing. The drop in performance appears to be linked to reduced motivation and weaker identification with the task.


content #

It’s not uncommon for people to take their foot off the pedal at work if they know others will cover for them. And it turns out, the same might be true when people think robots have got their backs.

While robots have been a fixture in the workplace for decades, they’ve typically taken the form of heavy machinery that workers should steer well clear of. But in recent years, with advances in AI, there have been efforts to build collaborative robots that work alongside humans as teammates and partners.

Social loafing can affect both professional and personal settings, such as school or among family and friends.

Being able to share a workspace and cooperate with humans could allow robots to assist in a far wider range of tasks and augment human workers to boost their productivity. But it’s still far from clear how the dynamics of human-robot teams would play out in reality.

New research in Frontiers in Robotics and AI suggests there could be potential downsides if the technology isn’t deployed thoughtfully. The researchers found that when humans were asked to spot defects in electronic components, they did a worse job when they thought a robot had already checked a piece.

The new study shows that the same psychological effect can appear when a robot is part of the team too.

"Teamwork is a mixed blessing," first author Dietlind Helene Cymek, from the Technical University of Berlin in Germany, said in a press release. "Working together can motivate people to perform well, but it can also lead to a loss of motivation because the individual contribution is not as visible. We were interested in whether we could also find such motivational effects when the team partner is a robot."

Social loafing can also arise when team members compare themselves to colleagues they perceive as higher or lower performers.

The phenomenon the researchers uncovered is already well-known among humans. Social loafing, as it is known, has been extensively studied by psychologists and refers to an individual putting less effort into a task performed as a team compared to one performed alone.

This often manifests when it’s hard to identify individual contributions to a shared task, say the researchers, which can lead to a lack of motivation. Having a high performing co-worker can also make it more likely.

The defect spotters' decreased performance when working with the robot was not because the task became easier.

To see if the phenomenon could also impact teams of robots and humans, the researchers set up a simulated quality assurance task in which volunteers were asked to check images of circuit boards for defects. To measure how the humans were inspecting the boards, the images were blurred out and only became clear in areas where the participants hovered their mouse cursor.
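The paper's experiment software isn't reproduced here, but the mouse-hover reveal described above can be sketched as a simple mask: the board image stays blurred except for a window around the cursor, so the cursor trace approximates where a participant actually looked. Everything in this sketch (window radius, image size, function names) is illustrative, not the study's actual implementation.

```python
import numpy as np

def attention_mask(height, width, cursor_xy, radius):
    """Boolean mask that is True inside a circular 'clear' window
    around the cursor; everywhere else the image stays blurred."""
    ys, xs = np.mgrid[0:height, 0:width]
    cx, cy = cursor_xy
    return (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2

def reveal(sharp, blurred, cursor_xy, radius):
    """Compose the displayed frame: sharp pixels inside the cursor
    window, blurred pixels outside it."""
    mask = attention_mask(sharp.shape[0], sharp.shape[1], cursor_xy, radius)
    return np.where(mask, sharp, blurred)

def coverage(masks):
    """Fraction of the image ever revealed across a session - a proxy
    for how much of the board a participant inspected."""
    seen = np.zeros_like(masks[0], dtype=bool)
    for m in masks:
        seen |= m
    return seen.mean()
```

Logging one mask per mouse sample and taking `coverage` over the session is what lets the researchers compare how much of each board the two groups examined, independent of how many defects they reported.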

Of the 42 people who took part in the trial, half worked alone, and the other half were told that a robot had already checked the images they were seeing. For the second group, each image featured red check marks where the robot had spotted problems, but crucially, it had missed five defects. Afterwards the participants were asked to rate themselves on how they performed, their effort, and how responsible for the task they felt.

An individual's motivation and sense of responsibility influence how successful human-robot collaboration will be.

The researchers found that both groups spent more or less the same amount of time inspecting the boards, covered the same areas, and their self-perception of how they’d done was similar. However, the group that worked in tandem with the robot only spotted an average of 3.3 of the 5 defects missed by the machine, while the other group caught 4.23 on average.
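Converting those averages to percentages makes the gap easier to see (simple arithmetic on the reported figures, not the paper's own statistical analysis):

```python
# Average number of defects caught, out of the 5 the robot missed,
# as reported in the study.
solo_caught = 4.23
with_robot_caught = 3.3
total_missed = 5

solo_rate = solo_caught / total_missed            # 84.6%
robot_team_rate = with_robot_caught / total_missed  # 66%

print(f"solo: {solo_rate:.1%}, with robot: {robot_team_rate:.1%}")
```

In other words, participants who believed a robot had gone first caught roughly two-thirds of the remaining defects, while those working alone caught around 85 percent.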

The researchers sifted through the data to assess the possible motivations behind the decrease in performance. They found that the volunteers underperformed most when they were aware the robot had already seen the images, even when they felt completely responsible for and identified with the task – for example, those who identified as engineers.

The quality assurance task was chosen for the study because it is an area where artificial intelligence is already used alongside humans.
