AI as Reliable as Humans for Flagging Problematic Content

Category Neuroscience

tldr #

Penn State researchers conducted a study to determine whether social media users would trust artificial intelligence (AI) as much as human editors to flag hate speech and other harmful content. The study found that people tend to trust AI more when they are reminded of the accuracy and objectivity of machines, and less when they are reminded of machines' inability to make subjective decisions. The researchers propose combining people and AI, with transparency about the machine's role, to build a moderation system users can trust.


content #

Social media users may trust artificial intelligence (AI) as much as human editors to flag hate speech and harmful content, according to researchers at Penn State. The researchers said that when users think about positive attributes of machines, like their accuracy and objectivity, they show more faith in AI. However, if users are reminded about the inability of machines to make subjective decisions, their trust is lower.

The findings may help developers design better AI-powered content curation systems that can handle the large amounts of information currently being generated while avoiding the perception that the material has been censored or inaccurately classified, said S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory.

"There's this dire need for content moderation on social media and more generally, online media," said Sundar, who is also an affiliate of Penn State's Institute for Computational and Data Sciences. "In traditional media, we have news editors who serve as gatekeepers. But online, the gates are so wide open, and gatekeeping is not necessarily feasible for humans to perform, especially with the volume of information being generated .

Interactive transparency, where users can offer input to the AI, is more likely to build trust in the AI

So, with the industry increasingly moving towards automated solutions, this study looks at the difference between human and automated content moderators, in terms of how people respond to them." Both human and AI editors have advantages and disadvantages. Humans tend to more accurately assess whether content is harmful, such as when it is racist or potentially could provoke self-harm, according to Maria D .

The study was conducted by recruiting 676 participants from Amazon’s Mechanical Turk platform

Molina, assistant professor of advertising and public relations, Michigan State, who is first author of the study. People, however, are unable to process the large amounts of content that is now being generated and shared online. On the other hand, while AI editors can swiftly analyze content, people often distrust these algorithms to make accurate recommendations, as well as fear that the information could be censored .

"When we think about automated content moderation, it raises the question of whether artificial intelligence editors are impinging on a person's freedom of expression," said Molina. "This creates a dichotomy between the fact that we need content moderation—because people are sharing all of this problematic content—and, at the same time, people are worried about AI's ability to moderate content. So, ultimately, we want to know how we can build AI content moderators that people can trust in a way that doesn't impinge on that freedom of expression .

The study proposed combining people and AI to build a trusted moderation system with transparency to rebuild trust in AI

" Transparency and Interactive Transparency According to Molina, bringing people and AI together in the moderation process may be one way to build a trusted moderation system. She added that transparency—or signaling to users that a machine is involved in moderation—is one approach to improving trust in AI. However, allowing users to offer suggestions to the AIs, which the researchers refer to as "interactive transparency," seems to boost user trust even more .

To study transparency and interactive transparency, among other variables, the researchers recruited 676 participants from Amazon's Mechanical Turk (MTurk) platform and had them view simulated social media content. The participants then rated how much they trusted an AI or a human editor to classify the content as harmless, offensive, or misleading. The participants showed higher trust in AI than in human editors when they were asked to think positively of machines.

These participants reported that they would be more likely to report inaccurate or harmful content to an AI editor because they thought it was more accurate and objective. When they were asked to think of the shortcomings of machines, like their inability to make subjective decisions, they were less likely to trust AI editors to accurately assess the material. The participants were also asked to consider hypothetical scenarios, such as when an AI editor puts an article in the fact-checking queue and it receives an inaccurate rating.

Depending on how this was communicated to the users, the study found that it could either build or reduce their trust in the AI. "The way it is communicated also mattered with respect to trust in AI. When it was communicated interactively, that is, when the AI editor took into account the opinions of the users in making its final decision, users trusted the AI editor more than in the condition when the AI editor didn't take into account their opinions," Molina said.

In terms of interactive transparency, the study suggested that "explaining to users what type of algorithms are used to review the content, how content is selected for further review, and providing them with final outcomes congruent with their perspective," even if the ultimate ratings remain the same, should help build trust to counteract apprehension and feelings of censorship.
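The study describes what should be disclosed rather than any particular data format, so the following is a loose illustration only: an explanation surfaced to a user might carry fields along these lines. The ModerationExplanation record and its field names are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class ModerationExplanation:
    """Hypothetical record of what a transparent moderation interface might show a user."""
    algorithm_used: str    # what type of algorithm reviewed the content
    selection_reason: str  # how the content was selected for further review
    user_suggestion: str   # the user's own rating, echoed back to them
    final_outcome: str     # the decision, framed to acknowledge the user's input


example = ModerationExplanation(
    algorithm_used="automated text classifier (AI editor)",
    selection_reason="flagged as potentially misleading and placed in the fact-checking queue",
    user_suggestion="misleading",
    final_outcome="held for fact-checking; your report was factored into the review",
)
print(example)
```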

