The Issue of Moderation in the Metaverse
Category: Technology · Saturday, April 29, 2023, 01:01 UTC

Ravi Yekkanti works for WebPurify, a provider of content moderation services to internet companies such as Microsoft and Play Lab. As reports of sexual assaults, bullying, and child grooming in the metaverse mount, his job is to make sure everyone there is safe and having a good time. The work is complex, murky, and increasingly in demand, because much of the burden of safety in the metaverse falls to private security agents like Yekkanti.
When Ravi Yekkanti puts on his headset to go to work, he never knows what the day spent in virtual reality will bring. Who might he meet? Will a child’s voice accost him with a racist remark? Will a cartoon try to grab his genitals? He adjusts the extraterrestrial-looking goggles haloing his head as he sits at the desk in his office in Hyderabad, India, and prepares to immerse himself in an "office" full of animated avatars. Yekkanti’s job, as he sees it, is to make sure everyone in the metaverse is safe and having a good time, and he takes pride in it.
Yekkanti is at the forefront of a new field, VR and metaverse content moderation. Digital safety in the metaverse has been off to a somewhat rocky start, with reports of sexual assaults, bullying, and child grooming. That issue is becoming more urgent with Meta’s announcement last week that it is lowering the age minimum for its Horizon Worlds platform from 18 to 13. The announcement also mentioned a slew of features and rules intended to protect younger users. However, someone has to enforce those rules and make sure people aren’t getting around the safeguards.
Meta won’t say how many content moderators it employs or contracts in Horizon Worlds, or whether the company intends to increase that number with the new age policy. But the change puts a spotlight on those tasked with enforcement in these new online spaces—people like Yekkanti—and how they go about their jobs.
Yekkanti has worked as a moderator and training manager in virtual reality since 2020, coming to the job after doing traditional moderation work on text and images. He is employed by WebPurify, a company that provides content moderation services to internet companies such as Microsoft and Play Lab, and works with a team based in India. His work is mostly done on mainstream platforms, including those owned by Meta, although WebPurify declined to confirm which ones specifically, citing client confidentiality agreements. Meta spokesperson Kate McLaughlin says that Meta Quest doesn’t work with WebPurify directly. A longtime internet enthusiast, Yekkanti says he loves putting on a VR headset, meeting people from all over the world, and giving advice to metaverse creators about how to improve their games and "worlds."
He is part of a new class of workers protecting safety in the metaverse as private security agents, interacting with the avatars of very real people to suss out virtual-reality misbehavior. He does not publicly disclose his moderator status. Instead, he works more or less undercover, presenting as an average user to better witness violations. Because traditional moderation tools, such as AI-enabled filters on certain words, don’t translate well to real-time immersive environments, moderators like Yekkanti are the primary way to ensure safety in the digital world, and the work is getting more important every day.
The metaverse’s safety problem
The metaverse’s safety problem is complex and opaque. Journalists have reported instances of abusive comments, scamming, sexual assaults, and even a kidnapping orchestrated through Meta’s Oculus. The biggest immersive platforms, like Roblox and Meta’s Horizon Worlds, keep their statistics about bad behavior very hush-hush, but Yekkanti says he encounters reportable transgressions every day.
Meta acknowledges the necessity of content moderation, but why isn’t the company doing more to ensure its users’ safety? While the platform does provide guidelines and tools to help users stay safe, the company appears to rely largely on user-led enforcement and reporting rather than a dedicated in-house moderation team, which in turn shifts the burden of safety onto outside content moderators themselves.
And, as outside moderators, Yekkanti and his coworkers operate in a largely unregulated area, one in which privacy law becomes murky when the people involved are virtual avatars. It’s a complicated but increasingly in-demand job that comes with its own set of challenges and rewards, and one that is still largely misunderstood.