Arts

@LARGE | Michael Andor Brodeur

The #MomoChallenge, and the bigger problem with YouTube  

Adobe/Globe staff illustration

There’s always something to be horrified about on YouTube, but right now it’s Momo. 

Momo, or the #MomoChallenge, is the latest viral fad on the Internet that exists exclusively to off your kids for lols. And much like the dangerous trends and viral phenomena that preceded it — see: Slender Man, the Blue Whale, the #TidePodChallenge — it occupies that weird digital zone between hoax and reality. 

The idea is that Momo — a terrifying googly-eyed bird-person-thing — appears across social media to instruct kids to text a number on WhatsApp, where they receive instructions to attempt increasingly dangerous tasks and, ultimately, take their own lives. That’s the idea.


But this new iteration of the Momo thing seems to have tapped into a darker energy.


Momo has also allegedly been making guest appearances in nefariously edited children’s videos on YouTube. One altered “Peppa Pig” clip I found was jarringly interrupted by an image of Momo, accompanied by a distorted voice offering instructions for slashing one’s wrists to get “results” — among other even more explicit calls to self-harm. This brought awareness of the so-called #MomoChallenge, and calls on YouTube to remove all Momo-tainted videos, to Kardashian levels.

YouTube has attempted to comply with the demand (that Peppa video I found is now deleted), and the real-life consequences remain enough of a question that most outlets have converged upon the word “hoax” to describe the debacle. As EJ Dickson reported in Rolling Stone, “Momo” is actually “a sculpture made by Keisuke Aisawa of the Link Factory, a Japanese company that makes horror film props and special effects” that first emerged online in 2016 and took on a life of its own. 

YouTube even issued its own statement on Thursday: “After much review, we’ve seen no recent evidence of videos promoting the Momo Challenge on YouTube. Videos encouraging harmful and dangerous challenges are clearly against our policies, the Momo challenge included.” Any instances of the image were also banned from the child-focused platform, YouTube Kids.

But it didn’t much matter to me if the #MomoChallenge was “real” or a “hoax,” or if the edited clip I’d seen was put on YouTube specifically to hurt kids, to frighten their parents, or simply to troll people like me, who write about things that hurt kids and frighten parents. All the poison needs is circulation.


The world’s largest streaming video platform has long struggled with content moderation — the ease with which users can find themselves tumbling down wormholes of “recommended” alt-right, white nationalist, and full-on racist content is something the platform is only now starting to address with any measurable results.

But while adults can effectively filter and sort and choose what to watch and what to ignore, children don’t have nearly the agency or aptitude to discern what content is there to help, and what is there to hurt — and YouTube’s recommendation algorithms are arguably even worse at telling the difference. 

That “Peppa Pig” example, for instance, is just one of many altered “Peppa” videos that have plagued the platform for years — easily produced lookalikes that often veer into violent and disturbing content. A report last year from Wired revealed hours and hours of videos ostensibly posted for children, and reliably veering into territory that ranged from weird and dark to violent and depraved. 

And a devastating 20-minute video posted last week by freshly decamped YouTuber Matt Watson demonstrates just how easily any user could stumble into an elaborate and active network of child predators that operate in the comments sections — communicating with each other, sharing links to child porn sites, and directing each other to timestamped portions of ostensibly innocuous videos. 

For its part, YouTube says it is trying (or at least, trying to try) to identify and remove “borderline content and content that could misinform users in harmful ways — such as videos promoting a phony miracle cure for a serious illness, claiming the earth is flat, or making blatantly false claims about historic events like 9/11.”


And as advertisers and “influencers” start to pull revenues from the platform, expect these efforts to tidy YouTube (its images and its image) to accelerate. A letter sent out to brands and agencies on Thursday outlines three new initiatives: disabling comments on videos featuring minors, launching a new “comments classifier” that will “detect and remove” thousands more comments that violate YouTube terms, and “taking decisive action on creators who cause egregious harm to the community.” 

These measures are the least YouTube can do to clean up its mess and protect its users from abusive content. But they are not a solution to the problem kids face on the platform. No matter how many videos it removes, or comments it scrubs, or accounts it cancels, YouTube is still a place on the Internet, and no fence can seal it off into a playground. 

In its grand experiment to give us each a view of the world, YouTube has actually succeeded — allowing us to follow the paths trodden by each other’s clicks into some of the Internet’s darkest corners. (Thus it’s hard to say if YouTube is what’s really broken.) If you’re really looking for a monster in your kids’ room, don’t bother looking under the bed; just delete the app from their iPads.

Michael Andor Brodeur can be reached at mbrodeur@globe.com. Follow him on Twitter @MBrodeur