Why I F@#king Hate Moderation

March 10, 2025

The word “moderation” makes my skin crawl.

It’s a word that conjures images of control, surveillance, and a creeping loss of freedom. It’s the omnipresent Big Brother of Orwell’s nightmares — a shadowy, all-seeing eye hovering over communities, deciding who speaks and who is silenced.

“Moderation” has become the crutch for an industry that has forgotten the fundamental truth of online communities: we’re not here to police every word.

It’s one of those words that feels harmless enough until you really sit with it. The more we use it, the more we see it, the more it feels like control dressed up as protection: surveillance that pretends to keep the peace, a silent force deciding whose voices are heard and whose are cut off. Online communities aren’t built to be policed, and they will never want to be.

They want to be protected. We’re here to protect.

---------

“Moderation” Isn’t the Future — It’s a Warning

The industry worships “moderation” as though it’s the natural evolution of managing digital spaces. But moderation is bloated, reactive, and invasive. It casts too wide a net, turning platforms into over-policed states where every joke, every heated moment, every whisper is suspect. This obsession with control doesn’t just infringe on privacy — it erodes trust.

Moderation isn’t scalable, either. The more you try to catch everything, the more you drown in noise. Meanwhile, the truly harmful behavior — the hate, the harassment, the threats, the grooming — slips through. And here’s the kicker: the more “moderation” grows, the more it normalizes surveillance as the cost of digital connection. That’s not a future I want to build.

“Protection” Is the Alternative We Need

At VoicePatrol, we don’t believe in moderating. We don’t want to be the eye in the sky, watching every interaction with the aim of “cleaning up” the internet. Our mission is much simpler — and far more effective: to protect digital communities only from the most extreme, harmful incidents.

Let me make this clear: protection isn’t about silencing people. It’s about giving game developers the tools to create safer environments — ones that foster real connection without unnecessary overreach.

  • Protection is stopping the kind of severe toxicity that drives people away from games, platforms, and communities.
  • Protection is intervening in moments of severe harassment, abuse, or hate speech — not nitpicking over minor disagreements, edgy jokes, or friendly “f@#king you’s”.
  • Protection is fast, targeted, and precise. It doesn’t overreach. It doesn’t pry.

When we talk about “voice protection,” we mean creating an environment where players can connect, compete, and thrive without being shouted down or attacked, and without being silenced because they slightly hurt someone’s feelings. Protection means safeguarding the boundaries of human connection, not invading them.

---------

Words Matter — And So Do Narratives

The industry calls us a “moderation” company because it’s the term they’ve chosen to use. It’s a convenient bucket into which to toss anything that manages online behavior. But language is powerful, and the words we use shape the way we think. “Moderation” tells people they need to comply. “Protection” tells them they’ll be safe.

This distinction isn’t trivial. Moderation assumes conflict is inevitable and that platforms must become omniscient referees. Protection assumes people deserve better — and that we can create systems to shield them from harm while still respecting their freedom.

By rejecting the word “moderation,” we’re rejecting the broken systems that have failed to address toxicity for decades. We’re refusing to accept surveillance as a solution.

“Protection” Is the Future

If you’re tired of seeing platforms overstep, silencing voices and missing the mark, join us in reshaping this industry. Let’s stop normalizing all-seeing moderation as a necessary evil. Instead, let’s focus on targeted, ethical, and effective protection.

We don’t need to act on every little offense to make communities better. We just need to stop the worst from taking root.

So yeah, I f@#king hate the word “moderation”. And I hope, one day, the rest of the industry will hate it too.