Op-Ed: When “Safety” Becomes a Cage — and the Internet Won’t Let You Look Away


By Marcus Aurelius

I wasn’t trying to spread violence.

I wasn’t trying to glorify police brutality, promote conflict, or share anything disturbing.

I was trying to escape it.

I opened TikTok and, like so many people lately, got hit with the same kind of content over and over again: clips that feel aggressive, tense, militarized — police, violence, conflict, fear. The kind of scrolling that leaves you feeling dirty afterwards, like your brain was forced to swallow something toxic.

So I did what platforms claim we can do.
I tried to block it.

And that’s when the absurdity of modern “safety tech” showed up in full.

I took a screenshot of the content so I could ask ChatGPT something simple:
“Where are the three dots? How do I block this person? How do I stop seeing this?”

That’s it. Nothing violent in the screenshot.
No blood. No weapons. No graphic content.

Just one word, Venezuela, and a hashtag: #LosQueremosDeVuelta.

And suddenly, I’m blocked.

Not TikTok — ChatGPT blocked the upload.

Not because the screenshot showed violence…
but because the system reacted to the text like it was a danger signal.

That’s the part that feels ridiculous, and honestly, infuriating.

Because now we’re living inside a digital world where two things happen at the same time:

  1. TikTok aggressively pushes violent or stressful content into your face, whether you asked for it or not.
  2. The moment you try to defend yourself, the “safety” systems start treating you as the problem.

It’s like a house on fire…
and the fire extinguisher says:
“Sorry, we detected the word ‘smoke.’ Access denied.”

The illusion of control

Platforms love to pretend we are in control:

  • “Personalize your feed.”
  • “Tap ‘Not interested’.”
  • “Block accounts.”
  • “Filter keywords.”

But the reality is: the algorithm is a machine that can overwhelm you faster than you can protect yourself.

Violent content travels faster because it hooks attention.
It triggers emotion.
It makes people freeze and stare.

And platforms quietly profit from the fact that your nervous system reacts before your brain can think.

Then they put a tiny little “Not interested” button somewhere in the interface like a joke. As if one tap can fight a system designed to push the most intense material possible.

TikTok even admits users can long-press a video and select “Not Interested” to reduce unwanted recommendations. (TikTok Newsroom)
But anyone who’s actually been trapped in one of these cycles knows: the feed doesn’t just stop. It learns new ways to return.

Safety filters that punish harmless behavior

Now enter the next layer: automated moderation.

It’s supposed to protect people from graphic content, hate, exploitation, threats.

But increasingly it operates like a blind security guard with a checklist of “bad words,” not a thinking system that understands context.

So a screenshot meant for troubleshooting gets blocked because it contains a word associated with political conflict.

A user trying to escape violent content gets treated like they’re distributing it.
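To make that concrete, here is roughly what a keyword-only filter boils down to. This is a minimal sketch, not any platform's actual moderation code; the blocked terms and the function name are hypothetical, chosen only to illustrate the logic.

    # A sketch of keyword-only moderation (hypothetical terms and function
    # name). The point: it matches strings, not intent.

    BLOCKED_TERMS = {"venezuela", "protest", "police"}  # illustrative only

    def is_allowed(text: str) -> bool:
        """Reject any input containing a blocked term, regardless of context."""
        lowered = text.lower()
        return not any(term in lowered for term in BLOCKED_TERMS)

    # Someone asking for help gets the same verdict as someone posting a threat:
    print(is_allowed("How do I block this? The caption mentions Venezuela."))  # False
    print(is_allowed("A completely unrelated caption"))                        # True

A filter like this cannot tell the difference between a person asking how to block distressing content and a person distributing it; both trip the same string match.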

And the most maddening part?
You’re not even told why.

No explanation. No clarity. No appeal.

Just: “Not allowed.”

This isn’t real safety — it’s overcorrection.

It’s the digital version of banning books because they contain difficult topics, instead of teaching people how to engage with them.

