Against the Slop
Why Your Revulsion to Garbage AI Content Isn't the Uncanny Valley
You know that feeling when you scroll past an AI-generated video and something in your gut recoils? Maybe it’s a fake celebrity endorsement. Maybe it’s a suspiciously smooth “news” clip. Maybe it’s just another piece of synthetic content clogging your feed, unlabeled and masquerading as real.
You might think you’re experiencing the uncanny valley, that famous phenomenon where almost-human things trigger instinctive disgust. But I’m here to tell you: what you’re feeling is something deeper, more important, and far more justified.
You’re not experiencing aesthetic discomfort. You’re experiencing moral disgust.
And you should lean into it.
The Slop Economy
Let’s be clear about what we’re dealing with. “AI slop” isn’t just a cute term for bad AI content. It’s a specific phenomenon: synthetic media created cheaply, deployed at scale, and designed to deceive or exploit. It’s the content equivalent of spam email or robocalls, except it looks like your reality.
The slop comes in many flavors:
Fake celebrity videos selling you crypto scams
Synthetic “documentary” footage of events that never happened
AI-generated news anchors reading fabricated stories
Engagement bait designed to trigger shares and comments
Historical “photos” that rewrite the past
Deepfakes of real people saying things they never said
What makes it slop isn’t the AI generation itself. It’s the intent. It’s the deliberate choice to create synthetic content and pass it off as real. To flood the information commons with forgeries. To exploit the gap between what AI can generate and what most people can detect.
This Isn’t About Technology, It’s About Deception
Here’s what defenders of unlabeled AI content don’t want you to understand: the problem isn’t that the technology exists. The problem is the choice not to label it.
Every piece of unlabeled AI content represents a decision. Someone looked at their synthetic creation and thought: “I could be honest about what this is... or I could let people think it’s real.”
That’s not innovation. That’s fraud.
Think about it: we require nutrition labels on food. We require disclosures on advertising. We prosecute counterfeiting. We understand that in a functioning society, people need to know what they’re consuming and who’s trying to persuade them.
But somehow, when it comes to synthetic media that can impersonate people, fabricate events, and manipulate millions, we’re supposed to shrug and call it progress?
The Epistemic Collapse
If you work in tech, digital media, or anywhere near the information ecosystem, you’re probably feeling something that others aren’t quite feeling yet: panic.
Because you can see where this goes.
Every unlabeled piece of AI slop makes everything else a little less trustworthy. Every fake video that fools people makes the next real video less credible. Every synthetic “photograph” poisons the well of visual evidence.
We’re watching the epistemological foundation of shared reality crumble in real-time. And the people creating unlabeled slop are the ones with the sledgehammers.
This is why your revulsion intensifies even as the technology “improves.” Better AI doesn’t solve the problem; it makes it catastrophically worse. A convincing fake is more dangerous than an obvious one. A lie that fools millions is more destructive than a lie nobody believes.
You’re not irrationally resistant to progress. You’re watching a trust infrastructure get demolished for engagement metrics and pocket change.
The Trained Eye’s Curse
Once you know what to look for, you can’t unsee it. The telltale smoothness of AI motion interpolation. The dreamlike logic of scene transitions. The slightly-wrong physics of fabric and hair. The way faces warp and smear in peripheral detail.
Working in digital spaces, you’ve developed pattern recognition that most people don’t have yet. You’re the canary in the coal mine, and what you’re detecting is poison gas.
But here’s the curse: you can spot the slop, but you can’t protect everyone else from it. You watch people share obvious fakes. You see synthetic content go viral. You know that millions are being deceived, and there’s very little you can do except shout into the void: “This isn’t real!”
That sense of helplessness compounds the disgust. You’re not just offended by the deception, you’re frustrated by your inability to stop it at scale.
The Moral Clarity We Need
So let’s be absolutely clear about the moral framework here:
Unlabeled AI-generated content that mimics reality is unethical. Full stop.
It doesn’t matter how impressive the technology is. It doesn’t matter if the creator thinks it’s “obvious” that it’s fake. It doesn’t matter if they claim it’s “just for entertainment.”
If you create synthetic media that looks real and you don’t clearly label it as AI-generated, you are:
Deceiving your audience, even if only some of them are fooled
Eroding epistemic trust in media and information
Contributing to the slop economy that degrades the information commons
Disrespecting people’s right to know what’s real
Potentially enabling manipulation, fraud, and harm
This isn’t gatekeeping. This isn’t technophobia. This is basic ethical behavior in an age where synthetic media is trivially easy to create and catastrophically easy to spread.
What the Revulsion Is Telling You
That visceral response you feel when you encounter unlabeled AI slop? That’s not a bug in your psychology. That’s a feature.
Your disgust is your moral immune system recognizing a pathogen. It’s your bullshit detector working overtime. It’s your values (honesty, authenticity, respect for truth) recoiling from their violation.
Don’t pathologize that response. Don’t let people tell you you’re “overreacting” or “afraid of AI.” You’re having an appropriate emotional reaction to something genuinely wrong.
The contempt you feel toward creators who deliberately deceive? That’s justified. They deserve it. They’ve earned it by choosing to poison the well we all drink from.
Embracing the Grand Canyon
Maybe your uncanny valley is wider than most people’s. Maybe you notice the tells earlier. Maybe you feel the revulsion more intensely. Maybe, like me, you’d call it more of an Uncanny Grand Canyon.
Good.
We need people who are sensitive to this. We need people whose alarm bells go off early and loud. We need people who refuse to normalize the slop, who push back against the “it’s just harmless fun” deflections, who insist on honesty and labeling.
Because here’s the thing: most people won’t develop this level of sensitivity until it’s too late. By the time the average person can reliably spot AI slop, the damage will be done. The trust will be gone. The information ecosystem will be so polluted that nobody knows what’s real anymore.
You’re an early warning system. Act like it.
What We Can Do
This isn’t a counsel of despair. There are things we can actually do:
As creators: Label everything. Always. No exceptions. If you generate synthetic media, mark it clearly and permanently. Make labeling part of your creative ethics. (For one way to bake a label into the file itself, see the sketch after this list.)
As platforms: Require disclosure. Enforce labeling. Penalize deception. Make it harder to spread unlabeled slop than to spread labeled synthetic content.
As consumers: Trust your gut. When something feels off, investigate. Share your skepticism. Normalize asking “Is this real?” Make it socially acceptable, even admirable, to question and verify.
As a community: Build detection tools. Share telltales. Educate others. Create a culture where unlabeled AI content is seen as what it is: an ethical violation.
As citizens: Support regulation that requires disclosure. Push for laws that treat deceptive synthetic media as fraud. Demand accountability from platforms and creators.
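To make “mark it clearly and permanently” concrete, here is a minimal sketch of what a file-level label could look like. It uses Python and Pillow to write a disclosure tag into a PNG’s metadata at generation time, and to check for that tag on the consumer side. The tag name and helper functions are my own illustration, not any standard, and bare metadata like this is trivially stripped, so treat it as a floor; signed provenance schemes such as C2PA’s Content Credentials are the durable version of the same idea.

```python
# Minimal sketch: stamp an AI-disclosure tag into a PNG at generation
# time, and check for it on the consumer side. The tag name is
# illustrative, not an official field of any standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

DISCLOSURE_KEY = "ai-generated"  # hypothetical tag name, not a standard


def save_with_disclosure(img: Image.Image, path: str, tool: str) -> None:
    """Save a synthetic image with an explicit AI-generation label."""
    meta = PngInfo()
    meta.add_text(DISCLOSURE_KEY, "true")
    meta.add_text("ai-generator", tool)  # record which tool produced it
    img.save(path, pnginfo=meta)


def is_disclosed(path: str) -> bool:
    """Return True if the image carries the disclosure tag."""
    with Image.open(path) as img:
        return img.info.get(DISCLOSURE_KEY) == "true"


if __name__ == "__main__":
    synthetic = Image.new("RGB", (64, 64), "gray")  # stand-in for generated output
    save_with_disclosure(synthetic, "output.png", tool="example-model-v1")
    print(is_disclosed("output.png"))  # True
```

The mechanism matters less than the habit: disclosure should be written at generation time, by default, not bolted on after someone complains.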
The Line in the Sand
The battle over AI-generated content isn’t really about technology. It’s about honesty.
It’s about whether we’re going to build a future where synthetic media is transparent and disclosed, or whether we’re going to sleepwalk into a reality where nothing can be trusted and everything might be fake.
Your revulsion is trying to tell you something. It’s telling you that we’re at a critical juncture. It’s telling you that the choices we make now about disclosure and labeling will shape the information ecosystem for decades.
It’s telling you that this matters.
So the next time someone tells you to relax about unlabeled AI slop, to stop being so sensitive, to accept that “this is just how things are now,” remember this:
Your disgust is appropriate. Your contempt is earned. Your insistence on honesty and labeling is the right position.
Don’t let anyone talk you out of it.
The slop economy depends on people like you getting tired, giving up, accepting the new normal. Your refusal to do that is a form of resistance: your continued revulsion, your vocal contempt, your insistence that deception is wrong.
We need more people to feel what you feel. To see what you see. To refuse what you refuse.
We need more Uncanny Grand Canyons.
What’s your relationship with AI-generated content? Do you find yourself increasingly sensitive to the tells? Has your tolerance for unlabeled synthetic media dropped even as the technology improves? I want to hear from others who feel this moral disgust… and from people who are just starting to notice the slop. Let’s build a community of people who refuse to normalize deception.