5 Reasons Hackers Hate AI Slop Even More

The complaint sounds all too familiar. A user logs into their favorite online space and groans at the sight of new AI features being forced upon them. “I’m disappointed that you are working to incorporate AI garbage into the site,” one anonymous post reads. “No one is asking for this.” But this wasn’t a comment on a mainstream social media platform. It was a post on a cybercrime forum. This surprising twist reveals a growing tension in the underground. The outcry is not about privacy or job security. It is about something far more fundamental to their world: reputation, community, and signal.


The Great AI Backlash in the Underground

When ChatGPT launched in late 2022, security researchers braced for a wave of AI-powered cyberattacks. And they were right to be concerned. Both sophisticated hacking groups and low-level scammers rushed to experiment with the technology. They used it to write phishing emails, translate social engineering scripts, and even attempt to discover software vulnerabilities. The initial buzz on cybercrime forums was one of excitement and opportunity. It seemed like a perfect tool for criminals.

However, a recent study led by Ben Collier, a security researcher and senior lecturer at the University of Edinburgh, tells a different story. Collier, along with researchers from the University of Cambridge and the University of Strathclyde, analyzed a massive dataset of 97,895 AI-related conversations on underground forums from November 2022 to the end of 2023. What they found was a significant shift in sentiment. The initial enthusiasm quickly soured, replaced by frustration and outright hostility towards low-quality AI-generated content. The very people you would expect to embrace the technology are leading the charge against it.

This backlash provides a fascinating window into the social dynamics of cybercrime. It turns out that the elements that make AI useful to a novice—speed, ease, and generic knowledge—make it toxic in a community built on proven skill and human trust. Let’s explore the five specific reasons why this tension has erupted.

5 Reasons Hackers Hate AI Slop (And Why You Should Care)

The study highlights that cybercrime forums are not just marketplaces for stolen data and hacking services. They are intricate social ecosystems. Reputation is currency, and trust is built slowly over time. AI slop disrupts these delicate systems in several critical ways. Understanding these reasons helps us grasp the human side of cybersecurity.

1. Reputation Poisoning: Why Hackers Hate AI Slop

In the world of cybercrime forums, your reputation is everything. Users build their standing by sharing valuable insights, writing detailed tutorials, and helping others. Forum owners even hold writing competitions to encourage high-quality contributions. This social capital allows them to find partners, sell services, and avoid being scammed themselves. It is a meritocracy, or at least it tries to be.

AI slop completely undermines this system. When a new user posts a generic, bullet-pointed explainer generated by ChatGPT, it signals one thing: they are not a skilled professional. They are a “script kiddie” looking for a shortcut. As Ben Collier notes, “It undermines their claim to be a skilled person.” If anyone can generate a passable tutorial with a single prompt, the hard-earned reputation of a genuine expert becomes worthless. This is a direct threat to the social hierarchy of the forum.

Imagine you are a moderator on such a forum. A new account posts a ten-point list on “What is Phishing?” that reads like a textbook summary. A veteran member, who has spent years crafting custom attack vectors, reports it instantly. The veteran feels insulted: their years of effort are being equated to a few seconds of typing. This dynamic plays out thousands of times, creating a toxic atmosphere where genuine skill is devalued.

2. Signal vs. Noise: A Core Reason Hackers Hate AI Slop

Cybercrime forums thrive on high-signal, niche information. A seasoned hacker spends weeks crafting a new phishing script or finding a novel exploit. They share their findings to gain status and feedback. The value of the forum lies in this concentrated, hard-to-find knowledge. It is the opposite of a generic Google search.

AI-generated content, on the other hand, is mostly noise. It rehashes basic cybersecurity concepts that anyone can find with a simple search. The researchers found specific complaints about users dumping “bullet-pointed explainers” of fundamental topics. This low-quality content clutters the forum, making it harder to find the genuinely valuable posts. It degrades the entire community’s value proposition.

Think of it like trying to find a rare book in a library that has been flooded with pamphlets. The pamphlets are easy to produce, but they make the search nearly impossible. When the signal-to-noise ratio drops too low, the most talented members leave for greener (or more private) pastures. The forum dies a slow death, suffocated by its own low-effort content. This is a practical, business-driven reason for the hatred.

3. It Destroys the Social Fabric of the Community

It might seem strange to think of cybercriminals as social creatures, but these forums are deeply social spaces. People share jokes, argue about techniques, and build real relationships. They come for the human element as much as the illicit information. For many, it is a tribe.

One post cited in the study puts it perfectly: “If I wanted to talk to an AI chatbot, there are many websites for me to do so. I come here for human interaction.” When a thread is filled with robotic, AI-generated responses, it kills the conversation. It feels like spam. The visceral reaction from forum members—“Stop posting that AI garbage”—is a cry to preserve the human heart of their community.


Another user on Hack Forums expressed irritation that people using AI “don’t even take the time to write a simple sentence or two.” This complaint goes to the heart of the issue. The effort of writing a post is part of the social contract. It shows you are present and engaged. AI breaks that contract. It turns a conversation into a broadcast, and nobody wants to talk to a wall.

4. It Threatens the Forum’s Business Model and Viability

Forum administrators are feeling the pressure too. Many of these sites rely on advertising, donations, or premium memberships to survive, which means they need a steady stream of visitors and active users. Google’s AI Overviews, which summarize search results directly on the results page, have already started driving down traffic to these sites. Outside competition is squeezing them.

If the content on the forum is also low-quality AI slop, there is even less reason for anyone to visit. Why would a skilled hacker pay for a premium membership on a forum flooded with basic, AI-generated tutorials? They would not. The economic incentive for administrators is to maintain high quality, but the ease of AI generation makes moderation a nightmare.

Furthermore, some forum owners have tried to introduce paid AI features for their users. This has been met with fierce resistance. The anonymous poster who complained about “AI garbage” was specifically angry that the forum was charging for new AI features while ignoring basic site improvements. The users feel exploited. They see AI as a cash grab that ruins their experience, not a value-add.

5. The Ultimate Irony: Scammers Hate Being Scammed

There is a deep irony in all of this. These are people who make a living by deceiving others, yet they are furious when they feel deceived by low-effort content. They value craftsmanship and effort. A well-written phishing email takes skill. A sophisticated malware strain takes weeks of work. They respect the craft, even if the craft is illegal.

AI slop represents the opposite of craftsmanship. It is lazy, generic, and requires no skill to produce. It is a scam on the community itself. The outrage is not just about the quality of the information; it is about the disrespect shown to the community’s values. They want to interact with skilled adversaries, not a chatbot that regurgitates Wikipedia articles.

This hypocrisy is a powerful motivator. It forces forum members to call out low-quality content publicly to defend the standards of their community. By doing so, they reinforce the idea that skill and effort matter. If you cannot be bothered to write your own post, you are not a real hacker. You are just a tourist. And in the high-stakes world of cybercrime, tourists are a liability.

What This Backlash Teaches Us About AI and Community

The fact that hackers hate AI slop so much is a powerful reminder that AI has limits in social spaces. Technology alone cannot replace the nuance, trust, and human connection that make communities work—even criminal ones. For cybersecurity researchers, this backlash provides a unique window into the values of the underground. It shows that even in the darkest corners of the internet, people crave authenticity, skill, and genuine interaction.

It also serves as a warning for legitimate online communities. If cybercriminals are actively policing AI slop to protect their forums, mainstream platforms should take note. Unchecked AI-generated content can destroy the value of any social space. The fight against low-quality AI content is universal. It is a fight for the soul of online interaction, and it is a fight that every community must face.
