Cybercriminals complain about a flood of AI content on their forums
Cybercriminals have begun actively complaining about the constant flood of AI-generated content on their forums and dark web platforms. Hackers are irritated that low-quality, error-ridden posts are drowning out genuinely useful discussions.

On forums and dark web marketplaces where hackers and other cybercriminals discuss methods, tools, and strategies, AI-generated content has begun to appear en masse. Its quality often leaves much to be desired: AI-written posts contain errors, inaccuracies, and outdated information, and sometimes simply hallucinate details that never existed. Cybercriminals complain that this prevents them from finding genuinely useful discussions and valuable advice from experienced participants. The problem has intensified in recent months as content generation tools became easier to access. Now anyone can create dozens of forum posts in a matter of minutes, and many apparently do just that.
Why This Irritates Hackers
For underground communities, information is critical. When a hacker is looking for a new method to exploit a vulnerability or a tool recommendation, they need accurate, verified information that can be tested in practice. AI-generated content, often containing errors, outdated recommendations, or simply fabricated "facts," creates serious noise in discussions. Cybercriminals are forced to spend precious time filtering out useless posts to find real advice and proven methods from hackers with established reputations. This is particularly frustrating because in underground communities reputation is built over years, and AI-generated content dilutes all of that. The problem is especially acute on well-known dark web forums, where the volume of posts is growing while the quality of discussions is clearly declining.
Problems and Consequences
- Decline in quality of discussions due to low-quality AI-generated content
- Increased time needed to find useful information for experienced hackers
- Dilution of valuable advice by automatically generated posts
- Difficulty distinguishing real experience from automatically generated text
- Possible decline in activity of experienced users frustrated by the noise
What This Means
The problem of AI spam has now reached even the most closed and specialized corners of the internet. This shows that neural networks influence the information space everywhere, regardless of a platform's legality. For cybercriminals, it means urgently finding new ways to filter content and verify information — possibly by moving to private channels that require identity confirmation. For security overall, this could prove a useful side effect: cybercriminals will be forced to rely more on verified personal connections and recommendations from trusted sources.