Intruder created an AI agent that conducts pentests in minutes instead of days
London-based Intruder, a graduate of GCHQ's cyber accelerator, has launched AI agents that conduct corporate security pentests, replicating the full methodology of human testers in minutes rather than days.

A manual pentest costs $10–50k, takes weeks of preparation and coordination plus days of execution, and the report is often outdated before it has been fully read. This is why security at startups remains precarious: audits are expensive, and their results quickly lose relevance. London-based Intruder, a graduate of GCHQ's cyber accelerator, has launched AI agents that replicate the methodology of human testers but perform the work orders of magnitude faster and cheaper. It is billed as the first genuinely automated pentesting system that doesn't require manual script preparation.
How the AI Pentester Works
Intruder's AI sees the screen and browser exactly as a human does. It's not just a wrapper around existing scanners like Burp Suite: the system analyzes the interface, finds input fields, understands application logic, and methodically tries to break it. A typical run: the agent opens the target website, finds the login form, tries default credentials and common combinations (admin/password, root/root), and analyzes the server's response. It then explores the interface further, locates data-entry fields, and submits standard attack vectors into each: SQL injections ('; DROP TABLE users; --), XSS payloads (<script>alert(1)</script>), watching how the application reacts. It also probes access control: can a regular user reach the admin panel, and can one client see another client's data?
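The default-credential step described above can be sketched in a few lines. This is an illustrative assumption, not Intruder's actual code: the `submit` callable stands in for a real HTTP form submission, and the credential list is a typical sample.

```python
from typing import Callable, List, Tuple

# Common default credential pairs an agent might try first
# (illustrative sample, not an exhaustive or vendor-specific list).
DEFAULT_CREDENTIALS: List[Tuple[str, str]] = [
    ("admin", "password"),
    ("admin", "admin"),
    ("root", "root"),
    ("admin", "123456"),
]

def probe_default_logins(
    submit: Callable[[str, str], bool]
) -> List[Tuple[str, str]]:
    """Return every default credential pair the target accepts.

    `submit` abstracts away the actual login request: it takes a
    username and password and reports whether the login succeeded.
    """
    return [(u, p) for (u, p) in DEFAULT_CREDENTIALS if submit(u, p)]

# Usage against a mock target that accidentally ships with admin/admin:
mock_app = lambda user, pw: (user, pw) == ("admin", "admin")
weak = probe_default_logins(mock_app)
```

The same loop structure generalizes to the injection checks: swap the credential list for a payload list and the success predicate for a response analyzer.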
Main AI checks:
- Scanning open ports, services, and their versions
- Testing authentication: weak passwords, default credentials, 2FA bypass
- Checking authorization: horizontal and vertical privilege escalation
- Finding SQL injections, XSS, CSRF on all web forms
- Analyzing configuration: unprotected S3 buckets, exposed API keys, code leaks
- Generating a report with recommendations for fixing each vulnerability
The Economics of Pentesting Are Flipped
A professional pentester costs $50–150 per hour. A full audit of an average company requires a week of preparation, a week of testing, and a week of documentation: three weeks in total, $20–50k, and a report that may be outdated within a month as the code changes. AI handles the same work in minutes to hours depending on system size, and costs drop by an order of magnitude. For startups and SMBs this is a revolution: previously they simply couldn't afford professional audits; now security can be checked regularly, weekly or monthly, like routine regression tests. This fundamentally changes the security model from one-off checks to continuous scanning.
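A back-of-the-envelope calculation using the figures above illustrates the gap. The hourly rate and the tenfold AI discount are assumptions picked from within the ranges the article quotes.

```python
# Assumptions (within the article's stated ranges, not exact figures):
HOURLY_RATE = 100      # $/h, mid-range of the $50–150 quoted
HOURS_PER_WEEK = 40
MANUAL_WEEKS = 3       # preparation + testing + documentation

# Labor cost alone of one manual audit cycle.
manual_cost = HOURLY_RATE * HOURS_PER_WEEK * MANUAL_WEEKS

# "Order of magnitude" cheaper, per the article.
ai_cost = manual_cost // 10

# One manual audit buys roughly a year of monthly AI scans.
ai_scans_per_manual_audit = manual_cost // ai_cost
```

Under these assumptions a single manual engagement ($12,000 of labor) funds ten AI runs, which is what makes the shift from annual audits to monthly scanning affordable.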
Traps and Limitations
AI works well on typical vulnerabilities but is often blind to logic errors. If an application has non-standard business logic, say a flaw in the payment algorithm that allows purchases below the listed price, AI may miss it. It can also get stuck on CAPTCHAs, fail to penetrate heavy JavaScript obfuscation, and struggle with client-side WebAssembly. On systems requiring physical access or social engineering, AI is completely helpless. AI pentests therefore supplement, but don't replace, full audits of critical systems. Startups use AI as the first layer; banks and government agencies run traditional pentests with AI as a second opinion and for continuous scanning.
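The business-logic blind spot is easy to illustrate with a hypothetical example: a pricing bug involves no payload signature at all, only flawed logic, so pattern-driven scanning finds nothing to flag.

```python
# Hypothetical checkout bug of the kind a payload-driven scanner misses:
# quantities are never validated, so a negative-quantity line item
# offsets the cost of the rest of the cart.

def cart_total(items):
    """items: list of (unit_price, quantity) pairs.
    Buggy on purpose: quantity is not checked for qty > 0."""
    return sum(price * qty for price, qty in items)

# Legitimate order: two items at $50 each.
honest = cart_total([(50, 2)])

# Abusive order: the attacker appends a negative-quantity line
# and pays half price. No SQL, no XSS, nothing for a signature to match.
abusive = cart_total([(50, 2), (50, -1)])
```

Catching this requires understanding what the application is *supposed* to do, which is exactly where automated agents still fall short of a human tester.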
What This Means
AI in cybersecurity is moving from the lab to the front line. Security stops being a luxury of large corporations and becomes accessible to startups. The good news: fewer obvious vulnerabilities on the internet. The bad news: threat actors get the same AI tools for attacks and password cracking. The result is an escalation: defenders scan more often, attackers innovate faster. For companies the takeaway is simple: security is no longer a one-time expense but an ongoing practice.