Google stopped the first zero-day exploit developed with AI
Google Threat Intelligence Group detected the first zero-day exploit developed by cybercriminals with AI. The vulnerability would have allowed attackers to bypass two-factor authentication

For the first time, Google has discovered a zero-day exploit developed by cybercriminals using artificial intelligence. The threat was intended for a mass, coordinated cyberattack, which was stopped before it could be carried out.
Threat to administrators
Google Threat Intelligence Group (GTIG) identified a serious threat from a group of cybercriminals who planned to use a zero-day vulnerability to bypass two-factor authentication. The target of the attack was an open-source web-based system administration tool widely used in the corporate sector. The vulnerability would have granted full administrative access to systems across numerous organizations.
The cybercriminals were preparing a coordinated attack that would target multiple victims simultaneously. This was not just the development of a single exploit, but a full-scale operation aimed at mass exploitation. According to the attackers' plan, bypassing two-factor protection would have opened the door to all administration systems in affected organizations.
Such a scenario could have led to the compromise of thousands of companies.
How they discovered AI involvement
Google researchers found clear signs in the exploit's source code indicating it was created with the help of a neural network. Analysis of the Python script revealed artifacts typical of LLM-generated content:
- Hallucinated CVSS scores — incorrect, fabricated vulnerability severity values that do not correspond to actual risk
- Structured formatting — the code was formatted in textbook style, with unusual regularity for this type of exploit
- Atypical writing style — comments and variable naming followed conventions characteristic of large language models
- Unusual logic — some fragments of the script used odd sequences of operations that work, but look unnatural to a human developer
Such signs appear when LLMs generate code for specialized tasks without fully understanding security context and exploit requirements. Neural networks can follow syntax, but don't always grasp semantics.
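The artifacts listed above can be checked for mechanically. The sketch below is a hypothetical illustration, not GTIG's actual tooling: it flags two of the signals the report describes, a "hallucinated" CVSS score outside the valid 0.0–10.0 range, and an unusually high comment density (LLM output tends to narrate every step). The `llm_artifact_signals` function and the sample snippet are both invented for this example.

```python
import re

# Valid CVSS scores fall in [0.0, 10.0]; anything outside that range
# in a code comment is a fabricated ("hallucinated") value.
CVSS_RE = re.compile(r"CVSS[^0-9]{0,10}(\d+(?:\.\d+)?)", re.IGNORECASE)

def llm_artifact_signals(source: str) -> dict:
    """Crude heuristics over source text; illustrative only."""
    lines = source.splitlines()
    comment_lines = sum(1 for l in lines if l.lstrip().startswith("#"))
    density = comment_lines / max(len(lines), 1)
    bogus_cvss = [
        float(m.group(1))
        for m in CVSS_RE.finditer(source)
        if not 0.0 <= float(m.group(1)) <= 10.0
    ]
    return {
        "comment_density": round(density, 2),
        "hallucinated_cvss": bogus_cvss,
    }

# Hypothetical snippet with the textbook-style narration and an
# impossible CVSS value the researchers describe.
sample = '''\
# Exploit for CVE-XXXX-XXXX, CVSS: 12.8 (critical)
# Step 1: build the bypass payload
payload = build_payload()
# Step 2: send it to the admin endpoint
send(payload)
'''
print(llm_artifact_signals(sample))
# A CVSS of 12.8 is impossible, so it lands in hallucinated_cvss.
```

Real attribution combines many weak signals like these with manual analysis; no single heuristic is conclusive.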
"This demonstrates a dangerous evolution in cyber threats. If previously AI mainly helped with mass phishing attacks, now it helps create serious, targeted exploits," Google researchers note in their report.
Danger of lowered entry barrier
This is the first documented case where generative AI was used to develop a zero-day exploit. Previously, artificial intelligence was applied in cyberattacks, but mainly for automating phishing campaigns, creating fake profiles, and social engineering. A zero-day exploit is an entirely different level of danger.
The main consequence of this discovery is that the entry barrier for developing serious cyber threats has dropped dramatically. Cybercriminals no longer need to hire experienced developers with knowledge of web application architecture and vulnerabilities; they can simply ask ChatGPT, Claude, or another large language model for help with exploit code. Even if the code contains errors (such as the hallucinated CVSS scores), the exploit can still be functional and dangerous. This means the number of cyber threats could rise sharply as development becomes more accessible.
What this means for companies
Google's discovery points to an acceleration of the full cyber threat development cycle. Companies now need to respond even faster to new vulnerabilities and apply critical-level patches promptly; postponing patching even for a week can be dangerous. For large organizations, this means investing not only in perimeter security, but also in anomaly monitoring, quick detection of suspicious traffic inside the network, and incident readiness. AI-developed exploits may be less sophisticated, but there will be more of them, and they will emerge faster.