
OpenAI launched Daybreak for automated vulnerability discovery in code

Source: The Verge. Collage: Hamidun News.

OpenAI has launched Daybreak, an initiative for automatically detecting and fixing code vulnerabilities before attackers can exploit them. It is a direct response to Anthropic's recently introduced security-focused AI, Claude Mythos.

## How Daybreak Works

Daybreak uses the AI agent Codex Security, which OpenAI launched back in March of this year. The agent analyzes an organization's source code and builds a threat model: a detailed map of the attack paths hackers could exploit to infiltrate the system or gain unauthorized access. The system's key feature is that, unlike SAST tools, it doesn't stop at flagging bugs through pattern scanning: it then validates their real-world danger, checking whether each identified vulnerability can actually be exploited under the concrete conditions of the specific application and its infrastructure.
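The validation step described above can be sketched conceptually. The following Python snippet is a hypothetical illustration, not OpenAI's actual API: the `Finding` fields and the `validate` logic are assumptions showing how a tool might filter raw scanner hits down to findings that are reachable and attacker-controlled in context.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """A raw result from static scanning (fields are hypothetical)."""
    rule_id: str
    location: str
    reachable: bool         # is the flawed code reachable from an entry point?
    input_controlled: bool  # can an attacker influence the relevant input?

def validate(findings):
    """Keep only findings that look exploitable in context.

    A plain SAST scan stops at pattern matches; the validation step
    sketched here additionally requires the flawed code to be reachable
    and attacker-influenced before it is reported.
    """
    return [f for f in findings if f.reachable and f.input_controlled]

raw = [
    Finding("sql-injection", "api/search.py:42",
            reachable=True, input_controlled=True),
    Finding("sql-injection", "scripts/migrate.py:7",
            reachable=False, input_controlled=False),
]
confirmed = validate(raw)
print([f.location for f in confirmed])  # → ['api/search.py:42']
```

In this sketch the second hit, a dead internal script no attacker can reach, is dropped before it ever reaches an engineer's queue.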

This sharply reduces the number of false positives, which typically frustrate engineers. After validation, Daybreak automates the detection of the most critical classes of vulnerabilities and helps engineers prioritize fixes based on risk level and probability of exploitation.
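Risk-based prioritization of this kind can be illustrated with a minimal sketch. The severity weights and probability values below are assumptions for demonstration, not figures from OpenAI; the idea is simply to rank validated findings by severity weighted by estimated likelihood of exploitation.

```python
# Hypothetical risk-based triage: rank validated findings by
# severity weight multiplied by estimated probability of exploitation.
SEVERITY_WEIGHT = {"critical": 10, "high": 7, "medium": 4}

def prioritize(findings):
    """Sort (name, severity, exploit_probability) tuples by descending risk."""
    def risk(finding):
        name, severity, p_exploit = finding
        return SEVERITY_WEIGHT[severity] * p_exploit
    return sorted(findings, key=risk, reverse=True)

queue = prioritize([
    ("hardcoded-secret", "medium", 0.9),       # risk 3.6
    ("rce-deserialization", "critical", 0.2),  # risk 2.0
    ("auth-bypass", "high", 0.7),              # risk 4.9
])
print([name for name, _, _ in queue])
# → ['auth-bypass', 'hardcoded-secret', 'rce-deserialization']
```

Note how a "critical" but hard-to-exploit flaw ranks below an easily exploited medium-severity one; that is the point of weighting severity by exploitability.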

## What's Included in Daybreak

* Complete analysis of source code and application architecture
* Threat models built from typical exploitation paths
* Validation of identified vulnerabilities against real-world exploitation criteria
* Automatic detection of critical-, high-, and medium-risk issues
* Integration into the CI/CD development pipeline

## Competition Between OpenAI and Anthropic

The launch of Daybreak coincides with the emergence of Claude Mythos from Anthropic, a security-focused AI model that its creators consider too dangerous for public release or demonstration. Instead of opening the model to public access and public APIs, Anthropic shared it only with select partners as part of its own Project Glasswing initiative. This is a significant moment in the history of AI security.

Both major AI companies have almost simultaneously invested in security-focused solutions and frameworks. This signals that corporate cybersecurity is becoming a key strategic market for AI firms. Both sides understand the reality: companies urgently need tools that identify vulnerabilities before hackers find them.

The difference in strategies is notable: OpenAI is pursuing a fully public product with access for all Pro subscribers, while Anthropic chose a more conservative model with limited access and deep partnerships only with major enterprise clients.

## What This Means

For developers and IT teams, this represents a fundamental change in the development cycle. Instead of waiting for human code review or running expensive penetration tests before a production release, engineers will be able to use an AI agent to catch critical vulnerabilities early, right as the code is being written. That should sharply reduce the probability of successful attacks after release and cut incident-response costs. For OpenAI and Anthropic themselves, it is another step in the migration of AI models from research laboratories into practical business, where the money is real and the stakes are high.

Hamidun News
AI news without the noise. A daily editorial selection from 400+ sources. A product of Zhemal Khamidun, Head of AI at Alpina Digital.