
OpenAI expanded Trusted Access for Cyber and added GPT-5.5-Cyber for defenders

Source: OpenAI Blog. Collage: Hamidun News.

OpenAI has expanded its Trusted Access for Cyber program and added the GPT-5.5 and GPT-5.5-Cyber models to it. The company is betting on controlled access for verified cybersecurity specialists who need to find vulnerabilities faster and reduce risks to critical infrastructure.

Who Gets Access

Based on the announcement, this is not about a broad rollout for all users, but about expanding a separate access channel. Trusted Access for Cyber is a format in which powerful models are given not to random enthusiasts, but to verified defenders: vulnerability researchers, blue team specialists, incident response teams, and other security community participants whose work involves the actual protection of systems. For OpenAI, this is a way to strengthen the useful application of the model while keeping riskier scenarios under additional control.

"Verified defenders will be able to research vulnerabilities faster

and protect critical infrastructure."

The program's name itself shows OpenAI's logic: access is scaled not on the principle of "first to everyone, then we'll sort it out," but through verification and selection. In cybersecurity, this is especially important because the same tool can help both defenders and attackers. That's why controlled expansion here looks not like a marketing formality, but part of the product design. The company is clearly trying to expand the practical utility of models for security teams without removing restrictions where a dual effect is possible.

Why GPT-5.5-Cyber Is Needed

The separate mention of GPT-5.5-Cyber is important in itself. If GPT-5.5 is a universal model, then the Cyber version, judging by its positioning, is oriented toward cybersecurity domain tasks: vulnerability analysis, parsing technical descriptions, assistance in research scenarios, and acceleration of protective processes. This doesn't necessarily mean a completely new product for the mass market. Rather, OpenAI is showing that security is becoming an independent vertical direction where not only general LLM capabilities are needed, but also adjustments for specific workflows.

In practice, such access is needed where a team has a lot of routine analytics and little time for initial analysis. The model can accelerate auxiliary stages without replacing the expert or removing their responsibility for the decision. This is especially noticeable in tasks where you need to quickly gather context, identify weak points, and prepare hypotheses for further manual checking.

In this logic, scenarios like the following are useful:

  • quick initial triage assessment of vulnerability reports
  • summarization of long technical logs and advisory documents
  • search for probable weak points in configurations and code
  • preparation of defensive hypotheses for SOC and incident response teams
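As a concrete (if simplified) illustration of the first scenario, a triage helper might rank incoming vulnerability reports by severity and exposure before an analyst looks at them. Everything below, from the field names to the scoring weights, is a hypothetical sketch for illustration, not part of any OpenAI model or API:

```python
from dataclasses import dataclass

# Hypothetical report structure; the fields and weights are assumptions
# made for this sketch, not a real vendor schema.
@dataclass
class VulnReport:
    title: str
    cvss: float            # CVSS base score, 0.0-10.0
    asset_exposed: bool    # asset reachable from the internet
    exploit_public: bool   # public exploit code is known to exist

def triage_score(r: VulnReport) -> float:
    """Combine raw severity with exposure context into one priority score."""
    score = r.cvss
    if r.asset_exposed:
        score += 2.0       # internet-facing assets get looked at first
    if r.exploit_public:
        score += 3.0       # a known exploit raises urgency sharply
    return score

def triage(reports: list[VulnReport]) -> list[VulnReport]:
    """Return reports ordered from most to least urgent."""
    return sorted(reports, key=triage_score, reverse=True)

reports = [
    VulnReport("Outdated TLS config", 5.3, False, False),
    VulnReport("RCE in edge gateway", 9.8, True, True),
    VulnReport("SQLi in internal tool", 8.2, False, True),
]
for r in triage(reports):
    print(f"{triage_score(r):5.1f}  {r.title}")
```

In a real workflow, a model like GPT-5.5-Cyber would plausibly sit one step earlier, extracting fields such as these from free-text reports, while the prioritization and the final decision stay with the analyst.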

The most important signal here is not in the model name, but in who it is given to and why. OpenAI is not selling a story about "AI for hackers"; on the contrary, the emphasis is on verified defenders. This means the priority remains scenarios where the model saves time for experienced specialists: it helps them move faster from raw signal to testable hypothesis, and from hypothesis to action. For the industry, this could be more useful than another general chatbot without security context.

Balance of Benefit and Control

The story with Trusted Access shows how AI companies' approach to sensitive domains is changing. The stronger models become, the harder it is to pretend that a single set of rules for all cases is sufficient. Cybersecurity is exactly the area where the value of a tool is high, but the cost of error is also high. If a model helps investigate vulnerabilities, it should be embedded in processes where there is responsibility, user verification, and clear context of application. Without this, any "usefulness" quickly turns into a management risk.

For defenders of critical infrastructure, this is especially relevant. Such organizations have long update cycles, complex IT and OT landscapes, high regulatory burden, and low tolerance for failure. Even a small acceleration in vulnerability analysis, exposure checking, or recommendation preparation can have a noticeable effect. But predictability, audit, and access restriction are equally important. That's why a model run through a trusted channel is more logical here than unlimited access without filters and verification.

What This Means

OpenAI is effectively cementing a new format for deploying powerful models into sensitive industries: first, controlled access for verified teams, then possible expansion. For the market, this is a signal that AI in security will develop not only through stronger models, but also through admission modes, domain specialization, and a stricter operational framework. This scheme seems to be becoming the baseline for how AI works with high-risk tasks.

Hamidun News
AI news without the noise. A daily editorial selection from 400+ sources. A product of Zhemal Khamidun, Head of AI at Alpina Digital.