OpenAI rolls out GPT-5.5-Cyber limited preview for critical-infrastructure defenders
OpenAI introduced GPT-5.5-Cyber in limited preview for defensive security teams and paired it with GPT-5.5 with Trusted Access for Cyber. The release matters because OpenAI is separating cyber-specific access and permissiveness from general-model access rather than treating security work as a normal prompting mode.

TL;DR
- OpenAI split its cyber rollout into two access paths: standard GPT-5.5 with Trusted Access for Cyber for most defensive work, and a separate GPT-5.5-Cyber limited preview for more permissive authorized workflows, according to cryps1s' launch thread and TheRealAdamG's access matrix screenshot.
- The company is framing GPT-5.5-Cyber less as a smarter cyber model and more as a differently gated one: deredleritt3r's read of the launch materials said CyberGym scores are roughly the same, while the model is trained to be more permissive on security tasks.
- OpenAI's published use-case split is unusually explicit: GPT-5.5 with TAC covers secure code review, vuln triage, malware analysis, detection engineering, and patch validation, while GPT-5.5-Cyber is reserved for authorized red teaming, penetration testing, and controlled validation, per the access table and the launch thread.
- The rollout is tied to government and critical-infrastructure deployment, with deredleritt3r's earlier Chris Lehane update saying CAISI is testing the model and OpenAI is working with the White House on a deployment playbook for governments, allies, and infrastructure operators.
You can read OpenAI's launch post and skim the three-tier access table; the most concrete product clue before launch was a Codex warning banner that told users flagged for cyber-risk prompts to join the Trusted Access for Cyber program. One early analysis also surfaced the key wrinkle fast: OpenAI says GPT-5.5-Cyber is not meant to be dramatically more capable than GPT-5.5, just more permissive in narrower authorized settings.
Access tiers
OpenAI is shipping cyber access as a ladder, not a single feature flag.
The table in the launch screenshot breaks the rollout into three levels:
- GPT-5.5 default: standard safeguards for general-purpose, developer, and knowledge work.
- GPT-5.5 with TAC: more precise safeguards for verified defensive work in authorized environments.
- GPT-5.5-Cyber: the most permissive behavior, but only with stronger verification and account-level controls.
That structure is the story. OpenAI is separating normal model access, verified defensive work, and specialized offensive-style validation into different products and policy envelopes, rather than treating all security work as ordinary prompting inside one model.
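The ladder described above can be sketched as a simple account-level routing policy. This is a hypothetical illustration only: the tier names, verification flags, and routing logic here are our own shorthand for the published table, not OpenAI's actual API or enforcement mechanism.

```python
from enum import Enum


class AccessTier(Enum):
    # Hypothetical labels mirroring OpenAI's three-tier table.
    DEFAULT = "GPT-5.5"              # standard safeguards, general-purpose work
    TAC = "GPT-5.5 with TAC"         # verified defensive work in authorized environments
    CYBER = "GPT-5.5-Cyber"          # most permissive; stronger verification and controls


def route_request(verified_defender: bool, cyber_preview_member: bool) -> AccessTier:
    """Return the most permissive tier an account qualifies for.

    The flags are illustrative stand-ins for whatever verification
    and account-level controls OpenAI actually applies.
    """
    if cyber_preview_member:
        return AccessTier.CYBER
    if verified_defender:
        return AccessTier.TAC
    return AccessTier.DEFAULT
```

The point of the sketch is that permissiveness attaches to the account's verification state, not to the prompt: an unverified account asking a security question stays in the default lane.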
Permissiveness, not a benchmark jump
The sharpest caveat came from deredleritt3r's summary of the launch materials, which said GPT-5.5-Cyber is "not intended" to be significantly better at cybersecurity than GPT-5.5.
According to that thread, CyberGym scores are about the same for both models. The differentiator is that GPT-5.5-Cyber is "primarily trained to be more permissive on security-related tasks."
That matches the use-case split in cryps1s' announcement. GPT-5.5 with TAC is the starting point for secure code review, vulnerability triage, detection engineering, malware analysis, and patch validation. GPT-5.5-Cyber moves into authorized red teaming, penetration testing, and controlled validation.
OpenAI also said, via the alpha-testing note, that the model has already been used to scale automated red teaming of critical systems and validate high-severity vulnerabilities. A technical deep dive is still pending.
Government and infrastructure rollout
The preview population is narrow by design. gdb's post said GPT-5.5-Cyber is in limited preview for defenders securing critical infrastructure, and deredleritt3r's earlier update adds that CAISI is already testing it.
The same update said OpenAI is working with the White House and broader administration on a responsible deployment strategy, including how to provide these capabilities to the US government, state and local governments, allies, and critical infrastructure operators.
That makes the launch read more like a controlled access program than a normal model release. Even the public framing in OpenAI's announcement centers verification, account controls, and who gets access first.
Codex already showed the gating
Before the formal announcement, petergostev's Codex screenshot showed what this policy looks like in product.
The banner says the chat was flagged for possible cybersecurity risk, tells the user to rephrase or submit feedback if the flag is wrong, and points them to Trusted Access for Cyber for authorized security work. That is a pretty direct product artifact of the access split OpenAI has now made official.
The screenshot also suggests the cyber policy is not confined to one standalone security product. It is surfacing inside general coding surfaces, where OpenAI has to decide whether a request stays in the default lane or gets routed into a verified cyber lane.