Tool vendors add GPT-5.5 to Cursor, Databricks, Droid, and ml-intern within 24 hours
Independent tools and platforms shipped GPT-5.5 support within a day of the API rollout, spanning IDEs, hosted research agents, enterprise stacks, and coding agents. That shortens evaluation time because teams can test the model inside existing workflows instead of rebuilding around a single OpenAI surface.

TL;DR
- Within a day of the apparent API rollout, Cursor's GPT-5.5 post put the model into its IDE, Lewis Tunstall's ml-intern update wired it into Hugging Face's research agent, Databricks' availability post added it to its governed stack, and FactoryAI's Droid launch card shipped it in a mobile coding agent.
- The fastest public benchmark claim came from Cursor's announcement, which said GPT-5.5 was already leading CursorBench at 72.8%.
- Tunstall's thread made the integration story unusually concrete: in ml-intern, GPT-5.5 gets access to Hugging Face infrastructure including buckets, jobs, and repos, with the implementation linked in PR #118.
- The spread across a local-feeling agent shell (Droid), an IDE (Cursor), a hosted research agent (ml-intern), and an enterprise platform (Databricks) made this look less like a single-surface model drop and more like an immediate ecosystem rollout.
Cursor and Droid
You could test the model in two very different coding surfaces almost immediately: Cursor's launch post dropped GPT-5.5 into a mainstream IDE, while FactoryAI's Droid card showed GPT-5.5 and GPT-5.5 Pro inside a mobile-first agent interface.
Cursor attached a number to the launch. Cursor's post said GPT-5.5 was already the top model on CursorBench at 72.8%, while Droid's screenshot emphasized mode selection and direct access to both model variants.
ml-intern
The Hugging Face integration revealed the most about how teams might actually use GPT-5.5 in practice. Lewis Tunstall's thread said ml-intern gives the model access to Hugging Face buckets, jobs, and repos for AI research work, and linked the implementation in the GitHub PR.
That matters mostly because it moved the model past chat and into an agent harness with infrastructure permissions. Tunstall's post also noted multiple ML interns collaborating on the hub, which suggests GPT-5.5 landed inside an existing multi-agent workflow rather than as a standalone model toggle.
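The general pattern of an agent harness with infrastructure permissions can be sketched in a few lines. This is a hypothetical illustration of scoped tool access, not ml-intern's actual design: the tool names, scope strings, and `AgentHarness` class are all invented for the example.

```python
# Hypothetical sketch: an agent harness that gates a model's tool calls
# behind permission scopes (e.g. repos vs. buckets vs. jobs).
# Names and scopes are illustrative, not ml-intern's real implementation.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AgentHarness:
    """Maps tool names to callables, each gated by a required scope."""
    granted_scopes: set
    _tools: dict = field(default_factory=dict)

    def register(self, name: str, scope: str, fn: Callable) -> None:
        self._tools[name] = (scope, fn)

    def call(self, name: str, *args, **kwargs):
        scope, fn = self._tools[name]
        if scope not in self.granted_scopes:
            raise PermissionError(f"tool {name!r} requires scope {scope!r}")
        return fn(*args, **kwargs)

# Illustrative stand-in for a repo-listing operation.
def list_repo_files(repo_id: str) -> list:
    return [f"{repo_id}/README.md"]  # placeholder data

harness = AgentHarness(granted_scopes={"repos:read", "jobs:run"})
harness.register("list_repo_files", "repos:read", list_repo_files)
harness.register("upload_to_bucket", "buckets:write", lambda path: path)

print(harness.call("list_repo_files", "org/model"))  # allowed by repos:read
# harness.call("upload_to_bucket", "x")  # would raise PermissionError
```

The point of the shape is that the model never holds credentials directly; the harness decides, per call, whether a tool is inside the agent's granted scope.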
Databricks
Databricks' post added the enterprise version of the story: GPT-5.5 became available on Databricks with Codex coding workflows and model inference governed through Unity Catalog.
That is a different integration shape from Cursor or ml-intern. The pitch was not raw model access, but letting teams use GPT-5.5 inside Databricks' existing control plane for permissions, audit, and managed inference.