Cursor reports Composer 2 is based on Kimi K2.5 after API model IDs surfaced
Cursor and Kimi confirmed that Composer 2 starts from Kimi K2.5, with continued pretraining and RL added on top, after developers spotted Kimi model IDs in Cursor's API traffic. Teams should benchmark it as a productized open-base stack, not a from-scratch model.

TL;DR
- Developers first spotted accounts/anysphere/models/kimi-k2p5-rl-0317-s515-fast in Cursor traffic, and Cursor later confirmed Composer 2 starts from Kimi K2.5 rather than a from-scratch base model, according to the API capture and Cursor's reply.
- Kimi said the integration is an authorized commercial partnership, with Cursor doing “continued pretraining & high-compute RL training” on top of Kimi K2.5 via Fireworks’ hosted stack, as described in Kimi's statement.
- Cursor says its training recipe was base-model selection via perplexity evals, then continued pretraining and a “4x scale-up” in RL, with Fireworks supplying inference and RL samplers, per the Cursor post.
- The disclosure matters because engineers evaluating Composer 2 should treat it as a heavily productized open-base stack, while the backlash centered on Cursor not naming Kimi in the launch blog until after the model IDs surfaced, as shown by Cursor's admission and the community criticism.
What did Cursor confirm?
The immediate trigger was a developer-circulated traffic capture whose request dump showed the model field accounts/anysphere/models/kimi-k2p5-rl-0317-s515-fast inside Cursor's /chat/completions call. The same capture also showed Cursor's coding-assistant system prompt and tool use, which made the claim testable rather than pure speculation.
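To show what made the capture testable, here is a minimal sketch of checking provenance from a request dump. The payload shape is assumed from the common OpenAI-style /chat/completions convention; only the model ID string comes from the reported capture, and the system-prompt text is a placeholder, not Cursor's actual prompt:

```python
import json

# Illustrative request body in the OpenAI-style /chat/completions shape.
# The "model" field is where the Kimi-derived ID surfaced in the capture.
captured_body = json.dumps({
    "model": "accounts/anysphere/models/kimi-k2p5-rl-0317-s515-fast",
    "messages": [
        {"role": "system", "content": "You are a coding assistant."},  # placeholder
    ],
})

# Provenance check: parse the captured payload and inspect the model field.
payload = json.loads(captured_body)
is_kimi_based = "kimi" in payload["model"].lower()
print(payload["model"], is_kimi_based)
```

Because the model field rides along in every completion request, a single intercepted call is enough to surface the base-model lineage, which is why the claim could be verified before either company commented.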
Kimi then confirmed that Composer 2 uses Kimi K2.5 as its foundation. In Kimi's words, Cursor added “continued pretraining & high-compute RL training” and accessed the model through Fireworks “as part of an authorized commercial partnership” (Kimi statement). Cursor's own follow-up matched that account, saying it was “a miss” not to name the Kimi base model in the original blog and that the team would “fix that for the next model” (Cursor's reply).
What changed technically in Composer 2?
Cursor says the stack started with base-model selection using “perplexity-based evals,” where Kimi K2.5 “proved to be the strongest” (Cursor training details). From there, the team says it ran continued pretraining and then “high-compute RL,” described as “a 4x scale-up,” with Fireworks providing both inference and “RL samplers” (Cursor training details).
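Perplexity-based base-model selection can be sketched as follows. The per-token log-probabilities below are toy numbers, not Cursor's eval data; the scoring is the standard definition, perplexity = exp of the mean negative log-likelihood over a held-out corpus, with the lowest-perplexity candidate winning:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(mean negative log-likelihood) over a token sequence."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# Toy per-token log-probs for the same held-out corpus under two candidate bases.
candidates = {
    "base-a": [-1.2, -0.8, -1.5, -0.9],
    "base-b": [-0.6, -0.7, -0.5, -0.8],  # higher log-probs -> lower perplexity
}

# Pick the base with the lowest perplexity, mirroring selection by eval score.
best = min(candidates, key=lambda name: perplexity(candidates[name]))
print(best, round(perplexity(candidates[best]), 3))
```

The appeal of this metric for base selection is that it needs no task harness: it only asks how well each candidate predicts representative text before any post-training is spent on it.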
That framing is important for engineers because it narrows what Composer 2 actually represents: not a net-new frontier pretraining run, but an aggressively adapted coding model built from an existing open-weight base plus post-training and product integration. Fireworks also appears in the rollout path beyond plain hosting; a Fireworks-linked post said “it's not just inference but also RL” on the platform (Fireworks launch RT). Separately, Cursor increased capacity right after launch, with team members posting “2x more usage all weekend” and “We're giving everyone 2x usage,” which suggests heavy demand during the release window (weekend capacity boost, 2x usage post).
Why the disclosure matters for engineering teams
For teams benchmarking coding agents, the practical takeaway is attribution and comparability. If Composer 2 is Kimi K2.5 plus continued pretraining, RL, and Cursor's agent product layer, then comparisons against other coding models should separate base-model quality from post-training, serving, and tool orchestration. Cursor itself now describes the result as “the strong base, CPT and RL, and Fireworks' inference and RL samplers” rather than a from-scratch model effort (Cursor's reply).
The controversy was mostly about disclosure, not licensing. Practitioners' criticism focused on the launch blog omitting a “direct reference to Kimi K2” (Transparency reaction), while broader reaction argued the issue only got addressed after community uproar (Trust criticism). That distinction matters operationally: the technical story is a credible open-base-to-product pipeline, while the process story is that model provenance became visible first through traffic inspection and only then through official confirmation (API sniff, Kimi statement).