Xiaomi releases MiMo-V2-Pro: 1M context, 49 AA score, and Hunter Alpha identity confirmed
Xiaomi launched MiMo-V2-Pro through its own API and confirmed that the stealth Hunter Alpha model was an early internal build. That gives engineers a named, first-party model to compare directly on long-context coding and tool-use workloads.

TL;DR
- Xiaomi has released MiMo-V2-Pro as a text-only reasoning model with a 1M-token context window. Unlike MiMo-V2-Flash, it is not yet open weights and is served through Xiaomi's own API, according to Artificial Analysis and Xiaomi-linked release material surfaced in the Hunter Alpha reveal.
- On Artificial Analysis, MiMo-V2-Pro scored 49 on the Intelligence Index, up from MiMo-V2-Flash's 41, placing it between GLM-5 at 50 and Kimi K2.5 at 47 in the current ranking shown by the benchmark thread.
- The model's strongest public differentiator is agentic work: Artificial Analysis reports a 1426 Elo on GDPval-AA, while Xiaomi's release note quoted in the reveal post says it was trained with SFT and RL on "complex and diverse Agent scaffolds."
- Xiaomi also confirmed that the stealth "Hunter Alpha" model was an early MiMo-V2-Pro build, and OpenRouter plus client tools including OpenClaw, OpenCode, and Cline have already exposed it for hands-on testing.
What exactly shipped?
MiMo-V2-Pro is Xiaomi's new flagship reasoning model. Artificial Analysis lists a 1M-token context window, text-in/text-out only, and API pricing of $1 per 1M input tokens and $3 per 1M output tokens at the 256K tier, rising to $2/$6 at the 1M-token tier. The same benchmark thread says Xiaomi has "not yet released the weights," making this a shift from the earlier open-weights MiMo-V2-Flash release.
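The tiered pricing is easy to sanity-check with a small cost estimator. The sketch below assumes the tier is selected by how much context a single request uses; Xiaomi's actual tier-selection rule is not spelled out in the release material, so treat the boundary logic as an assumption:

```python
# Hypothetical cost estimator for MiMo-V2-Pro's reported tiered pricing:
# $1/$3 per 1M input/output tokens up to 256K context, $2/$6 at the 1M tier.
# The assumption that total request context picks the tier is ours, not Xiaomi's.

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    long_context = (input_tokens + output_tokens) > 256_000
    in_rate, out_rate = (2.0, 6.0) if long_context else (1.0, 3.0)
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# A 200K-token repo dump with a 4K-token answer stays in the cheap tier:
print(round(estimate_cost(200_000, 4_000), 3))  # 0.212
# A 600K-token context is billed at the 1M-tier rates:
print(round(estimate_cost(600_000, 4_000), 3))  # 1.224
```

The jump between tiers is roughly 2x per token, so for long-context work the economics favor trimming context below 256K where possible.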
The stealth identity question is now closed. Xiaomi-linked material in the reveal post says "Hunter Alpha shown below is an early anonymous version of MiMo-V2-Pro," and OpenRouter separately confirmed that Hunter Alpha and Healer Alpha map to MiMo-V2-Pro and MiMo-V2-Omni, respectively. That matters for engineers who were already A/B testing the anonymous model on routed APIs: the mystery leaderboard entry can now be tied to a named model family and a first-party endpoint.
How strong is it on agentic and long-context workloads?
On Artificial Analysis, MiMo-V2-Pro's overall score is 49, one point behind GLM-5 and ahead of Kimi K2.5 and Qwen3.5 397B A17B, per the main ranking and the detailed eval breakdown. The per-benchmark view is more useful than the aggregate for engineering decisions: MiMo-V2-Pro hits 47% on GDPval-AA, 39% on Terminal-Bench Hard, 63% on AA-LCR for long-context reasoning, 43% on SciCode, and 63% on IFBench.
The biggest claim is efficiency relative to peers. Artificial Analysis says MiMo-V2-Pro used 77M output tokens to run the Intelligence Index, versus 109M for GLM-5 and 89M for Kimi K2.5, with a total benchmark run cost of $348. The same analysis attributes part of its knowledge profile to lower hallucination: MiMo-V2-Pro scored +5 on AA-Omniscience, and the detailed post says that was driven by a 30% hallucination rate, down from 48% for MiMo-V2-Flash.
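The efficiency gap is straightforward arithmetic on the reported output-token counts, which this snippet reproduces:

```python
# Relative output-token efficiency on the Intelligence Index run,
# using the figures reported by Artificial Analysis.
pro = 77_000_000  # MiMo-V2-Pro output tokens for the full run
peers = {"GLM-5": 109_000_000, "Kimi K2.5": 89_000_000}

for model, tokens in peers.items():
    print(f"vs {model}: {1 - pro / tokens:.0%} fewer output tokens")
# vs GLM-5: 29% fewer output tokens
# vs Kimi K2.5: 13% fewer output tokens
```

Fewer reasoning tokens per benchmark point compounds with the per-token pricing above, which is how the $348 total run cost stays low relative to peers.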
Where can engineers try it now?
MiMo-V2-Pro is no longer limited to Xiaomi's own surface area. The model is live on OpenRouter, with direct model strings for OpenClaw such as openrouter/xiaomi/mimo-v2-pro, and the OpenRouter model page exposes routing and pricing details through that API layer.
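For a quick hands-on test, OpenRouter's standard OpenAI-compatible chat completions endpoint should work. Note the bare slug xiaomi/mimo-v2-pro is inferred here from the OpenClaw model string; check the OpenRouter model page for the canonical identifier before relying on it:

```python
# Minimal sketch of a MiMo-V2-Pro request via OpenRouter's
# OpenAI-compatible endpoint. The model slug is an assumption
# derived from the OpenClaw string "openrouter/xiaomi/mimo-v2-pro".
import json
import urllib.request

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    payload = {
        "model": "xiaomi/mimo-v2-pro",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_request("Summarize this repo's build system.", "sk-or-...")
# urllib.request.urlopen(req)  # network call omitted in this sketch
```

Routing through OpenRouter also makes it trivial to A/B the named model against whatever Hunter Alpha responses you logged during the stealth period.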
Tooling vendors moved quickly because developers had already been using the model in stealth. OpenCode said MiMo-V2-Pro and MiMo-V2-Omni are now free in OpenCode and described Pro as "~1T params" and "optimized for coding." Cline added one week of free access and highlighted a "78.0 on SWE-bench," which it framed as close to Claude Sonnet 4.6's 79.6. OpenRouter's own leaderboard post had already shown Hunter Alpha at #1 by usage before the reveal, suggesting the anonymous preview carried substantial real traffic before Xiaomi put its name on it.