KAT-Coder-Pro V2 reached 44 on the Artificial Analysis Intelligence Index, up 8 points from V1, with reported gains over V1 in token efficiency, cost, speed, and hallucination rate. The release shows a non-reasoning coding model posting frontier-adjacent results with much lower output-token use.

KwaiKAT's KAT-Coder-Pro V2 is a text-in/text-out coding model with a 256K context window, available through StreamLake and AtlasCloud, and Artificial Analysis ranks it just behind Claude Opus 4.6 among non-reasoning models (launch thread). The headline number is the 44 Intelligence Index score, but the more implementation-relevant change is where the gains came from: Artificial Analysis says V2 now posts 90% on Tau2-Telecom tool use, 49% on Terminal-Bench Hard, and a 1123 GDPval-AA score after a large jump in agentic evaluations (full breakdown).
The tradeoff is that V2 did not improve uniformly. Artificial Analysis says the model regressed on long-context reasoning and knowledge recall versus V1, falling 8 percentage points on AA-LCR to 66% and 17 points on HLE to 16% (launch thread). Even so, its hallucination profile improved: Artificial Analysis reports a -22 Omniscience score, up 15 points from V1, with the gain coming primarily from reduced hallucinations rather than a higher share of correct answers (omniscience post).
The release matters because V2 posts frontier-adjacent coding numbers without a reasoning pass. Artificial Analysis says that design keeps output-token use to about 8.7M for the full index, versus roughly 11M for Claude Opus 4.6, 14M for Claude Sonnet 4.6, and far more for reasoning models such as DeepSeek V3.2 and Qwen3.5 397B A17B (token breakdown). That lower token budget helps explain why the model can hit about 109 output tokens per second, versus 39 for Claude Opus 4.6 and 43 for Claude Sonnet 4.6 in non-reasoning mode, and, in Artificial Analysis' phrasing, deliver "one of the fastest end-to-end response times" because there is no extra reasoning delay (speed post).
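To see how the smaller token budget and higher throughput compound, a back-of-the-envelope sketch using the figures above. This ignores time to first token and any parallelism across eval runs, so treat it strictly as an order-of-magnitude illustration, not a reproduction of Artificial Analysis' methodology:

```python
def gen_hours(total_output_tokens: float, tokens_per_sec: float) -> float:
    """Hours of pure generation to emit a token budget at a given speed."""
    return total_output_tokens / tokens_per_sec / 3600

# Figures from the article: ~8.7M index tokens at ~109 tok/s for V2,
# ~11M tokens at 39 tok/s for Claude Opus 4.6 (non-reasoning).
kat_v2 = gen_hours(8.7e6, 109)  # roughly 22 hours of generation
opus46 = gen_hours(11e6, 39)    # roughly 78 hours of generation
```

The point of the sketch is that the two factors multiply: V2 both emits fewer tokens and emits them faster, so the end-to-end gap is larger than either number suggests alone.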
Cost lands in the same pattern. Artificial Analysis prices the model at $0.30 per 1M input tokens and $1.20 per 1M output tokens, and estimates $73 to run the Intelligence Index, down slightly from V1's $76 because V2 needed fewer turns in agentic evals (pricing post). That is still not the absolute cheapest option at this capability tier, but it undercuts Sonnet 4.6 by a wide margin in Artificial Analysis' estimate and comes close to DeepSeek V3.2 while staying in a non-reasoning latency envelope; the model page has the full benchmark breakdown.
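For a sense of the pricing arithmetic, a minimal sketch using the listed per-token rates. The output-side figure uses the article's ~8.7M-token count; the gap between it and the reported $73 total implies the bill is dominated by input tokens, which is consistent with agentic evals re-feeding long transcripts each turn:

```python
def run_cost(input_tokens: float, output_tokens: float,
             in_price: float = 0.30, out_price: float = 1.20) -> float:
    """USD cost for a run; prices are per 1M tokens (the listed V2 rates)."""
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

# Output side alone: 8.7M output tokens at $1.20 per 1M is about $10.44,
# so most of the reported $73 index cost must be input-token spend.
output_side = run_cost(0, 8.7e6)
```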