AI Primer
MODEL · 7 stories

GLM

GLM model family from Z.ai.

NEWS · 4w ago
GLM-5.1 ranks #3 on Code Arena

Code Arena ranked GLM-5.1 third overall and first among open models, putting it on par with Claude Sonnet 4.6 and within about 20 points of the overall leader. The ranking gives the open model a frontier coding benchmark result on top of its initial release and hosting wave.

NEWS · 1mo ago
GLM-5.1 lands on Modal, Together AI, Letta Code, and Tembo

Providers and agent platforms added GLM-5.1 endpoints across Modal, Together AI, Letta Code, Tembo, and Tabbit, with free trials, no-key access, and 99.9% SLA options. Use the new hosting options to test the model for coding and long-horizon agent workloads without waiting on self-hosting.

RELEASE · 1mo ago
Z.ai releases GLM-5.1, a 744B open model scoring 58.4 on SWE-Bench Pro with 8-hour agent runs

Z.ai released GLM-5.1, a 744B open model built for long-horizon agentic coding and ranked first among open systems on SWE-Bench Pro. Day-0 support in OpenRouter, Ollama, SGLang, vLLM, OpenCode, and local quantization paths makes it ready to test in existing stacks.
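Since the hosting options above expose OpenAI-compatible endpoints, testing the model from an existing stack is mostly a matter of pointing a client at one of them. A minimal sketch, assuming OpenRouter's API shape; the model identifier `z-ai/glm-5.1` is an assumption, so check the provider's model list before use:

```python
# Sketch: build an OpenAI-style chat completion request for a hosted
# GLM-5.1 endpoint. The request is constructed but not sent, so no
# API key or network access is needed to inspect it.
import json
import urllib.request


def build_chat_request(base_url: str, api_key: str, model: str, prompt: str):
    """Construct (but do not send) an OpenAI-style chat completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_chat_request(
    "https://openrouter.ai/api/v1",  # OpenAI-compatible base URL
    "YOUR_API_KEY",                  # placeholder, not a real key
    "z-ai/glm-5.1",                  # assumed model id -- verify with the provider
    "Write a unit test for a binary search function.",
)
print(req.full_url)  # https://openrouter.ai/api/v1/chat/completions
```

The same payload shape works against any of the listed hosts (vLLM and SGLang also serve OpenAI-compatible routes), so only the base URL and model id change between providers.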

RELEASE · 1mo ago
Z.ai launches GLM-5V-Turbo for screenshot coding and GUI-agent tasks

Z.ai released GLM-5V-Turbo, a multimodal coding model for screenshots, video, design drafts, and GUI-agent tasks. It keeps text-coding performance steady while adding native vision support, so teams can test visual workflows without swapping models.

RELEASE · 1mo ago
Z.ai releases GLM-5.1 to all Coding Plan users with 5am–11am PT switch window

Z.ai said GLM-5.1 is now available to all GLM Coding Plan users and highlighted a 5am to 11am PT switch window. The update broadens access beyond the initial rollout, though early practitioner tests reported weaker Repo bench scores and tool-calling behavior than GLM-5.0.

RELEASE · 1mo ago
Z.ai releases GLM-5.1 to Coding Plan users with `glm-5.1` model switch

Z.ai made GLM-5.1 available to all Coding Plan users and documented how to route coding agents to it by changing the model name in config. Early harness benchmarks place it near Opus 4.6 on coding evals, but BridgeBench users report much lower tokens-per-second throughput.
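In practice the switch described above is usually a one-line model change in the agent's settings file. A hypothetical sketch (the file name and surrounding keys vary by agent and are assumptions here; only the `glm-5.1` model name comes from Z.ai's announcement):

```json
{
  "model": "glm-5.1"
}
```

Consult your coding agent's own documentation for the exact settings path and key name before applying the change.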

RELEASE · 1mo ago
Z.ai releases GLM-5-Turbo with 202K context for OpenClaw-style agent workflows

Z.ai released GLM-5-Turbo as a faster GLM-5 variant for OpenClaw-style tool use, with 202K context, OpenRouter access, and higher off-peak limits. Try it as a cheaper speed tier for agent workflows, but benchmark completion quality on your own tasks before wider use.

AI Primer

Your daily guide to AI tools, workflows, and creative inspiration.

© 2026 AI Primer. All rights reserved.