Updates on the GLM model family from Z.ai.
Z.ai said GLM-5.1 is now available to all GLM Coding Plan users and flagged a switch window from 5am to 11am PT. The update broadens access beyond the initial rollout, though early practitioner tests reported weaker Repo bench scores and tool-calling behavior than with 5.0.
Z.ai made GLM-5.1 available to all Coding Plan users and documented how to route coding agents to it by changing the model name in config. Early harness benchmarks place it near Opus 4.6 on coding evals, but BridgeBench users report much slower throughput in tokens per second.
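A minimal sketch of that routing change, assuming a JSON agent config with a `model` field; the exact file location, key names, and model id vary by coding agent and are assumptions here, not Z.ai's documented schema:

```json
{
  "provider": "zai",
  "model": "glm-5.1"
}
```

In practice this is usually the whole change: point the agent's `model` setting at the new name and restart the agent so it picks up the config.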
Z.ai released GLM-5-Turbo as a faster GLM-5 variant for OpenClaw-style tool use, with 202K context, OpenRouter access, and higher off-peak limits. Try it as a cheaper speed tier for agent workflows, but benchmark completion quality on your own tasks before wider use.
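For the agent-workflow use case, a hedged sketch of calling the model through OpenRouter's OpenAI-compatible chat completions endpoint. The model slug `z-ai/glm-5-turbo` and the placeholder API key are assumptions; check OpenRouter's model list for the actual identifier. The sketch only builds the request so it stays offline:

```python
import json
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_chat_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Build a POST request for one chat completion; no network call is made here."""
    payload = {
        "model": model,  # assumed slug -- verify against OpenRouter's model list
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Placeholder key; sending would be urllib.request.urlopen(req), skipped here.
req = build_chat_request("z-ai/glm-5-turbo", "Summarize this diff.", "sk-placeholder")
```

Keeping request construction separate from sending makes it easy to benchmark completion quality on your own tasks by swapping the `model` slug between speed tiers.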