Qwen Code added phone-based control via Telegram, DingTalk, and WeChat, scheduled agent loops, per-subagent model selection, and a planning mode before execution. The release also centers Qwen 3.6 Plus, which Alibaba says offers 1M context and 1,000 free daily requests, while Vals ranked the model #17 overall and #11 multimodal.

Scheduled loops run through /loop after enabling "experimental": { "cron": true }, and Qwen's plan mode docs describe a read-only planning flow that waits for approval before execution. You can read the full release notes, skim the Channels implementation PR, check how plan mode exits into execution, and compare the model on Vals' Qwen 3.6 Plus page. One small but useful detail: Qwen's keyboard shortcuts doc still describes Ctrl+O as a debug-console toggle, while the launch thread frames it as a verbosity switch mid-conversation.
The headline feature is a coding agent you can poke from chat apps instead of a terminal. In Alibaba's example, a Telegram message asks the agent to inspect /var/log/app; the agent runs on the server and returns the result to the phone.
The Channels PR makes clear this is not just three adapters glued on top. It adds a channel SDK, built-in Telegram, WeChat, and DingTalk connectors, plus session routing modes for per-user, per-thread, or single shared sessions.
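The per-user, per-thread, and shared routing modes map naturally onto a session-key function. A minimal sketch of that idea, with invented names; the actual @qwen-code/channel-base API may look nothing like this:

```typescript
// Illustrative only: how a channel connector might derive a session key
// from the routing modes described in the Channels PR.
type RoutingMode = "per-user" | "per-thread" | "shared";

interface IncomingMessage {
  channel: string;    // e.g. "telegram", "wechat", "dingtalk"
  userId: string;
  threadId?: string;  // not every chat platform exposes threads
}

function sessionKey(mode: RoutingMode, msg: IncomingMessage): string {
  switch (mode) {
    case "per-user":
      return `${msg.channel}:user:${msg.userId}`;
    case "per-thread":
      // Fall back to the user when the platform has no thread concept.
      return `${msg.channel}:thread:${msg.threadId ?? msg.userId}`;
    case "shared":
      return `${msg.channel}:shared`;
  }
}

console.log(sessionKey("per-user", { channel: "telegram", userId: "42" }));
// → telegram:user:42
```

The point of the sketch is the trade-off each mode encodes: per-user keeps people isolated, per-thread keeps conversations isolated, and shared gives a whole team one agent state.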
That PR also lists a few engineering details that did not make the launch tweet:
- steer, collect, and followup
- a shared @qwen-code/channel-base package

The other operator candy is scheduled work. Alibaba's cron post says /loop can turn prompts like "check if tests pass every 30 minutes" into a recurring session job, with the feature gated behind an experimental cron flag in ~/.qwen/settings.json.
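Per the post, the gate lives in ~/.qwen/settings.json, and the flag named in the release is the only key shown below; any other keys already in your settings file stay as they are:

```json
{
  "experimental": {
    "cron": true
  }
}
```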
Planning mode sits on the other end of that spectrum, slowing the agent down on purpose. Alibaba's release thread describes /plan as a pre-execution pass over files and steps, and the exit_plan_mode docs say the workflow stays in read-only planning until the user approves the implementation plan.
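The read-only constraint is easy to picture as a gate in front of tool calls. A toy sketch, not Qwen Code's actual implementation; the Mode type and tool names here are invented for illustration:

```typescript
// Toy model of a plan-mode gate: while planning, only read-only tools
// pass; user approval flips the mode and unlocks everything else.
type Mode = "plan" | "execute";

const READ_ONLY_TOOLS = new Set(["read_file", "grep", "list_dir"]);

function allowTool(mode: Mode, tool: string): boolean {
  return mode === "execute" || READ_ONLY_TOOLS.has(tool);
}

let mode: Mode = "plan";
console.log(allowTool(mode, "write_file")); // blocked while planning → false
console.log(allowTool(mode, "read_file"));  // reads are fine → true
mode = "execute";                           // user approved the plan
console.log(allowTool(mode, "write_file")); // → true
```

The interesting design question is exactly the one the exit_plan_mode docs answer: the transition out of planning is an explicit user approval, not something the agent decides on its own.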
The release notes add two small UI changes around that loop: the GitHub changelog mentions clickable follow-up suggestions after a task finishes, and the keyboard shortcuts page documents Ctrl+S for printing long responses in full.
Per-subagent model choice is the most obviously cost-shaped change in the batch. In Alibaba's example, the main agent stays on Qwen 3.6 Plus while a skill file can route a subtask to openai:qwen3.5-plus.
That turns model selection into part of the task graph instead of a session-wide toggle. The release notes also mention fixes to preserve session subagents during cache refresh, which is the kind of boring plumbing you want shipped alongside a feature like this.
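If skill files carry their model choice the way the example implies, the shape might be frontmatter on the skill itself. This layout and the model field name are guesses; only the openai:qwen3.5-plus identifier comes from Alibaba's example:

```markdown
---
name: log-summarizer
model: openai:qwen3.5-plus
---
Summarize the failures in this test run and list likely causes.
```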
Alibaba paired the client release with a push for Qwen 3.6 Plus, which its thread describes as a model with a 1 million token context and 1,000 free daily requests. On Vals' model page, the reported context window is 984k, max output is 66k tokens, latency is 343.02 seconds, and cost per test is $0.26.
The ranking spread is mixed, not bad. According to Vals' first post, Qwen 3.6 Plus landed #17 overall and #11 multimodal, while Vals' coding note put it at #13 on Vibe Code Bench and #15 on both SWE-Bench Verified and Terminal Bench 2.
Vals' weaker-benchmarks follow-up adds the rough edges: CaseLaw came in at #43, and MedCode and MedScribe placed 30 of 50 and 27 of 50, respectively. That gives the launch one last concrete shape, strong enough on coding to market inside Qwen Code, uneven enough that the leaderboard page is more useful than the headline.