OpenRouter has made Qwen 3.6 Plus Preview available for free, with a 1 million token context window, for a limited time. Watch the data policy closely: during the free window, prompts and completions may be collected and used to improve the model.

OpenRouter’s launch post says Qwen 3.6 Plus Preview is available “for free for a limited time,” and the linked docs show the platform also exposing service-tier controls through the same API surface, via a `service_tier` parameter for cost/latency tradeoffs (service tiers docs). That matters for teams already routing multiple providers through OpenRouter rather than integrating a fresh vendor endpoint.
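OpenRouter’s endpoint is OpenAI-compatible, so trying the preview is mostly a request-body change. A minimal sketch of assembling such a request, assuming a model slug of `qwen/qwen3.6-plus:free` (the real slug may differ; check the listing) and treating the `service_tier` value as a placeholder per the docs:

```python
# Sketch: route a request to the preview model through OpenRouter's
# OpenAI-compatible chat completions endpoint. The model slug below is
# an assumption based on OpenRouter's usual naming, not a confirmed
# identifier, and the service_tier value is a placeholder.
import json
from urllib import request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_payload(prompt, service_tier=None):
    """Assemble the JSON request body; service_tier is optional."""
    payload = {
        "model": "qwen/qwen3.6-plus:free",  # hypothetical slug
        "messages": [{"role": "user", "content": prompt}],
    }
    if service_tier is not None:
        # Per the service tiers docs, this trades cost against latency.
        payload["service_tier"] = service_tier
    return payload

def send(payload, api_key):
    """POST the payload to OpenRouter (requires a real API key)."""
    req = request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

payload = build_payload("Summarize this changelog.", service_tier="default")
```

Because the endpoint is OpenAI-compatible, the same payload also works through the official OpenAI SDK pointed at OpenRouter’s base URL.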
Alibaba Qwen’s own post confirms the model is live on OpenRouter as an “early preview,” which makes this more than a third-party catalog addition (Qwen confirmation). A widely shared screenshot of the model card describes it as stronger on “reasoning” and “more reliable agentic behavior” than the 3.5 series, with emphasis on “agentic coding, front-end development, and complex problem-solving,” though these remain launch-page claims rather than independently reproduced evals (model listing screenshot).
The biggest practical upside is scale: OpenRouter’s listing shows “1M context” at “$0/M input tokens” and “$0/M output tokens” during the preview (free preview post). That combination drew immediate attention from developers tracking long-context economics, with one practitioner calling “a million tokens of context for free” the standout part of the drop (developer reaction).
The biggest caveat is data handling. OpenRouter explicitly says prompts and completions “will be collected and may be used to improve the model” during the free window (launch note). There was also a product-positioning correction within minutes of launch: after an initial description tied the model to Qwen’s vision-language series, OpenRouter clarified that this preview “is not a vision language model at this time” (correction thread). For engineers, that means treating the release as a text-model preview with unusually cheap long-context access, not as a multimodal endpoint.
How to get faster and higher priority inference: Service Tiers openrouter.ai/docs/guides/fe… Example: "/fast" in Codex
Our new model is now live on OpenRouter for an early preview, go give it a try! Looking forward to your feedback~😎
Qwen 3.6 Plus Preview from @Alibaba_Qwen is live now for free for a limited time on OpenRouter! During this free period, prompts and completions will be collected and may be used to improve the model.
Qwen3.6-Plus is now free during preview
Zen x Qwen3.6-Plus - free during preview improved reasoning vs Qwen3.6-Minus.. i mean Qwen3.5 1M context · text only