Alibaba shipped the Qwen 3.6 model family over April 20-22, delivering both a proprietary frontier model and an open-weight variant optimized for agentic coding. Qwen3.6-Max-Preview posts the highest reported scores on six major coding and agent benchmarks, DataNorth reported, while the open-weight Qwen3.6-27B is available under the Apache 2.0 license on Hugging Face.
Max-Preview: 35 Billion Parameters, 3 Billion Active
The flagship Qwen3.6-Max-Preview uses a mixture-of-experts architecture with 35 billion total parameters but activates only 3 billion per inference call, per DataNorth. The model supports a 256,000-token context window and includes a preserve_thinking feature that carries reasoning traces across multi-turn conversations, a capability Alibaba specifically recommends for agentic workflows.
On benchmarks, Max-Preview ranks first on SWE-bench Pro (real-world software engineering), Terminal-Bench 2.0 (command-line execution), SkillsBench (general problem-solving), QwenClawBench (tool use), QwenWebBench (web interaction), and SciCode (scientific programming), DataNorth confirmed. Compared to its predecessor Qwen3.6-Plus, the gains include +9.9 points on SkillsBench, +10.8 on SciCode, and +3.8 on Terminal-Bench 2.0.
The model is available through Alibaba Cloud’s Bailian platform and Qwen Studio, with API compatibility for both OpenAI and Anthropic specifications. It remains proprietary with no open weights.
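Because the model exposes an OpenAI-compatible surface, existing clients can in principle target it by swapping the base URL and model name. A minimal sketch of how a request body might be assembled, with the reported preserve_thinking feature carried as an extra field; the model id and the exact parameter shape are assumptions, not documented values:

```python
# Sketch: building an OpenAI-style chat request for Qwen3.6-Max-Preview.
# The model id and the shape of the preserve_thinking flag are
# assumptions; Alibaba's API docs would give the real names.

def build_request(messages, preserve_thinking=True):
    """Assemble an OpenAI-compatible request body, optionally carrying
    the (assumed) preserve_thinking extension so reasoning traces
    persist across turns in an agentic session."""
    body = {
        "model": "qwen3.6-max-preview",  # hypothetical model id
        "messages": messages,
        "max_tokens": 1024,
    }
    if preserve_thinking:
        # Assumed field name for the multi-turn reasoning-trace feature.
        body["preserve_thinking"] = True
    return body

req = build_request([{"role": "user", "content": "Refactor this function."}])
print(req["model"], req.get("preserve_thinking"))
```

The same payload could be POSTed with any OpenAI-compatible client; only the endpoint and credentials would change.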
27B Open-Weight Variant
Alongside the proprietary model, Alibaba released Qwen3.6-27B on April 22 as a dense 27-billion-parameter model under the Apache 2.0 license, per the official GitHub repository. The open-weight release targets developers who need to self-host or fine-tune models for agent-specific workflows.
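For teams that want to self-host, a dense Apache 2.0 checkpoint on Hugging Face typically drops into standard serving stacks. A sketch using vLLM, assuming a repo id of `Qwen/Qwen3.6-27B` in line with Qwen's usual naming (check the actual release page before use):

```shell
# Sketch: self-hosting the open-weight 27B model behind an
# OpenAI-compatible endpoint with vLLM.
# "Qwen/Qwen3.6-27B" is an assumed Hugging Face repo id.
pip install vllm
vllm serve Qwen/Qwen3.6-27B \
  --max-model-len 32768 \
  --port 8000
```

Once serving, the endpoint speaks the same OpenAI-style API as the hosted Max-Preview, so client code can move between the two with a base-URL change.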
Cost as Strategy
The competitive angle goes beyond benchmarks. With only 3 billion parameters activated per request, Max-Preview is substantially cheaper to run than dense models like GPT-5.4 ($2.50/$15 per million input/output tokens) or Claude Opus 4.7 ($5/$25 per million tokens), DataNorth noted. Alibaba has not announced final pricing for Max-Preview, though Qwen3.6-Plus is currently free during its preview period.
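The quoted competitor prices make the gap easy to estimate. A back-of-envelope calculation using the input/output rates above; since Max-Preview pricing is not final, only the two quoted competitors are compared, and the workload size is illustrative:

```python
# Back-of-envelope API cost comparison using the per-million-token
# prices quoted above: (input $, output $) per 1M tokens.

PRICES = {
    "GPT-5.4": (2.50, 15.00),
    "Claude Opus 4.7": (5.00, 25.00),
}

def workload_cost(model, input_tokens, output_tokens):
    """Dollar cost of a workload at the listed per-million-token rates."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Illustrative agent workload: 50M input tokens, 10M output tokens.
for model in PRICES:
    print(f"{model}: ${workload_cost(model, 50_000_000, 10_000_000):,.2f}")
# GPT-5.4 comes to $275.00; Claude Opus 4.7 to $500.00 for this workload.
```

Any Max-Preview price below those figures for a comparable quality tier would make the MoE efficiency argument concrete.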
Il Sole 24 Ore’s analysis framed the cost positioning as a deliberate strategy: “Chinese companies maintain tight control over the most advanced models, but at the same time favour more openness in intermediate versions and aggressive price competition.”
Agent-Era Positioning
The Qwen 3.6 release arrives in the same week as DeepSeek V4 and OpenAI’s GPT-5.5. All three model families emphasize agentic capabilities over raw text generation. Il Sole 24 Ore described Qwen 3.6 as “not oriented to simple text generation, more built to perform complex tasks autonomously,” reflecting a broader shift in Chinese AI development toward models that decompose tasks, use tools, and navigate ambiguity without continuous human oversight.
For teams building agent systems, the practical implication is widening model choice. An Apache 2.0 licensed 27B model that competes on agentic coding benchmarks, combined with a proprietary frontier model compatible with both OpenAI and Anthropic API formats, gives builders more leverage in pricing negotiations and reduces single-vendor dependency.