Alibaba
Open Weights

Qwen3 235B A22B 2507 (Reasoning)

Released Jul 2025

Intelligence: #120
Coding: #141
Math: #21
Context: 256K
Parameters: 235B (22B active)

Qwen3 235B A22B 2507 (Reasoning) is a flagship Mixture-of-Experts (MoE) large language model developed by Alibaba. As part of the Qwen3 family, this "Thinking" variant is specifically optimized for complex reasoning tasks, including advanced mathematics, logic, coding, and scientific problem-solving. It has 235 billion parameters in total, of which 22 billion are activated per token during inference, balancing computational cost against output quality.
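The sparse-activation idea can be sketched in a few lines. This is an illustrative toy, not the actual Qwen3 routing code: a router scores every expert for each token, and only the top-k experts are evaluated, so the active parameter count stays a small fraction of the total. The value of k here is an assumption for illustration; the card does not state it.

```python
import random

def route_token(router_scores, k):
    """Return indices of the top-k experts for one token, highest score first."""
    ranked = sorted(range(len(router_scores)),
                    key=lambda i: router_scores[i], reverse=True)
    return ranked[:k]

# Hypothetical setup loosely matching the card: 128 experts, few active per token.
NUM_EXPERTS = 128
TOP_K = 8  # assumed value for illustration only

scores = [random.random() for _ in range(NUM_EXPERTS)]
active = route_token(scores, TOP_K)
print(f"{len(active)} of {NUM_EXPERTS} experts evaluated for this token")
```

Because only the selected experts run, compute per token scales with k rather than with the total expert count, which is how a 235B-parameter model can activate only 22B parameters per token.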

The model natively supports a 256K-token context window, enabling the processing and generation of extremely long documents and complex multi-step reasoning chains. Unlike the standard "Instruct" version, the Reasoning variant is designed to utilize an internal chain-of-thought process to tackle sophisticated academic and professional benchmarks, often producing structured intermediate steps before providing a final answer.
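When consuming output from a thinking variant, applications typically separate the intermediate reasoning from the final answer. A minimal sketch, assuming the reasoning is wrapped in `<think>...</think>` tags (the convention used by Qwen3 thinking variants; verify against the actual chat template before relying on it):

```python
import re

def split_reasoning(text):
    """Return (reasoning, answer); reasoning is '' if no think block exists."""
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not match:
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer

# Hypothetical model output for illustration.
output = "<think>2 + 2 is 4.</think>The answer is 4."
reasoning, answer = split_reasoning(output)
print(answer)  # The answer is 4.
```

Keeping the two parts separate lets an application log or hide the chain-of-thought while presenting only the final answer to users.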

In evaluation, the model demonstrates competitive performance against leading frontier models, particularly in reasoning-heavy assessments like AIME25, SuperGPQA, and LiveCodeBench. Its architecture consists of 94 layers with 128 specialized experts, utilizing Grouped Query Attention (GQA) to optimize throughput. The model is released under the Apache 2.0 license, facilitating broad integration for agentic workflows and specialized research applications.
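The throughput benefit of GQA comes from many query heads sharing a smaller set of key/value heads, which shrinks the KV cache. A toy sketch of the head mapping; the head counts below are assumptions for illustration, not Qwen3's actual configuration:

```python
def kv_head_for(query_head, num_q_heads, num_kv_heads):
    """Map a query head index to the KV head its group shares."""
    group_size = num_q_heads // num_kv_heads
    return query_head // group_size

NUM_Q_HEADS = 64   # assumed for illustration
NUM_KV_HEADS = 4   # assumed for illustration

# Each group of 16 query heads attends using the same cached K/V head,
# so the KV cache is 16x smaller than with standard multi-head attention.
mapping = [kv_head_for(h, NUM_Q_HEADS, NUM_KV_HEADS) for h in range(NUM_Q_HEADS)]
print(mapping[:20])
```

A smaller KV cache directly reduces memory traffic during decoding, which is what makes GQA attractive at long context lengths like 256K tokens.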

Rankings & Comparison