Alibaba
Open Weights

Qwen3 30B A3B 2507 (Reasoning)

Released Jul 2025

Intelligence
#188
Coding
#213
Math
#126
Context: 262K
Parameters: 30.5B (3.3B active)

Qwen3-30B-A3B-Thinking-2507 is a Mixture-of-Experts (MoE) large language model developed by Alibaba's Qwen team. Released in July 2025 as part of the Qwen3 family, this model is a specialized reasoning variant optimized for complex logical deduction, mathematical problem-solving, and advanced coding. Unlike standard instruct versions, it operates in a dedicated "thinking" mode, utilizing internal chain-of-thought processes to arrive at solutions for sophisticated queries.

The model architecture comprises 30.5 billion total parameters, of which approximately 3.3 billion are activated for any single inference step. This design lets the model deliver high-level reasoning performance at the inference cost of a much smaller dense model. It features a native context window of 262,144 tokens, which can be extended to handle up to 1 million tokens using length extrapolation and sparse attention mechanisms.
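The efficiency claim above follows from simple arithmetic: per-token compute in an MoE model scales with the active parameter count rather than the total. A back-of-the-envelope sketch, using the parameter counts stated on this card and the common approximation that a dense forward pass costs roughly 2 FLOPs per parameter per token:

```python
# Illustrative arithmetic only; real throughput also depends on routing
# overhead, memory bandwidth, and batching.
total_params = 30.5e9   # total parameters (from the model card)
active_params = 3.3e9   # parameters activated per token

fraction_active = active_params / total_params
print(f"~{fraction_active:.1%} of weights are active per token")

# Approximate per-token forward-pass FLOPs: ~2 * params.
dense_flops = 2 * total_params
moe_flops = 2 * active_params
print(f"~{dense_flops / moe_flops:.1f}x less compute per token than a dense 30.5B model")
```

Note that all 30.5B parameters must still be held in memory, so the savings apply to compute per token, not to model storage.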

Technical enhancements in the 2507 update include significantly improved depth of reasoning and better alignment with human preferences. The model has demonstrated high proficiency on technical benchmarks, notably scoring 85.0 on the AIME25 mathematics evaluation. It typically emits its reasoning process within explicit <think> tags before the final answer, making the path to each solution transparent.
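Because the reasoning trace is delimited by tags, downstream code can separate it from the final answer with simple string processing. A minimal sketch, assuming the completion contains a `<think>...</think>` block (some chat templates pre-fill the opening tag, in which case only the closing tag appears and the parsing would need adjusting):

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Split a completion into (reasoning, final_answer).

    Assumes reasoning is wrapped in <think>...</think>; if no such
    block is present, the whole text is treated as the answer.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not match:
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer

sample = "<think>2 + 2 equals 4.</think>The answer is 4."
reasoning, answer = split_reasoning(sample)
print(answer)  # The answer is 4.
```

Keeping the trace separate lets an application log or display the chain of thought without mixing it into the user-facing response.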

Rankings & Comparison