
INTELLECT-3

Released Nov 2025

INTELLECT-3 is a 106-billion-parameter Mixture-of-Experts (MoE) language model developed by Prime Intellect and released in November 2025. It is a post-trained version of the GLM-4.5-Air-Base foundation model, optimized for complex reasoning, mathematics, and agentic workflows.

The model uses a sparse architecture that activates approximately 12 billion parameters per token during inference, delivering strong reasoning performance while keeping compute costs well below those of a dense model of the same size. It was trained with a two-stage post-training pipeline: Supervised Fine-Tuning (SFT) followed by large-scale asynchronous Reinforcement Learning (RL) via the prime-rl framework. The RL stage drew on diverse environments from the company's Environments Hub, spanning domains such as code generation, advanced logic, and scientific problem-solving.
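To illustrate how a sparse MoE layer activates only a fraction of its parameters, here is a minimal top-k routing sketch. The shapes, expert count, and gating scheme are illustrative assumptions, not the actual INTELLECT-3 architecture; it only demonstrates the general principle that a router selects a few experts per token and the rest stay idle.

```python
import numpy as np

def topk_moe_layer(x, expert_weights, gate_weights, k=2):
    """Toy Mixture-of-Experts layer: route a token to its top-k experts.

    Only the selected experts run, so the active parameter count per
    token is a small fraction of the total -- the same principle that
    lets a sparse model activate ~12B of 106B parameters at inference.
    """
    scores = x @ gate_weights                  # gating logits, one per expert
    topk = np.argsort(scores)[-k:]             # indices of the k best experts
    probs = np.exp(scores[topk] - scores[topk].max())
    probs /= probs.sum()                       # softmax over the selected experts only
    # Weighted sum of the chosen experts' outputs; all other experts stay idle.
    return sum(p * (x @ expert_weights[i]) for p, i in zip(probs, topk))

rng = np.random.default_rng(0)
d, n_experts = 8, 16
x = rng.normal(size=d)                         # one token's hidden state
experts = rng.normal(size=(n_experts, d, d))   # 16 expert weight matrices
gates = rng.normal(size=(d, n_experts))        # router projection
y = topk_moe_layer(x, experts, gates, k=2)     # touches only 2 of 16 experts
```

With k=2 of 16 experts, only 1/8 of the expert parameters participate in this forward pass, which mirrors the roughly 12B-of-106B active ratio described above.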

INTELLECT-3 supports a context window of 131,072 tokens. The release is part of Prime Intellect's initiative to open-source not only model weights but also the underlying training infrastructure, including the RL frameworks, datasets, and evaluation environments under permissive licenses.
