MBZUAI Institute of Foundation Models
Open Weights

K2-V2 (high)

Released Dec 2025

Intelligence: #202
Coding: #199
Math: #71
Context: 512K
Parameters: 70B

K2-V2 (high) is the high-reasoning configuration of the K2-V2 large language model, a 70-billion-parameter system developed by the MBZUAI Institute of Foundation Models (IFM). It belongs to the K2-V2 series, which is characterized by a "360-open" development philosophy. This approach provides transparency across the entire model lifecycle, including the release of weights, training data compositions, training logs, and intermediate checkpoints for full reproducibility.

Architecture and Training

The model is a dense decoder-only transformer with 80 layers and a hidden size of 8192. It was pre-trained on approximately 12 trillion tokens from the TxT360 corpus, followed by a mid-training phase designed to inject reasoning behaviors and extend the context window to 512,000 tokens. The series uses a three-tier reasoning effort system—low, medium, and high—allowing computation at inference time to be scaled to task complexity.
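As a sanity check on the published dimensions, a back-of-the-envelope parameter count for a dense 80-layer transformer with hidden size 8192 lands in the right ballpark. This sketch assumes standard (ungated) attention and MLP projections and a tied embedding table; the vocabulary size and MLP expansion factor are illustrative assumptions, not published figures, so the result is approximate:

```python
# Rough parameter-count sketch for a dense decoder-only transformer with the
# published K2-V2 dimensions (80 layers, hidden size 8192). Vocab size and
# MLP expansion factor are assumptions for illustration only.

def approx_params(layers: int = 80, hidden: int = 8192,
                  vocab: int = 128_000, mlp_mult: int = 4) -> int:
    attn = 4 * hidden * hidden            # Q, K, V, and output projections
    mlp = 2 * mlp_mult * hidden * hidden  # up- and down-projections
    embeddings = vocab * hidden           # assume tied input/output embeddings
    return layers * (attn + mlp) + embeddings

print(f"~{approx_params() / 1e9:.1f}B parameters")
```

With these assumptions the estimate comes out in the mid-60-billion range; details such as grouped-query attention, gated MLPs, or a different expansion factor would move the exact figure toward the stated 70B.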

Reasoning Capabilities

The "high" effort setting is specifically optimized for advanced problem-solving in mathematics, STEM, and logic. In this mode, the model generates extended internal reasoning traces, or "thinking tokens," before producing a final answer. This configuration has demonstrated improved accuracy on challenging benchmarks such as AIME 2025 and GPQA-Diamond, positioning it as a transparent alternative to proprietary and closed-weight reasoning models of similar scale.
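The effort tiers above can be thought of as different budgets for the hidden reasoning trace. The following sketch shows one plausible way a client might select a tier when calling an OpenAI-compatible serving endpoint; the parameter name `max_reasoning_tokens` and the budget values are hypothetical illustrations, not part of K2-V2's documented API:

```python
# Hypothetical three-tier reasoning-effort dispatch for a served model.
# The "max_reasoning_tokens" field and the budgets below are illustrative
# assumptions, not taken from K2-V2's actual serving interface.

BUDGETS = {"low": 1024, "medium": 8192, "high": 32768}

def build_request(prompt: str, effort: str = "high") -> dict:
    if effort not in BUDGETS:
        raise ValueError(f"unknown effort tier: {effort!r}")
    return {
        "model": "K2-V2",
        "messages": [{"role": "user", "content": prompt}],
        # Cap the internal "thinking" trace; higher tiers spend more tokens
        # reasoning before the final answer is emitted.
        "max_reasoning_tokens": BUDGETS[effort],
    }

req = build_request("Prove that sqrt(2) is irrational.", effort="high")
```

On hard math or logic tasks the "high" tier trades latency and token cost for the extra internal reasoning that drives the benchmark gains described above.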

Rankings & Comparison