MBZUAI Institute of Foundation Models
Open Weights

K2-V2 (low)

Released Dec 2025

Intelligence: #304
Coding: #282
Math: #173
Context: 512K
Parameters: 70B

K2-V2 (low) is a specific operational configuration of K2-V2, a 70-billion-parameter large language model developed by the MBZUAI Institute of Foundation Models (IFM). Part of the LLM360 initiative, the model follows a "360-open" philosophy, providing full transparency by releasing its weights, training code, data composition, and intermediate checkpoints.

The "low" designation refers to one of three reasoning-effort modes (low, medium, and high) that control how many internal thinking tokens the model generates before finalizing an answer. In this mode, the model is optimized for cost-effective inference and higher throughput: it retains improved reasoning over standard models while emitting significantly fewer additional tokens than the high-effort configuration.
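As a rough illustration of how an effort mode like this is typically selected, the sketch below builds an OpenAI-style chat request with a reasoning-effort field. The `reasoning_effort` parameter name, the payload shape, and the model identifier are assumptions for illustration, not a documented K2-V2 API.

```python
# Hypothetical sketch: choosing a reasoning-effort mode in a request
# payload. Field names here are assumed, not an official K2-V2 schema.
import json

VALID_EFFORTS = {"low", "medium", "high"}  # the three modes described above

def build_request(prompt: str, effort: str = "low") -> dict:
    """Build a chat-completion payload with a reasoning-effort setting."""
    if effort not in VALID_EFFORTS:
        raise ValueError(f"unknown effort mode: {effort!r}")
    return {
        "model": "K2-V2",  # assumed model identifier
        "messages": [{"role": "user", "content": prompt}],
        # "low" trades some thinking tokens for cheaper, faster inference
        "reasoning_effort": effort,
    }

if __name__ == "__main__":
    payload = build_request("What is 17 * 24?", effort="low")
    print(json.dumps(payload, indent=2))
```

A serving stack would forward such a field to the inference engine, which caps or tunes the hidden thinking-token budget accordingly; the exact mechanism depends on the deployment.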

Technically, K2-V2 is a dense transformer trained through a three-stage pipeline: broad pre-training, a "mid-training" phase focused on reasoning and long-context skills, and supervised fine-tuning. It supports a context window of up to 512,000 tokens and is designed to excel in mathematical reasoning, STEM subjects, and complex logic puzzles.

Rankings & Comparison