Trillion Labs
Open Weights

Tri-21B-Think

Released Feb 2026

Intelligence: #227
Coding: #325
Context: 32K
Parameters: 21B

Tri-21B-Think is a reasoning-enhanced language model developed by the South Korean AI startup Trillion Labs. Building upon the company's Tri-21B foundation, this model integrates reinforcement learning (RL) and supervised fine-tuning (SFT) to maximize its internal reasoning capabilities. It is specifically designed to perform complex problem-solving by unfolding a multi-step "thinking" process in token form before arriving at a final answer, allowing for greater depth in logical deduction.

A defining feature of Tri-21B-Think is its backtracking architecture, which enables the model to revisit and revise previous reasoning steps during inference. This process is driven by test-time scaling, where accuracy on difficult tasks improves with the amount of computation allocated to the reasoning phase. This approach makes the model particularly effective for deep research, advanced mathematical proofs, and complex coding environments where standard linear generation often fails.
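The idea behind test-time scaling can be illustrated with a generic sampling-and-voting loop: drawing more reasoning traces (i.e., spending more compute at inference) raises the chance that the majority answer is correct. This is a minimal sketch of the general principle, not Trillion Labs' actual backtracking mechanism; `generate_answer` is a hypothetical stand-in for a model call, simulated here as a noisy solver.

```python
from collections import Counter
import random

def generate_answer(question: str, rng: random.Random) -> str:
    # Hypothetical stand-in for one sampled reasoning trace from the model.
    # Simulates a noisy solver that finds the right answer 60% of the time.
    return "42" if rng.random() < 0.6 else str(rng.randint(0, 9))

def solve_with_budget(question: str, num_traces: int, seed: int = 0) -> str:
    """Sample several independent reasoning traces and majority-vote the answer.
    More traces = more test-time compute = higher chance the vote is correct."""
    rng = random.Random(seed)
    votes = Counter(generate_answer(question, rng) for _ in range(num_traces))
    return votes.most_common(1)[0][0]

# A larger trace budget makes the majority vote converge on the correct answer.
answer = solve_with_budget("What is 6 * 7?", num_traces=101)
```

Real test-time scaling in reasoning models is richer than majority voting (the model can backtrack within a single trace), but the accuracy-versus-compute trade-off is the same.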

Technically, the model features a transformer decoder architecture with 20.73 billion parameters, utilizing components such as Rotary Positional Embeddings (RoPE), SwiGLU activation, RMSNorm, and Grouped-Query Attention (GQA). It was developed with a focus on resource efficiency, allowing it to be deployed on a single GPU while maintaining competitive performance. The model's native context window is 32,768 tokens, which can be extended up to 262,144 tokens using YaRN scaling.
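The YaRN extension factor follows directly from the two context lengths quoted above: 262,144 / 32,768 = 8. The snippet below sketches how such an extension is commonly expressed in a Hugging Face-style `rope_scaling` configuration; the exact field names used by Trillion Labs' `config.json` are an assumption and should be checked against the published model files.

```python
NATIVE_CONTEXT = 32_768     # native window stated on the model card
EXTENDED_CONTEXT = 262_144  # maximum window reachable with YaRN scaling

# Scaling factor YaRN must supply to cover the extended window.
factor = EXTENDED_CONTEXT / NATIVE_CONTEXT  # 262144 / 32768 = 8.0

# Illustrative Hugging Face-style rope_scaling entry (field names are
# assumptions; verify against the model's actual config.json).
rope_scaling = {
    "rope_type": "yarn",
    "factor": factor,
    "original_max_position_embeddings": NATIVE_CONTEXT,
}
```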

Tri-21B-Think exhibits strong agentic capabilities, supporting multi-turn tool calling and autonomous interactions. It also features significant improvements in Korean language processing compared to previous iterations, leveraging Trillion Labs' proprietary systems to transfer knowledge from English-rich datasets into Korean and Japanese contexts. While the model utilizes thought tokens, it was trained without special <think> tags, which were added post-training to ensure compatibility with standard reasoning parsers.
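Since the tags exist for compatibility with standard reasoning parsers, downstream code typically separates the reasoning trace from the final answer with a simple split. A minimal sketch, assuming the common `<think>...</think>` convention (verify the exact tag names against the model's chat template):

```python
import re

def split_reasoning(output: str) -> tuple[str, str]:
    """Split a model response into (reasoning, answer) on <think> tags.
    If no tags are present, the whole output is treated as the answer."""
    match = re.search(r"<think>(.*?)</think>", output, flags=re.DOTALL)
    if match is None:
        return "", output.strip()
    reasoning = match.group(1).strip()
    answer = output[match.end():].strip()
    return reasoning, answer

reasoning, answer = split_reasoning(
    "<think>13 * 3 = 39, plus 3 is 42.</think>The answer is 42."
)
# reasoning == "13 * 3 = 39, plus 3 is 42."
# answer == "The answer is 42."
```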

Rankings & Comparison