
DeepSeek R1 0528 (May '25)

Released May 2025

DeepSeek R1 0528 is an iterative update to the DeepSeek-R1 reasoning model, released on May 28, 2025. It uses a 671-billion-parameter Mixture-of-Experts (MoE) architecture, with roughly 37 billion parameters activated per token. The update deepens the model's internal reasoning process, roughly doubling average token usage on complex logical tasks compared to the initial release.
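To make the total-versus-active parameter distinction concrete, the sketch below shows a generic top-k MoE layer in PyTorch: a router scores the experts for each token and only the top few run, so most of the layer's parameters sit idle on any given forward pass. This is an illustrative toy, not DeepSeek's actual routing, which is considerably more elaborate.

```python
import torch
import torch.nn as nn

class TopKMoE(nn.Module):
    """Minimal top-k Mixture-of-Experts layer (illustrative only)."""

    def __init__(self, dim: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)  # scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, dim). Each token is routed to its top-k experts;
        # the other experts' parameters are untouched for that token.
        weights, idx = self.router(x).softmax(dim=-1).topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e in range(len(self.experts)):
                mask = idx[:, k] == e  # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, k, None] * self.experts[e](x[mask])
        return out
```

With num_experts=8 and top_k=2, each token touches only a quarter of the expert parameters, which is the same principle behind 37B active out of 671B total.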

Key enhancements in this version include a significant reduction in hallucination rates, estimated at 45-50% for tasks such as summarization and technical rewriting, and improved performance on mathematical benchmarks, reaching 87.5% accuracy on AIME 2025. The 0528 release also adds native support for system prompts, letting users steer the model's reasoning behavior without manually inserting "thinking" tags.
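In practice, system-prompt support means the model can be driven through a standard chat-style API call. The sketch below assumes an OpenAI-compatible endpoint; the base URL and model identifier are illustrative and should be checked against DeepSeek's official documentation.

```python
from openai import OpenAI

# Endpoint and model name are assumptions for illustration; verify the
# identifiers that map to the 0528 checkpoint in DeepSeek's API docs.
client = OpenAI(base_url="https://api.deepseek.com", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[
        # With 0528, a plain system message guides reasoning style;
        # no manual "thinking" tags need to be injected into the prompt.
        {"role": "system", "content": "You are a careful math tutor. Show brief reasoning."},
        {"role": "user", "content": "Is 2**31 - 1 prime?"},
    ],
)
print(response.choices[0].message.content)
```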

The model is optimized for logic-heavy workflows such as advanced programming and scientific analysis. Released under the MIT License, it continues the DeepSeek-R1 series' practice of publishing open weights for high-compute reasoning models.
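Because the weights are openly published, they can in principle be loaded with standard tooling. The Hugging Face Transformers sketch below assumes the repository id deepseek-ai/DeepSeek-R1-0528 (verify on the Hub) and glosses over the multi-GPU infrastructure a 671B-parameter checkpoint actually requires.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id assumed from DeepSeek's usual naming; verify on Hugging Face.
# The full model is far too large for a single GPU; this shows the
# loading pattern, not a recipe for consumer hardware.
model_id = "deepseek-ai/DeepSeek-R1-0528"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # keep the checkpoint's native precision
    device_map="auto",       # shard across whatever devices are available
    trust_remote_code=True,  # DeepSeek checkpoints may ship custom modeling code
)

inputs = tokenizer("Prove that sqrt(2) is irrational.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=256)[0]))
```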
