Mistral
Open Weights

Ministral 3 3B

Released Dec 2025

Intelligence: #366
Coding: #334
Math: #204
Context: 256K
Parameters: 3.8B

Ministral 3 3B is a compact multimodal language model developed by Mistral AI, released in December 2025 as part of the Ministral 3 family. It is specifically engineered for high-efficiency edge deployment, integrating a 3.4 billion parameter language backbone with a 0.4 billion parameter vision encoder, totaling approximately 3.8 billion parameters. The model was developed using a "Cascade Distillation" strategy, which transfers pretrained knowledge from the larger Mistral Small 3.1 model into a smaller, more efficient architecture optimized for resource-constrained environments.
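The distillation idea behind this training setup can be illustrated with the standard soft-target loss (a teacher's softened output distribution supervising a smaller student). This is a generic sketch of knowledge distillation, not Mistral's actual "Cascade Distillation" recipe, whose details are not described here:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Numerically stable softmax over the last axis, with temperature."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    The temperature**2 factor is the conventional scaling that keeps
    gradient magnitudes comparable across temperatures.
    """
    p = softmax(teacher_logits, temperature)  # soft targets from the teacher
    q = softmax(student_logits, temperature)  # student predictions
    return float(np.sum(p * (np.log(p) - np.log(q))) * temperature**2)

# The loss is zero when the student exactly matches the teacher,
# and positive otherwise.
matched = distillation_loss(np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.0, 3.0]))
mismatched = distillation_loss(np.array([1.0, 2.0, 3.0]), np.array([3.0, 2.0, 1.0]))
```

In practice this soft-target term is usually mixed with the ordinary cross-entropy loss on ground-truth tokens.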

The model features a large context window of up to 256,000 tokens and natively supports multimodal inputs, allowing it to interpret both text and visual data. Its architecture uses Grouped Query Attention (GQA), which shrinks the key-value cache to keep inference memory low, allowing the model to run locally on hardware with limited VRAM. Key capabilities include multilingual support for dozens of languages, image captioning, data extraction, and real-time translation.
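The memory benefit of GQA comes from caching key/value tensors for only a small number of shared KV heads rather than one per query head. A minimal sketch of the arithmetic, using a hypothetical configuration (the real Ministral 3 layer and head counts are not given here):

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, dtype_bytes=2):
    """Size of the KV cache: one K and one V tensor per layer,
    each of shape [seq_len, n_kv_heads, head_dim], fp16 by default."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * dtype_bytes

# Hypothetical config: 26 layers, head_dim 128, 32 query heads.
# Full multi-head attention caches K/V for all 32 heads; GQA sharing
# 8 KV heads across the query heads caches a quarter as much,
# which matters at the full 256K-token context.
mha = kv_cache_bytes(n_layers=26, n_kv_heads=32, head_dim=128, seq_len=256_000)
gqa = kv_cache_bytes(n_layers=26, n_kv_heads=8, head_dim=128, seq_len=256_000)
print(f"MHA: {mha / 2**30:.1f} GiB, GQA: {gqa / 2**30:.1f} GiB")
```

The query heads are unaffected, so model quality degrades far less than the 4x cache reduction might suggest.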

Ministral 3 3B provides robust support for agentic workflows with native function calling and structured JSON output. Unlike previous iterations in the Ministral series, the Ministral 3 family is released under the Apache 2.0 license, permitting both commercial and non-commercial use. It is designed to serve as a power-efficient solution for developers building privacy-sensitive or offline AI applications on consumer-grade devices.
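Function calling in practice means passing the model a tool schema and validating the structured JSON it emits. The sketch below uses the common OpenAI-style tool schema as an assumption; the exact wire format Ministral 3 expects may differ, and `get_weather` is a made-up example tool:

```python
import json

# Hypothetical tool definition in the widely used OpenAI-style format.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

def parse_tool_call(raw: str) -> tuple[str, dict]:
    """Parse a model-emitted tool call and check required arguments."""
    call = json.loads(raw)
    name, args = call["name"], call["arguments"]
    required = weather_tool["function"]["parameters"]["required"]
    missing = [k for k in required if k not in args]
    if missing:
        raise ValueError(f"missing arguments: {missing}")
    return name, args

# A well-formed call round-trips cleanly through the validator.
name, args = parse_tool_call('{"name": "get_weather", "arguments": {"city": "Paris"}}')
```

Validating model output against the schema before dispatching to real code is the usual safeguard in agentic loops, since even JSON-constrained decoding can omit fields.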

Rankings & Comparison