Ministral 3 8B is a multimodal language model released by Mistral AI in December 2025 as part of the Mistral 3 generation of edge-optimized models. Designed for high-performance intelligence on local hardware, it combines an 8.4-billion-parameter language backbone with a 0.4-billion-parameter vision encoder, for a total of approximately 8.8 billion parameters. This architecture allows the model to process both text and image inputs natively, supporting applications such as visual document analysis and multimodal reasoning directly on consumer devices.
Unlike previous models in the Ministral line, the 3-series is distributed under the Apache 2.0 license, permitting broad use in both research and commercial applications. The model features an expanded context window of 256,000 tokens, enabling it to handle long-form documents and complex, multi-turn agentic workflows with low latency. It is optimized for efficiency, targeting use cases such as autonomous robotics, offline intelligent assistants, and local data analytics.
Mistral AI provides the model in several variants, including Instruct and Reasoning versions. The Reasoning variant is specifically post-trained to excel at complex, multi-step logical problems, particularly in STEM, coding, and mathematical domains. The model is engineered to balance state-of-the-art performance with the constraints of edge deployment, fitting within 24 GB of VRAM in BF16 precision.
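The 24 GB figure can be sanity-checked with simple arithmetic: BF16 stores each parameter in 2 bytes, so the 8.8 billion weights alone occupy roughly 17.6 GB, leaving headroom for activations and the KV cache. A minimal back-of-the-envelope sketch (the helper function is illustrative, not part of any Mistral API):

```python
# Rough weights-only VRAM estimate, assuming the parameter counts quoted
# above (8.4B language backbone + 0.4B vision encoder = 8.8B total) and
# 2 bytes per parameter in BF16. Activations, KV cache, and runtime
# overhead are extra, which is why the full budget is 24 GB rather than ~18.

BYTES_PER_PARAM_BF16 = 2  # bfloat16 = 16 bits = 2 bytes


def weight_footprint_gb(params_billions: float) -> float:
    """Approximate weight memory in decimal gigabytes."""
    return params_billions * 1e9 * BYTES_PER_PARAM_BF16 / 1e9


total_params = 8.4 + 0.4  # backbone + vision encoder, in billions
print(f"{weight_footprint_gb(total_params):.1f} GB")  # 17.6 GB of weights
```

The same estimate explains why smaller quantized formats (e.g. 8-bit or 4-bit) are commonly used when targeting GPUs with less than 24 GB of memory.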