Mistral
Open Weights

mistral-small-3.1-24b-instruct-2503

Released Mar 2025

Arena AI rank: #165
Context: 128K tokens
Parameters: 24B

Mistral Small 3.1-24B-Instruct-2503 is a 24-billion parameter language model developed by Mistral AI. Released in March 2025, this model is an instruction-tuned, multimodal evolution of the Mistral Small series, specifically building upon the foundations of the earlier Mistral Small 3 (2501) release. It is designed to be "knowledge-dense," meaning it can be deployed on consumer-grade hardware like a single RTX 4090 or a MacBook with 32GB of RAM while maintaining competitive performance.
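The consumer-hardware claim can be sanity-checked with simple arithmetic: weight memory is roughly parameter count times bytes per parameter. A minimal sketch (weights only; it ignores KV cache, activations, and runtime overhead, so real usage is higher):

```python
# Rough memory estimate for the weights of a 24B-parameter model at
# common precisions. Assumption: weights only, 1 GB = 1e9 bytes.
PARAMS = 24_000_000_000

def weight_memory_gb(bits_per_param: float) -> float:
    """GB needed to hold the weights alone at the given precision."""
    return PARAMS * bits_per_param / 8 / 1e9

for name, bits in [("fp16/bf16", 16), ("int8", 8), ("4-bit", 4)]:
    print(f"{name:10s} ~{weight_memory_gb(bits):.0f} GB")
# fp16/bf16  ~48 GB
# int8       ~24 GB
# 4-bit      ~12 GB
```

This is why a quantized build can fit on a single 24 GB RTX 4090 or a 32 GB MacBook, while full-precision weights cannot.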

Key enhancements in this version include vision understanding and an expanded context window of 128,000 tokens. These capabilities enable the model to analyze visual content and process long documents without sacrificing text-generation quality. The model uses the Tekken tokenizer, which has a vocabulary of roughly 131,000 tokens and is optimized for multilingual text and for efficient handling of math and code.
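A quick way to check whether a document is likely to fit in the 128K-token window, before invoking any tokenizer, is a characters-per-token heuristic. This sketch assumes roughly 4 characters per token, a common rule of thumb for English; the true ratio depends on the tokenizer and language:

```python
# Heuristic context-budget check. Assumptions: ~4 chars/token (English-ish
# text with the Tekken tokenizer will vary), and some tokens reserved for
# the model's reply.
CONTEXT_TOKENS = 128_000
CHARS_PER_TOKEN = 4.0

def fits_in_context(text: str, reserved_for_output: int = 4_000) -> bool:
    """True if the estimated token count leaves room for the response."""
    estimated_tokens = len(text) / CHARS_PER_TOKEN
    return estimated_tokens <= CONTEXT_TOKENS - reserved_for_output

print(fits_in_context("word " * 50_000))  # ~250k chars ≈ 62.5k tokens → True
```

For production use you would tokenize with the model's actual tokenizer rather than estimate, but the heuristic is useful for cheap pre-filtering.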

The model is highly agent-centric, featuring native support for function calling and structured JSON output. These features allow it to serve as a low-latency conversational agent or as the logic engine for complex workflows. It is trained to follow system prompts closely and supports dozens of languages, including English, French, German, Hindi, Japanese, and Chinese.
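Function calling is typically exercised through an OpenAI-compatible chat endpoint, as exposed by common serving stacks such as vLLM. The sketch below only builds the request body; the `get_weather` tool and its schema are hypothetical placeholders, not part of the model:

```python
import json

# Hypothetical tool definition in the OpenAI-compatible schema; the
# function name and parameters are illustrative, not from the model card.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

request = {
    "model": "mistral-small-3.1-24b-instruct-2503",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": tools,
    "tool_choice": "auto",  # let the model decide whether to call the tool
}

# Serialized body you would POST to a /v1/chat/completions endpoint.
payload = json.dumps(request)
```

When the model elects to call the tool, the response carries a `tool_calls` entry with JSON arguments matching the declared schema, which the client executes and feeds back as a `tool` message.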

Mistral Small 3.1-24B-Instruct-2503 is released under the Apache 2.0 license, permitting both commercial and non-commercial use. According to internal benchmarks, the model matches or exceeds the performance of several larger models and proprietary small models in tasks involving reasoning, mathematical problem-solving, and instruction following.

Rankings & Comparison