Mistral Small is a language model released by Mistral AI on February 26, 2024, optimized for low-latency, cost-sensitive workloads. Launched concurrently with Mistral Large, it was positioned as an efficiency-oriented model for tasks such as summarization, translation, and structured data extraction.
The model features a context window of 32,768 tokens and is proficient in multiple languages, including English, French, German, Spanish, and Italian. It includes native support for function calling and JSON output modes, facilitating its integration into automated workflows and agentic applications. At its release, Mistral Small was positioned as a proprietary intermediary between the company's open-weight Mixtral 8x7B and its flagship Large model, offering higher performance than the former while maintaining lower latency than the latter.
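The JSON output mode mentioned above constrains the model to return a valid JSON object, which is useful for structured data extraction. The sketch below builds a request body for Mistral's chat completions endpoint without sending it; the model alias `mistral-small-latest` and the exact field names are assumptions based on Mistral's public API documentation and should be checked against the current docs before use.

```python
import json

def build_json_mode_request(prompt: str) -> dict:
    """Sketch of a chat-completions payload that asks for a JSON-only reply.

    Intended for POSTing to https://api.mistral.ai/v1/chat/completions with
    an API key; here we only construct the body.
    """
    return {
        "model": "mistral-small-latest",  # assumed alias for Mistral Small
        "messages": [
            {"role": "user", "content": prompt},
        ],
        # JSON mode: asks the model to emit a valid JSON object.
        "response_format": {"type": "json_object"},
    }

payload = build_json_mode_request(
    "Extract the city and country from: 'Mistral AI is based in Paris.' "
    "Reply as JSON with keys 'city' and 'country'."
)
print(json.dumps(payload, indent=2))
```

Function calling follows the same request shape, with an additional `tools` array describing the callable functions and their parameter schemas.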