Mistral Small (September 2024) is a 22-billion parameter dense language model developed by Mistral AI, designated as version v24.09. It was released to provide an enterprise-grade solution that balances high performance with low-latency reasoning, serving as a mid-tier option between the smaller Mistral NeMo (12B) and the flagship Mistral Large models.
Built on a dense transformer architecture, the model features a 128,000-token context window and is optimized for multilingual support, advanced reasoning, and coding tasks. It incorporates the Tekken tokenizer, which compresses natural language and source code more efficiently than the tokenizers used in previous generations.
Key Capabilities
The model is specifically tuned for agentic workflows and complex instruction following. It supports advanced features such as function calling and structured output generation, making it suitable for integration into automated systems. Compared to earlier iterations of Mistral's small-tier models, the September 2024 release provides significant improvements in human alignment and logical reasoning.
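To make the function-calling support concrete, the sketch below assembles a tool-calling request body in the OpenAI-style schema that Mistral's chat API accepts. This is an illustrative sketch only: the tool name `get_order_status`, its parameter schema, and the `build_tool_call_request` helper are hypothetical, and the resulting payload would still need to be sent to the API with valid credentials via an HTTP client or the official SDK.

```python
import json

def build_tool_call_request(user_message: str) -> dict:
    """Assemble a chat request exposing one callable tool to the model.

    Hypothetical example: the tool and its JSON Schema are invented for
    illustration; the request shape follows the common OpenAI-style
    tools format.
    """
    tool = {
        "type": "function",
        "function": {
            "name": "get_order_status",  # hypothetical tool
            "description": "Look up the shipping status of an order by its ID.",
            "parameters": {  # JSON Schema describing the tool's arguments
                "type": "object",
                "properties": {
                    "order_id": {"type": "string"},
                },
                "required": ["order_id"],
            },
        },
    }
    return {
        "model": "mistral-small-2409",  # v24.09 model identifier (assumed)
        "messages": [{"role": "user", "content": user_message}],
        "tools": [tool],
        "tool_choice": "auto",  # let the model decide whether to call the tool
    }

request = build_tool_call_request("Where is order 81632?")
print(json.dumps(request, indent=2))
```

When the model decides a call is warranted, it responds not with prose but with the tool name and a JSON arguments object matching the declared schema, which the surrounding application then executes and feeds back as a follow-up message.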
While highly capable at general-purpose tasks, Mistral Small is particularly effective for high-volume enterprise use cases, including translation, summarization, and sentiment analysis, where cost efficiency and speed are critical factors.