AI21 Labs
Open Weights

Jamba 1.6 Mini

Released Mar 2025

Intelligence rank: #439
Context window: 256K tokens
Parameters: 52B total

Jamba 1.6 Mini is a mid-sized, open-weight language model developed by AI21 Labs that uses a hybrid architecture: it interleaves State Space Model (SSM) layers, specifically the Mamba architecture, with traditional Transformer attention layers. The hybrid design aims to combine the reasoning quality of Transformers with the efficiency and throughput advantages of SSMs, particularly on long-context tasks.
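A minimal sketch of this interleaving, assuming the layer ratios AI21 described for the original Jamba design (roughly one attention layer per eight, with MoE MLPs on alternating layers); the exact counts and internals in Jamba 1.6 Mini may differ:

```python
# Conceptual sketch of a Jamba-style hybrid stack. Layer counts, the
# attention/Mamba ratio, and the MoE interval are illustrative assumptions,
# not Jamba 1.6 Mini's published configuration.
from dataclasses import dataclass

@dataclass
class LayerSpec:
    mixer: str   # "mamba" (SSM) or "attention" (Transformer)
    mlp: str     # "dense" or "moe"

def build_hybrid_stack(n_layers: int, attn_every: int = 8, moe_every: int = 2):
    """Interleave a few attention layers into a mostly-SSM stack,
    with MoE MLPs on alternating layers."""
    stack = []
    for i in range(n_layers):
        mixer = "attention" if i % attn_every == attn_every - 1 else "mamba"
        mlp = "moe" if i % moe_every == 1 else "dense"
        stack.append(LayerSpec(mixer, mlp))
    return stack

for i, layer in enumerate(build_hybrid_stack(32)):
    print(f"layer {i:2d}: {layer.mixer:9s} + {layer.mlp} MLP")
```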

Built as a Mixture of Experts (MoE) model, Jamba 1.6 Mini has 52 billion total parameters, of which roughly 12 billion are active for any given token at inference time. This structure lets the model run significantly faster, reportedly up to 2.5 times faster than comparably sized dense Transformer models, while keeping a small enough footprint to be deployed on enterprise-grade hardware with appropriate quantization.
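The following toy top-k router shows why only a fraction of an MoE model's parameters participate in each forward pass; the expert count and k here are hypothetical, not Jamba 1.6 Mini's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

n_experts, k, d = 16, 2, 8
router_w = rng.normal(size=(d, n_experts))    # router projection
experts = rng.normal(size=(n_experts, d, d))  # one weight matrix per expert

def moe_forward(x: np.ndarray) -> np.ndarray:
    logits = x @ router_w                     # score every expert
    top = np.argsort(logits)[-k:]             # keep only the top-k experts
    gates = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over top-k
    # Weighted sum of the chosen experts' outputs; all other experts'
    # weights are skipped entirely, which is what keeps the active
    # parameter count far below the total.
    return sum(g * (x @ experts[e]) for g, e in zip(gates, top))

token = rng.normal(size=d)
print(moe_forward(token).shape)  # (8,)
```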

A core strength of the model is its 256K-token context window, which enables it to process and retrieve information from exceptionally long documents or large datasets. It is optimized for enterprise use cases such as Retrieval-Augmented Generation (RAG), document analysis, and grounded question-answering. The model also natively supports structured JSON output and function calling through a standardized tool-use API.
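A hedged sketch of tool use through an OpenAI-style chat-completions client follows; the base URL, model identifier, and get_order_status tool are assumptions for illustration, not AI21's documented values:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.ai21.com/studio/v1",  # assumed endpoint; check AI21's docs
    api_key="YOUR_AI21_API_KEY",
)

# Hypothetical tool definition in standard JSON-schema tool format.
tools = [{
    "type": "function",
    "function": {
        "name": "get_order_status",
        "description": "Look up the status of a customer order.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

response = client.chat.completions.create(
    model="jamba-1.6-mini",                     # assumed model identifier
    messages=[{"role": "user", "content": "Where is order 8812?"}],
    tools=tools,
)
# If the model decides to invoke the tool, the structured call appears here:
print(response.choices[0].message.tool_calls)
```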

Jamba 1.6 Mini is a multilingual model supporting nine languages: English, Spanish, French, Portuguese, Italian, Dutch, German, Arabic, and Hebrew. It was released under the Jamba Open Model License, which allows both research and commercial use.
