Liquid AI
Open Weights

LFM2.5-1.2B-Instruct

Released Jan 2026

Intelligence rank: #434
Coding rank: #371
Context: 32K tokens
Parameters: 1.2B

LFM2.5-1.2B-Instruct is a compact, high-performance language model developed by Liquid AI, specifically optimized for on-device and edge deployment. As part of the Liquid Foundation Model (LFM) 2.5 series, it is built on a hybrid architecture that combines double-gated LIV convolution blocks with Grouped Query Attention (GQA). This design is intended to provide the efficiency of linear state-space models while maintaining the reasoning capabilities of standard Transformers.
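The paragraph above describes gated convolution blocks paired with attention. As an illustration only, the sketch below shows a generic double-gated causal short-convolution block in NumPy: one sigmoid gate before a depthwise causal convolution and one after, followed by an output projection. The function name, weight shapes, and gating placement are assumptions for demonstration, not the actual LFM2.5 block.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def double_gated_short_conv(x, W_b, W_c, W_x, kernel, W_out):
    """Hypothetical double-gated causal short-convolution block.

    x:                    (T, d) input activations
    W_b, W_c, W_x, W_out: (d, d) projections (assumed square here)
    kernel:               (K, d) depthwise causal convolution weights
    """
    b = sigmoid(x @ W_b)        # first gate, applied before the convolution
    c = sigmoid(x @ W_c)        # second gate, applied after the convolution
    u = b * (x @ W_x)           # gated value stream entering the convolution
    T, d = u.shape
    K = kernel.shape[0]
    y = np.zeros_like(u)
    for t in range(T):          # depthwise causal conv: y[t] mixes u[t-K+1..t]
        for k in range(K):
            if t - k >= 0:
                y[t] += kernel[k] * u[t - k]
    return (c * y) @ W_out      # gate the conv output, then project back
```

Because the convolution is causal with a short kernel, each output position depends only on a fixed, small window of past positions, which is what makes such blocks cheap at inference time compared to full attention.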

Fine-tuned for instruction following and conversational tasks, the model has a 32,768-token context window and was pre-trained on a 28-trillion-token dataset. Despite its roughly 1.2 billion parameters, it is designed to rival significantly larger models on reasoning, math, and coding benchmarks. It supports multiple languages, including English, Chinese, French, German, Japanese, and Korean.

LFM2.5-1.2B-Instruct is engineered for low-latency inference on memory-constrained hardware such as mobile devices, laptops, and IoT systems. Its architecture supports efficient execution across compute backends, including CPUs and NPUs, with a weight memory footprint under 1GB in quantized formats.
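The sub-1GB claim can be checked with simple arithmetic: weight memory is roughly parameter count times bits per weight. A quick back-of-envelope calculation (weights only; KV cache and activations are extra):

```python
# Rough weight memory for a 1.2B-parameter model at common quantization widths.
PARAMS = 1.2e9

def weight_gib(bits):
    """Weight storage in GiB at the given bits per parameter."""
    return PARAMS * bits / 8 / 2**30

for bits in (16, 8, 4):
    print(f"{bits:>2}-bit weights: {weight_gib(bits):.2f} GiB")
# 16-bit ~ 2.24 GiB, 8-bit ~ 1.12 GiB, 4-bit ~ 0.56 GiB
```

At 4-bit quantization the weights come to about 0.56 GiB, consistent with the stated sub-1GB memory profile; at 8-bit they already slightly exceed 1 GiB.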
