Liquid AI
Open Weights

LFM2 2.6B

Released Sep 2025

Intelligence: #434
Coding: #360
Math: #234
Context: 33K
Parameters: 2.6B

LFM2 2.6B is a compact language model developed by Liquid AI, released as the largest dense model within the Liquid Foundation Model 2 (LFM2) family. It utilizes a distinctive hybrid architecture that combines Grouped Query Attention (GQA) with double-gated short-range LIV convolution blocks. This architectural approach is designed to provide significantly faster inference and reduced memory consumption—particularly for the Key-Value (KV) cache—compared to standard Transformer architectures of similar scale.
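The double-gated convolution pattern can be sketched roughly as follows: the input is multiplied by one learned gate before a short causal depthwise convolution, and the result is multiplied by a second gate afterward. This is a minimal NumPy illustration of that pattern only; the names (`W_b`, `W_c`, `kernel`) and the exact projection layout are assumptions for illustration and do not reproduce LFM2's actual block.

```python
import numpy as np

def gated_short_conv(x, W_b, W_c, kernel):
    """Sketch of a double-gated short-range convolution block.

    x:      (seq_len, d_model) input activations.
    W_b, W_c: (d_model, d_model) input-dependent gate projections
              (hypothetical names, not LFM2's real parameters).
    kernel: (k, d_model) depthwise causal convolution weights, where
            k is a small window (short-range, unlike full attention).
    """
    b = x @ W_b                  # gate applied before the convolution
    c = x @ W_c                  # gate applied after the convolution
    u = b * x                    # elementwise input gating
    k, d = kernel.shape
    # Causal depthwise conv: position t only sees the last k inputs,
    # so no growing KV cache is needed -- a fixed window suffices.
    pad = np.vstack([np.zeros((k - 1, d)), u])
    conv = np.stack([(pad[t:t + k] * kernel).sum(axis=0)
                     for t in range(u.shape[0])])
    return c * conv              # elementwise output gating
```

Because the convolution window is fixed and small, the per-token state is constant in sequence length, which is the source of the memory savings over a Transformer's KV cache described above.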

The model was trained on a budget of 10 trillion tokens and supports a context window of 32,768 tokens. Despite its small parameter count, it is engineered to compete with models in the 3B to 4B range on reasoning, mathematics, and instruction-following benchmarks. It offers broad multilingual support, with specific tuning for English and Japanese.

LFM2 2.6B is available in several specialized checkpoints, including LFM2-2.6B-Transcript for long-form meeting analysis and LFM2-2.6B-Exp, which employs reinforcement learning to improve reasoning capabilities. The model is designed for flexible deployment across CPU, GPU, and NPU hardware, targeting applications on edge devices such as laptops and smartphones.

Rankings & Comparison