LFM2.5-1.2B-Thinking

Liquid AI · Open Weights · Released Jan 2026

Intelligence rank: #431 · Coding rank: #360
Context: 32K · Parameters: 1.2B

LFM2.5-1.2B-Thinking is a reasoning-focused language model developed by Liquid AI as part of the LFM2.5 model family. It is a 1.17-billion-parameter model designed for on-device deployment, with a memory footprint of approximately 900 MB. The model is optimized to perform complex reasoning tasks locally on edge hardware, such as smartphones and laptops, without requiring cloud connectivity.
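The quoted footprint implies well under one byte of storage per weight, which points to quantized weights rather than fp16. A quick sanity check (the interpretation as quantization is an inference, not a source claim):

```python
# Rough arithmetic relating the parameter count to the quoted footprint.
params = 1.17e9        # 1.17 billion parameters (from the text)
footprint_mb = 900     # approximate on-device footprint in MB (from the text)

bytes_per_param = footprint_mb * 1e6 / params
print(f"{bytes_per_param:.2f} bytes/parameter")  # prints "0.77 bytes/parameter"

# 0.77 bytes/param is far below the 2 bytes/param of fp16 weights,
# consistent with sub-8-bit quantized weights -- an inference, not a source claim.
```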

The model employs a hybrid architecture comprising 16 layers: 10 double-gated LIV (linear input-varying) convolution blocks and 6 grouped-query attention (GQA) blocks. This hybrid design targets high-throughput inference and efficient memory management, particularly on mobile and embedded processors.
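The 10/6 block split can be sketched as a simple layer manifest. Note that only the counts come from the text; the interleaving order below is a placeholder assumption, not the published layout:

```python
# Illustrative manifest of the 16-layer hybrid stack.
# Counts (10 conv + 6 GQA) come from the text; the specific
# interleaving pattern here is a placeholder assumption.
CONV = "double_gated_liv_conv"
GQA = "grouped_query_attention"

layers = [CONV, CONV, GQA] * 5 + [GQA]  # 10 conv blocks, 6 GQA blocks

print(len(layers), layers.count(CONV), layers.count(GQA))  # prints "16 10 6"
```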

The "Thinking" variant is distinguished by its ability to generate structured reasoning traces before producing a final answer. This behavior, often referred to as chain-of-thought processing, allows the model to work through problems systematically, improving performance on mathematical reasoning, programming tasks, and tool orchestration. The model supports a 32,768-token context window and eight languages, including English, Chinese, German, and Japanese.
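Applications built on thinking models usually separate the reasoning trace from the final answer before display. A minimal parser, assuming the trace is wrapped in `<think>…</think>` tags (the delimiter is an assumption; check the model's chat template for the actual format):

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Split model output into (reasoning_trace, final_answer).

    Assumes the reasoning trace is wrapped in <think>...</think> tags;
    if no trace is found, the whole text is treated as the answer.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        return "", text.strip()
    return match.group(1).strip(), text[match.end():].strip()

# Example: strip the trace, keep the answer for the user.
raw = "<think>2 + 2: add the units digits.</think>The answer is 4."
trace, answer = split_reasoning(raw)
print(answer)  # prints "The answer is 4."
```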
