LFM 40B is a large-scale generative model developed by Liquid AI as part of the Liquid Foundation Models (LFMs) series. Departing from traditional Transformer architectures, LFMs are built on principles of dynamical systems and signal processing. This design allows the model to handle sequential data with greater memory efficiency and a reduced computational footprint during inference.

The model uses a Mixture of Experts (MoE) architecture with 40.3 billion total parameters, of which roughly 12 billion are activated per token. This configuration enables LFM 40B to deliver performance comparable to larger dense models while maintaining higher throughput. It is optimized to run across a range of hardware platforms, including NVIDIA, AMD, Qualcomm, and Apple silicon.

A key technical advantage of LFM 40B is its handling of long sequences. It supports an optimized context window of 32,768 tokens and avoids the linear KV-cache growth typical of attention-based models by using architectural compression. In addition to text, the model is designed to natively support multi-modal inputs, including audio, images, video, and time-series data.
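To make the active-parameter figure concrete, here is a minimal, hypothetical sketch of top-k expert routing (toy dimensions and expert counts chosen for illustration; this is not Liquid AI's implementation). The point it shows is that only k of a layer's experts run for each token, which is how an MoE model with 40.3 billion total parameters can activate only about 12 billion per token.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_ff = 64, 256   # toy dimensions, far smaller than the real model
n_experts, top_k = 8, 2   # hypothetical counts chosen for illustration

# Each expert is a small two-layer MLP; only top_k of them run per token.
experts = [
    (rng.standard_normal((d_model, d_ff)) * 0.02,
     rng.standard_normal((d_ff, d_model)) * 0.02)
    for _ in range(n_experts)
]
router = rng.standard_normal((d_model, n_experts)) * 0.02

def moe_layer(x):
    """Route one token vector x to its top_k experts and mix their outputs."""
    logits = x @ router
    top = np.argsort(logits)[-top_k:]                   # indices of chosen experts
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over chosen
    out = np.zeros_like(x)
    for w, i in zip(weights, top):
        w1, w2 = experts[i]
        out += w * (np.maximum(x @ w1, 0.0) @ w2)       # ReLU MLP expert
    return out

token = rng.standard_normal(d_model)
print(moe_layer(token).shape)  # (64,) -- same shape out, but only 2 of 8 experts ran
```

Because the unchosen experts contribute no computation for that token, throughput tracks the active-parameter count rather than the total, which is the trade-off the paragraph above describes.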
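Likewise, the long-sequence claim can be illustrated with back-of-the-envelope arithmetic. The sketch below compares the KV cache a conventional attention model accumulates as the sequence grows against a fixed-size recurrent state; every layer count and dimension here is an assumption for illustration, not LFM 40B's actual configuration.

```python
# Illustrative memory comparison: linear KV-cache growth vs. a fixed-size state.
# All model dimensions below are hypothetical, not LFM 40B's real configuration.

def kv_cache_bytes(seq_len, n_layers=40, n_heads=32, head_dim=128, dtype_bytes=2):
    # Attention stores one key and one value vector per token, per head, per layer,
    # so the cache grows linearly with sequence length.
    return seq_len * n_layers * n_heads * head_dim * 2 * dtype_bytes

def fixed_state_bytes(n_layers=40, state_dim=4096, dtype_bytes=2):
    # A recurrent / state-space layer keeps one fixed-size state per layer,
    # regardless of how many tokens have been processed.
    return n_layers * state_dim * dtype_bytes

for seq_len in (1_024, 8_192, 32_768):
    print(f"{seq_len:>6} tokens: KV cache {kv_cache_bytes(seq_len) / 2**30:6.2f} GiB, "
          f"fixed state {fixed_state_bytes() / 2**20:6.2f} MiB")
```

Under these assumed dimensions the KV cache reaches tens of GiB at 32,768 tokens while the fixed state stays under a MiB, which is the qualitative gap the architectural compression described above is meant to exploit.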