Ling-1T is a trillion-parameter large language model developed by InclusionAI, an artificial intelligence research initiative under Ant Group. As the first flagship "non-thinking" model in the Ling 2.0 series, it is designed for applications that require a balance between computational scale and inference speed, providing capable reasoning without the latency of explicit step-by-step deliberation.
The model utilizes a sparse Mixture-of-Experts (MoE) architecture, maintaining a total capacity of 1 trillion parameters while activating approximately 50 billion parameters per token. Ling-1T is notable for being one of the first trillion-scale foundation models trained entirely using FP8 mixed-precision, a technique that reduces memory overhead and improves training throughput without sacrificing model performance. It supports a context window of up to 128,000 tokens.
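The sparse activation described above can be illustrated with a minimal top-k routing sketch. This is not Ling-1T's actual implementation; the dimensions, expert count, and `k` are toy values, and the router is a single linear layer chosen for clarity. The point is the mechanism: a router scores all experts, but only the top-k run per token, so active parameters stay a small fraction of total parameters.

```python
import numpy as np

def moe_forward(x, experts, router_w, k=2):
    """Route one token vector through the top-k experts of a sparse
    MoE layer. Only k of the experts execute, which is how a model
    can hold 1T total parameters while activating only ~50B per
    token (values here are illustrative, not Ling-1T's config)."""
    logits = router_w @ x                        # score every expert: (n_experts,)
    top = np.argsort(logits)[-k:]                # indices of the k highest-scoring experts
    gates = np.exp(logits[top])
    gates /= gates.sum()                         # softmax over the selected experts only
    # Weighted combination of the k selected experts' outputs
    return sum(g * experts[i](x) for g, i in zip(gates, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
# Toy experts: each is a fixed random linear map
experts = [lambda x, W=rng.standard_normal((d, d)): W @ x
           for _ in range(n_experts)]
router_w = rng.standard_normal((n_experts, d))
y = moe_forward(rng.standard_normal(d), experts, router_w, k=2)
```

Because the unselected experts never execute, compute and memory bandwidth per token scale with the active parameter count rather than the total, which is the efficiency argument for MoE at trillion-parameter scale.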
Pre-trained on over 20 trillion reasoning-dense tokens, Ling-1T emphasizes logical reasoning, mathematics, and software development. It features a specialized capability referred to as "aesthetic intelligence," which allows it to generate front-end code that integrates functional logic with visual design coherence. The model was optimized through an Evolutionary Chain-of-Thought (Evo-CoT) process to enhance reasoning depth while maintaining the lower latency profile of a non-thinking architecture.
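The Evo-CoT procedure has not been published in implementation-level detail; one common reading of an evolutionary chain-of-thought loop is iterated sample-score-select over candidate reasoning chains. The sketch below is a hypothetical interpretation under that assumption: `generate` and `score` are stand-in callables, not Ling-1T APIs.

```python
import random

def evo_cot(prompt, generate, score, population=4, survivors=2, rounds=3):
    """Hypothetical evolutionary chain-of-thought loop: sample candidate
    reasoning chains from the current seeds, keep the highest-scoring
    ones, and use the survivors to seed the next round. This is an
    illustrative sketch, not the actual Evo-CoT training procedure."""
    seeds = [prompt]
    for _ in range(rounds):
        candidates = [generate(random.choice(seeds)) for _ in range(population)]
        candidates.sort(key=score, reverse=True)   # rank chains by the scorer
        seeds = candidates[:survivors]             # survivors seed the next round
    return seeds[0]                                # best chain found

# Toy stand-ins so the loop runs end to end
random.seed(0)
def toy_generate(seed):
    return seed + random.choice([" step", " leap"])
def toy_score(chain):
    return chain.count("step")                     # reward chains with more "step"s

best = evo_cot("Q:", toy_generate, toy_score)
```

In a training context the scorer would be a reward or verifier signal and the selected chains would feed back into optimization; here the loop only demonstrates the sample-and-select shape.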