MiniMax M2.7 is a large-scale language model developed by the Chinese AI startup MiniMax. Released in March 2026, it is part of the M-series evolution designed for complex software engineering, agentic workflows, and professional productivity. A defining characteristic is its self-evolving development process: MiniMax used earlier iterations of the model to manage its training environments and data pipelines, and the model reportedly handled between 30% and 50% of its own reinforcement learning research workflow.
The model uses a Mixture of Experts (MoE) architecture with 2.3 trillion total parameters, of which 100 billion are active per forward pass. This scale is paired with a 200,000-token context window, allowing the model to process large code repositories and dense professional documents in a single session. M2.7 is optimized for high-fidelity tasks, maintaining a 97% adherence rate on complex instruction sets and performing strongly on multi-round revisions of Excel, Word, and PowerPoint documents.
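The parameter figures follow the standard MoE arithmetic: a router sends each token to a small subset of experts, so only a fraction of the total weights participate in any one forward pass. The sketch below is a minimal, illustrative top-k routing layer in Python; the expert count, hidden size, and top-k value are arbitrary assumptions chosen for demonstration, not published details of M2.7.

    import numpy as np

    def moe_forward(x, gate_w, experts, top_k=2):
        # Router scores: one logit per expert for this token.
        logits = x @ gate_w
        # Keep only the top-k experts; the rest contribute nothing,
        # which is why active parameters are far fewer than total parameters.
        top = np.argsort(logits)[-top_k:]
        weights = np.exp(logits[top] - logits[top].max())
        weights /= weights.sum()  # softmax over the selected experts only
        return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

    rng = np.random.default_rng(0)
    d, num_experts = 64, 16
    x = rng.normal(size=d)
    gate_w = rng.normal(size=(d, num_experts))
    experts = rng.normal(size=(num_experts, d, d))
    y = moe_forward(x, gate_w, experts)
    print(y.shape)  # (64,); only 2 of the 16 expert matrices were used

In this toy layer, 2 of 16 experts fire per token; the quoted ratio of roughly 100 billion active to 2.3 trillion total parameters (about 4%) implies a similarly sparse routing scheme in M2.7 itself.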
Performance and Capabilities
M2.7 posts competitive results on industry benchmarks, scoring 56.22% on SWE-Pro and 78% on SWE-bench Verified, comparable to leading frontier models in autonomous coding. In the professional domain, it achieved an Elo score of 1,495 on the GDPval-AA benchmark. The model also introduces native support for Agent Teams, a capability for coordinating multiple specialized agents on long-horizon engineering and research problems.
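No public specification of the Agent Teams protocol appears here, but the coordination pattern it names is well established: a coordinator agent decomposes a long-horizon task, dispatches subtasks to specialized worker agents, and merges their results. The Python sketch below illustrates only that generic pattern; run_agent_team and call_model are hypothetical names, not part of any MiniMax SDK.

    from typing import Callable

    def run_agent_team(task: str, call_model: Callable[[str, str], str]) -> str:
        # Coordinator: decompose the task into independent subtasks.
        plan = call_model(
            "You are a coordinator. Split the task into independent "
            "subtasks, one per line.",
            task,
        )
        subtasks = [line.strip() for line in plan.splitlines() if line.strip()]
        # Workers: each specialized agent solves one subtask.
        results = [
            call_model("You are a specialist. Solve only the given subtask.", s)
            for s in subtasks
        ]
        # Coordinator: merge partial results into a final answer.
        return call_model(
            "You are a coordinator. Merge these partial results into one "
            "coherent answer.",
            "\n\n".join(results),
        )

    # Trivial stub in place of a real model call, for demonstration:
    echo = lambda system, user: f"({system.split('.')[0]}) {user}"
    print(run_agent_team("Port the build system to CMake", echo))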
Technical Integration
Access to MiniMax M2.7 is provided through the proprietary MiniMax API and the company's agent creation platforms. For optimal performance in reasoning and creative tasks, the developer recommends specific inference settings, including a temperature of 1.0, top-p of 0.95, and top-k of 40. The model also supports automatic caching and specialized endpoints for image understanding and identity preservation in interactive entertainment scenarios.
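Assuming the API follows the OpenAI-compatible chat completions convention common among hosted model providers, a request with the recommended sampling settings might look like the following; the base URL and model identifier are placeholders, and the official MiniMax documentation should be treated as authoritative.

    from openai import OpenAI

    # Placeholder base URL and model name; substitute the values from the
    # official MiniMax API documentation.
    client = OpenAI(
        api_key="YOUR_MINIMAX_API_KEY",
        base_url="https://api.minimax.example/v1",
    )

    response = client.chat.completions.create(
        model="MiniMax-M2.7",
        messages=[{"role": "user", "content": "Summarize this repository's build steps."}],
        temperature=1.0,   # recommended for reasoning and creative tasks
        top_p=0.95,
        extra_body={"top_k": 40},  # top_k is not a standard OpenAI field
    )
    print(response.choices[0].message.content)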