OpenAI
Open Weights

gpt-oss-120B (low)

Released Aug 2025

Intelligence: #164
Coding: #207
Math: #103
Context: 131K
Parameters: 117B

gpt-oss-120B is an open-weight large language model developed by OpenAI, released as part of the gpt-oss family. It uses a Mixture-of-Experts (MoE) architecture with 117 billion total parameters, of which 5.1 billion are active per token. Released under the Apache 2.0 license, it marks a significant shift in OpenAI's distribution strategy: open weights for a reasoning-class model.
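The relationship between total and active parameters can be sketched with back-of-envelope arithmetic. Only the headline figures (117B total, 5.1B active, 36 layers, 128 experts, top-4 routing) come from the listing; the shared/per-expert split derived below is an estimate, not an official breakdown:

```python
# Back-of-envelope MoE parameter accounting. Headline figures are from the
# model card; the derived split is a rough estimate, not an official number.
LAYERS = 36
NUM_EXPERTS = 128
TOP_K = 4
TOTAL_PARAMS = 117e9   # all experts counted
ACTIVE_PARAMS = 5.1e9  # shared weights + top-4 experts per token

# total  = shared + LAYERS * NUM_EXPERTS * per_expert
# active = shared + LAYERS * TOP_K       * per_expert
# Subtracting the two equations eliminates the shared term:
per_expert = (TOTAL_PARAMS - ACTIVE_PARAMS) / (LAYERS * (NUM_EXPERTS - TOP_K))
shared = ACTIVE_PARAMS - LAYERS * TOP_K * per_expert

print(f"params per expert (per layer): ~{per_expert / 1e6:.0f}M")
print(f"shared (attention, embeddings, etc.): ~{shared / 1e9:.2f}B")
```

This is why MoE models decouple capacity from cost: each token pays for only 4 of the 128 experts per layer, so inference compute tracks the 5.1B active parameters rather than the full 117B.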

Designed for agentic tasks and complex reasoning, the model includes native support for tool use, function calling, web browsing, and Python code execution. A standout feature is its configurable reasoning effort, which lets users choose between "low," "medium," and "high" settings. The "low" configuration, reflected in this listing's model alias, is tuned for reduced latency and efficient inference while retaining enough reasoning capability for production use.
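As a sketch of how the effort toggle is exposed in practice: gpt-oss reads the reasoning level from the system message of its chat format, so a client can surface it as a request parameter. The exact prompt syntax and the payload shape below are assumptions for illustration, not an official client:

```python
# Sketch: building a chat request with configurable reasoning effort.
# Assumption: the effort level is passed as a "Reasoning: <level>" line in
# the system message; the model name and payload shape are illustrative.
import json

VALID_EFFORTS = {"low", "medium", "high"}

def build_request(user_prompt: str, effort: str = "low") -> dict:
    """Return an OpenAI-compatible chat payload with reasoning effort set."""
    if effort not in VALID_EFFORTS:
        raise ValueError(f"effort must be one of {sorted(VALID_EFFORTS)}")
    return {
        "model": "gpt-oss-120b",
        "messages": [
            {"role": "system", "content": f"Reasoning: {effort}"},
            {"role": "user", "content": user_prompt},
        ],
    }

payload = build_request("Summarize MoE routing in two sentences.", effort="low")
print(json.dumps(payload, indent=2))
```

Validating the effort string client-side keeps an unsupported value from silently degrading to the model's default behavior.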

Architecturally, gpt-oss-120B consists of 36 layers with 128 experts using Top-4 routing. It was post-trained with native MXFP4 quantization, allowing the 117B-parameter model to be deployed on a single 80GB GPU. Unlike OpenAI's proprietary models, gpt-oss-120B provides developers with full access to its internal chain-of-thought reasoning process for enhanced transparency and debugging.
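The single-GPU claim can be checked with a rough estimate. MXFP4 (the OCP microscaling format) stores 4-bit values with one shared 8-bit scale per 32-element block, i.e. about 4.25 bits per parameter. The figure below ignores the KV cache, activations, and any non-expert weights kept at higher precision, so treat it as a lower bound:

```python
# Rough weight-memory estimate for MXFP4 deployment.
# MXFP4 = 4-bit (E2M1) elements + one 8-bit (E8M0) scale per 32-element
# block, so ~4.25 effective bits per parameter. KV cache, activations, and
# higher-precision non-expert weights are deliberately ignored here.
TOTAL_PARAMS = 117e9
BITS_PER_PARAM = 4 + 8 / 32  # value bits + amortized block-scale bits

weight_bytes = TOTAL_PARAMS * BITS_PER_PARAM / 8
print(f"~{weight_bytes / 1e9:.0f} GB of weights")
print(f"fits in 80 GB: {weight_bytes / 1e9 < 80}")
```

The estimate lands in the low-60s of gigabytes, which is consistent with the model fitting on a single 80GB GPU with headroom left for the KV cache and activations.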

Rankings & Comparison