GPT-4.5 (Preview) is a large-scale multimodal model developed by OpenAI, released in February 2025 as a research preview of the successor to the GPT-4 family. Described as OpenAI's largest model to date, it emphasizes scaling up unsupervised pre-training, complemented by post-training techniques, to improve pattern recognition and creative insight. The model is designed for more natural, fluid interactions and is noted for its enhanced ability to follow complex user intent and maintain contextual coherence.
Compared to predecessors such as GPT-4o, GPT-4.5 (Preview) demonstrates significant improvements in factual reliability and a reduced hallucination rate. In internal evaluations such as the SimpleQA benchmark, the model showed notably higher accuracy on knowledge-based questions. It also features a broader knowledge base and improved emotional intelligence (EQ), which OpenAI suggests makes it more effective for tasks involving creative writing, programming, and nuanced communication.
As a multimodal system, the model supports both text and image inputs and maintains a 128,000-token context window. OpenAI has positioned GPT-4.5 as a general-purpose model that prioritizes fluid, human-like interaction rather than the explicit chain-of-thought reasoning characteristic of the OpenAI o1 or o3 series. Due to its scale and high computational demands, the model was launched with a higher operational cost than the GPT-4o series.
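As a rough sketch of how a combined text-and-image query to such a model might be assembled, the snippet below builds a request body following the Chat Completions message format for multimodal content; the model identifier `gpt-4.5-preview` and the specific field values are illustrative assumptions, not confirmed details from this article.

```python
import json


def build_multimodal_request(prompt: str, image_url: str) -> dict:
    """Construct a hypothetical Chat Completions request body
    combining a text prompt with an image input."""
    return {
        # Assumed model identifier for the research preview.
        "model": "gpt-4.5-preview",
        "messages": [
            {
                "role": "user",
                # Multimodal content is expressed as a list of typed parts:
                # a text segment plus an image reference.
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
        # The prompt and any accompanying documents must fit within the
        # model's 128,000-token context window.
        "max_tokens": 1024,
    }


request = build_multimodal_request(
    "Describe the chart in this image.",
    "https://example.com/chart.png",
)
print(json.dumps(request, indent=2))
```

Representing the message content as a list of typed parts lets a single user turn mix text and images, which matches the fluid, general-purpose interaction style the model is positioned for.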