Spring Update
Summary
OpenAI releases GPT-4o, a new flagship model capable of real-time reasoning across audio, vision, and text modalities.
Cached at: 04/20/26, 02:47 PM
Similar Articles
Hello GPT-4o
OpenAI announces GPT-4o, a flagship multimodal model that processes audio, vision, text, and video in real time, responding to audio inputs in as little as 232ms (320ms on average). The model matches GPT-4 Turbo on text and code while significantly improving multilingual, audio, and vision capabilities, at 50% lower API cost.
GPT-4
OpenAI releases GPT-4, a large multimodal model that accepts image and text inputs and demonstrates human-level performance on professional and academic benchmarks, significantly outperforming GPT-3.5 across various evaluation metrics.
Introducing OpenAI o3 and o4-mini
OpenAI releases o3 and o4-mini, its latest reasoning models, which can agentically access and combine all ChatGPT tools (web search, code execution, image analysis, image generation). o3 achieves state-of-the-art performance on coding, math, and science benchmarks with 20% fewer major errors than o1, while o4-mini offers efficient reasoning optimized for cost and speed.
Introducing GPT-4.5
OpenAI introduces GPT-4.5, its largest and best chat model yet, available as a research preview to Pro users and developers. The model advances unsupervised learning through scaled compute and data, showing improved factuality, reduced hallucinations, and better understanding of human intent compared to GPT-4o.
OpenAI GPT-4.5 System Card
OpenAI releases a research preview of GPT-4.5, its largest and most knowledgeable model to date, built on GPT-4o with scaled pre-training, improved emotional intelligence, and fewer hallucinations. The system card details training methods, safety evaluations, and capability assessments conducted prior to deployment.