Am I missing something about GPT-5.5 efficiency?
Summary
A user questions the token efficiency of GPT-5.5 versus GPT-5.4 in Codex, analyzing a chart from Artificial Analysis and praising Cursor's token performance.
Similar Articles
GPT-5.5's CoT keeps leaking in the new Codex update. Looks like we know how they got token efficiency, they cavemanmaxxed
The article claims that GPT-5.5's Chain-of-Thought output is leaking in the new Codex update, and suggests the leaks reveal how the model achieved its token efficiency: by compressing its reasoning into terse, caveman-like output.
GPT-5.5 may burn fewer tokens, but it always burns more cash
OpenAI's GPT-5.5 costs 49–92% more than GPT-5.4 in practice despite its claimed token efficiency improvements. Anthropic's Claude Opus 4.7 likewise raised effective costs by 12–27% for longer prompts, reflecting a broader trend of rising frontier model prices as both companies face massive projected losses.
Introducing GPT-5.4
OpenAI is releasing GPT-5.4 and GPT-5.4 Pro across ChatGPT, the API, and Codex, featuring native computer-use capabilities, 1M token context, improved reasoning and coding, and state-of-the-art performance on professional knowledge work benchmarks. It is described as OpenAI's most capable and token-efficient reasoning model to date.
Building more with GPT-5.1-Codex-Max
OpenAI introduces GPT-5.1-Codex-Max, a new agentic coding model with improved reasoning, token efficiency, and the ability to maintain coherent work across millions of tokens through a 'compaction' mechanism. The model is faster, more intelligent, and can sustain long-running tasks for hours or days, representing a significant advancement in AI-assisted software engineering.
Introducing GPT-5.5
OpenAI has released GPT-5.5, a significant upgrade to its frontier AI model, boasting superior capabilities in agentic coding, research, and multi-step task execution while maintaining efficiency and speed.