Claude Token Counter, now with model comparisons
Summary
Simon Willison upgraded his Claude Token Counter tool to support comparing token counts across different Claude models, revealing that Claude Opus 4.7's new tokenizer produces about 1.46x as many tokens as Opus 4.6 for the same text, which works out to roughly 46% higher effective cost despite identical per-token pricing.
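The cost claim is simple arithmetic: if per-token pricing is unchanged, effective cost scales directly with token count, so a 1.46x tokenizer ratio means about 46% higher spend. A minimal sketch (the ratio is the figure quoted in the summary; the helper name is illustrative, not part of Willison's tool):

```python
def cost_multiplier(token_ratio: float, price_ratio: float = 1.0) -> float:
    """Effective cost change when token count scales by token_ratio
    and per-token price scales by price_ratio (1.0 = unchanged)."""
    return token_ratio * price_ratio

# Figure from the summary: Opus 4.7's tokenizer emits ~1.46x as many
# tokens as Opus 4.6 at identical per-token pricing.
m = cost_multiplier(1.46)
print(f"effective cost: {m:.2f}x ({(m - 1) * 100:.0f}% higher)")
```

The same helper also covers the case where both tokenization and pricing change, since the two multipliers compound.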
Similar Articles
@_avichawla: A smarter Claude model burns more tokens, not fewer! And it's not a minor 3-5% difference. But 54% higher token usage. …
The article analyzes why smarter AI agents like Claude consume more tokens when interacting with human-centric backends like Supabase due to inefficient context discovery. It introduces InsForge, an open-source backend tool designed for agents that provides structured context to significantly reduce token usage and manual interventions.
@_avichawla: Claude Code used 3x fewer tokens with one change: - Before: 10.4M tokens · 10 errors · $9.21 - After: 3.7M tokens · 0 e…
By swapping to InsForge Skills + CLI as the backend context layer, a user cut Claude Code token usage by 64%, eliminated all errors, and reduced cost from $9.21 to $2.81.
@akshay_pachaar: https://x.com/akshay_pachaar/status/2045910818450182526
A practical guide explaining how Claude Opus 4.7 differs from 4.6, covering the new xhigh effort level, adaptive thinking replacing fixed token budgets, and a 1M context window, with recommendations on how to adjust prompting and delegation strategies to avoid inflated token costs.
GPT-5.5 may burn fewer tokens, but it always burns more cash
OpenAI's GPT-5.5 costs 49–92% more than GPT-5.4 in practice despite claimed token efficiency improvements, while Anthropic's Claude Opus 4.7 also raised effective costs by 12–27% for longer prompts, reflecting a broader trend of rising frontier model prices as both companies face massive projected losses.
@runes_leo: Nailed it. Re-counted today: Claude boots with 30-40K tokens (four files in rules/ + MEMORY.md), 5-8× Codex, 15× Hermes. The more rules you stuff in, the more it drifts—broke the same P0 rule five times in one session.
A developer re-counted Claude's startup load and found it boots with 30-40K tokens of rules (four files in rules/ plus MEMORY.md), 5-8× Codex and 15× Hermes, and drifts further as rules pile up, violating the same P0 rule five times in one session.