@QingQ77: A terminal AI coding agent designed specifically for the DeepSeek API's prefix caching mechanism, maintaining ultra-low token costs in long sessions through a cache-first architecture. https://github.com/esengine/DeepSeek-Reasonix…

X AI KOLs Timeline Tools

Summary

Reasonix is a terminal AI coding agent designed specifically for the DeepSeek API's prefix caching mechanism, achieving ultra-low token costs in long sessions through a cache-first architecture. In testing, 435 million input tokens cost only about $12, at a cache hit rate of 99.82%.
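The reported figures are easy to sanity-check. The per-token prices below are assumptions (roughly DeepSeek's published input pricing for cache hits vs. misses at the time of writing, not figures from the article); the token count and hit rate come from the article itself:

```typescript
// Sanity check of the reported cost figures.
// ASSUMPTIONS: hitPricePerM and missPricePerM are illustrative, based on
// DeepSeek's published input pricing; only the token count and hit rate
// come from the article.
const inputTokensM = 435;     // 435 million input tokens (from the article)
const cacheHitRate = 0.9982;  // 99.82% cache hit rate (from the article)
const hitPricePerM = 0.028;   // USD per 1M cached input tokens (assumed)
const missPricePerM = 0.28;   // USD per 1M uncached input tokens (assumed)

// Blended price is a weighted average of the hit and miss prices.
const blendedPerM =
  cacheHitRate * hitPricePerM + (1 - cacheHitRate) * missPricePerM;
const totalCost = inputTokensM * blendedPerM;

console.log(blendedPerM.toFixed(4)); // "0.0285" USD per 1M tokens
console.log(totalCost.toFixed(2));   // "12.38", consistent with "about $12"
```

Under these assumed prices, the high hit rate is doing almost all the work: at the full miss price, the same 435M tokens would cost roughly ten times as much.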

Reasonix is a DeepSeek-native terminal coding agent, designed around prefix-cache stability. In testing, 435 million input tokens in a single day cost only about $12, with a cache hit rate of 99.82%. It rests on four architectural pillars: a cache-first dialogue loop, R1 chain-of-thought reuse, tool-call repair, and cost control. Built-in features include web search (Mojeek by default, configurable to SearXNG), MCP, hooks, skills, memory, persistent sessions, and a web dashboard. It launches with a single npx command, is MIT licensed, and is implemented in TypeScript. It deliberately supports only DeepSeek, with no multi-model switching: the focus is extreme optimization for a single backend.
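The "cache-first dialogue loop" pillar can be illustrated with a minimal sketch. This is not Reasonix's actual code; it is a hypothetical `CacheFirstSession` class showing the underlying idea: prefix caches match on an identical leading span of the request, so the loop keeps the system prompt fixed and only ever appends messages, never rewriting earlier ones:

```typescript
// Hypothetical sketch of a cache-first dialogue loop (NOT Reasonix's actual
// code). Prefix caching matches on the longest identical leading span of the
// request, so the session is append-only: the system prompt is fixed up
// front, and each turn extends the same history so every request shares the
// cached prefix of the previous one.

type Role = "system" | "user" | "assistant" | "tool";
type Message = { role: Role; content: string };

class CacheFirstSession {
  // Append-only history; earlier entries are never mutated or reordered.
  private history: Message[];

  constructor(systemPrompt: string) {
    this.history = [{ role: "system", content: systemPrompt }];
  }

  // Add a user turn and return the full message list to send to the API.
  // Only the tail is new, so earlier tokens can hit the prefix cache.
  addUserTurn(content: string): Message[] {
    this.history.push({ role: "user", content });
    return this.history;
  }

  // Record the model's reply so the next request extends the same prefix.
  addAssistantTurn(content: string): void {
    this.history.push({ role: "assistant", content });
  }
}
```

The design choice this illustrates is that anything volatile (timestamps, dynamic context) must live at the end of the request; injecting it near the top would invalidate the shared prefix on every turn.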


A DeepSeek-native AI coding agent for your terminal.

Engineered around prefix-cache stability — so token costs stay low across long sessions, and you can leave it running.

MIT licensed (see LICENSE). Built by the community at esengine/reasonix.

Similar Articles

deepseek-ai/DeepSeek-V4-Flash

Hugging Face Models Trending

DeepSeek releases DeepSeek-V4-Flash and DeepSeek-V4-Pro, new MoE language models supporting 1 million token contexts with improved efficiency and performance.

@geekbb: An MCP tool that offloads low-risk tasks from Codex to DeepSeek, letting the expensive model make only the judgment calls. Average 48% cost savings across five test tasks, with about 6 seconds of latency. CodexSaver is an MCP tool that delegates low-risk tasks (writing tests, documentation, code explanations...) in Codex coding sessions...

X AI KOLs Timeline

CodexSaver is an MCP tool that offloads low-risk coding tasks (tests, docs, lint fixes) from Codex to a cheaper model like DeepSeek, achieving ~48% cost savings with ~6s latency.
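The delegation idea behind CodexSaver can be sketched in a few lines. This is a hypothetical illustration, not CodexSaver's actual code: tasks are classified by risk, and only low-risk, well-specified kinds are routed to the cheaper backend. The task kinds and model names below are assumptions:

```typescript
// Hypothetical sketch of risk-based model routing (NOT CodexSaver's actual
// code). Low-risk, mechanical tasks go to a cheap model; judgment-heavy work
// stays with the expensive one. Task kinds and model names are illustrative.

type TaskKind = "write_tests" | "write_docs" | "lint_fix" | "refactor" | "design";
type Task = { kind: TaskKind; description: string };

// Task kinds considered safe to delegate to the cheaper backend.
const LOW_RISK = new Set<TaskKind>(["write_tests", "write_docs", "lint_fix"]);

function pickModel(task: Task): "deepseek-chat" | "codex" {
  return LOW_RISK.has(task.kind) ? "deepseek-chat" : "codex";
}
```

The claimed ~48% savings would then depend on what fraction of a session's tasks fall into the low-risk set, and the ~6s latency is the overhead of the extra round trip to the delegated model.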

deepseek-ai/DeepSeek-V4-Pro

Hugging Face Models Trending

DeepSeek releases V4-Pro and V4-Flash, Mixture-of-Experts models supporting million-token contexts with hybrid attention and the Muon optimizer.