The LOOP Skill Engine achieves a 99% success rate and a 99% reduction in token usage on periodic AI agent tasks. It records a single LLM-driven execution and then replays it deterministically as a parameterized, branch-free skill, eliminating both stochastic failures and the cost of repeated LLM calls.
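The record-then-replay idea can be illustrated with a minimal sketch (all names here are hypothetical illustrations, not the LOOP API): the first, LLM-driven run captures an ordered trace of tool calls with templated arguments, and every later run substitutes fresh parameters and replays the calls without invoking the LLM at all.

```python
# Minimal record-and-replay sketch for a periodic agent task.
# Skill, record, and replay are hypothetical names, not the LOOP API.
from dataclasses import dataclass, field
from string import Template
from typing import Callable

@dataclass
class Skill:
    """A branch-free, ordered list of tool calls with templated arguments."""
    steps: list[tuple[str, dict[str, str]]] = field(default_factory=list)

    def record(self, tool: str, args: dict[str, str]) -> None:
        self.steps.append((tool, args))

    def replay(self, tools: dict[str, Callable], params: dict[str, str]) -> None:
        # Deterministic replay: substitute the new parameters into each
        # templated argument and call the tools in order; the LLM is
        # never consulted again, so cost and behavior are fixed.
        for tool, args in self.steps:
            bound = {k: Template(v).safe_substitute(params) for k, v in args.items()}
            tools[tool](**bound)

# First (LLM-driven) run records the trace with placeholders:
skill = Skill()
skill.record("fetch_report", {"date": "$date"})
skill.record("send_email", {"to": "$recipient", "subject": "Report for $date"})

# Every subsequent run replays it deterministically:
tools = {
    "fetch_report": lambda date: print(f"fetching report for {date}"),
    "send_email": lambda to, subject: print(f"emailing {to}: {subject}"),
}
skill.replay(tools, {"date": "2025-06-01", "recipient": "ops@example.com"})
```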
Tencent AI has open-sourced an agent memory system that significantly improves token efficiency and agent consistency in long dialogues through three mechanisms: real-time context compression, Mermaid task maps, and persona memory. Token consumption drops by 61%, and persona consistency rises from 48% to 76%.
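A rough sketch of how the three parts could fit together (hypothetical structure and names; not Tencent's released API): each turn is stored as a short summary rather than raw text, completed steps are appended to a Mermaid flowchart, and durable facts accumulate in a persona store that is replayed into every prompt.

```python
# Illustrative three-part agent memory; AgentMemory and its fields are
# hypothetical names, not the open-sourced system's actual interface.
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    compressed_context: list[str] = field(default_factory=list)  # rolling turn summaries
    task_map: list[str] = field(default_factory=list)            # Mermaid flowchart lines
    persona: dict[str, str] = field(default_factory=dict)        # stable user/agent facts

    def update(self, summary: str, step: str, facts: dict[str, str]) -> None:
        # Real-time compression: keep a short summary instead of the raw turn.
        self.compressed_context.append(summary)
        # Task map: record the completed step as a Mermaid node/edge.
        n = len(self.task_map)
        self.task_map.append(f"  {n} --> {n + 1}[{step}]" if n else f"  1[{step}]")
        # Persona memory: merge durable facts so later turns stay consistent.
        self.persona.update(facts)

    def as_prompt(self) -> str:
        mermaid = "flowchart TD\n" + "\n".join(self.task_map)
        persona = "; ".join(f"{k}={v}" for k, v in self.persona.items())
        context = " ".join(self.compressed_context)
        return f"[persona] {persona}\n[tasks]\n{mermaid}\n[context] {context}"

mem = AgentMemory()
mem.update("user asked for a Q3 sales summary",
           "collect requirements", {"user_name": "Alice", "tone": "formal"})
print(mem.as_prompt())
```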
This paper introduces 'Hint Tuning,' a data-efficient method that reduces token usage in reasoning models by calibrating reasoning depth to problem difficulty. It cuts token usage by 24–66% on models such as Qwen3-Thinking and DeepSeek-R1-Distill using only 1K self-annotated samples.
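The core idea can be sketched as difficulty-conditioned hints prepended to the prompt (the wording and thresholds below are invented for illustration; the paper's actual hint templates and its 1K-sample tuning procedure are not shown): easy problems get a short-reasoning hint so the tuned model spends few thinking tokens, while hard problems keep the full reasoning budget.

```python
# Sketch of difficulty-calibrated hints; difficulty_hint and build_prompt
# are hypothetical helpers, not the paper's implementation.
def difficulty_hint(difficulty: float) -> str:
    """Map an estimated difficulty in [0, 1] to a reasoning-depth hint."""
    if difficulty < 0.3:
        return "Hint: this is easy; answer directly with minimal reasoning."
    if difficulty < 0.7:
        return "Hint: this is moderate; reason briefly before answering."
    return "Hint: this is hard; reason step by step carefully."

def build_prompt(question: str, difficulty: float) -> str:
    # The hint steers how many thinking tokens the tuned model spends.
    return f"{difficulty_hint(difficulty)}\n\n{question}"

print(build_prompt("What is 2 + 2?", difficulty=0.1))
print(build_prompt("Prove that sqrt(2) is irrational.", difficulty=0.9))
```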
AVR is an adaptive visual reasoning framework that dynamically selects optimal reasoning formats to reduce token usage by 50–90% while maintaining accuracy in visual reasoning tasks. The method addresses reasoning path redundancy by decomposing visual reasoning into three cognitive functions and using FS-GRPO training to encourage efficient format selection.
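To make format selection concrete, here is a toy selector (the format names and the rule-based heuristic are assumptions for illustration; AVR learns this choice with FS-GRPO rather than hard-coding it): the cheapest format that still covers the query is picked, so easy perception questions avoid paying for long reasoning traces.

```python
# Toy reasoning-format selector; Format and select_format are hypothetical
# names, and AVR's actual formats and learned policy differ.
from enum import Enum

class Format(Enum):
    DIRECT = "answer directly"                  # trivial perception queries
    SHORT_CHAIN = "short chain of thought"      # simple reasoning
    FULL_CHAIN = "full step-by-step reasoning"  # multi-step visual reasoning

def select_format(needs_grounding: bool, steps_estimate: int) -> Format:
    # Pick the cheapest format that still covers the task.
    if not needs_grounding and steps_estimate <= 1:
        return Format.DIRECT
    if steps_estimate <= 3:
        return Format.SHORT_CHAIN
    return Format.FULL_CHAIN

print(select_format(needs_grounding=False, steps_estimate=1).value)  # answer directly
print(select_format(needs_grounding=True, steps_estimate=5).value)   # full reasoning
```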