llm-optimization

Tag

Cards List
#llm-optimization

@heygurisingh: Holy shit... a startup just dropped a free open-source spec that generates UI components with 67% fewer tokens than JSO…

X AI KOLs Timeline · 7h ago

OpenUI is a new open-source specification that generates UI components with 67% fewer tokens than JSON, offering a model-agnostic, framework-neutral solution for LLM-based interface generation.


Multi agent vs Single Agent systems

Reddit r/AI_Agents · 9h ago

The article argues that most 'agentic' systems are actually single agents with tools, highlighting the high costs and complexity of multi-agent setups. It outlines three valid multi-agent patterns—orchestrator-worker, pipeline, and peer-to-peer—and provides criteria for deciding when to use them versus a single agent.
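The orchestrator-worker pattern named above can be sketched in a few lines. This is purely illustrative (the post describes patterns, not code): `worker` stands in for an LLM call with a role-specific prompt, and every name here is hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical worker: in a real system this would be an LLM call with a
# role-specific prompt; here it just tags the subtask it handled.
def worker(subtask: str) -> str:
    return f"result({subtask})"

def orchestrator(task: str, subtasks: list[str]) -> str:
    # Fan out: each worker gets one subtask. The pipeline pattern would
    # chain workers sequentially; peer-to-peer would let them message
    # each other directly.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(worker, subtasks))
    # Fan in: the orchestrator merges worker outputs into one answer.
    return " | ".join(results)

print(orchestrator("write report", ["outline", "draft", "review"]))
# → result(outline) | result(draft) | result(review)
```

The decision criteria in the post boil down to whether this fan-out/fan-in structure buys anything over a single agent calling the same functions as tools.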


Q: Does DFlash (and PFlash) work with Heretic models?

Reddit r/LocalLLaMA · 12h ago

The thread asks whether the DFlash and PFlash multi-model speedup methods are compatible with Heretic, a tool used for model decensoring, and highlights the performance gains those methods bring to models like Qwen3.6 and Gemma 4.


From Token to Token Pair: Efficient Prompt Compression for Large Language Models in Clinical Prediction

arXiv cs.CL · 16h ago Cached

This paper introduces MedTPE, a method for efficient, lossless prompt compression of electronic health records for large language models, significantly reducing token length and inference latency in clinical prediction tasks.
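The paper's exact algorithm isn't given in the summary, but the title's token-to-token-pair idea resembles BPE-style pair merging, which is lossless as long as the merge table is kept. A minimal sketch under that assumption (the record tokens and merge scheme here are invented, not MedTPE's):

```python
from collections import Counter

def compress(tokens, n_merges=2):
    """BPE-style lossless compression: repeatedly replace the most
    frequent adjacent token pair with one merged token, keeping the
    merge table so the original sequence can be reconstructed."""
    merges = []
    for _ in range(n_merges):
        pairs = Counter(zip(tokens, tokens[1:]))
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < 2:
            break  # no repeated pair left worth merging
        merged, out, i = a + "+" + b, [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == (a, b):
                out.append(merged); i += 2
            else:
                out.append(tokens[i]); i += 1
        tokens, merges = out, merges + [(merged, a, b)]
    return tokens, merges

def decompress(tokens, merges):
    for merged, a, b in reversed(merges):
        out = []
        for t in tokens:
            out.extend([a, b] if t == merged else [t])
        tokens = out
    return tokens

# Toy EHR token stream with a repeated measurement pair:
ehr = ["bp", "high", "bp", "high", "hr", "80", "bp", "high"]
comp, table = compress(ehr)
assert decompress(comp, table) == ehr   # lossless round-trip
print(len(ehr), "->", len(comp))        # → 8 -> 5
```

Repetitive structured records like EHRs are exactly where such pair statistics pay off.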


@TeksEdge: Unsloth released the fastest Qwen3.6-27B MTP GGUF I've tested. Time to upgrade. Compared to the previous GGUF, Q4/Q6 XL…

X AI KOLs Timeline · 23h ago

Unsloth has released an optimized GGUF version of the Qwen3.6-27B MTP model, achieving significantly faster inference speeds (up to 114 tok/s on an RTX 5090) compared to previous quantizations.


Echo-LoRA: Parameter-Efficient Fine-Tuning via Cross-Layer Representation Injection

arXiv cs.LG · yesterday Cached

The article introduces Echo-LoRA, a new parameter-efficient fine-tuning method that injects cross-layer representations from deeper source layers into shallow LoRA modules to improve performance without adding inference-time overhead.
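A minimal NumPy sketch of the general idea, assuming "cross-layer representation injection" means the low-rank path is fed a mix of the layer's own input and a representation echoed from a deeper layer; the mixing scheme and names below are guesses from the one-line summary, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 16, 2                             # hidden size, LoRA rank

W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01   # LoRA down-projection
B = np.zeros((d, r))                     # LoRA up-projection (zero-init)

def lora_forward(x, h_deep, alpha=0.5):
    """Hypothetical cross-layer-injection forward: the low-rank path
    sees a blend of this layer's input x and a deeper layer's output
    h_deep, so the adapter trains on richer features while the frozen
    weight W and the inference graph shape are unchanged."""
    low_rank_in = (1 - alpha) * x + alpha * h_deep
    return W @ x + B @ (A @ low_rank_in)

x = rng.standard_normal(d)
h_deep = rng.standard_normal(d)          # stand-in for a deeper layer's output
y = lora_forward(x, h_deep)
# With B zero-initialized, the adapter starts as an exact no-op:
assert np.allclose(y, W @ x)
```

The no-inference-overhead claim is consistent with this shape: the extra input only changes what the low-rank matrices learn, not how many of them run at inference.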


Beyond GRPO and On-Policy Distillation: An Empirical Sparse-to-Dense Reward Principle for Language-Model Post-Training

Hugging Face Daily Papers · yesterday Cached

This paper proposes an empirical 'sparse-to-dense' reward principle for language model post-training, arguing that scarce labeled data should be used with sparse rewards for teacher model discovery and dense rewards for student compression via distillation. The authors demonstrate that this staged approach, bridging sparse RL and on-policy distillation, outperforms direct GRPO on deployment-sized models in math benchmarks.


Agentic Test-Time Scaling (GitHub Repo)

TLDR AI · yesterday Cached

AutoTTS is an open-source tool that uses agentic discovery to automatically find optimal test-time scaling strategies for LLMs, significantly reducing token usage and cost through replay-based evaluation.


@tom_doerr: Improves LLM reasoning accuracy without training https://github.com/codelion/optillm…

X AI KOLs Timeline · 2d ago Cached

OptiLLM is an open-source inference proxy, compatible with various AI APIs, that boosts LLM reasoning accuracy (reportedly by up to 10x on some tasks) using inference-time techniques, with no retraining required.


We stopped optimizing our LLM stack manually — it optimizes itself now

Reddit r/artificial · 2d ago

The article describes a company's transition to a self-optimizing LLM stack that uses production traces to automatically route requests and fine-tune models, resulting in significant cost reductions and performance improvements.
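The post shares no code, but the trace-driven routing half of the idea can be sketched as follows. Everything here is hypothetical: a router that updates per-model, per-task success rates from production traces and sends new requests to the best-performing model, preferring the cheaper one on ties.

```python
from collections import defaultdict

class TraceRouter:
    """Toy self-optimizing router: production traces update per-model,
    per-task success rates, and new requests go to the model with the
    best observed rate (cheapest model wins ties via list order)."""
    def __init__(self, models):
        self.models = models                      # ordered cheap -> expensive
        self.stats = defaultdict(lambda: [0, 0])  # (model, task) -> [ok, total]

    def record(self, model, task, success):
        s = self.stats[(model, task)]
        s[0] += int(success)
        s[1] += 1

    def route(self, task):
        def score(m):
            ok, total = self.stats[(m, task)]
            return ok / total if total else 1.0   # optimistic for untried models
        return max(self.models, key=score)        # max keeps first (cheapest) tie

r = TraceRouter(["small", "large"])
for _ in range(10):
    r.record("small", "summarize", success=True)
r.record("large", "summarize", success=False)
print(r.route("summarize"))   # → small
```

The fine-tuning half of the loop would feed the same traces back as training data; that part is omitted here.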


SlimSpec: Low-Rank Draft LM-Head for Accelerated Speculative Decoding

Hugging Face Daily Papers · 2d ago Cached

SlimSpec introduces a low-rank parameterization for drafter LM-heads to accelerate speculative decoding in LLMs, achieving 4-5x speedup while maintaining full vocabulary support.
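The core trick is easy to show in NumPy: factor the drafter's d x V output projection through a rank-r bottleneck, keeping full-vocabulary logits while shrinking the head. Dimensions below are illustrative, not SlimSpec's configuration.

```python
import numpy as np

d, V, r = 256, 32000, 16         # hidden size, vocab size, low rank
rng = np.random.default_rng(0)

# Low-rank drafter head: factor the d x V projection into (d x r)(r x V),
# keeping the full vocabulary while shrinking parameters and compute.
U = rng.standard_normal((d, r))
P = rng.standard_normal((r, V))

h = rng.standard_normal(d)       # drafter's final hidden state
logits = (h @ U) @ P             # full-vocab logits via the bottleneck
assert logits.shape == (V,)

full_params = d * V              # a dense LM-head of the same shape
slim_params = d * r + r * V
print(f"{full_params / slim_params:.1f}x fewer LM-head parameters")
```

Since a small drafter's LM-head can dominate its parameter count, this is where a low-rank factorization buys the most for speculative decoding.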


I built a context window optimization framework for coding agents — open source + paper

Reddit r/AI_Agents · 2d ago

The author introduces 'Apohara Context Forge,' an open-source framework and methodology for optimizing context windows in coding agents using role-aware segmentation and tiered relevance scoring.
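The segmentation-plus-tiered-scoring idea can be sketched as a budgeted packer. This is a guess at the mechanism from the post's terminology; the tiers, scores, and segment names are invented, and Apohara Context Forge may work quite differently.

```python
def pack_context(segments, budget):
    """Toy tiered context packer: each segment carries a role-aware tier
    (0 = must keep, higher = more droppable) and a relevance score.
    Segments are admitted tier by tier, best score first, until the
    token budget is spent."""
    chosen, used = [], 0
    for seg in sorted(segments, key=lambda s: (s["tier"], -s["score"])):
        if used + seg["tokens"] <= budget:
            chosen.append(seg["name"])
            used += seg["tokens"]
    return chosen, used

segments = [
    {"name": "system_prompt",  "tier": 0, "score": 1.0, "tokens": 200},
    {"name": "current_file",   "tier": 1, "score": 0.9, "tokens": 800},
    {"name": "related_module", "tier": 2, "score": 0.7, "tokens": 600},
    {"name": "old_chat_log",   "tier": 3, "score": 0.2, "tokens": 900},
]
chosen, used = pack_context(segments, budget=1700)
print(chosen, used)   # → ['system_prompt', 'current_file', 'related_module'] 1600
```

Role-aware tiers are what keep a high-scoring but droppable segment from evicting the system prompt.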


We tried vectors, ASTs, and brute-force context stuffing for code retrieval. Graphs with LLM-generated semantics worked best. Here's what we learned.

Reddit r/LocalLLaMA · 3d ago

The authors detail their experience building a code indexing system, concluding that graph-based retrieval with LLM-generated semantics outperforms vector embeddings and pure AST parsing. They open-sourced the system, Bytebell, which uses Neo4j to store semantic context for efficient and precise code retrieval.
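A dependency-free sketch of the retrieval style described: nodes are code entities, edges carry LLM-generated semantic labels, and retrieval walks outward from a seed symbol instead of ranking embedding neighbors. Bytebell itself stores this in Neo4j; the toy graph and labels below are invented.

```python
# Toy code graph with LLM-generated edge semantics
# ("calls", "implements", "validates input for", ...).
graph = {
    "checkout()":     [("calls", "charge_card()"), ("calls", "apply_coupon()")],
    "charge_card()":  [("validates input for", "CardNumber")],
    "apply_coupon()": [("reads", "CouponTable")],
}

def retrieve(seed, depth=2):
    """Breadth-first expansion from a seed symbol: everything within
    `depth` semantic hops is pulled into context, nothing else."""
    seen, frontier = {seed}, [seed]
    for _ in range(depth):
        nxt = []
        for node in frontier:
            for label, target in graph.get(node, []):
                if target not in seen:
                    seen.add(target)
                    nxt.append(target)
        frontier = nxt
    return sorted(seen)

print(retrieve("checkout()"))
```

The precision claim in the post comes from this locality: a graph hop follows an explicit relationship, where a vector neighbor may merely look similar.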


@eng_khairallah1: https://x.com/eng_khairallah1/status/2053405155630936297

X AI KOLs Timeline · 3d ago Cached

The article argues that context engineering, which involves structuring the information and memory available to an AI, is more critical for performance than prompt engineering alone. It provides a structured overview of a course designed to teach how to build reliable AI systems by managing context layers like session history and persistent memory.


@cjzafir: Qwen 3.5 4B model and 8B are too good. I fine-tuned a 4B model today and got 98% accuracy on full precision and Q8 quan…

X AI KOLs Timeline · 3d ago

A developer reports achieving high accuracy with fine-tuned Qwen 3.5 4B and 8B models using Unsloth, suggesting a shift towards specialized Expert Language Models (ELMs) for niche tasks.


@JulianGoldieSEO: Google just made local AI 3x faster for FREE. Gemma 4 now runs fast enough on normal laptops that local AI finally feel…

X AI KOLs Timeline · 4d ago

Google released Gemma 4, an open-source AI model optimized for local execution on standard laptops, offering 3x faster performance and a 256k context window for free under an Apache 2.0 license.


LLMs Improving LLMs: Agentic Discovery for Test-Time Scaling

Hugging Face Daily Papers · 5d ago Cached

This paper introduces AutoTTS, an environment-driven framework that automates the discovery of test-time scaling strategies for LLMs by formulating strategy discovery as controller synthesis. It demonstrates improved accuracy-cost tradeoffs on mathematical reasoning benchmarks with minimal computational overhead.
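One hand-written point in the search space AutoTTS explores is self-consistency sampling; a sketch of that strategy and the accuracy-cost pair a discovery loop would score it by (the sampler and cost proxy here are stand-ins, not the paper's setup):

```python
from collections import Counter

def self_consistency(sample_fn, n):
    """A candidate test-time scaling strategy: draw n samples and
    majority-vote. An AutoTTS-style controller search would vary such
    knobs (n, early stopping, reranking, ...) and score each variant
    by accuracy versus token cost."""
    answers = [sample_fn() for _ in range(n)]
    best, votes = Counter(answers).most_common(1)[0]
    cost = n                        # proxy: one model call per sample
    return best, votes / n, cost

# Deterministic stand-in for a stochastic LLM sampler:
stream = iter(["42", "41", "42", "42", "17"])
answer, agreement, cost = self_consistency(lambda: next(stream), n=5)
print(answer, agreement, cost)      # → 42 0.6 5
```

Replay-based evaluation (as in the AutoTTS repo summary above) would rescore controllers like this against logged samples instead of fresh model calls.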


Shallow Prefill, Deep Decoding: Efficient Long-Context Inference via Layer-Asymmetric KV Visibility

Hugging Face Daily Papers · 6d ago Cached

The paper introduces SPEED, a layer-asymmetric KV visibility policy that reduces long-context inference costs by processing prompt tokens only in lower layers during prefill while maintaining full-depth attention during decoding.
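A back-of-envelope cost model shows why restricting prefill to the lower layers helps at long context. This keeps only the quadratic attention term, ignores SPEED's exact visibility rules, and uses made-up dimensions, so the savings figure is illustrative, not the paper's measurement.

```python
def attention_flops(prompt_len, decode_len, layers, shallow_layers=None):
    """Crude attention-only cost model (per-layer constants dropped).
    Baseline prefill runs the prompt through every layer; a SPEED-style
    policy runs it only through the lower `shallow_layers`, while
    decoding stays full-depth."""
    prefill_layers = shallow_layers if shallow_layers else layers
    prefill = prefill_layers * prompt_len ** 2
    decode = layers * decode_len * (prompt_len + decode_len)
    return prefill + decode

base  = attention_flops(prompt_len=32000, decode_len=500, layers=32)
speed = attention_flops(prompt_len=32000, decode_len=500, layers=32,
                        shallow_layers=8)
print(f"prefill-heavy savings: {1 - speed / base:.0%}")   # → prefill-heavy savings: 74%
```

At 32k prompt tokens the quadratic prefill term dominates, so cutting its layer count cuts most of the total even though decoding is untouched.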


What Makes an LLM a Good Optimizer? A Trajectory Analysis of LLM-Guided Evolutionary Search

Hugging Face Daily Papers · 2026-04-21 Cached

A large-scale study of 15 LLMs across 8 tasks finds that optimization success hinges on maintaining localized search trajectories rather than on initial problem-solving ability or solution novelty.


@omarsar0: Nice paper combining the strength of Skills and RAG. Most RAG systems retrieve on every query, whether the model needs …

X AI KOLs Following · 2026-04-20 Cached

Research introduces Skill-RAG, a novel approach that combines Skills with Retrieval-Augmented Generation to address inefficiencies in traditional RAG systems that retrieve on every query regardless of whether the model actually needs the information.
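The retrieve-only-when-needed gating can be sketched as below. All names are hypothetical and the gate is a naive stub; the paper presumably learns this signal rather than keyword-matching.

```python
def answer(query, generate, retrieve, needs_retrieval):
    """Gated RAG: a cheap check decides whether to pay for retrieval
    before generating. Returns (response, whether retrieval ran)."""
    if needs_retrieval(query):
        context = retrieve(query)
        return generate(f"{context}\n\n{query}"), True
    return generate(query), False

# Stubs standing in for real components:
needs    = lambda q: "latest" in q or "2026" in q   # naive recency check
retrieve = lambda q: "[retrieved docs]"
generate = lambda prompt: f"answer({prompt!r})"

_, used_rag = answer("What is 2 + 2?", generate, retrieve, needs)
print(used_rag)   # → False
_, used_rag = answer("latest Gemma release?", generate, retrieve, needs)
print(used_rag)   # → True
```

The inefficiency the tweet highlights is exactly the always-True branch: traditional RAG pays the retrieval cost on arithmetic and retrieval-worthy queries alike.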
