attention-mechanisms

#attention-mechanisms

How Do Answer Tokens Read Reasoning Traces? Self-Reading Patterns in Thinking LLMs for Quantitative Reasoning

arXiv cs.CL · 2026-04-22

This study shows that answer tokens in thinking LLMs follow a structured self-reading pattern (forward drift plus attention to key anchor tokens) when consuming their reasoning traces during quantitative reasoning, and proposes SRQ, a training-free steering method that exploits this pattern for accuracy gains.


ATTNPO: Attention-Guided Process Supervision for Efficient Reasoning

arXiv cs.CL · 2026-04-20

ATTNPO introduces an attention-guided process supervision framework that reduces overthinking in large reasoning models by leveraging intrinsic attention signals for step-level credit assignment, achieving improved performance with shorter reasoning traces across nine benchmarks.


Understanding New-Knowledge-Induced Factual Hallucinations in LLMs: Analysis and Interpretation

arXiv cs.CL · 2026-04-20

This paper investigates how fine-tuning LLMs on new knowledge induces factual hallucinations, showing that unfamiliarity within specific knowledge types drives hallucinations through weakened attention to key entities. The authors propose mitigating this by reintroducing known knowledge during later training stages.


Wisdom is Knowing What not to Say: Hallucination-Free LLMs Unlearning via Attention Shifting

arXiv cs.CL · 2026-04-20

This paper introduces Attention-Shifting (AS), a framework for selective machine unlearning in LLMs that removes sensitive information while preventing hallucinations and preserving model utility. The method combines importance-aware attention suppression with retention enhancement, achieving up to 15% higher accuracy preservation than existing unlearning approaches on standard benchmarks.


AtManRL: Towards Faithful Reasoning via Differentiable Attention Saliency

arXiv cs.CL · 2026-04-20

AtManRL is a method that uses differentiable attention manipulation and reinforcement learning to train LLMs to generate more faithful chain-of-thought reasoning by ensuring reasoning tokens causally influence final predictions. Experiments on GSM8K and MMLU with Llama-3.2-3B demonstrate the approach can identify influential reasoning tokens and improve reasoning transparency.


Applied Explainability for Large Language Models: A Comparative Study

arXiv cs.CL · 2026-04-20

A comparative study evaluating three explainability techniques (Integrated Gradients, Attention Rollout, SHAP) on a fine-tuned DistilBERT for sentiment classification, highlighting trade-offs between gradient-based, attention-based, and model-agnostic approaches to LLM interpretability.
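Of the three techniques compared, Attention Rollout (Abnar & Zuidema, 2020) is simple enough to sketch: average each layer's attention matrix with the identity to account for residual connections, renormalize, and multiply across layers. A minimal NumPy version on toy attention maps (not tied to the paper's DistilBERT setup):

```python
import numpy as np

def attention_rollout(attentions):
    """Propagate attention through layers: mix each head-averaged
    attention matrix with the identity (residual connection),
    renormalize rows, and compose the layers by matrix product."""
    rollout = np.eye(attentions[0].shape[0])
    for A in attentions:
        A_res = 0.5 * A + 0.5 * np.eye(A.shape[0])   # residual mixing
        A_res /= A_res.sum(axis=-1, keepdims=True)   # keep rows stochastic
        rollout = A_res @ rollout
    return rollout

# Toy example: 2 layers, 3 tokens, random row-stochastic attention maps.
rng = np.random.default_rng(0)
atts = [rng.random((3, 3)) for _ in range(2)]
atts = [A / A.sum(axis=-1, keepdims=True) for A in atts]
R = attention_rollout(atts)
print(R.sum(axis=-1))  # each row still sums to 1
```

Row `i` of the result approximates how much of token `i`'s final representation traces back to each input token, which is what makes rollout comparable against gradient-based attributions like Integrated Gradients.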


A Temporally Augmented Graph Attention Network for Affordance Classification

Hugging Face Daily Papers · 2026-04-11

EEG-tGAT is a temporally augmented Graph Attention Network that improves affordance classification from interaction sequences by adding temporal attention and dropout mechanisms. The model extends GATv2 to sequential data in which the temporal dimension is semantically non-uniform.
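The GATv2 layer the model builds on scores each edge as e_ij = aᵀ LeakyReLU(W_l h_i + W_r h_j), then softmax-normalizes over neighbors j. A simplified single-head NumPy sketch of just that scoring step (dense graph, no masking, and none of the paper's temporal augmentation):

```python
import numpy as np

def gatv2_scores(h, W_l, W_r, a, slope=0.2):
    """GATv2-style attention (Brody et al., 2022), single head:
    e_ij = a^T LeakyReLU(W_l h_i + W_r h_j), softmaxed over j.
    Shapes: h (N, F), W_l/W_r (F', F), a (F',)."""
    s = (h @ W_l.T)[:, None, :] + (h @ W_r.T)[None, :, :]  # (N, N, F')
    s = np.where(s > 0, s, slope * s)                      # LeakyReLU
    e = s @ a                                              # (N, N) raw scores
    e -= e.max(axis=-1, keepdims=True)                     # stable softmax
    alpha = np.exp(e)
    return alpha / alpha.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(1)
h = rng.standard_normal((4, 8))      # 4 nodes, 8 input features
W_l = rng.standard_normal((16, 8))
W_r = rng.standard_normal((16, 8))
a = rng.standard_normal(16)
alpha = gatv2_scores(h, W_l, W_r, a)
print(alpha.shape)  # (4, 4); each row sums to 1
```

Applying the nonlinearity before the dot product with `a` is what makes GATv2's attention "dynamic": unlike the original GAT, the ranking over neighbors j can differ per query node i.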
