Tag: #parameter-efficient-fine-tuning

Cards List

#parameter-efficient-fine-tuning

Decomposing the Basic Abilities of Large Language Models: Mitigating Cross-Task Interference in Multi-Task Instruct-Tuning

arXiv cs.CL · 2d ago

Badit decomposes large language model parameters into orthogonal, high-singular-value LoRA experts to mitigate cross-task interference during multi-task instruction tuning.

#parameter-efficient-fine-tuning

SAMoRA: Semantic-Aware Mixture of LoRA Experts for Task-Adaptive Learning

arXiv cs.CL · 2026-04-22

SAMoRA introduces a semantic-aware router and task-adaptive scaling to improve expert specialization and dynamic weighting in MoE-LoRA fine-tuning, outperforming prior methods on multi-task benchmarks.

#parameter-efficient-fine-tuning

Aletheia: Gradient-Guided Layer Selection for Efficient LoRA Fine-Tuning Across Architectures

arXiv cs.CL · 2026-04-20

Aletheia introduces a gradient-guided layer selection method for efficient LoRA fine-tuning: it identifies task-relevant transformer layers via lightweight gradient probes and applies adapters only to those layers, achieving a 15–28% training speedup across 14 models while maintaining downstream performance on MMLU, GSM8K, and HumanEval.
