MatryoshkaLoRA: Learning Accurate Hierarchical Low-Rank Representations for LLM Fine-Tuning

Hugging Face Daily Papers

Abstract

Source: https://huggingface.co/papers/2605.07850

With the rise in scale of deep learning models to billions of parameters, the computational cost of fine-tuning remains a significant barrier to deployment. While Low-Rank Adaptation (LoRA) has become the standard for parameter-efficient fine-tuning, the need to set a predefined, static rank r requires exhaustive grid searches to balance efficiency and performance. Existing rank-adaptive solutions such as DyLoRA mitigate this by sampling ranks during training from a predefined distribution. However, they often yield sub-optimal results at higher ranks due to a lack of consistent gradient signals across the full hierarchy of ranks, making these methods data-inefficient. In this paper, we propose MatryoshkaLoRA, a general, Matryoshka-inspired training framework for LoRA that learns accurate hierarchical low-rank representations by inserting a fixed, carefully crafted diagonal matrix P between the existing LoRA adapters to scale their sub-ranks accordingly. With this simple modification, our general framework recovers LoRA and DyLoRA simply by changing P and ensures all sub-ranks embed the available gradient information efficiently. MatryoshkaLoRA supports dynamic rank selection with minimal degradation in accuracy. We further propose Area Under the Rank Accuracy Curve (AURAC), a metric that consistently evaluates the performance of hierarchical low-rank adapters. Our results demonstrate that MatryoshkaLoRA learns more accurate hierarchical low-rank representations than prior rank-adaptive approaches and achieves superior accuracy-performance trade-offs across ranks on the evaluated datasets. Our code is available at https://github.com/IST-DASLab/MatryoshkaLoRA.
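The core idea described above can be sketched in a few lines: the LoRA update BA is replaced by B diag(p) A, and truncating to the first k sub-ranks gives the smaller nested adapter. The shapes, the placement of the diagonal, and the choice p = 1 recovering plain LoRA follow the abstract; the exact trained values of P are an assumption here, so treat this as an illustrative sketch rather than the authors' implementation.

```python
import numpy as np

def matryoshka_lora_delta(B, A, p, k):
    """Low-rank weight update truncated to the first k sub-ranks.

    B: (d_out, r) and A: (r, d_in) are the usual LoRA factors.
    p: (r,) is the diagonal of the fixed matrix P that rescales
    each sub-rank. With p = ones(r) and k = r this reduces to
    plain LoRA; the values of p used in training are set by the
    paper's recipe and are not reproduced here.
    """
    Bk = B[:, :k]              # first k columns of B
    Ak = A[:k, :]              # first k rows of A
    return (Bk * p[:k]) @ Ak   # B_k diag(p_k) A_k

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 6, 4
B = rng.standard_normal((d_out, r))
A = rng.standard_normal((r, d_in))
p = np.ones(r)  # identity P: full-rank delta equals vanilla LoRA

full = matryoshka_lora_delta(B, A, p, r)
assert np.allclose(full, B @ A)  # sanity check: plain LoRA recovered
```

Because every sub-rank k shares the leading columns of B and rows of A, a single trained adapter can be served at any rank up to r by slicing, which is the dynamic rank selection the abstract refers to.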


[Figure: MatryoshkaLoRA schematic]
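The AURAC metric proposed in the abstract summarizes a rank-accuracy curve as a single area. The paper defines the exact normalization; a plausible reading, sketched below under that assumption, is trapezoidal integration of accuracy over the normalized rank axis, so a flat curve at accuracy a yields AURAC = a.

```python
import numpy as np

def aurac(ranks, accuracies):
    """Area Under the Rank-Accuracy Curve (illustrative sketch).

    Assumption: ranks are mapped to [0, 1] and accuracy is
    integrated with the trapezoidal rule, so a constant-accuracy
    curve scores exactly that accuracy. The paper's definition
    may differ in normalization.
    """
    ranks = np.asarray(ranks, dtype=float)
    acc = np.asarray(accuracies, dtype=float)
    order = np.argsort(ranks)           # sort points by rank
    ranks, acc = ranks[order], acc[order]
    x = (ranks - ranks[0]) / (ranks[-1] - ranks[0])
    # trapezoidal rule: sum of segment midpoints times widths
    return float(np.sum((acc[1:] + acc[:-1]) / 2 * np.diff(x)))

# Hypothetical rank-accuracy measurements for one adapter
score = aurac([1, 2, 4, 8], [0.70, 0.78, 0.82, 0.84])
```

A higher AURAC means the adapter holds its accuracy as the rank is truncated, which is exactly the hierarchical behavior MatryoshkaLoRA is designed to produce.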

Related Articles

JumpLoRA: Sparse Adapters for Continual Learning in Large Language Models

arXiv cs.CL

JumpLoRA introduces a novel sparse adapter framework for continual learning in large language models. The method uses JumpReLU gating to dynamically isolate task-specific parameters and prevent catastrophic forgetting. It strengthens LoRA-based approaches and outperforms state-of-the-art continual learning methods such as ELLA.