RDP LoRA: Geometry-Driven Identification for Parameter-Efficient Adaptation in Large Language Models

Hugging Face Daily Papers

Summary

RDP-LoRA uses geometric trajectory analysis and the Ramer-Douglas-Peucker algorithm to automatically select the most impactful layers for parameter-efficient fine-tuning, outperforming full-layer and random LoRA baselines.

Fine-tuning Large Language Models (LLMs) remains structurally uncertain despite parameter-efficient methods such as Low-Rank Adaptation (LoRA), as the layer-specific roles of internal representations are poorly understood, leading to heuristic decisions about where adaptation should be applied. We model the evolution of hidden states as a high-dimensional geometric trajectory and propose using the Ramer-Douglas-Peucker (RDP) algorithm, a parameter-free and training-free polygon simplification method that preserves global structural transitions while eliminating locally redundant changes, to identify critical breakpoints along the representation path. Crucially, we use these geometric pivots not merely for analysis, but as a direct decision signal for determining which layers should be adapted during parameter-efficient fine-tuning. By integrating this geometry-aware layer selection strategy into LoRA fine-tuning of Qwen3-8B-Base, we achieve superior performance on MMLU-Math using only 13 RDP-selected layers (81.67%), significantly outperforming both full 36-layer adaptation (79.32%) and random 13-layer selection (75.56%), as well as the baseline Qwen3-8B-Base model (74.25%). These results demonstrate that leveraging the intrinsic geometry of representation trajectories provides a robust, interpretable, and training-free signal for optimizing layer selection during model adaptation.
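
To make the selection step concrete, below is a minimal sketch (not the authors' code) of RDP applied to a layer-wise hidden-state trajectory: each layer's output is reduced to a single point in high-dimensional space, and the recursion keeps the layers where the trajectory bends most. The mean-pooling assumption and the `epsilon` tolerance are placeholders; the paper describes its RDP usage as parameter-free, so the actual stopping criterion may differ.

```python
# A minimal sketch (not the authors' code) of RDP over a layer-wise
# hidden-state trajectory. Assumptions: each layer's output is pooled
# to one point in R^d, and a tolerance `epsilon` controls recursion;
# the paper calls its method parameter-free, so its criterion may differ.
import numpy as np

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b (any dimension)."""
    d = b - a
    norm = np.linalg.norm(d)
    if norm == 0.0:
        return np.linalg.norm(p - a)
    u = d / norm
    # Remove the component of (p - a) along the line direction.
    proj = a + np.dot(p - a, u) * u
    return np.linalg.norm(p - proj)

def rdp_indices(points, epsilon, lo=0, hi=None):
    """Indices of trajectory points kept by Ramer-Douglas-Peucker simplification."""
    if hi is None:
        hi = len(points) - 1
    # Find the interior point farthest from the chord (lo, hi).
    max_dist, max_idx = 0.0, lo
    for i in range(lo + 1, hi):
        dist = point_line_distance(points[i], points[lo], points[hi])
        if dist > max_dist:
            max_dist, max_idx = dist, i
    if max_dist > epsilon:
        # The trajectory bends here: keep the pivot and recurse on both halves.
        left = rdp_indices(points, epsilon, lo, max_idx)
        right = rdp_indices(points, epsilon, max_idx, hi)
        return left[:-1] + right  # drop the duplicated pivot
    return [lo, hi]

# Placeholder trajectory: one pooled hidden state per layer of a
# 36-layer model (real hidden states trace a smoother path).
trajectory = np.random.randn(36, 4096)
breakpoint_layers = rdp_indices(trajectory, epsilon=1.0)
print(breakpoint_layers)  # layer indices at geometric pivots
```

The returned pivot indices would then serve as the layer subset handed to LoRA, in place of adapting all 36 layers.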

Source: https://huggingface.co/papers/2604.19321

This work presents a compelling and principled approach to layer selection in parameter-efficient fine-tuning. By modeling hidden-state evolution as a geometric trajectory and leveraging the Ramer-Douglas-Peucker algorithm, the authors introduce a novel, training-free mechanism for identifying structurally significant transition points across layers.

The integration of this geometry-aware signal into Low-Rank Adaptation is particularly noteworthy, as it addresses a well-known limitation of LoRA: the reliance on heuristic or uniform layer selection. The reported results, where a subset of RDP-selected layers outperforms both full-layer adaptation and random selection, provide strong empirical support for the hypothesis that layer-wise contributions to adaptation are highly non-uniform and can be systematically characterized.
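
As an illustration of how such a signal could plug into standard tooling, the sketch below restricts LoRA updates to a chosen layer subset via the Hugging Face PEFT library's `layers_to_transform` option. The layer indices, rank, and target modules are assumptions for illustration, not the paper's reported configuration.

```python
# Sketch of wiring a precomputed layer subset into LoRA with the
# Hugging Face PEFT library. The indices below are hypothetical,
# not the paper's 13 selected layers, and r/lora_alpha are assumed.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B-Base")

rdp_layers = [0, 2, 5, 8, 11, 14, 17, 20, 23, 26, 29, 32, 35]  # hypothetical

config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    layers_to_transform=rdp_layers,  # adapt only these decoder layers
    task_type="CAUSAL_LM",
)
peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()  # confirms the reduced budget
```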

From a research perspective, this work contributes to a growing body of literature seeking to improve the interpretability and efficiency of fine-tuning strategies by grounding them in the intrinsic structure of learned representations.

A natural direction for future investigation would be to assess the robustness and transferability of the selected layers across tasks, domains, and model scales, as well as to better understand the theoretical properties linking trajectory geometry to functional adaptation capacity.

Overall, this is a well-motivated and methodologically elegant contribution with meaningful implications for scalable and interpretable LLM adaptation.

Similar Articles

JumpLoRA: Sparse Adapters for Continual Learning in Large Language Models

arXiv cs.CL

JumpLoRA introduces a novel sparse adapter framework for continual learning in LLMs using JumpReLU gating to dynamically isolate task parameters and prevent catastrophic forgetting. The method enhances LoRA-based approaches and outperforms state-of-the-art continual learning methods like ELLA.

Measuring Representation Robustness in Large Language Models for Geometry

arXiv cs.CL

Researchers introduce GeoRepEval, a framework to evaluate LLM robustness across equivalent geometric problem representations (Euclidean, coordinate, vector). Testing 11 LLMs on 158 geometry problems, they find accuracy gaps up to 14 percentage points based solely on representation choice, with vector formulations being a consistent failure point.

Crowded in B-Space: Calibrating Shared Directions for LoRA Merging

Hugging Face Daily Papers

This paper introduces Pico, a data-free method that improves LoRA adapter merging by separately calibrating the output-side matrix B to reduce interference from shared directions while preserving task-specific information. Pico achieves 3.4–8.3 point accuracy improvements over existing merging methods across math, coding, finance, and medical benchmarks.