peft

Echo-LoRA: Parameter-Efficient Fine-Tuning via Cross-Layer Representation Injection

arXiv cs.LG · yesterday

The article introduces Echo-LoRA, a new parameter-efficient fine-tuning method that injects cross-layer representations from deeper source layers into shallow LoRA modules to improve performance without adding inference-time overhead.
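The abstract only sketches the mechanism, but the general idea of mixing a deeper layer's representation into a shallow layer's low-rank path can be illustrated with a minimal numpy sketch. Everything here (the extra projection `P`, the mixing weight `alpha`, the shapes) is a hypothetical stand-in, not the paper's actual formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 16, 4          # hidden size and LoRA rank (illustrative values)

W = rng.standard_normal((d, d)) * 0.1   # frozen base weight
A = rng.standard_normal((r, d)) * 0.1   # LoRA down-projection (trainable)
B = np.zeros((d, r))                    # LoRA up-projection (zero-init, trainable)
P = rng.standard_normal((r, d)) * 0.1   # hypothetical projection of the deep-layer state

x_shallow = rng.standard_normal(d)      # shallow-layer input
h_deep = rng.standard_normal(d)         # representation "echoed" from a deeper source layer

# Standard LoRA computes y = W x + B A x. The hypothetical cross-layer
# injection adds a projected deep representation inside the low-rank path,
# so no extra full-rank computation is needed at inference time.
alpha = 0.5
low_rank = A @ x_shallow + alpha * (P @ h_deep)
y = W @ x_shallow + B @ low_rank

print(y.shape)  # (16,)
```

Because the injection lives entirely inside the rank-`r` bottleneck, the added cost is a small `r x d` projection rather than another full `d x d` matrix.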


CERSA: Cumulative Energy-Retaining Subspace Adaptation for Memory-Efficient Fine-Tuning

arXiv cs.LG · yesterday

The paper introduces CERSA, a novel parameter-efficient fine-tuning method that uses singular value decomposition to retain principal components, significantly reducing memory usage while outperforming existing methods like LoRA.
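The abstract describes retaining principal components by SVD; one natural reading of "cumulative energy-retaining" is picking the smallest rank whose singular values capture a target fraction of the squared-singular-value energy. The sketch below shows that selection rule on a random matrix; the 90% threshold and the procedure are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((32, 32))   # stand-in for a pretrained weight matrix

U, S, Vt = np.linalg.svd(W, full_matrices=False)

# Smallest rank k whose singular values capture >= 90% of the total
# "energy" (sum of squared singular values).
energy = np.cumsum(S**2) / np.sum(S**2)
k = int(np.searchsorted(energy, 0.90)) + 1

# Rank-k principal subspace that would be kept/adapted; the residual
# (the remaining components) stays frozen, saving optimizer memory.
W_k = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]

rel_err = np.linalg.norm(W - W_k) / np.linalg.norm(W)
print(k, rel_err)
```

By the Eckart-Young theorem, `W_k` is the best rank-`k` approximation in Frobenius norm, so the relative error is exactly `sqrt(1 - energy[k-1])`.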


ShadowPEFT: Shadow Network for Parameter-Efficient Fine-Tuning

arXiv cs.CL · 2026-04-22

ShadowPEFT introduces a centralized parameter-efficient fine-tuning method that uses a depth-shared shadow module to refine transformer layer representations, matching or outperforming LoRA/DoRA with comparable trainable parameters.
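The key property the abstract claims, a single depth-shared module refining every layer's representation, can be sketched as one small bottleneck network reused at all depths, so the trainable parameter count is independent of model depth. The module structure (`tanh` bottleneck, zero-initialized up-projection, residual form) is a hypothetical illustration, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(2)
d, r, L = 16, 4, 6   # hidden size, bottleneck width, number of layers (illustrative)

# One shared "shadow" module for all L layers: down-project, nonlinearity, up-project.
W_down = rng.standard_normal((r, d)) * 0.1
W_up = np.zeros((d, r))   # zero-init so the refinement starts as the identity

def shadow_refine(h):
    """Refine a layer representation h with the shared shadow module (residual form)."""
    return h + W_up @ np.tanh(W_down @ h)

h = rng.standard_normal(d)
for _ in range(L):              # the same shadow module is reused at every depth
    h = 2 * np.tanh(h * 0.5)    # stand-in for a frozen transformer layer
    h = shadow_refine(h)

# Trainable parameters: one shared module, independent of the depth L.
n_trainable = W_down.size + W_up.size
print(n_trainable)  # 128
```

Because the module is shared rather than per-layer (as LoRA adapters are), adding layers leaves `n_trainable` unchanged, which is one way a "centralized" method can match per-layer adapters at a comparable parameter budget.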
