#parameter-efficient

Parameter-Efficient Multi-View Proficiency Estimation: From Discriminative Classification to Generative Feedback

Hugging Face Daily Papers · 5d ago

This paper introduces three parameter-efficient methods for multi-view proficiency estimation on the Ego-Exo4D dataset, shifting from discriminative classification to generative feedback. The proposed models achieve state-of-the-art accuracy with significantly fewer parameters and training epochs than video-transformer baselines.
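
The summary doesn't spell out the three methods, but the discriminative side of the setup can be pictured as a small trainable head over frozen multi-view video features. Below is a minimal PyTorch sketch; every name, dimension, and the learned view-weighting scheme are assumptions for illustration, not the paper's design:

```python
import torch
import torch.nn as nn

class MultiViewProficiencyHead(nn.Module):
    """Hypothetical parameter-efficient head: per-view features from a frozen
    video backbone are pooled with learned weights and scored by a tiny classifier."""
    def __init__(self, feat_dim=768, num_views=5, num_levels=4):
        super().__init__()
        self.view_weights = nn.Parameter(torch.zeros(num_views))  # learned view mixing
        self.classifier = nn.Sequential(
            nn.LayerNorm(feat_dim),
            nn.Linear(feat_dim, num_levels),  # e.g. novice .. expert
        )

    def forward(self, view_feats):  # (batch, num_views, feat_dim), frozen features
        w = torch.softmax(self.view_weights, dim=0)
        fused = (view_feats * w[None, :, None]).sum(dim=1)  # weighted view pooling
        return self.classifier(fused)  # proficiency logits

head = MultiViewProficiencyHead()
print(head(torch.randn(2, 5, 768)).shape)  # torch.Size([2, 4])
```

Only the view weights and the small classifier train here, which is what keeps the parameter count far below a full video transformer.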

RDP LoRA: Geometry-Driven Identification for Parameter-Efficient Adaptation in Large Language Models

Hugging Face Daily Papers · 2026-04-21

RDP-LoRA uses geometric trajectory analysis and the Ramer-Douglas-Peucker algorithm to automatically select the most impactful layers for parameter-efficient fine-tuning, outperforming full-layer and random LoRA baselines.
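
The Ramer-Douglas-Peucker step itself is a classic polyline-simplification algorithm: it keeps only the points that deviate most from the chord between segment endpoints. How RDP-LoRA builds its layer trajectory isn't stated in the summary, so the per-layer signal and threshold below are invented purely to show the selection mechanic:

```python
import numpy as np

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker: recursively keep only points that deviate
    more than `epsilon` from the chord joining the segment endpoints."""
    start, end = points[0], points[-1]
    chord = end - start
    norm = np.linalg.norm(chord)
    if norm == 0.0:
        dists = np.linalg.norm(points - start, axis=1)
    else:
        # perpendicular distance to the chord via the 2D cross product
        dists = np.abs(chord[0] * (points[:, 1] - start[1])
                       - chord[1] * (points[:, 0] - start[0])) / norm
    idx = int(np.argmax(dists))
    if dists[idx] > epsilon:
        left = rdp(points[: idx + 1], epsilon)
        right = rdp(points[idx:], epsilon)
        return np.vstack([left[:-1], right])  # drop the duplicated split point
    return np.vstack([start, end])

# Hypothetical usage: x = layer index, y = some per-layer importance signal.
layers = np.arange(12, dtype=float)
signal = np.array([0.1, 0.1, 0.9, 0.2, 0.2, 0.2, 0.8, 0.2, 0.1, 0.1, 0.7, 0.1])
kept = rdp(np.stack([layers, signal], axis=1), epsilon=0.3)
print(kept[:, 0].astype(int))  # indices of the "corner" layers RDP retains
```

Layers surviving the simplification are the ones where the trajectory bends sharply, which is a natural proxy for "most impactful" under the paper's geometric framing.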

JumpLoRA: Sparse Adapters for Continual Learning in Large Language Models

arXiv cs.CL · 2026-04-20

JumpLoRA introduces a sparse-adapter framework for continual learning in LLMs, using JumpReLU gating to dynamically isolate task-specific parameters and prevent catastrophic forgetting. The method builds on LoRA-based approaches and outperforms state-of-the-art continual-learning methods such as ELLA.
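
The summary doesn't give JumpLoRA's exact formulation; as a rough sketch, here is a LoRA linear layer whose rank-r directions pass through a JumpReLU gate, z * H(z - theta), with a learned per-direction threshold. Straight-through gradient tricks for training theta are omitted, and all names are assumed:

```python
import torch
import torch.nn as nn

class JumpLoRALinear(nn.Module):
    """Sketch of a JumpReLU-gated LoRA adapter: a rank-r update whose
    directions only fire once their activation clears a learned threshold."""
    def __init__(self, base: nn.Linear, rank=8, alpha=16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # keep the pretrained weights frozen
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: no-op at start
        self.theta = nn.Parameter(torch.zeros(rank))  # per-direction jump threshold
        self.scale = alpha / rank

    def forward(self, x):
        z = x @ self.A.T                      # rank-r activations
        z = z * (z > self.theta).float()      # JumpReLU: z * H(z - theta)
        # (real training would need a straight-through estimator for theta)
        return self.base(x) + self.scale * (z @ self.B.T)

layer = JumpLoRALinear(nn.Linear(64, 64))
print(layer(torch.randn(2, 64)).shape)  # torch.Size([2, 64])
```

The appeal for continual learning is that the hard gate can drive whole adapter directions to exact zero per input, so different tasks can occupy disjoint slices of the adapter.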

Crowded in B-Space: Calibrating Shared Directions for LoRA Merging

Hugging Face Daily Papers · 2026-04-18

This paper introduces Pico, a data-free method that improves LoRA adapter merging by separately calibrating the output-side matrix B to reduce interference from shared directions while preserving task-specific information. Pico achieves 3.4–8.3 point accuracy improvements over existing merging methods across math, coding, finance, and medical benchmarks.
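
Pico's actual calibration rule isn't given in this summary, so the following is only a guess at the flavor of the idea: estimate the output-side directions shared across the adapters' B matrices, shrink each adapter's component along them so shared directions aren't over-counted, then average. The function name, the SVD-based choice of shared directions, and both hyperparameters are assumptions:

```python
import torch

def calibrate_and_merge_B(Bs, shrink=0.5, k=4):
    """Hypothetical output-side calibration before merging LoRA B matrices:
    damp each adapter's component along the top-k shared directions."""
    stacked = torch.cat(Bs, dim=1)                    # (d_out, sum of ranks)
    U, S, _ = torch.linalg.svd(stacked, full_matrices=False)
    shared = U[:, :k]                                 # dominant shared output directions
    calibrated = []
    for B in Bs:
        proj = shared @ (shared.T @ B)                # component lying in the shared span
        calibrated.append(B - (1.0 - shrink) * proj)  # keep `shrink` fraction of it
    return torch.stack(calibrated).mean(dim=0)        # data-free merge by averaging

Bs = [torch.randn(128, 8) for _ in range(3)]          # three task adapters' B matrices
merged_B = calibrate_and_merge_B(Bs)
print(merged_B.shape)  # torch.Size([128, 8])
```

The residual part of each B that is orthogonal to the shared span passes through untouched, which is the "preserve task-specific information" half of the paper's claim.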

MNAFT: modality neuron-aware fine-tuning of multimodal large language models for image translation

Hugging Face Daily Papers · 2026-04-18

MNAFT (Modality Neuron-Aware Fine-Tuning) is a novel approach that selectively updates language-specific and language-agnostic neurons in multimodal large language models to improve image translation while preserving pre-trained knowledge. The method outperforms state-of-the-art image translation techniques including cascaded models and standard fine-tuning approaches.
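
How MNAFT locates modality-relevant neurons isn't described here, but the update-masking half of the idea can be sketched with gradient hooks that let only a chosen set of output neurons train, leaving the rest of the pretrained layer untouched. The neuron indices below are made up:

```python
import torch
import torch.nn as nn

def restrict_updates_to_neurons(linear: nn.Linear, neuron_idx):
    """Hypothetical neuron-level fine-tuning: only the weight rows (output
    neurons) listed in `neuron_idx` receive gradient updates."""
    mask = torch.zeros(linear.out_features, 1)
    mask[list(neuron_idx)] = 1.0
    linear.weight.register_hook(lambda g: g * mask)  # zero other rows' grads
    if linear.bias is not None:
        linear.bias.register_hook(lambda g: g * mask.squeeze(1))

layer = nn.Linear(16, 16)
restrict_updates_to_neurons(layer, neuron_idx=[2, 5, 11])  # e.g. "modality" neurons
layer(torch.randn(4, 16)).sum().backward()
print(layer.weight.grad.abs().sum(dim=1).nonzero().squeeze())  # tensor([ 2,  5, 11])
```

Because untouched neurons never move, the pretrained translation knowledge in the rest of the network is preserved by construction.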

Motif-Video 2B: Technical Report

Hugging Face Daily Papers · 2026-04-14

Motif-Video 2B is a 2B-parameter text-to-video generation model that scores 83.76% on VBench, surpassing Wan2.1 14B with 7x fewer parameters, while being trained on fewer than 10M clips in under 100,000 H200 GPU hours. The model uses a specialized architecture with shared cross-attention and a three-part backbone that separates prompt alignment, temporal consistency, and detail refinement.
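
The report's block layout isn't reproduced in the summary beyond "shared cross-attention" and a "three-part backbone", so the following is only a structural sketch of that description, not the real Motif-Video 2B; all module choices and dimensions are assumptions:

```python
import torch
import torch.nn as nn

class SharedCrossAttnBackbone(nn.Module):
    """Structural sketch: three backbone stages reuse a single
    cross-attention module to condition on the text prompt."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        # one cross-attention block shared by all stages (parameter sharing)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.stages = nn.ModuleList(
            nn.TransformerEncoderLayer(dim, heads, batch_first=True)
            for _ in range(3)  # prompt alignment / temporal consistency / detail
        )

    def forward(self, video_tokens, text_tokens):
        x = video_tokens
        for stage in self.stages:
            attn_out, _ = self.cross_attn(x, text_tokens, text_tokens)
            x = stage(x + attn_out)  # inject prompt conditioning, then self-attend
        return x

model = SharedCrossAttnBackbone()
x = model(torch.randn(1, 64, 256), torch.randn(1, 8, 256))
print(x.shape)  # torch.Size([1, 64, 256])
```

Sharing one cross-attention module across stages is one plausible way a 2B model keeps its parameter budget small while conditioning every stage on the prompt.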
