This paper introduces three parameter-efficient methods for multi-view proficiency estimation on the Ego-Exo4D dataset, shifting from discriminative classification to generative feedback. The proposed models achieve state-of-the-art accuracy with significantly fewer parameters and training epochs than video-transformer baselines.
RDP-LoRA uses geometric trajectory analysis and the Ramer-Douglas-Peucker algorithm to automatically select the most impactful layers for parameter-efficient fine-tuning, outperforming full-layer and random LoRA baselines.
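The layer-selection idea can be pictured with a small sketch. This is a minimal illustration, not the paper's exact procedure: the per-layer "importance trajectory" below is synthetic, and pairing each layer index with a scalar score is an assumption about the setup. Ramer-Douglas-Peucker simplification then keeps only the trajectory's corner points, which become the LoRA targets.

```python
# Minimal sketch of RDP-based layer selection (synthetic scores; the
# pairing of layers with importance values is an assumption).
import math

def _perp_dist(p, a, b):
    # Perpendicular distance from point p to the line through a and b.
    (x, y), (x1, y1), (x2, y2) = p, a, b
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy)
    if norm == 0:
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / norm

def rdp(points, eps):
    # Ramer-Douglas-Peucker: keep the endpoints plus any interior point
    # farther than eps from the chord, recursing on both halves.
    if len(points) < 3:
        return points
    dists = [_perp_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] > eps:
        return rdp(points[:i + 1], eps)[:-1] + rdp(points[i:], eps)
    return [points[0], points[-1]]

# A sharp change in the trajectory marks a layer worth adapting.
scores = [0.10, 0.11, 0.12, 0.80, 0.13, 0.12, 0.11, 0.10]
kept = [int(x) for x, _ in rdp(list(enumerate(scores)), eps=0.2)]
# layer 3 (the spike) survives simplification and becomes a LoRA target
```

Because RDP retains exactly the points where the curve bends, flat stretches of uninformative layers collapse to their endpoints while abrupt shifts are preserved.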
JumpLoRA introduces a novel sparse adapter framework for continual learning in LLMs using JumpReLU gating to dynamically isolate task parameters and prevent catastrophic forgetting. The method enhances LoRA-based approaches and outperforms state-of-the-art continual learning methods like ELLA.
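The gating mechanism can be sketched in a few lines of numpy. The threshold value and tensor shapes here are illustrative assumptions, not JumpLoRA's published configuration; the point is how JumpReLU differs from a plain ReLU.

```python
# Illustrative JumpReLU gating on a LoRA branch (threshold and shapes
# are assumptions, not the paper's settings).
import numpy as np

def jump_relu(z, theta):
    # JumpReLU: identity above the threshold, exactly zero below it.
    # Unlike a shifted ReLU, surviving activations are not attenuated,
    # so the gate decides *which* adapter units fire, not how strongly.
    return np.where(z > theta, z, 0.0)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 16))          # batch of hidden states
A = 0.1 * rng.normal(size=(16, 8))    # LoRA down-projection
B = 0.1 * rng.normal(size=(8, 16))    # LoRA up-projection

z = x @ A                      # low-rank activations
gated = jump_relu(z, 0.05)     # sparse: most units are hard-zeroed
delta = gated @ B              # task-specific update added to the base output
```

The hard zeroing is what gives the adapter its sparsity: units that stay below the threshold contribute nothing, so different tasks can occupy disjoint subsets of adapter parameters.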
This paper introduces Pico, a data-free method that improves LoRA adapter merging by separately calibrating the output-side matrix B to reduce interference from shared directions while preserving task-specific information. Pico achieves 3.4–8.3 point accuracy improvements over existing merging methods across math, coding, finance, and medical benchmarks.
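One way to picture the interference problem, as an illustrative sketch only and not Pico's published algorithm: estimate the output-side subspace the task adapters share, then damp the merged B matrix along that subspace while leaving the task-specific residuals intact.

```python
# Illustrative only, NOT Pico's algorithm: shrink the component of the
# merged output-side matrix B lying in the cross-task shared subspace.
import numpy as np

def merge_B(Bs, shared_rank=1, shared_scale=0.3):
    # Estimate the shared directions from the stacked adapters' left
    # singular vectors, then damp the merged B along that subspace.
    stacked = np.concatenate(Bs, axis=1)            # (d_out, n_tasks * r)
    U, _, _ = np.linalg.svd(stacked, full_matrices=False)
    P = U[:, :shared_rank] @ U[:, :shared_rank].T   # projector onto shared dirs
    B_mean = np.mean(Bs, axis=0)
    return B_mean - (1.0 - shared_scale) * (P @ B_mean)

# Three task adapters dominated by one common output direction.
rng = np.random.default_rng(0)
shared = np.outer(np.eye(6)[0], np.ones(4))
Bs = [shared + 0.01 * rng.normal(size=(6, 4)) for _ in range(3)]
B_merged = merge_B(Bs)
```

Calibrating only the B side is the notable design choice: the input-side A matrices can be merged conventionally, while the output-side correction targets where adapter updates collide in the model's residual stream.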
MNAFT (Modality Neuron-Aware Fine-Tuning) is a novel approach that selectively updates language-specific and language-agnostic neurons in multimodal large language models to improve image translation while preserving pre-trained knowledge. The method outperforms state-of-the-art image translation techniques including cascaded models and standard fine-tuning approaches.
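The mechanics of neuron-selective updating can be sketched with gradient masking. The choice of which neurons to update below is a hand-picked placeholder, not MNAFT's actual identification procedure for language-specific and language-agnostic neurons.

```python
# Hedged sketch of neuron-selective fine-tuning via gradient masking
# (the neuron selection here is a placeholder, not MNAFT's method).
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 8))        # one FFN weight matrix
W_before = W.copy()
grad = np.ones_like(W)             # stand-in gradient from a translation loss

# Suppose neurons (rows) 2 and 5 were identified as modality-relevant.
mask = np.zeros(8, dtype=bool)
mask[[2, 5]] = True

W -= 0.1 * grad * mask[:, None]    # update only the selected neurons
```

Untouched rows keep their pre-trained values exactly, which is how this style of fine-tuning preserves existing knowledge while adapting the targeted neurons.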
Motif-Video 2B is a 2B-parameter text-to-video generation model that scores 83.76% on VBench, surpassing Wan2.1 14B with 7x fewer parameters while training on fewer than 10M clips and under 100,000 H200 GPU hours. The model uses a specialized architecture with shared cross-attention and a three-part backbone that separates prompt alignment, temporal consistency, and detail refinement.