#cross-modal

MNAFT: modality neuron-aware fine-tuning of multimodal large language models for image translation

Hugging Face Daily Papers · 2026-04-18

MNAFT (Modality Neuron-Aware Fine-Tuning) is a novel approach that selectively updates language-specific and language-agnostic neurons in multimodal large language models to improve image translation while preserving pre-trained knowledge. The method outperforms state-of-the-art image translation techniques including cascaded models and standard fine-tuning approaches.
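The summary does not spell out how the selective update is implemented, but a common way to realize "selectively updates ... neurons" is a masked gradient step: only weights belonging to a chosen set of neurons move, while all others keep their pre-trained values. The sketch below illustrates that general idea on a toy weight matrix; the function name, neuron indices, and learning rate are all hypothetical and are not taken from the paper.

```python
import numpy as np

def neuron_aware_update(W, grad, selected_neurons, lr=0.1):
    """Apply a gradient step only to the selected neurons (rows of W).

    Illustrative sketch of selective fine-tuning, not the paper's
    actual method: neurons outside `selected_neurons` keep their
    pre-trained weights, which is one way to preserve prior knowledge.
    """
    W_new = W.copy()
    # Update only the masked rows; all other rows are left untouched.
    W_new[selected_neurons] -= lr * grad[selected_neurons]
    return W_new

# Toy example: a 4-neuron layer where only neurons 1 and 3 are
# (hypothetically) identified as modality-relevant and allowed to change.
W = np.ones((4, 3))
grad = np.full((4, 3), 0.5)
W_tuned = neuron_aware_update(W, grad, selected_neurons=[1, 3])
```

Here rows 1 and 3 take the step (1.0 − 0.1 × 0.5 = 0.95) while rows 0 and 2 stay at their original value, mirroring the split between updated and frozen neurons.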
