MNAFT: Modality Neuron-Aware Fine-Tuning of Multimodal Large Language Models for Image Translation
Summary
MNAFT (Modality Neuron-Aware Fine-Tuning) is a novel approach that selectively updates language-specific and language-agnostic neurons in multimodal large language models to improve image translation while preserving pre-trained knowledge. The method outperforms state-of-the-art image translation techniques, including cascaded models and standard fine-tuning approaches.
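To make the selective-update idea concrete, one way to realize it in PyTorch is to freeze the whole model and re-enable gradients only for the rows of chosen linear layers that correspond to the identified neurons. The sketch below is illustrative, not the authors' code: the function name, the row-per-output-neuron convention, and the selected_neurons mapping are all assumptions.

import torch
import torch.nn as nn

def restrict_training_to_neurons(model: nn.Module,
                                 selected_neurons: dict[str, torch.Tensor]) -> None:
    """Freeze all parameters, then re-enable training only for the output
    neurons listed in `selected_neurons` (layer name -> neuron indices).
    Illustrative sketch; not the paper's released implementation."""
    for p in model.parameters():
        p.requires_grad_(False)

    for name, module in model.named_modules():
        if isinstance(module, nn.Linear) and name in selected_neurons:
            row_mask = torch.zeros(module.out_features, dtype=torch.bool)
            row_mask[selected_neurons[name]] = True

            # Row i of a Linear weight parameterizes output neuron i, so
            # zeroing gradient rows confines updates to selected neurons.
            module.weight.requires_grad_(True)
            module.weight.register_hook(
                lambda g, m=row_mask: g * m.view(-1, 1).to(g.device))
            if module.bias is not None:
                module.bias.requires_grad_(True)
                module.bias.register_hook(
                    lambda g, m=row_mask: g * m.to(g.device))

In practice, the neuron indices would come from the instruction-driven activation analysis described in the abstract below.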
Source: https://huggingface.co/papers/2604.16943 · Published on Apr 18
Submitted by Bo Li (https://huggingface.co/liboaccn) on Apr 21
Abstract
Modality neuron-aware fine-tuning (MNAFT) enhances image translation by selectively updating specific neurons in multimodal large language models, preserving pre-trained knowledge while improving cross-modal understanding.
Multimodal large language models (MLLMs) have shown impressive capabilities, yet they often struggle to effectively capture the fine-grained textual information within images crucial for accurate image translation. This often leads to a modality gap between visual text inputs and textual inputs/outputs for image translation. Existing methods, primarily relying on instruction fine-tuning, risk parameter redundancy of pre-trained knowledge, hindering generalization performance. To address this, we introduce modality neuron-aware fine-tuning (MNAFT), a novel approach that takes advantage of the specialized roles of individual neurons within MLLMs for enhanced image translation. MNAFT identifies language-agnostic and language-specific neurons in both vision and language modules through an instruction-driven activation analysis, evaluating their importance in various translation tasks. We then perform selective fine-tuning, updating only the parameters of language-specific and language-agnostic neurons within the selected layers relevant to the target task, while preserving the knowledge encoded in other neurons and layers. Our extensive experiments on multiple benchmarks demonstrate that MNAFT significantly outperforms state-of-the-art image translation methods, including cascaded models, standard full fine-tuning, and parameter-efficient tuning techniques. Furthermore, we provide comprehensive analysis, including visualizations of neuron activations and clustering patterns, to offer insights into the roles of different neuron groups in mediating cross-modal understanding and facilitating accurate language-specific translation.
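The abstract does not spell out the scoring rule behind the instruction-driven activation analysis, but a minimal version can be sketched: run each language's instruction prompts through the model, record per-neuron activation statistics via forward hooks, and call a neuron language-agnostic if it ranks among the top activations for every language, language-specific if it does so for exactly one. Everything below (the top-k criterion, the function names, and the Hugging Face-style tokenizer/model interface) is an assumption for illustration, not the paper's procedure.

import torch

@torch.no_grad()
def profile_activations(model, tokenizer, prompts_by_lang, layer_names):
    """Mean absolute activation per neuron for each language's instruction
    prompts, collected with forward hooks. Sketch only; assumes a Hugging
    Face-style causal-LM interface."""
    current, handles = {}, []

    def make_hook(name):
        def hook(_module, _inputs, output):
            current[name] = output.abs().mean(dim=(0, 1))  # one score per neuron
        return hook

    for name, module in model.named_modules():
        if name in layer_names:
            handles.append(module.register_forward_hook(make_hook(name)))

    stats = {lang: {n: [] for n in layer_names} for lang in prompts_by_lang}
    for lang, prompts in prompts_by_lang.items():
        for prompt in prompts:
            ids = tokenizer(prompt, return_tensors="pt").input_ids
            model(input_ids=ids)
            for n in layer_names:
                stats[lang][n].append(current[n])
    for h in handles:
        h.remove()
    return {lang: {n: torch.stack(a).mean(0) for n, a in per_layer.items()}
            for lang, per_layer in stats.items()}

def split_neurons(scores_by_lang, layer, top_k=512):
    """Top-k in every language -> language-agnostic; top-k in exactly one
    language -> language-specific. The top-k rule is an assumed criterion."""
    tops = {lang: set(s[layer].topk(top_k).indices.tolist())
            for lang, s in scores_by_lang.items()}
    agnostic = set.intersection(*tops.values())
    specific = {lang: mine - set().union(*(o for l, o in tops.items() if l != lang))
                for lang, mine in tops.items()}
    return agnostic, specific

The resulting index sets would then feed a selective-update step such as the gradient-masking sketch shown under the summary above.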
Get this paper in your agent:
hf papers read 2604.16943
Don't have the latest CLI? curl -LsSf https://hf.co/cli/install.sh | bash
Similar Articles
Multimodal neurons in artificial neural networks
OpenAI discovers multimodal neurons in CLIP that respond to the same concept across different modalities (visual, symbolic, textual), mirroring biological neurons and explaining the model's robustness on challenging vision tasks. This interpretability research provides insights into how vision-language models organize and represent abstract concepts.
LiFT: Does Instruction Fine-Tuning Improve In-Context Learning for Longitudinal Modelling by Large Language Models?
LiFT is a longitudinal instruction fine-tuning framework that unifies diverse temporal NLP tasks under a shared instruction schema with curriculum-based training. Evaluated across OLMo, LLaMA, and Qwen models, LiFT consistently outperforms base-model in-context learning, especially on out-of-distribution data and rare change events.
Training and Finetuning Multimodal Embedding & Reranker Models with Sentence Transformers
This article provides a technical guide on training and fine-tuning multimodal embedding and reranker models using the Sentence Transformers library, demonstrating performance improvements on Visual Document Retrieval tasks with Qwen3-VL.
Attribution-Guided Continual Learning for Large Language Models
This paper proposes an attribution-guided continual fine-tuning framework for large language models that estimates task-specific parameter importance in Transformer layers and modulates gradients accordingly, mitigating catastrophic forgetting while maintaining performance on new tasks.
Awaking Spatial Intelligence in Unified Multimodal Understanding and Generation
The paper introduces JoyAI-Image, a unified multimodal foundation model that integrates a spatially enhanced MLLM with MMDiT to achieve state-of-the-art performance in visual understanding, text-to-image generation, and instruction-guided editing.