MNAFT: modality neuron-aware fine-tuning of multimodal large language models for image translation

Hugging Face Daily Papers

Summary

MNAFT (Modality Neuron-Aware Fine-Tuning) is a novel approach that selectively updates language-specific and language-agnostic neurons in multimodal large language models to improve image translation while preserving pre-trained knowledge. The method outperforms state-of-the-art image translation techniques including cascaded models and standard fine-tuning approaches.

Multimodal large language models (MLLMs) have shown impressive capabilities, yet they often struggle to effectively capture the fine-grained textual information within images crucial for accurate image translation. This often leads to a modality gap between visual text inputs and textual inputs/outputs for image translation. Existing methods, primarily relying on instruction fine-tuning, risk parameter redundancy of pre-trained knowledge, hindering generalization performance. To address this, we introduce modality neuron-aware fine-tuning (MNAFT), a novel approach that takes advantage of the specialized roles of individual neurons within MLLMs for enhanced image translation. MNAFT identifies language-agnostic and language-specific neurons in both vision and language modules through an instruction-driven activation analysis, evaluating their importance in various translation tasks. We then perform selective fine-tuning, updating only the parameters of language-specific and language-agnostic neurons within the selected layers relevant to the target task, while preserving the knowledge encoded in other neurons and layers. Our extensive experiments on multiple benchmarks demonstrate that MNAFT significantly outperforms state-of-the-art image translation methods, including cascaded models, standard full fine-tuning, and parameter-efficient tuning techniques. Furthermore, we provide comprehensive analysis, including visualizations of neuron activations and clustering patterns, to offer insights into the roles of different neuron groups in mediating cross-modal understanding and facilitating accurate language-specific translation.
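To make the two stages concrete, below is a minimal PyTorch-style sketch of the idea described in the abstract: first score neuron activations under instruction-formatted translation inputs to pick out language-specific and language-agnostic neurons, then fine-tune only those neurons by masking gradients everywhere else. The layer choice, the 5% selection fraction, and the gradient-masking rule are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch of MNAFT-style selective fine-tuning (assumptions noted above).
import torch


def record_activations(model, dataloader, layer_names):
    """Accumulate mean absolute activation per output neuron of each named layer."""
    stats = {name: None for name in layer_names}
    modules = dict(model.named_modules())
    hooks = []

    def make_hook(name):
        def hook(_module, _inputs, output):
            act = output.detach().abs().float()
            act = act.reshape(-1, act.shape[-1]).mean(dim=0)  # average over batch/sequence
            stats[name] = act if stats[name] is None else stats[name] + act
        return hook

    for name in layer_names:
        hooks.append(modules[name].register_forward_hook(make_hook(name)))
    model.eval()
    with torch.no_grad():
        for batch in dataloader:  # instruction-formatted translation prompts
            model(**batch)
    for h in hooks:
        h.remove()
    return stats


def select_neurons(stats_per_lang, top_frac=0.05):
    """Language-agnostic = highly activated for every language;
    language-specific = highly activated for some but not all languages."""
    selected = {}
    layer_names = next(iter(stats_per_lang.values())).keys()
    for layer in layer_names:
        top_sets = []
        for stats in stats_per_lang.values():
            k = max(1, int(top_frac * stats[layer].numel()))
            top_sets.append(set(stats[layer].topk(k).indices.tolist()))
        agnostic = set.intersection(*top_sets)
        specific = set.union(*top_sets) - agnostic
        selected[layer] = sorted(agnostic | specific)
    return selected


def freeze_unselected(model, selected):
    """Register gradient hooks so only the selected output neurons get updated."""
    modules = dict(model.named_modules())
    for layer, indices in selected.items():
        lin = modules[layer]  # assumed to be an nn.Linear inside an MLP block
        keep = torch.zeros(lin.weight.shape[0], dtype=torch.bool)
        keep[list(indices)] = True
        lin.weight.register_hook(
            lambda g, k=keep: g * k.to(g.device).unsqueeze(-1))
        if lin.bias is not None:
            lin.bias.register_hook(lambda g, k=keep: g * k.to(g.device))
```

In this sketch, stats_per_lang would map each target language to the output of record_activations on that language's instruction data; after freeze_unselected, an ordinary fine-tuning loop updates only the rows of the selected layers that correspond to the chosen neurons, leaving the rest of the pre-trained weights effectively frozen.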

Paper page - MNAFT: modality neuron-aware fine-tuning of multimodal large language models for image translation

Source: https://huggingface.co/papers/2604.16943 · Published on Apr 18

Submitted by Bo Li (https://huggingface.co/liboaccn) on Apr 21

Abstract

Modality neuron-aware fine-tuning (MNAFT) enhances image translation by selectively updating specific neurons in multimodal large language models, preserving pre-trained knowledge while improving cross-modal understanding.

Multimodal large language models (MLLMs) have shown impressive capabilities, yet they often struggle to effectively capture the fine-grained textual information within images crucial for accurate image translation. This often leads to a modality gap between visual text inputs and textual inputs/outputs for image translation. Existing methods, primarily relying on instruction fine-tuning, risk parameter redundancy of pre-trained knowledge, hindering generalization performance. To address this, we introduce modality neuron-aware fine-tuning (MNAFT), a novel approach that takes advantage of the specialized roles of individual neurons within MLLMs for enhanced image translation. MNAFT identifies language-agnostic and language-specific neurons in both vision and language modules through an instruction-driven activation analysis, evaluating their importance in various translation tasks. We then perform selective fine-tuning, updating only the parameters of language-specific and language-agnostic neurons within the selected layers relevant to the target task, while preserving the knowledge encoded in other neurons and layers. Our extensive experiments on multiple benchmarks demonstrate that MNAFT significantly outperforms state-of-the-art image translation methods, including cascaded models, standard full fine-tuning, and parameter-efficient tuning techniques. Furthermore, we provide comprehensive analysis, including visualizations of neuron activations and clustering patterns, to offer insights into the roles of different neuron groups in mediating cross-modal understanding and facilitating accurate language-specific translation.


Get this paper in your agent:

hf papers read 2604.16943

Don't have the latest CLI? curl -LsSf https://hf.co/cli/install.sh | bash

Similar Articles

Multimodal neurons in artificial neural networks

OpenAI Blog

OpenAI discovers multimodal neurons in CLIP that respond to the same concept across different modalities (visual, symbolic, textual), mirroring biological neurons and explaining the model's robustness on challenging vision tasks. This interpretability research provides insights into how vision-language models organize and represent abstract concepts.

Attribution-Guided Continual Learning for Large Language Models

arXiv cs.LG

This paper proposes an attribution-guided continual fine-tuning framework for large language models that estimates task-specific parameter importance in Transformer layers and modulates gradients accordingly, mitigating catastrophic forgetting while maintaining performance on new tasks.
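As a rough illustration of the gradient-modulation idea in that summary, the sketch below accumulates a |weight × gradient| attribution score per parameter on a previous task and uses it to damp the corresponding gradients when training on a new task. The attribution formula and the linear damping rule are assumptions chosen for illustration, not the paper's exact method.

```python
# Sketch of attribution-guided gradient damping for continual fine-tuning.
import torch


def estimate_importance(model, old_task_loader, loss_fn):
    """Accumulate a |weight * gradient| attribution score per parameter on old-task data."""
    scores = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.train()
    for inputs, targets in old_task_loader:
        model.zero_grad()
        loss_fn(model(inputs), targets).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                scores[n] += (p.detach() * p.grad).abs()
    # Normalize each tensor to [0, 1] so it can act as a damping factor.
    return {n: s / (s.max() + 1e-8) for n, s in scores.items()}


def damp_new_task_gradients(model, importance):
    """Call after loss.backward() on the new task, before optimizer.step():
    gradients of parameters deemed important for old tasks are scaled toward zero."""
    for n, p in model.named_parameters():
        if p.grad is not None:
            p.grad.mul_(1.0 - importance[n])
```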