Scaling Continual Learning to 300+ Tasks with Bi-Level Routing Mixture-of-Experts
Summary
This paper introduces CaRE, a novel continual learning framework using a bi-level routing mixture-of-experts mechanism to effectively handle class-incremental learning over sequences of 300+ tasks.
Source: https://huggingface.co/papers/2602.03473
Abstract
A novel continual learning framework called CaRE with a bi-level routing mixture-of-experts mechanism is proposed for class-incremental learning, demonstrating superior performance on very long task sequences exceeding 300 tasks.
Continual learning, especially class-incremental learning (CIL) on the basis of a pre-trained model (PTM), has garnered substantial research interest in recent years. However, how to effectively learn both discriminative and comprehensive feature representations while maintaining stability and plasticity over very long task sequences remains an open problem. We propose CaRE, a scalable Continual Learner with an efficient Bi-Level Routing Mixture-of-Experts (BR-MoE). The core idea of BR-MoE is a bi-level routing mechanism: a router selection stage that dynamically activates relevant task-specific routers, followed by an expert routing phase that dynamically activates and aggregates experts, aiming to inject discriminative and comprehensive representations into every intermediate network layer. In addition, we introduce a challenging dataset, OmniBenchmark-1K, for CIL performance evaluation on very long task sequences with hundreds of tasks. Extensive experiments show that CaRE demonstrates leading performance across a variety of datasets and task settings, including commonly used CIL datasets with classical CIL settings (e.g., 5-20 tasks). To the best of our knowledge, CaRE is the first continual learner that scales to very long task sequences (ranging from 100 to over 300 non-overlapping tasks), while outperforming all baselines by a large margin on such task sequences. We hope that this work will inspire further research into continual learning over extremely long task sequences. Code and dataset are publicly released at https://github.com/LMMMEng/CaRE.
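The abstract describes the bi-level routing only at a high level. As a rough illustration, the sketch below shows how such a two-stage MoE layer could be wired in PyTorch: a gate first activates the top-k task-specific routers, and each activated router then selects and weights experts whose outputs are aggregated. All names and hyperparameters here (BiLevelRoutingMoE, num_routers, num_experts, the top-k values) are illustrative assumptions rather than the authors' implementation; consult the released repository for the actual design.

```python
# Hypothetical sketch of a bi-level routing MoE layer (NOT the authors' code).
# Stage 1: a gate scores task-specific routers and activates the most relevant ones.
# Stage 2: each activated router scores shared experts; expert outputs are aggregated.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiLevelRoutingMoE(nn.Module):
    def __init__(self, dim, num_routers=8, num_experts=16, router_top_k=2, expert_top_k=2):
        super().__init__()
        self.router_gate = nn.Linear(dim, num_routers)   # stage 1: router selection
        self.routers = nn.ModuleList(                    # stage 2: per-router expert gating
            [nn.Linear(dim, num_experts) for _ in range(num_routers)]
        )
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
             for _ in range(num_experts)]
        )
        self.router_top_k = router_top_k
        self.expert_top_k = expert_top_k
        self.num_experts = num_experts

    def forward(self, x):  # x: (batch, dim); per-sample loops kept for readability
        # Stage 1: pick the top-k task-specific routers for each input.
        router_scores = F.softmax(self.router_gate(x), dim=-1)        # (batch, num_routers)
        r_w, r_idx = router_scores.topk(self.router_top_k, dim=-1)
        r_w = r_w / r_w.sum(dim=-1, keepdim=True)                     # renormalize router weights

        # Stage 2: each selected router picks its own top-k experts; the per-router
        # expert distributions are merged into one expert weighting per sample.
        expert_weights = x.new_zeros(x.size(0), self.num_experts)
        for k in range(self.router_top_k):
            for b in range(x.size(0)):
                logits = self.routers[int(r_idx[b, k])](x[b])          # (num_experts,)
                e_w, e_idx = F.softmax(logits, dim=-1).topk(self.expert_top_k)
                expert_weights[b, e_idx] += r_w[b, k] * e_w

        # Aggregate the outputs of the activated experts.
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = expert_weights[:, e] > 0
            if mask.any():
                out[mask] += expert_weights[mask, e].unsqueeze(-1) * expert(x[mask])
        return out
```

Under these assumptions, a layer of this kind would sit at every intermediate layer of the pre-trained backbone, with new task-specific routers registered as tasks arrive; the per-sample Python loops are kept only for clarity and would be replaced by batched dispatch in practice.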
Get this paper in your agent:
hf papers read 2602.03473
Don't have the latest CLI? curl -LsSf https://hf.co/cli/install.sh | bash
Similar Articles
Value-Decomposed Reinforcement Learning Framework for Taxiway Routing with Hierarchical Conflict-Aware Observations
This paper introduces CaTR, a value-decomposed reinforcement learning framework for real-time multi-aircraft taxiway routing that uses hierarchical foresight traffic representation to balance safety and efficiency.
Iterative Critique-and-Routing Controller for Multi-Agent Systems with Heterogeneous LLMs
This paper introduces a critique-and-routing controller for multi-agent LLM systems that formulates coordination as a sequential decision problem. It uses policy gradients to optimize the controller for iterative refinement, outperforming baselines while reducing reliance on top-tier models.
SAMoRA: Semantic-Aware Mixture of LoRA Experts for Task-Adaptive Learning
SAMoRA introduces a semantic-aware router and task-adaptive scaling to improve expert specialization and dynamic weighting in MoE-LoRA fine-tuning, outperforming prior methods on multi-task benchmarks.
Attribution-Guided Continual Learning for Large Language Models
This paper proposes an attribution-guided continual fine-tuning framework for large language models that estimates task-specific parameter importance in Transformer layers and modulates gradients accordingly, mitigating catastrophic forgetting while maintaining performance on new tasks.
Learning Agent Routing From Early Experience
This paper introduces BoundaryRouter, a training-free framework that optimizes LLM agent usage by routing queries to either lightweight inference or full agent execution based on early experience. It also presents RouteBench, a benchmark for evaluating routing performance, showing significant improvements in speed and accuracy.