meta-learning

Tag

#meta-learning

NoiseRater: Meta-Learned Noise Valuation for Diffusion Model Training

arXiv cs.LG · yesterday

This paper introduces NoiseRater, a meta-learning framework that assigns importance scores to individual noise samples during diffusion model training to improve efficiency and generation quality.


RubricEM: Meta-RL with Rubric-guided Policy Decomposition beyond Verifiable Rewards

Hugging Face Daily Papers · 2d ago

This paper introduces RubricEM, a reinforcement learning framework that uses rubric-guided policy decomposition and reflection-based meta-policy evolution to train deep research agents for long-form tasks. The resulting RubricEM-8B model demonstrates strong performance on long-form research benchmarks by leveraging stage-aware planning and denser semantic feedback.


Model-Agnostic Meta Learning for Class Imbalance Adaptation

arXiv cs.CL · 2026-04-22

University of Memphis researchers propose HAMR, a model-agnostic meta-learning framework that uses bi-level optimization and neighborhood-aware resampling to adaptively reweight hard examples and minority classes across six imbalanced NLP datasets.
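The bi-level structure behind this kind of method — an inner loop that fits the model on a weighted training loss, and an outer loop that updates the per-example weights by differentiating a balanced validation loss through the inner step — can be sketched in a few lines. The toy regression task, learning rates, and simple gradient-based weight update below are illustrative assumptions, not HAMR's actual neighborhood-aware resampling rule:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: noiseless 1-D regression y = 2x; a small held-out set stands in
# for the balanced validation set that drives the outer loop.
x_tr = rng.normal(size=20); y_tr = 2.0 * x_tr
x_va = rng.normal(size=5);  y_va = 2.0 * x_va

w = np.ones(20) / 20   # per-example weights (the meta-parameters)
theta = 0.0            # model parameter
inner_lr, outer_lr = 0.05, 0.1

for _ in range(200):
    # Inner step: one SGD step on the weighted training loss.
    resid = theta * x_tr - y_tr
    theta_adapted = theta - inner_lr * np.sum(w * 2 * resid * x_tr)
    # Outer step: differentiate the validation loss through the inner step
    # (chain rule: dL_va/dw_i = dL_va/dtheta_adapted * dtheta_adapted/dw_i).
    dva_dtheta = np.mean(2 * (theta_adapted * x_va - y_va) * x_va)
    dtheta_dw = -inner_lr * 2 * resid * x_tr
    w = np.clip(w - outer_lr * dva_dtheta * dtheta_dw, 1e-6, None)
    w /= w.sum()   # keep the weights a valid distribution
    theta = theta_adapted
# theta converges to the true slope, 2.0
```

Examples whose gradients help the validation loss get up-weighted; in an imbalanced setting this is what lets minority-class and hard examples receive more training signal without a hand-tuned resampling ratio.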


FSPO: Few-Shot Optimization of Synthetic Preferences Personalizes to Real Users

arXiv cs.CL · 2026-04-20

FSPO proposes a few-shot preference optimization algorithm for LLM personalization that reframes reward modeling as meta-learning, enabling models to quickly infer personalized reward functions from limited user preferences. The method achieves 87% personalization performance on synthetic users and 70% on real users through careful synthetic preference dataset construction.


Automatic Combination of Sample Selection Strategies for Few-Shot Learning

arXiv cs.CL · 2026-04-20

This paper proposes ACSESS, a method for automatically combining multiple sample selection strategies to improve few-shot learning across both in-context learning and gradient-based approaches. The work demonstrates that combining strategies consistently outperforms individual selection methods across 14 datasets with both text and image modalities.


Weak-Link Optimization for Multi-Agent Reasoning and Collaboration

arXiv cs.CL · 2026-04-20

This paper proposes WORC, a weak-link optimization framework for multi-agent LLM systems that identifies and reinforces underperforming agents through meta-learning-based weight prediction and uncertainty-driven resource allocation, achieving 82.2% accuracy on reasoning benchmarks while improving system stability.


Meta-learning In-Context Enables Training-Free Cross Subject Brain Decoding

Hugging Face Daily Papers · 2026-04-09

This paper introduces a meta-optimized approach for semantic visual decoding from fMRI signals that generalizes to novel subjects without fine-tuning, using in-context learning to infer unique neural encoding patterns from a small set of image-brain activation examples. The method achieves strong cross-subject and cross-scanner generalization without requiring anatomical alignment or stimulus overlap.


Evolved Policy Gradients

OpenAI Blog · 2018-04-18

OpenAI introduces Evolved Policy Gradients (EPG), a meta-learning approach that learns loss functions through evolution rather than learning policies directly, enabling RL agents to generalize better across tasks by leveraging prior experience similar to how humans transfer skills.


On first-order meta-learning algorithms

OpenAI Blog · 2018-03-08

This paper analyzes first-order meta-learning algorithms for few-shot learning, introducing Reptile and providing theoretical insights into why these computationally efficient methods work well on established benchmarks.


Reptile: A scalable meta-learning algorithm

OpenAI Blog · 2018-03-07

OpenAI introduces Reptile, a scalable meta-learning algorithm for few-shot classification that achieves comparable performance to MAML while converging faster with lower variance. The paper provides theoretical analysis showing Reptile maximizes inner product between task gradients for improved generalization.
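Reptile's update is simple enough to sketch directly: run a few steps of SGD on one sampled task, then move the meta-parameters a fraction of the way toward the task-adapted parameters. The toy quadratic task and all hyperparameter values below are illustrative choices, not from the paper:

```python
import numpy as np

def reptile_update(params, task_grad, meta_lr=0.5, k=5, inner_lr=0.1):
    # Inner loop: k steps of SGD on one sampled task.
    adapted = params.copy()
    for _ in range(k):
        adapted -= inner_lr * task_grad(adapted)
    # Reptile step: move meta-parameters toward the adapted parameters.
    return params + meta_lr * (adapted - params)

# Toy task: minimize (p - 3)^2, whose gradient is 2 * (p - 3).
task_grad = lambda p: 2.0 * (p - 3.0)

params = np.array([0.0])
for _ in range(20):
    params = reptile_update(params, task_grad)
# params is now close to the task optimum, 3.0
```

In a real meta-training loop a fresh task is sampled each iteration; unlike MAML, no second-order gradients are needed, which is where the speed and variance advantages come from.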


Meta-learning for wrestling

OpenAI Blog · 2017-10-11

OpenAI researchers develop meta-learning agents that continuously adapt their policies during multi-round competitive games, demonstrating superior performance compared to fixed-policy agents and robustness to environmental and bodily changes.


One-shot imitation learning

OpenAI Blog · 2017-03-21

OpenAI proposes a meta-learning framework for one-shot imitation learning that enables robots to learn new tasks from a single demonstration and generalize to new instances without task-specific engineering. The approach uses soft attention mechanisms to allow neural networks trained on diverse task pairs to perform well on unseen tasks at test time.


RL²: Fast reinforcement learning via slow reinforcement learning

OpenAI Blog · 2016-11-09

RL² proposes encoding a fast reinforcement learning algorithm as the weights of a recurrent neural network, learned through slow general-purpose RL, enabling agents to adapt to new tasks with few trials similar to biological learning. The method demonstrates strong performance on both small-scale bandit problems and large-scale vision-based navigation tasks.
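The architectural idea — a recurrent network whose input at each step includes the previous action and reward, so the hidden state can implement a fast learning algorithm within a single episode — can be sketched as below. This is a structural illustration only: the weights here are random and untrained, the cell is a plain tanh RNN rather than the paper's network, and in RL² the weights would be trained by an outer, general-purpose RL algorithm across many bandit episodes:

```python
import numpy as np

rng = np.random.default_rng(0)

class RL2Agent:
    """RL^2-style agent: a recurrent cell whose input at each step is the
    previous (action one-hot, reward), so the hidden state can accumulate
    task statistics across trials within one episode."""
    def __init__(self, n_arms, hidden=16):
        self.n_arms = n_arms
        d = n_arms + 1  # previous-action one-hot + previous reward
        self.W_in = rng.normal(scale=0.5, size=(hidden, d))
        self.W_h = rng.normal(scale=0.5, size=(hidden, hidden))
        self.W_out = rng.normal(scale=0.5, size=(n_arms, hidden))

    def act(self, h, prev_action, prev_reward):
        x = np.zeros(self.n_arms + 1)
        if prev_action is not None:
            x[prev_action] = 1.0
        x[-1] = prev_reward
        h = np.tanh(self.W_in @ x + self.W_h @ h)   # update hidden state
        logits = self.W_out @ h
        probs = np.exp(logits - logits.max()); probs /= probs.sum()
        return h, rng.choice(self.n_arms, p=probs)  # sample from the policy

# One episode on a 2-armed Bernoulli bandit. The outer (slow) RL algorithm
# would optimize the weights so this within-episode loop learns quickly.
arm_probs = [0.2, 0.8]
agent = RL2Agent(n_arms=2)
h, action, reward = np.zeros(16), None, 0.0
for t in range(10):
    h, action = agent.act(h, action, reward)
    reward = float(rng.random() < arm_probs[action])
```

Because the reward is fed back in as an input, a well-trained hidden state can track per-arm statistics and shift probability toward the better arm as the episode unfolds — the "fast" RL algorithm lives entirely in the recurrent dynamics.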
