few-shot-learning

Cards tagged #few-shot-learning

Automatic Combination of Sample Selection Strategies for Few-Shot Learning

arXiv cs.CL · 2026-04-20

This paper proposes ACSESS, a method for automatically combining multiple sample selection strategies to improve few-shot learning across both in-context learning and gradient-based approaches. The work demonstrates that combining strategies consistently outperforms individual selection methods across 14 datasets with both text and image modalities.
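As a rough illustration of the general idea (not the paper's actual ACSESS procedure — the scorer names, min-max normalization, and fixed weights below are made up), combining selection strategies can be sketched as normalizing each strategy's per-candidate scores onto a common scale and ranking candidates by a weighted sum:

```python
import numpy as np

def combine_selection_strategies(scores, weights, k):
    """Weighted combination of per-strategy sample-selection scores.

    scores:  dict mapping strategy name -> array of shape (n_candidates,)
    weights: dict mapping strategy name -> importance weight
    Returns indices of the top-k candidates under the combined score.
    """
    combined = np.zeros_like(next(iter(scores.values())), dtype=float)
    for name, s in scores.items():
        s = np.asarray(s, dtype=float)
        rng = s.max() - s.min()
        # min-max normalize so strategies contribute on a comparable scale
        s_norm = (s - s.min()) / rng if rng > 0 else np.zeros_like(s)
        combined += weights.get(name, 0.0) * s_norm
    # highest combined score first
    return np.argsort(combined)[::-1][:k]
```

A candidate favored by several strategies at once ends up ranked above one that only a single strategy likes, which is the intuition behind combining strategies at all.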

SCHK-HTC: Sibling Contrastive Learning with Hierarchical Knowledge-Aware Prompt Tuning for Hierarchical Text Classification

arXiv cs.CL · 2026-04-20

SCHK-HTC is a method for few-shot hierarchical text classification that combines sibling contrastive learning with hierarchical knowledge-aware prompt tuning to better distinguish semantically similar classes at deeper hierarchy levels. By sharpening the model's sensitivity to subtle differences between sibling classes, it achieves state-of-the-art performance on three benchmark datasets.

Language models are few-shot learners

OpenAI Blog · 2020-05-28

OpenAI introduces GPT-3, a 175-billion-parameter autoregressive language model with strong few-shot learning abilities across diverse NLP tasks. Without gradient updates or fine-tuning, GPT-3 adapts to new tasks purely through text interaction, marking a paradigm shift in how language models are applied.
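A minimal sketch of the in-context setup the paper describes: labelled examples are concatenated into a plain-text prompt and the model simply continues the text, with no parameter updates. The Input/Output template below is illustrative, not a format GPT-3 requires:

```python
def build_few_shot_prompt(examples, query):
    """Format labelled examples plus a query into one few-shot prompt.

    examples: list of (input, output) pairs demonstrating the task.
    query:    the new input; the model continues after the final 'Output:'.
    """
    blocks = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)
```

With a couple of English-to-French pairs as demonstrations, the model is expected to infer the task and translate the final input — the "learning" happens entirely inside the forward pass.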

Gotta Learn Fast: A new benchmark for generalization in RL

OpenAI Blog · 2018-04-10

OpenAI presents a new reinforcement learning benchmark based on Sonic the Hedgehog to measure transfer learning and few-shot learning performance in RL agents, along with baseline algorithm evaluations.

On first-order meta-learning algorithms

OpenAI Blog · 2018-03-08

This paper analyzes first-order meta-learning algorithms for few-shot learning, introducing Reptile and providing theoretical insights into why these computationally efficient methods work well on established benchmarks.

Reptile: A scalable meta-learning algorithm

OpenAI Blog · 2018-03-07

OpenAI introduces Reptile, a scalable meta-learning algorithm for few-shot classification that matches MAML's performance while converging faster and with lower variance. The paper's theoretical analysis shows that Reptile's update approximately maximizes the inner product between gradients computed on different minibatches of the same task, which improves within-task generalization.
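A toy sketch of the Reptile loop (the synthetic task distribution and hyperparameters below are made up for illustration): on each iteration, adapt a copy of the meta-weights to one sampled task with a few SGD steps, then move the meta-weights a fraction of the way toward the adapted weights — no second-order gradients needed:

```python
import numpy as np

def reptile(meta_weights, sample_task, n_iters=1000, inner_steps=5,
            inner_lr=0.02, meta_lr=0.1):
    """First-order Reptile meta-training loop (sketch).

    sample_task() returns a gradient function grad(w) for a fresh task.
    """
    w = np.asarray(meta_weights, dtype=float)
    for _ in range(n_iters):
        grad = sample_task()
        w_task = w.copy()
        for _ in range(inner_steps):
            w_task -= inner_lr * grad(w_task)   # inner-loop SGD on one task
        w += meta_lr * (w_task - w)             # Reptile meta-update
    return w
```

On a distribution of quadratic tasks (loss 0.5·||w − c||² with task-specific optimum c), the meta-weights drift toward a point from which every task's optimum is reachable in a few inner steps — here, near the mean of the task optima.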

RL²: Fast reinforcement learning via slow reinforcement learning

OpenAI Blog · 2016-11-09

RL² proposes encoding a fast reinforcement learning algorithm as the weights of a recurrent neural network, learned through slow general-purpose RL, enabling agents to adapt to new tasks with few trials similar to biological learning. The method demonstrates strong performance on both small-scale bandit problems and large-scale vision-based navigation tasks.
