reward-optimization


MARBLE: Multi-Aspect Reward Balance for Diffusion RL

Hugging Face Daily Papers · 2d ago

This paper introduces MARBLE, a gradient-space optimization framework for multi-reward reinforcement-learning fine-tuning of diffusion models, which harmonizes the per-reward policy gradients without manual reward weighting.
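The summary does not spell out MARBLE's exact algorithm, but "harmonizing gradients in gradient space" is commonly done with a PCGrad-style conflict projection: when two reward gradients point in opposing directions, the conflicting component is projected out before combining them. The sketch below illustrates that general idea and is an assumption, not the paper's method.

```python
import numpy as np

def harmonize_gradients(grads):
    """PCGrad-style gradient harmonization (illustrative stand-in, NOT
    MARBLE's exact algorithm): for each reward gradient, project out any
    component that conflicts (negative dot product) with another reward's
    gradient, then average the de-conflicted gradients."""
    harmonized = []
    for i, g in enumerate(grads):
        g = g.astype(float).copy()
        for j, other in enumerate(grads):
            if i == j:
                continue
            dot = g @ other
            if dot < 0:  # conflicting direction: remove its component
                g -= dot / (other @ other) * other
        harmonized.append(g)
    return np.mean(harmonized, axis=0)

# Two reward gradients that partially conflict
g_quality = np.array([1.0, 1.0])
g_safety = np.array([-1.0, 0.5])
update = harmonize_gradients([g_quality, g_safety])
# The combined update is non-decreasing for BOTH rewards:
print(update @ g_quality > 0, update @ g_safety > 0)
```

The key property is that no single reward's weight has to be hand-tuned: the combined update direction never opposes any individual reward gradient.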


Self-Distillation Zero: Self-Revision Turns Binary Rewards into Dense Supervision

Hugging Face Daily Papers · 2026-04-13

Self-Distillation Zero (SD-Zero) is a training method that converts sparse binary rewards into dense token-level supervision through dual-role training, in which a single model acts as both generator and reviser. It achieves 10%+ improvements on math and code reasoning benchmarks with higher sample efficiency than standard RL approaches.
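The summary leaves the reward-conversion step abstract. One simple way a self-revision can densify a binary reward is to credit each generated token by whether the model's own revised output kept it; the toy function below sketches that idea under assumed inputs (a token list, a self-revision, and a 0/1 reward) and is not the paper's exact recipe.

```python
def dense_labels_from_revision(generated, revised, binary_reward):
    """Illustrative sketch (an assumption, NOT SD-Zero's exact recipe):
    turn one sequence-level binary reward into per-token labels by
    comparing the generated tokens with the model's own revision.

    - If the binary reward is 1, every generated token is credited.
    - Otherwise, tokens the reviser left unchanged get credit 1.0 and
      tokens the reviser altered get credit 0.0.
    """
    if binary_reward == 1:
        return [1.0] * len(generated)
    # Positional comparison for simplicity; a real system would likely
    # use an edit-distance alignment between the two sequences.
    return [
        1.0 if i < len(revised) and revised[i] == tok else 0.0
        for i, tok in enumerate(generated)
    ]

gen = ["2", "+", "2", "=", "5"]
rev = ["2", "+", "2", "=", "4"]  # the self-revision fixes the answer
print(dense_labels_from_revision(gen, rev, binary_reward=0))
# -> [1.0, 1.0, 1.0, 1.0, 0.0]
```

Instead of a single 0 for the whole sequence, training now sees exactly which token was at fault, which is where the claimed sample-efficiency gain over sparse-reward RL would come from.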
