Researchers from MIT, WPI, and Google propose WRING, a novel post-processing debiasing method for Vision-Language Models that avoids the 'Whac-a-mole dilemma' of amplifying other biases when removing specific ones.
This paper proposes Product-of-Experts (PoE) training to reduce reliance on dataset artifacts in Natural Language Inference: the training loss is downweighted on examples where a biased model is overconfident, so the main model learns less from artifact-driven shortcuts. PoE nearly preserves accuracy on SNLI (89.10% vs. 89.30%) while reducing bias reliance by roughly 4.85 percentage points.
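The PoE combination described above can be sketched in a few lines: the main model's and the biased model's log-probabilities are summed, and cross-entropy is taken over the combined distribution. When the biased model is already confident on the gold label, the combined distribution matches the label and the gradient flowing to the main model shrinks. This is a minimal NumPy sketch under assumed conventions (function names `poe_loss`/`poe_grad_main` are hypothetical, not from the paper):

```python
import numpy as np

def log_softmax(z):
    # Numerically stable log-softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def poe_loss(main_logits, bias_logits, labels):
    # Product of Experts: experts are combined by summing log-probabilities,
    # then cross-entropy is computed on the combined (renormalized) distribution.
    combined = log_softmax(main_logits) + log_softmax(bias_logits)
    log_probs = log_softmax(combined)
    return -log_probs[np.arange(len(labels)), labels].mean()

def poe_grad_main(main_logits, bias_logits, labels):
    # Gradient of the PoE cross-entropy w.r.t. the main model's logits:
    # softmax(combined) - one_hot(label). If the biased expert already puts
    # mass on the gold label, this gradient is small -- the example is
    # effectively downweighted for the main model.
    combined = log_softmax(main_logits) + log_softmax(bias_logits)
    probs = np.exp(log_softmax(combined))
    onehot = np.eye(main_logits.shape[-1])[labels]
    return probs - onehot

# Example: with an undecided main model, a confident (and correct) biased
# expert yields a much smaller gradient than an uninformative one.
labels = np.array([0])
main = np.zeros((1, 3))                       # main model undecided
grad_biased = poe_grad_main(main, np.array([[5.0, 0.0, 0.0]]), labels)
grad_uniform = poe_grad_main(main, np.zeros((1, 3)), labels)
```

At inference time only the main model is used; the biased expert exists solely to reshape the training signal.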