long-tail-knowledge


Intermittent random token injection during the decoding stage increases LLM diversity without fine-tuning

Reddit r/ArtificialInteligence · 2d ago

A Harvard research paper introduces Recoding-Decoding (RD), a novel decoding scheme that injects random priming phrases and diverting tokens to tap into an LLM's long-tail knowledge, significantly boosting output diversity without fine-tuning. The method maintains high relevance while mitigating response homogenization, with stronger models showing greater diversity gains.
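The core idea can be illustrated with a toy sketch: during autoregressive decoding, occasionally append a randomly drawn "diverting" token to the context before sampling continues, nudging generation off the highest-probability path. This is a minimal illustration under stated assumptions, not the paper's actual RD algorithm; the function names, the injection probability `inject_p`, and the stand-in sampler are all hypothetical.

```python
import random

def toy_next_token(context, vocab):
    # Stand-in for an LLM's next-token sampler (hypothetical toy:
    # a deterministic function of context length, just for illustration).
    return vocab[(len(context) * 7 + 3) % len(vocab)]

def generate_with_injection(prompt, vocab, steps=20, inject_p=0.15, seed=0):
    """Sketch of intermittent random token injection during decoding:
    at each step, with probability inject_p, a random token from the
    vocabulary is injected into the context before the model samples
    its next token. inject_p and the sampler are illustrative choices."""
    rng = random.Random(seed)
    context = list(prompt)
    for _ in range(steps):
        if rng.random() < inject_p:
            # Diverting token: drawn uniformly at random from the vocabulary,
            # intended to knock decoding off its most-probable trajectory.
            context.append(rng.choice(vocab))
        # Normal decoding step continues from the (possibly perturbed) context.
        context.append(toy_next_token(context, vocab))
    return context
```

With a real model, the injected token shifts the conditioning context, so subsequent continuations diverge across runs even at low temperature, which is the diversity effect the summary describes.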
