Tag: #visual-language-action

IntentVLA: Short-Horizon Intent Modeling for Aliased Robot Manipulation

Hugging Face Daily Papers · 2d ago

IntentVLA is a history-conditioned visual-language-action framework that stabilizes robot imitation learning by encoding short-horizon intents from recent visual observations, addressing the ambiguity that arises when partial observability makes distinct states look alike (aliasing). It also introduces AliasBench, an ambiguity-aware benchmark for evaluating such methods.
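The abstract does not specify IntentVLA's architecture, but the general recipe it describes (pool a short window of past observation features into an intent vector, then condition the action head on it) can be sketched as follows. The mean-pool intent encoder, layer shapes, and class name here are illustrative assumptions, not the paper's design:

```python
import numpy as np

rng = np.random.default_rng(0)

class HistoryConditionedPolicy:
    """Minimal sketch of history-conditioned action prediction.

    Two visually identical (aliased) observations with different
    recent histories map to different intent vectors, so the action
    head can disambiguate them. All details are assumptions for
    illustration, not IntentVLA's actual architecture.
    """

    def __init__(self, obs_dim, intent_dim, action_dim, history=4):
        self.history = history
        self.buffer = []  # rolling window of past observation features
        self.W_intent = rng.standard_normal((obs_dim, intent_dim)) * 0.1
        self.W_act = rng.standard_normal((obs_dim + intent_dim, action_dim)) * 0.1

    def act(self, obs_feat):
        self.buffer.append(obs_feat)
        window = self.buffer[-self.history:]
        # Short-horizon intent: pool the recent window so the policy's
        # output depends on history, not just the current frame.
        intent = np.tanh(np.mean(window, axis=0) @ self.W_intent)
        # Condition the action head on [current obs, intent].
        return np.concatenate([obs_feat, intent]) @ self.W_act
```

Feeding the same current observation after two different histories yields different actions, which is the property that breaks the aliasing tie.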


FrameSkip: Learning from Fewer but More Informative Frames in VLA Training

Hugging Face Daily Papers · 3d ago

FrameSkip is a data-layer frame selection method that improves Vision-Language-Action (VLA) policy training by prioritizing high-importance frames based on action variation and visual-coherence metrics, achieving a macro-average success rate of 76.15% across three benchmarks while using only 20% of unique frames.
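The scoring described above (prioritize frames by action variation and visual change, keep a small fraction) can be sketched roughly as below. The exact metrics, the 50/50 weighting, and the function name are assumptions; only the keep-top-20%-of-frames idea comes from the summary:

```python
import numpy as np

def select_frames(actions, frames, keep_frac=0.2):
    """Hypothetical frame-selection sketch in the spirit of FrameSkip.

    actions: (T, A) array of per-step action vectors.
    frames:  (T, H, W, C) array of per-step images.
    Returns the time-ordered indices of the top keep_frac fraction of
    frames by a combined action-variation + visual-change score.
    """
    T = len(actions)
    # Action variation: L2 norm of the action delta at each step.
    deltas = np.linalg.norm(np.diff(actions, axis=0), axis=1)
    action_score = np.concatenate([[deltas[0]], deltas])  # pad first step
    # Visual change: mean absolute pixel difference between frames
    # (a crude stand-in for the paper's visual-coherence metric).
    fdiff = np.abs(np.diff(frames.astype(np.float32), axis=0)).mean(axis=(1, 2, 3))
    visual_score = np.concatenate([[fdiff[0]], fdiff])

    def norm01(x):
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

    # Equal weighting of the two terms is an assumption.
    score = 0.5 * norm01(action_score) + 0.5 * norm01(visual_score)
    k = max(1, int(keep_frac * T))
    return np.sort(np.argsort(score)[-k:])  # top-k frames, in time order
```

Because selection happens at the data layer, the downstream VLA training loop is unchanged; it simply iterates over the returned subset of frames.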
