vision-transformers


Elastic Attention Cores for Scalable Vision Transformers [R]

Reddit r/MachineLearning · 4h ago

The post presents a new paper on Elastic Attention Cores for Vision Transformers, which proposes a core-periphery block-sparse attention structure and reports better scalability and accuracy than dense self-attention baselines such as DINOv3.
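To make the "core-periphery block-sparse" idea concrete, here is a minimal NumPy sketch of one plausible block-level attention mask: a few "core" token blocks attend globally, while "periphery" blocks attend only to the core and to themselves. The function name and exact structure are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def core_periphery_mask(num_blocks: int, num_core: int) -> np.ndarray:
    """Illustrative block-level attention mask (not the paper's exact scheme).

    Core blocks (indices < num_core) attend to every block; periphery
    blocks attend only to the core blocks and to themselves.
    """
    mask = np.zeros((num_blocks, num_blocks), dtype=bool)
    mask[:num_core, :] = True   # core rows: attend to all blocks
    mask[:, :num_core] = True   # every block attends to the core
    np.fill_diagonal(mask, True)  # each block attends to itself
    return mask

mask = core_periphery_mask(num_blocks=8, num_core=2)
# Fraction of block pairs actually computed, versus dense attention:
density = mask.mean()
```

With 8 blocks and 2 core blocks, only 34 of 64 block pairs are computed, so the cost of the block-level attention is roughly half of dense, and the saving grows as the periphery grows relative to the core.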
