Vision Transformers

Unveil Benign Overfitting for Transformer in Vision: Training Dynamics, Convergence, and Generalization
·1822 words·9 mins
AI Generated · Computer Vision · Vision Transformers · 🏢 University of Tokyo
Vision Transformers (ViTs) generalize surprisingly well, even when overfitting training data; this work provides the first theoretical explanation by characterizing the optimization dynamics of ViTs a…
Dissecting Query-Key Interaction in Vision Transformers
·3134 words·15 mins
Vision Transformers · 🏢 University of Miami
Vision transformers’ self-attention mechanism is dissected, revealing how early layers focus on similar features for perceptual grouping while later layers integrate dissimilar features for contextuali…