FAIR at Meta
You Don't Need Domain-Specific Data Augmentations When Scaling Self-Supervised Learning
2133 words · 11 mins
AI Generated
Machine Learning
Self-Supervised Learning
FAIR at Meta
Self-supervised learning's reliance on complex data augmentations is challenged; a large-scale study shows comparable performance using only cropping, suggesting dataset size is more important than augmentation design.
On Improved Conditioning Mechanisms and Pre-training Strategies for Diffusion Models
3235 words · 16 mins
Computer Vision
Image Generation
FAIR at Meta
Researchers achieve state-of-the-art image generation by disentangling semantic and control metadata in diffusion models and optimizing pre-training across resolutions.
Measuring Déjà vu Memorization Efficiently
2794 words · 14 mins
Computer Vision
Representation Learning
FAIR at Meta
A new method efficiently measures how much AI models memorize from their training data, revealing that open-source models memorize less than expected.