Self-Supervised Learning
Your Diffusion Model is Secretly a Noise Classifier and Benefits from Contrastive Training
·2424 words·12 mins·
Machine Learning
Self-Supervised Learning
🏢 UC Riverside
Diffusion models benefit from contrastive training, improving sample quality and speed by addressing poor denoiser estimation in out-of-distribution regions.
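The "secretly a noise classifier" framing can be made concrete: a denoiser's features carry enough information to predict which noise level corrupted a sample. Below is a minimal, hypothetical sketch of attaching such an auxiliary noise-level classification term to the standard DDPM loss; `denoiser` is assumed to return features alongside its noise prediction, and `classifier_head` and the 0.1 weight are illustrative, not the paper's actual objective.

```python
import torch
import torch.nn.functional as F

def training_step(denoiser, classifier_head, x0, alphas_cumprod, num_timesteps=1000):
    b = x0.shape[0]
    t = torch.randint(0, num_timesteps, (b,), device=x0.device)
    noise = torch.randn_like(x0)
    a = alphas_cumprod[t].view(b, *([1] * (x0.dim() - 1)))
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise     # forward diffusion

    eps_pred, feats = denoiser(x_t, t)               # assumed: also returns features
    denoise_loss = F.mse_loss(eps_pred, noise)       # standard DDPM objective

    # Auxiliary "noise classifier": predict which timestep corrupted x_t.
    logits = classifier_head(feats)                  # (b, num_timesteps)
    cls_loss = F.cross_entropy(logits, t)
    return denoise_loss + 0.1 * cls_loss             # 0.1 is an assumed weight
```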
Your contrastive learning problem is secretly a distribution alignment problem
·381 words·2 mins·
Machine Learning
Self-Supervised Learning
🏢 University of Toronto
Contrastive learning is reframed as a distribution alignment problem, leading to a flexible framework (GCA) that improves representation learning with unbalanced optimal transport.
You Don't Need Domain-Specific Data Augmentations When Scaling Self-Supervised Learning
·2133 words·11 mins·
AI Generated
Machine Learning
Self-Supervised Learning
🏢 FAIR at Meta
Self-supervised learning’s reliance on complex data augmentations is challenged; a large-scale study shows comparable performance using only cropping, suggesting dataset size is more important than augmentations.
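The study's crop-only recipe is easy to picture next to a typical hand-crafted stack. A sketch using torchvision, with common default parameters that are not necessarily those used in the paper:

```python
from torchvision import transforms

# Typical hand-crafted SSL augmentation stack.
full_pipeline = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
    transforms.RandomGrayscale(p=0.2),
    transforms.GaussianBlur(kernel_size=23),
    transforms.ToTensor(),
])

# The crop-only recipe the study finds competitive at scale.
crop_only = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.ToTensor(),
])
```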
Unified Graph Augmentations for Generalized Contrastive Learning on Graphs
·2324 words·11 mins·
Machine Learning
Self-Supervised Learning
🏢 Hebei University of Technology
The Unified Graph Augmentations (UGA) module boosts graph contrastive learning by unifying diverse augmentation strategies, improving model generalizability and efficiency.
Understanding the Role of Equivariance in Self-supervised Learning
·2016 words·10 mins·
AI Generated
Machine Learning
Self-Supervised Learning
🏢 MIT
E-SSL’s generalization ability is rigorously analyzed via an information-theoretic lens, revealing key design principles for improved performance.
Uncovering the Redundancy in Graph Self-supervised Learning Models
·2804 words·14 mins·
AI Generated
Machine Learning
Self-Supervised Learning
🏢 Beihang University
Graph self-supervised learning models surprisingly exhibit high redundancy, allowing for significant parameter reduction without performance loss. A novel framework, SLIDE, leverages this discovery f…
Towards a 'Universal Translator' for Neural Dynamics at Single-Cell, Single-Spike Resolution
·2778 words·14 mins·
Machine Learning
Self-Supervised Learning
🏢 Columbia University
A new self-supervised learning approach, Multi-task Masking (MtM), significantly improves the prediction accuracy of neural population activity by capturing neural dynamics at multiple spatial scales,…
The Benefits of Balance: From Information Projections to Variance Reduction
·1859 words·9 mins·
Machine Learning
Self-Supervised Learning
🏢 University of Washington
Data balancing in foundation models surprisingly reduces variance, improving model training and performance.
Self-supervised Transformation Learning for Equivariant Representations
·2895 words·14 mins·
AI Generated
Machine Learning
Self-Supervised Learning
🏢 Korea Advanced Institute of Science and Technology (KAIST)
Self-Supervised Transformation Learning (STL) enhances equivariant representations by replacing transformation labels with image-pair-derived representations, improving performance on diverse classification tasks.
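The core mechanism, deriving a transformation representation from an image pair instead of a discrete transformation label, can be sketched as below. All modules are toy stand-ins, and the pairing scheme (two image pairs sharing one sampled transformation, so the code learned from one pair must predict the other pair's change) is an assumed way to keep the objective non-degenerate, not necessarily the authors' exact design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256))  # toy encoder
pair_head = nn.Linear(512, 128)   # (z, z_t) -> label-free transformation code
pred_head = nn.Linear(384, 256)   # (z, code) -> predicted transformed features

def stl_style_loss(x1, x1_t, x2, x2_t):
    """(x1, x1_t) and (x2, x2_t) share the SAME sampled transformation."""
    code = pair_head(torch.cat([encoder(x1), encoder(x1_t)], dim=-1))
    z2, z2_t = encoder(x2), encoder(x2_t)
    z_pred = pred_head(torch.cat([z2, code], dim=-1))   # equivariant prediction
    return F.mse_loss(z_pred, z2_t.detach())
```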
Self-Supervised Adversarial Training via Diverse Augmented Queries and Self-Supervised Double Perturbation
·2025 words·10 mins·
Machine Learning
Self-Supervised Learning
🏢 Institute of Computing Technology, Chinese Academy of Sciences
DAQ-SDP enhances self-supervised adversarial training by using diverse augmented queries, a self-supervised double perturbation scheme, and a novel Aug-Adv Pairwise-BatchNorm method, bridging the gap …
Self-Labeling the Job Shop Scheduling Problem
·2214 words·11 mins·
AI Generated
Machine Learning
Self-Supervised Learning
🏢 University of Modena and Reggio Emilia
Self-labeling improves generative model training for combinatorial problems: the model's own best sampled solutions serve as pseudo-labels, removing the need for costly optimal solutions as supervision.
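A hedged sketch of such a self-labeling loop: sample several solutions from the generative model, keep the best one by makespan, and train on it as if it were ground truth. `model.sample`, `model.log_prob`, and `makespan` are assumed interfaces, not the authors' actual API:

```python
import torch

def self_labeling_step(model, instance, optimizer, num_samples=16):
    with torch.no_grad():
        solutions = [model.sample(instance) for _ in range(num_samples)]
        best = min(solutions, key=lambda s: makespan(instance, s))  # pseudo-label

    optimizer.zero_grad()
    loss = -model.log_prob(instance, best)  # cross-entropy against the best solution
    loss.backward()
    optimizer.step()
    return loss.item()
```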
Self-Healing Machine Learning: A Framework for Autonomous Adaptation in Real-World Environments
·2758 words·13 mins·
Machine Learning
Self-Supervised Learning
🏢 University of Cambridge
Self-healing machine learning (SHML) autonomously diagnoses and fixes model performance degradation caused by data shifts, outperforming reason-agnostic methods.
Self-Guided Masked Autoencoder
·3698 words·18 mins·
AI Generated
Computer Vision
Self-Supervised Learning
🏢 Seoul National University
Self-guided MAE boosts self-supervised learning by intelligently masking image patches based on internal clustering patterns, dramatically accelerating training without external data.
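The clustering-driven masking idea can be sketched as follows: group patch tokens by feature similarity and bias the mask toward whole clusters rather than masking uniformly at random. The two-cluster k-means and the 75% ratio are assumptions for illustration, not the paper's exact procedure:

```python
import torch

def cluster_guided_mask(patch_feats, mask_ratio=0.75, iters=10):
    """patch_feats: (num_patches, dim) token features from the encoder."""
    n = patch_feats.shape[0]
    centers = patch_feats[torch.randperm(n)[:2]].clone()   # init two centers
    for _ in range(iters):                                 # plain k-means, k=2
        assign = torch.cdist(patch_feats, centers).argmin(dim=1)
        for k in range(2):
            if (assign == k).any():
                centers[k] = patch_feats[assign == k].mean(dim=0)
    # Mask one cluster's tokens first (e.g., the object-like group), then top
    # up with tokens from the other cluster until the ratio is met.
    first = (assign == 0).nonzero(as_tuple=True)[0]
    second = (assign == 1).nonzero(as_tuple=True)[0]
    order = torch.cat([first[torch.randperm(len(first))],
                       second[torch.randperm(len(second))]])
    mask = torch.zeros(n, dtype=torch.bool)
    mask[order[:int(mask_ratio * n)]] = True
    return mask
```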
Revisiting Self-Supervised Heterogeneous Graph Learning from Spectral Clustering Perspective
·1781 words·9 mins·
Machine Learning
Self-Supervised Learning
🏢 National University of Singapore
SCHOOL: A novel SHGL framework enhancing spectral clustering with rank and dual consistency constraints, effectively mitigating noise and leveraging cluster-level information for improved downstream tasks.
Resource-Aware Federated Self-Supervised Learning with Global Class Representations
·2832 words·14 mins·
AI Generated
Machine Learning
Self-Supervised Learning
🏢 Shandong University
FedMKD: A multi-teacher framework for federated self-supervised learning, enabling global class representations even with diverse client models and skewed data distributions.
Protected Test-Time Adaptation via Online Entropy Matching: A Betting Approach
·1945 words·10 mins·
Machine Learning
Self-Supervised Learning
🏢 Department of Computer Science, Technion–Israel Institute of Technology
POEM, a novel test-time adaptation approach using online self-training, improves accuracy under distribution shifts by dynamically updating the classifier, ensuring invariance to shifts while maintaini…
Preventing Dimensional Collapse in Self-Supervised Learning via Orthogonality Regularization
·2561 words·13 mins·
Machine Learning
Self-Supervised Learning
🏢 Hong Kong Polytechnic University
Orthogonal regularization prevents dimensional collapse in self-supervised learning, significantly boosting model performance across diverse benchmarks.
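One common form of such a regularizer penalizes the Gram matrix of a layer's weights for deviating from the identity, keeping feature dimensions from collapsing onto each other. A minimal sketch; the paper's exact regularizer and weighting may differ:

```python
import torch

def soft_orthogonality_penalty(weight):
    """Penalize ||W W^T - I||_F^2 so the rows of W stay near-orthonormal."""
    gram = weight @ weight.t()                        # (out_dim, out_dim)
    identity = torch.eye(gram.shape[0], device=weight.device)
    return ((gram - identity) ** 2).sum()

# Hypothetical usage on a projector layer:
# total_loss = ssl_loss + 1e-4 * soft_orthogonality_penalty(projector.weight)
```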
Online Feature Updates Improve Online (Generalized) Label Shift Adaptation
·1991 words·10 mins·
Machine Learning
Self-Supervised Learning
🏢 UC San Diego
Online Label Shift adaptation with Online Feature Updates (OLS-OFU) significantly boosts online label shift adaptation by dynamically refining feature extractors using self-supervised learning, achiev…
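The two-part loop the summary describes, refreshing the features with a self-supervised step and then running a standard online label-shift update on top, can be sketched as below. Entropy minimization stands in for whichever self-supervised objective is plugged in, and the EMA marginal estimate with a uniform-training-prior correction is an illustrative choice, not the paper's algorithm:

```python
import torch

def ols_ofu_step(feature_extractor, classifier, marginals, x, opt, ema=0.05):
    # (1) Online feature update: entropy minimization as a stand-in for the
    # self-supervised objective applied to the incoming unlabeled batch.
    opt.zero_grad()
    probs = classifier(feature_extractor(x)).softmax(dim=-1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()
    entropy.backward()
    opt.step()

    # (2) Online label-shift step on the refreshed features: track the
    # test-time label marginal and reweight predictions by the new prior
    # (assuming a uniform training prior for simplicity).
    with torch.no_grad():
        probs = classifier(feature_extractor(x)).softmax(dim=-1)
        marginals = (1 - ema) * marginals + ema * probs.mean(dim=0)
        preds = (probs * marginals).argmax(dim=-1)
    return preds, marginals
```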
Multiple Physics Pretraining for Spatiotemporal Surrogate Models
·3133 words·15 mins·
Machine Learning
Self-Supervised Learning
🏢 Flatiron Institute
Multiple Physics Pretraining (MPP) revolutionizes spatiotemporal physical surrogate modeling by pretraining transformers on diverse physics simultaneously, enabling accurate predictions on unseen systems.
MSA Generation with Seqs2Seqs Pretraining: Advancing Protein Structure Predictions
·2100 words·10 mins·
Machine Learning
Self-Supervised Learning
🏢 Fudan University
Self-supervised generative model MSA-Generator boosts protein structure prediction accuracy by producing high-quality MSAs, especially for challenging sequences lacking homologs.