
Machine Learning

Single-Loop Stochastic Algorithms for Difference of Max-Structured Weakly Convex Functions
·1750 words·9 mins
AI Generated Machine Learning Optimization 🏢 Texas A&M University
SMAG, a novel single-loop stochastic algorithm, achieves state-of-the-art convergence for solving non-smooth non-convex optimization problems involving differences of max-structured weakly convex functions.
Simulation-Free Training of Neural ODEs on Paired Data
·3545 words·17 mins
AI Generated Machine Learning Deep Learning 🏢 KAIST
Train Neural ODEs without simulations, achieving high performance on regression and classification by using flow matching in the embedding space of data pairs.
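A minimal, hypothetical sketch of the flow-matching objective on paired embeddings may help here; the `encoder`, `vector_field`, and all dimensions below are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Illustrative modules and sizes (assumptions, not the paper's architecture).
encoder = nn.Linear(10, 32)                    # embeds inputs x into latent space
vector_field = nn.Sequential(                  # v_theta(z_t, t): learned dynamics
    nn.Linear(32 + 1, 64), nn.Tanh(), nn.Linear(64, 32)
)

def flow_matching_loss(x, y_embed):
    """Regress the vector field onto the straight-line velocity between a
    pair's source and target embeddings: no ODE simulation required."""
    z0 = encoder(x)                            # source embedding
    z1 = y_embed                               # target embedding (paired data)
    t = torch.rand(x.shape[0], 1)              # random time in [0, 1]
    zt = (1 - t) * z0 + t * z1                 # point on the interpolating path
    target_velocity = z1 - z0                  # constant velocity of that path
    pred = vector_field(torch.cat([zt, t], dim=-1))
    return ((pred - target_velocity) ** 2).mean()

loss = flow_matching_loss(torch.randn(8, 10), torch.randn(8, 32))
loss.backward()                                # trains encoder and field jointly
```

At inference one would integrate `vector_field` with an ODE solver; training itself never simulates the ODE, which is the point of the simulation-free approach.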
Simplifying Latent Dynamics with Softly State-Invariant World Models
·2423 words·12 mins
Machine Learning Reinforcement Learning 🏢 Max Planck Institute for Biological Cybernetics
This paper introduces the Parsimonious Latent Space Model (PLSM), a novel world model that regularizes latent dynamics to improve action predictability, enhancing RL performance.
Simplifying Constraint Inference with Inverse Reinforcement Learning
·1653 words·8 mins
Machine Learning Reinforcement Learning 🏢 University of Toronto
This paper simplifies constraint inference in reinforcement learning, demonstrating that standard inverse RL methods can effectively infer constraints from expert data, surpassing complex, previously proposed approaches.
Similarity-Navigated Conformal Prediction for Graph Neural Networks
·2658 words·13 mins
Machine Learning Semi-Supervised Learning 🏢 State Key Laboratory of Novel Software Technology, Nanjing University
SNAPS: a novel algorithm that boosts graph neural network accuracy by efficiently aggregating non-conformity scores, improving prediction sets without sacrificing validity.
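As background, SNAPS builds on split conformal prediction. The sketch below shows only the generic base recipe (calibration scores, quantile, prediction sets); SNAPS's contribution, aggregating non-conformity scores over similar nodes, is not reproduced here.

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Plain split conformal prediction with the 1 - p(true class) score."""
    n = len(cal_labels)
    # Non-conformity score on calibration nodes: 1 - probability of true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected (1 - alpha) quantile of the calibration scores.
    q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")
    # Prediction set: every class whose non-conformity score is below the threshold.
    return (1.0 - test_probs) <= q             # boolean mask (n_test, n_classes)

cal_probs = np.random.dirichlet(np.ones(5), size=1000)  # toy GNN softmax outputs
cal_labels = np.random.randint(0, 5, size=1000)
test_probs = np.random.dirichlet(np.ones(5), size=10)
print(conformal_prediction_sets(cal_probs, cal_labels, test_probs).sum(axis=1))
```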
Sigmoid Gating is More Sample Efficient than Softmax Gating in Mixture of Experts
·1350 words·7 mins
Machine Learning Deep Learning 🏢 University of Texas at Austin
Sigmoid gating significantly boosts sample efficiency in Mixture of Experts models compared to softmax gating, offering faster convergence rates for various expert functions.
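The mechanical difference between the two gates is tiny; a toy sketch (sizes and dense routing are illustrative assumptions): softmax couples the expert weights so experts compete, while sigmoid gates each expert independently.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoE(nn.Module):
    """Dense mixture-of-experts layer; `gating` picks the normalization."""
    def __init__(self, d_in=16, d_out=8, n_experts=4, gating="sigmoid"):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(d_in, d_out) for _ in range(n_experts))
        self.router = nn.Linear(d_in, n_experts)
        self.gating = gating

    def forward(self, x):
        logits = self.router(x)                    # (batch, n_experts)
        if self.gating == "softmax":
            w = F.softmax(logits, dim=-1)          # weights sum to 1: experts compete
        else:
            w = torch.sigmoid(logits)              # each expert gated independently
        outs = torch.stack([e(x) for e in self.experts], dim=-1)  # (batch, d_out, E)
        return (outs * w.unsqueeze(1)).sum(dim=-1)

x = torch.randn(2, 16)
print(ToyMoE(gating="sigmoid")(x).shape, ToyMoE(gating="softmax")(x).shape)
```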
Sharpness-diversity tradeoff: improving flat ensembles with SharpBalance
·2661 words·13 mins
Machine Learning Deep Learning 🏢 UC San Diego
SharpBalance, a novel training approach, effectively improves deep ensemble performance by addressing the sharpness-diversity trade-off, leading to significant improvements in both in-distribution and out-of-distribution settings.
Shape analysis for time series
·2156 words·11 mins
Machine Learning Representation Learning 🏢 Université Paris-Saclay
TS-LDDMM: unsupervised time-series analysis that handles irregular data, offering interpretable shape-based representations and outperforming existing methods on benchmarks.
Shadowheart SGD: Distributed Asynchronous SGD with Optimal Time Complexity Under Arbitrary Computation and Communication Heterogeneity
·2146 words·11 mins
AI Generated Machine Learning Federated Learning 🏢 KAUST AIRI
Shadowheart SGD achieves optimal time complexity for asynchronous SGD in distributed settings with arbitrary computation and communication heterogeneity.
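To make "asynchronous with heterogeneity" concrete, here is a toy single-process simulation in which staleness is modeled as a random delay; none of this reflects Shadowheart SGD's actual update rule or its optimality analysis.

```python
import numpy as np

def async_sgd(grad_fn, x0, steps=200, lr=0.1, max_delay=5, seed=0):
    """Simulate a parameter server applying stochastic gradients that were
    computed at stale parameter copies, as slow workers would produce."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    history = [x.copy()]
    for _ in range(steps):
        delay = int(rng.integers(0, min(max_delay, len(history))))
        stale_x = history[-1 - delay]               # parameters `delay` steps old
        g = grad_fn(stale_x) + rng.normal(0.0, 0.1, size=x.shape)  # noisy gradient
        x = x - lr * g                              # server update on arrival
        history.append(x.copy())
    return x

grad = lambda x: 2.0 * x                            # gradient of f(x) = ||x||^2
print(async_sgd(grad, np.ones(3)))                  # converges near the origin
```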
Set-based Neural Network Encoding Without Weight Tying
·5047 words·24 mins
AI Generated Machine Learning Deep Learning 🏢 University of Oxford
Set-based Neural Network Encoder (SNE) efficiently encodes neural network weights for property prediction, eliminating the need for architecture-specific models and improving generalization across datasets.
SequentialAttention++ for Block Sparsification: Differentiable Pruning Meets Combinatorial Optimization
·1692 words·8 mins
Machine Learning Deep Learning 🏢 Google Research
SequentialAttention++ unites differentiable pruning with combinatorial optimization for efficient and accurate neural network block sparsification, achieving state-of-the-art results.
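A rough sketch of the "differentiable pruning" half of the idea, under assumptions: score each weight block, convert the scores to a soft attention mask, and let gradients flow through it. The combinatorial-selection half of SequentialAttention++ is not shown, and this is not the paper's exact scoring or schedule.

```python
import torch

def soft_block_mask(weight, block=4, temperature=1.0):
    """Softly down-weight unimportant blocks of a weight matrix."""
    rows, cols = weight.shape
    blocks = weight.reshape(rows // block, block, cols // block, block)
    scores = blocks.pow(2).sum(dim=(1, 3))            # importance per block
    attn = torch.softmax(scores.flatten() / temperature, dim=0)
    mask = (attn / attn.max()).reshape(scores.shape)  # strongest block kept at 1
    return (blocks * mask[:, None, :, None]).reshape(rows, cols)

w = torch.randn(8, 8, requires_grad=True)
soft_block_mask(w).sum().backward()                   # gradients reach the scores
print(w.grad.shape)
```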
Sequential Harmful Shift Detection Without Labels
·2657 words·13 mins
Machine Learning Deep Learning 🏢 J.P. Morgan AI Research
This paper introduces a novel, label-free method for detecting harmful distribution shifts in machine learning models deployed in production environments, leveraging a proxy error derived from an error estimator.
Sequential Decision Making with Expert Demonstrations under Unobserved Heterogeneity
·1771 words·9 mins
Machine Learning Reinforcement Learning 🏢 University of Toronto
ExPerior leverages expert demonstrations to enhance online decision-making, even when experts use hidden contextual information unseen by the learner.
Semi-supervised Knowledge Transfer Across Multi-omic Single-cell Data
·2468 words·12 mins
Machine Learning Semi-Supervised Learning 🏢 Georgia Institute of Technology
DANCE, a novel semi-supervised framework, efficiently transfers cell types across multi-omic single-cell data even with limited labeled samples, outperforming current state-of-the-art methods.
Self-supervised Transformation Learning for Equivariant Representations
·2895 words·14 mins
AI Generated Machine Learning Self-Supervised Learning 🏢 Korea Advanced Institute of Science and Technology (KAIST)
Self-Supervised Transformation Learning (STL) enhances equivariant representations by replacing transformation labels with image-pair-derived representations, improving performance on diverse classification tasks.
Self-Supervised Adversarial Training via Diverse Augmented Queries and Self-Supervised Double Perturbation
·2025 words·10 mins
Machine Learning Self-Supervised Learning 🏢 Institute of Computing Technology, Chinese Academy of Sciences
DAQ-SDP enhances self-supervised adversarial training by using diverse augmented queries, a self-supervised double perturbation scheme, and a novel Aug-Adv Pairwise-BatchNorm method, bridging the gap …
Self-Refining Diffusion Samplers: Enabling Parallelization via Parareal Iterations
·2449 words·12 mins
Machine Learning Deep Learning 🏢 Stanford University
Self-Refining Diffusion Samplers (SRDS) dramatically speeds up diffusion model sampling by leveraging Parareal iterations for parallel-in-time computation, maintaining high-quality outputs.
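Parareal itself is a generic parallel-in-time pattern. The skeleton below (with toy dynamics, not a diffusion model) shows the coarse-sweep-plus-parallel-fine-correction structure that SRDS applies to diffusion sampling steps.

```python
import numpy as np

def parareal(coarse, fine, x0, n_steps, n_iters):
    """Parareal: U[k+1][n+1] = G(U[k+1][n]) + F(U[k][n]) - G(U[k][n])."""
    x = [x0]
    for _ in range(n_steps):                    # initial serial coarse sweep
        x.append(coarse(x[-1]))
    for _ in range(n_iters):
        corrections = [fine(x[n]) for n in range(n_steps)]  # parallelizable loop
        new_x = [x0]
        for n in range(n_steps):                # cheap serial correction sweep
            new_x.append(coarse(new_x[n]) + corrections[n] - coarse(x[n]))
        x = new_x
    return x

coarse = lambda x: x - 0.1 * x                           # one Euler step of x' = -x
fine = lambda x: (x - 0.05 * x) - 0.05 * (x - 0.05 * x)  # two finer Euler steps
print(parareal(coarse, fine, np.float64(1.0), n_steps=10, n_iters=3)[-1])
```

The fine solves inside each iteration are independent across time steps, which is where the parallel speedup comes from.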
Self-Labeling the Job Shop Scheduling Problem
·2214 words·11 mins
AI Generated Machine Learning Self-Supervised Learning 🏢 University of Modena and Reggio Emilia
Self-labeling improves generative model training for combinatorial problems.
Self-Healing Machine Learning: A Framework for Autonomous Adaptation in Real-World Environments
·2758 words·13 mins
Machine Learning Self-Supervised Learning 🏢 University of Cambridge
Self-healing machine learning (SHML) autonomously diagnoses and fixes model performance degradation caused by data shifts, outperforming reason-agnostic methods.
SEL-BALD: Deep Bayesian Active Learning for Selective Labeling with Instance Rejection
·2048 words·10 mins
Machine Learning Active Learning 🏢 University of Texas at Dallas
SEL-BALD tackles the challenge of human discretion in active learning by proposing novel algorithms that account for instance rejection, significantly boosting sample efficiency.
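For context, the classic BALD score that SEL-BALD extends is the mutual information between predictions and model parameters. The sketch below estimates it from Monte Carlo samples and, as a purely hypothetical stand-in for SEL-BALD's rejection modeling, weights it by an assumed probability that a human agrees to label each point.

```python
import numpy as np

def bald_score(mc_probs, eps=1e-12):
    """BALD = H[E_w p(y|x,w)] - E_w H[p(y|x,w)], from MC (e.g. dropout) samples.
    mc_probs: (n_mc_samples, n_points, n_classes) predictive probabilities."""
    mean_p = mc_probs.mean(axis=0)                       # marginal predictive
    entropy_of_mean = -(mean_p * np.log(mean_p + eps)).sum(axis=-1)
    mean_of_entropy = -(mc_probs * np.log(mc_probs + eps)).sum(axis=-1).mean(axis=0)
    return entropy_of_mean - mean_of_entropy             # high = epistemic uncertainty

mc_probs = np.random.dirichlet(np.ones(3), size=(20, 5)) # 20 MC samples, 5 candidates
p_label = np.random.uniform(0.5, 1.0, size=5)            # hypothetical rejection model
print(bald_score(mc_probs) * p_label)                    # acquire the argmax of this
```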