Machine Learning
SPEAR: Exact Gradient Inversion of Batches in Federated Learning
·2907 words·14 mins·
Machine Learning
Federated Learning
🏢 ETH Zurich
SPEAR, a novel algorithm, precisely reconstructs entire data batches from gradients in federated learning, defying previous limitations and enhancing privacy risk assessment.
SpeAr: A Spectral Approach for Zero-Shot Node Classification
·1935 words·10 mins·
Machine Learning
Semi-Supervised Learning
🏢 North University of China
SpeAr: A novel spectral approach significantly improves zero-shot node classification by using inherent graph structure to reduce prediction bias and effectively identify unseen node classes.
Spatio-Spectral Graph Neural Networks
·4413 words·21 mins·
Machine Learning
Deep Learning
🏢 Technical University of Munich
Spatio-Spectral GNNs synergistically combine spatial and spectral graph filters for efficient, global information propagation, overcoming limitations of existing methods.
Sparsity-Agnostic Linear Bandits with Adaptive Adversaries
·336 words·2 mins·
Machine Learning
Reinforcement Learning
🏢 National University of Singapore
SparseLinUCB: First sparse regret bounds for adversarial action sets with unknown sparsity, achieving superior performance over existing methods!
Sparse maximal update parameterization: A holistic approach to sparse training dynamics
·3095 words·15 mins·
AI Generated
Machine Learning
Deep Learning
🏢 Cerebras Systems
SµPar stabilizes sparse neural network training, slashing tuning costs and boosting performance, especially at high sparsity levels, via a novel parameterization technique.
Sparse Bayesian Generative Modeling for Compressive Sensing
·2495 words·12 mins·
AI Generated
Machine Learning
Deep Learning
🏢 TUM School of Computation, Information and Technology
A new learnable prior for compressive sensing solves the inverse problem using only a few corrupted data samples, enabling sparse signal recovery without ground-truth information while providing uncertainty quantification.
SPARKLE: A Unified Single-Loop Primal-Dual Framework for Decentralized Bilevel Optimization
·1927 words·10 mins·
Machine Learning
Meta Learning
🏢 Peking University
SPARKLE: A single-loop primal-dual framework unifies decentralized bilevel optimization, enabling flexible heterogeneity-correction and mixed update strategies for improved convergence.
SpaFL: Communication-Efficient Federated Learning With Sparse Models And Low Computational Overhead
·2099 words·10 mins·
AI Generated
Machine Learning
Federated Learning
🏢 Virginia Tech
SpaFL: A communication-efficient federated learning framework that optimizes sparse model structures with low computational overhead by using trainable thresholds to prune model parameters.
Sourcerer: Sample-based Maximum Entropy Source Distribution Estimation
·4767 words·23 mins·
AI Generated
Machine Learning
Deep Learning
🏢 University of Tübingen
Sourcerer: A novel sample-based method for maximum entropy source distribution estimation, resolving ill-posedness while maintaining simulation accuracy.
Solving Zero-Sum Markov Games with Continuous State via Spectral Dynamic Embedding
·391 words·2 mins·
Machine Learning
Reinforcement Learning
🏢 Zhejiang University
SDEPO, a new natural policy gradient algorithm, efficiently solves zero-sum Markov games with continuous state spaces, achieving near-optimal convergence independent of state space cardinality.
Solving Sparse & High-Dimensional-Output Regression via Compression
·2050 words·10 mins·
AI Generated
Machine Learning
Optimization
🏢 National University of Singapore
SHORE: a novel two-stage framework efficiently solves sparse & high-dimensional output regression, boosting interpretability and scalability.
Solving Minimum-Cost Reach Avoid using Reinforcement Learning
·2253 words·11 mins·
AI Generated
Machine Learning
Reinforcement Learning
🏢 MIT
RC-PPO: Reinforcement learning solves minimum-cost reach-avoid problems with up to 57% lower costs!
Soft ascent-descent as a stable and flexible alternative to flooding
·2106 words·10 mins·
Machine Learning
Deep Learning
🏢 Osaka University
Soft ascent-descent (SoftAD) softens the flooding method, delivering competitive test accuracy and generalization while reducing loss and model complexity.
Small steps no more: Global convergence of stochastic gradient bandits for arbitrary learning rates
·447 words·3 mins·
AI Generated
Machine Learning
Reinforcement Learning
🏢 Google DeepMind
Stochastic gradient bandit algorithms now guaranteed to globally converge, using ANY constant learning rate!
SLowcalSGD : Slow Query Points Improve Local-SGD for Stochastic Convex Optimization
·362 words·2 mins·
Machine Learning
Federated Learning
🏢 Technion
SLowcal-SGD, a new local update method for distributed learning, provably outperforms Minibatch-SGD and Local-SGD in heterogeneous settings by using a slow querying technique, mitigating bias from local updates.
SleeperNets: Universal Backdoor Poisoning Attacks Against Reinforcement Learning Agents
·2849 words·14 mins·
Machine Learning
Reinforcement Learning
🏢 Khoury College of Computer Sciences, Northeastern University
SleeperNets: A universal backdoor attack against RL agents, achieving 100% success rate across diverse environments while preserving benign performance.
Skill-aware Mutual Information Optimisation for Zero-shot Generalisation in Reinforcement Learning
·5509 words·26 mins·
AI Generated
Machine Learning
Reinforcement Learning
🏢 University of Edinburgh
Skill-aware Mutual Information optimization enhances RL agent generalization across diverse tasks by distinguishing context embeddings based on skills, leading to improved zero-shot performance and robustness.
SkiLD: Unsupervised Skill Discovery Guided by Factor Interactions
·2028 words·10 mins·
Machine Learning
Reinforcement Learning
🏢 University of Texas at Austin
SkiLD, a novel unsupervised skill discovery method, uses state factorization and a new objective function to learn skills inducing diverse interactions between state factors, outperforming existing methods.
Sketching for Distributed Deep Learning: A Sharper Analysis
·3663 words·18 mins·
AI Generated
Machine Learning
Federated Learning
🏢 University of Illinois Urbana-Champaign
This work presents a sharper analysis of sketching for distributed deep learning, eliminating the problematic dependence on ambient dimension in convergence analysis and proving ambient dimension-independent convergence.
Sketched Lanczos uncertainty score: a low-memory summary of the Fisher information
·2226 words·11 mins·
Machine Learning
Deep Learning
🏢 Technical University of Denmark
SLU: a novel, low-memory uncertainty score for neural networks, achieves logarithmic memory scaling with model parameters, providing well-calibrated uncertainties and outperforming existing methods.