AI Theory

Provable Acceleration of Nesterov's Accelerated Gradient for Asymmetric Matrix Factorization and Linear Neural Networks
·1572 words·8 mins
AI Theory Optimization 🏒 Georgia Institute of Technology
This paper proves Nesterov’s Accelerated Gradient achieves faster convergence for rectangular matrix factorization and linear neural networks, using a novel unbalanced initialization.
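The summary names two ingredients: Nesterov momentum applied directly to the factors, and an unbalanced initialization. A minimal NumPy sketch of that setting is below; the step size, momentum, and the particular initialization scales are illustrative assumptions, not the paper's prescription.

```python
import numpy as np

def nag_factorize(A, r, eta=0.005, beta=0.9, iters=2000, seed=0):
    """Nesterov's Accelerated Gradient on f(X, Y) = 0.5 * ||X @ Y.T - A||_F^2."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Unbalanced initialization (scales here are illustrative): one factor
    # starts much larger than the other, instead of a balanced small init.
    X = 1.0 * rng.standard_normal((m, r))
    Y = 1e-3 * rng.standard_normal((n, r))
    X_prev, Y_prev = X.copy(), Y.copy()
    for _ in range(iters):
        # Extrapolate (look-ahead point), then take a gradient step from it.
        Xt, Yt = X + beta * (X - X_prev), Y + beta * (Y - Y_prev)
        R = Xt @ Yt.T - A
        X_prev, Y_prev = X, Y
        X = Xt - eta * (R @ Yt)      # grad wrt X is (X Y^T - A) Y
        Y = Yt - eta * (R.T @ Xt)    # grad wrt Y is (X Y^T - A)^T X
    return X, Y

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 5)) @ rng.standard_normal((5, 20)) / 5.0  # rank-5 target
X, Y = nag_factorize(A, r=5)
print(np.linalg.norm(X @ Y.T - A))  # residual decreases toward zero
```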
ProTransformer: Robustify Transformers via Plug-and-Play Paradigm
·6210 words·30 mins
AI Generated AI Theory Robustness 🏒 North Carolina State University
ProTransformer robustifies transformers with a novel plug-and-play attention mechanism, significantly improving robustness across various tasks and domains without retraining.
Proportional Fairness in Non-Centroid Clustering
·2752 words·13 mins
AI Theory Fairness 🏒 Aarhus University
This paper introduces proportionally fair non-centroid clustering, achieving fairness guarantees via novel algorithms and auditing methods, demonstrating significant improvements over traditional methods.
Proportional Fairness in Clustering: A Social Choice Perspective
·289 words·2 mins
AI Theory Fairness 🏒 Technische UniversitĂ€t Clausthal
This paper reveals the surprising connection between individual and proportional fairness in clustering, showing that any approximation to one directly implies an approximation to the other.
Promoting Fairness Among Dynamic Agents in Online-Matching Markets under Known Stationary Arrival Distributions
·1572 words·8 mins
AI Generated AI Theory Fairness 🏒 Columbia University
This paper presents novel algorithms for online matching markets that prioritize fairness among dynamic agents, achieving asymptotic optimality in various scenarios and offering extensions to group-level fairness.
PRODuctive bandits: Importance Weighting No More
·229 words·2 mins
AI Generated AI Theory Optimization 🏒 Google Research
Prod-family algorithms achieve optimal regret in adversarial multi-armed bandits, disproving prior suboptimality conjectures.
PrivCirNet: Efficient Private Inference via Block Circulant Transformation
·3185 words·15 mins
AI Theory Privacy 🏒 Peking University
PrivCirNet accelerates private deep learning inference by cleverly transforming DNN weights into circulant matrices, converting matrix-vector multiplications into efficient 1D convolutions.
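The circulant trick the summary refers to rests on a standard identity: a circulant matrix-vector product is a circular convolution, which an FFT evaluates in O(n log n) instead of O(n²). A minimal NumPy check of that identity is below; it is illustrative only and not PrivCirNet's actual private-inference pipeline.

```python
import numpy as np

# A circulant matrix is determined by its first column c: entry (i, j) is
# c[(i - j) mod n]. Multiplying it by a vector is therefore a circular 1D
# convolution, which the FFT evaluates in O(n log n) instead of O(n^2).
rng = np.random.default_rng(0)
n = 8
c = rng.standard_normal(n)       # first column of one circulant weight block
x = rng.standard_normal(n)       # input activation slice

C = np.stack([np.roll(c, j) for j in range(n)], axis=1)   # explicit circulant matrix
dense_product = C @ x                                     # ordinary mat-vec

fft_product = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))  # 1D circular convolution

print(np.allclose(dense_product, fft_product))  # True: both compute the same product
```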
Private Stochastic Convex Optimization with Heavy Tails: Near-Optimality from Simple Reductions
·397 words·2 mins
AI Theory Privacy 🏒 Apple
Achieving near-optimal rates for differentially private stochastic convex optimization with heavy-tailed gradients is possible using simple reduction-based techniques.
Private Online Learning via Lazy Algorithms
·475 words·3 mins
AI Generated AI Theory Privacy 🏒 Apple
A new general transformation converts lazy, low-switching online learning algorithms into differentially private online learners, improving the privacy-regret trade-off in online learning.
Private Geometric Median
·1335 words·7 mins
AI Theory Privacy 🏒 Khoury College of Computer Sciences, Northeastern University
This paper introduces new differentially private algorithms to compute the geometric median, achieving improved accuracy by scaling with the effective data diameter instead of a known radius.
Private Edge Density Estimation for Random Graphs: Optimal, Efficient and Robust
·261 words·2 mins
AI Theory Privacy 🏒 ETH Zurich
This paper delivers a groundbreaking polynomial-time algorithm for optimally estimating edge density in random graphs while ensuring node privacy and robustness against data corruption.
Private Algorithms for Stochastic Saddle Points and Variational Inequalities: Beyond Euclidean Geometry
·315 words·2 mins
AI Generated AI Theory Optimization 🏒 Ohio State University
This paper presents novel, privacy-preserving algorithms achieving near-optimal rates for solving stochastic saddle point problems and variational inequalities in non-Euclidean geometries.
Privacy without Noisy Gradients: Slicing Mechanism for Generative Model Training
·1432 words·7 mins
AI Theory Privacy 🏒 MIT-IBM Watson AI Lab, IBM Research
Train high-quality generative models with strong differential privacy using a novel slicing mechanism that injects noise into random low-dimensional data projections, avoiding noisy gradients.
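A rough sketch of the slicing idea the summary describes: perturb random low-dimensional projections of the private data rather than per-example gradients. The function name, noise placement, and scale below are illustrative assumptions; the DP calibration of sigma and the exact statistic released are the paper's contribution and are omitted here.

```python
import numpy as np

def noisy_random_projections(X, num_slices=32, sigma=1.0, seed=0):
    """Illustrative sketch: release noise-perturbed 1-D projections of the
    private data instead of noisy gradients.

    X: (n, d) private dataset. Returns the random directions and the
    perturbed projections; sigma would be calibrated to the desired DP
    guarantee (calibration omitted in this sketch).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Random unit directions ("slices") on the sphere.
    theta = rng.standard_normal((num_slices, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    proj = X @ theta.T                                       # (n, num_slices) projections
    noisy = proj + sigma * rng.standard_normal(proj.shape)   # Gaussian noise on projections
    return theta, noisy

# A generative model would then be trained to match these noisy projections
# (e.g., via a sliced distance), never touching per-example gradients.
X_private = np.random.default_rng(1).standard_normal((1000, 16))
theta, noisy_proj = noisy_random_projections(X_private)
print(theta.shape, noisy_proj.shape)
```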
Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models
·1757 words·9 mins
AI Theory Privacy 🏒 Google DeepMind
Researchers reveal ‘privacy backdoors,’ a new attack that exploits pre-trained models to leak user training data, highlighting critical vulnerabilities and prompting stricter model security measures.
Prior-itizing Privacy: A Bayesian Approach to Setting the Privacy Budget in Differential Privacy
·1883 words·9 mins
AI Theory Privacy 🏒 Department of Statistical Science
This paper introduces a Bayesian approach to setting the privacy budget in differential privacy, enabling agencies to balance data utility and confidentiality by customizing risk profiles.
Principled Bayesian Optimization in Collaboration with Human Experts
·2248 words·11 mins
AI Theory Optimization 🏒 University of Oxford
COBOL: a novel Bayesian Optimization algorithm leverages human expert advice via binary labels, achieving both fast convergence and robustness to noisy input, while guaranteeing minimal expert effort.
Position Coupling: Improving Length Generalization of Arithmetic Transformers Using Task Structure
·15685 words·74 mins
AI Generated AI Theory Generalization 🏒 Google Research
Position coupling, a novel method, enhances the length generalization ability of arithmetic Transformers by directly embedding task structures into positional encodings.
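A rough sketch of the kind of structure-aware position IDs the summary describes, for reversed-digit addition: digits of the same significance in both operands and in the answer share a position ID, so the positional encoding reflects the task structure rather than raw token index. The specific ID convention and digit ordering below are assumptions for illustration, not the paper's exact scheme.

```python
def coupled_position_ids(a_rev: str, b_rev: str, sum_rev: str):
    """Assign position IDs so digits of equal significance share an ID.

    All numbers are written least-significant digit first; the 1-based IDs
    and the dummy ID 0 for separators are illustrative assumptions.
    """
    tokens, pos = [], []
    for operand, sep in ((a_rev, "+"), (b_rev, "=")):
        for i, digit in enumerate(operand):
            tokens.append(digit)
            pos.append(i + 1)        # ID = digit significance, shared across numbers
        tokens.append(sep)
        pos.append(0)                # separators get a fixed dummy ID
    for i, digit in enumerate(sum_rev):
        tokens.append(digit)
        pos.append(i + 1)            # answer digits align with operand digits
    return tokens, pos

# 387 + 45 = 432, each written in reversed (least-significant-first) order.
print(coupled_position_ids("783", "54", "234"))
```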
Poseidon: Efficient Foundation Models for PDEs
·9448 words·45 mins
AI Theory Representation Learning 🏒 ETH Zurich
POSEIDON: a novel foundation model for PDEs achieves significant gains in accuracy and sample efficiency, generalizing well to unseen physics.
Policy Aggregation
·1384 words·7 mins
AI Theory Fairness 🏒 University of Toronto
This paper introduces efficient algorithms that leverage social choice theory to aggregate multiple individual preferences, resulting in a desirable collective AI policy.
Plant-and-Steal: Truthful Fair Allocations via Predictions
·1745 words·9 mins
AI Theory Fairness 🏒 Bar-Ilan University
Learning-augmented mechanisms for fair allocation achieve constant-factor approximation with accurate predictions and near-optimal approximation even with inaccurate ones.