AI Theory

Scalable DP-SGD: Shuffling vs. Poisson Subsampling
·2155 words·11 mins
AI Generated AI Theory Privacy 🏢 Google Research
This paper reveals significant privacy gaps in shuffling-based DP-SGD, proposes a scalable Poisson subsampling method, and demonstrates its superior utility for private model training.
Sample-Efficient Private Learning of Mixtures of Gaussians
·256 words·2 mins
AI Theory Privacy 🏢 McMaster University
Researchers achieve a breakthrough in privacy-preserving machine learning by developing sample-efficient algorithms for learning Gaussian Mixture Models, significantly reducing the data needed while m…
Sample-Efficient Geometry Reconstruction from Euclidean Distances using Non-Convex Optimization
·1912 words·9 mins
AI Theory Optimization 🏢 University of North Carolina at Charlotte
Reconstructing geometry from minimal Euclidean distance samples: A novel algorithm achieves state-of-the-art data efficiency with theoretical guarantees.
Sample-efficient Bayesian Optimisation Using Known Invariances
·1812 words·9 mins
AI Theory Optimization 🏢 University College London
Boost Bayesian Optimization’s efficiency by leveraging known invariances in objective functions for faster, more effective solutions.
Sample Efficient Bayesian Learning of Causal Graphs from Interventions
·1696 words·8 mins
AI Theory Causality 🏢 Purdue University
Efficiently learn causal graphs from limited interventions using a novel Bayesian algorithm that outperforms existing methods and requires fewer experiments.
Sample Complexity of Posted Pricing for a Single Item
·273 words·2 mins
AI Theory Optimization 🏢 Cornell University
This paper reveals how many buyer samples are needed to set near-optimal posted prices for a single item, resolving a fundamental problem in online markets and offering both theoretical and practical …
Sample Complexity of Interventional Causal Representation Learning
·449 words·3 mins
AI Theory Representation Learning 🏢 Carnegie Mellon University
First finite-sample analysis of interventional causal representation learning shows that surprisingly few samples suffice for accurate graph and latent variable recovery.
Sample Complexity of Algorithm Selection Using Neural Networks and Its Applications to Branch-and-Cut
·428 words·3 mins
AI Theory Optimization 🏢 Johns Hopkins University
Neural networks enhance algorithm selection in branch-and-cut, significantly reducing tree sizes and improving efficiency for mixed-integer optimization, as proven by rigorous theoretical bounds and e…
Sample and Computationally Efficient Robust Learning of Gaussian Single-Index Models
·262 words·2 mins
AI Generated AI Theory Robustness 🏢 University of Wisconsin, Madison
This paper presents a computationally efficient algorithm for robustly learning Gaussian single-index models under adversarial label noise, achieving near-optimal sample complexity.
Safe Exploitative Play with Untrusted Type Beliefs
·1930 words·10 mins
AI Theory Optimization 🏢 School of Data Science, The Chinese University of Hong Kong, Shenzhen
This paper characterizes the fundamental tradeoff between trusting and distrusting learned type beliefs in games, establishing upper and lower bounds for optimal strategies in both normal-form and sto…
Safe and Sparse Newton Method for Entropic-Regularized Optimal Transport
·2040 words·10 mins
AI Generated AI Theory Optimization 🏢 Shanghai University of Finance and Economics
A novel safe & sparse Newton method (SSNS) for entropic-regularized optimal transport boasts strict error control, avoids singularity, needs no hyperparameter tuning, and offers rigorous convergence a…
S-SOS: Stochastic Sum-Of-Squares for Parametric Polynomial Optimization
·1441 words·7 mins
AI Theory Optimization 🏢 University of Chicago
S-SOS: A new algorithm solves complex, parameterized polynomial problems with provable convergence, enabling efficient solutions for high-dimensional applications like sensor network localization.
Rule Based Rewards for Language Model Safety
·3342 words·16 mins
AI Theory Safety 🏢 OpenAI
Rule-Based Rewards (RBRs) enhance LLM safety by using AI feedback and a few-shot prompt-based approach, achieving higher safety-behavior accuracy with less human annotation than existing methods.
RoPINN: Region Optimized Physics-Informed Neural Networks
·2557 words·13 mins
AI Theory Optimization 🏢 Tsinghua University
RoPINN: Revolutionizing Physics-Informed Neural Networks with Region Optimization
Robust Sparse Regression with Non-Isotropic Designs
·239 words·2 mins
AI Theory Robustness 🏢 National Taiwan University
New algorithms achieve near-optimal error rates for sparse linear regression, even under adversarial data corruption and heavy-tailed noise distributions.
Robust Neural Contextual Bandit against Adversarial Corruptions
·1411 words·7 mins
AI Generated AI Theory Robustness 🏢 University of Illinois at Urbana-Champaign
R-NeuralUCB: A robust neural contextual bandit algorithm uses a context-aware gradient descent training to defend against adversarial reward corruptions, achieving better performance with theoretical …
Robust Mixture Learning when Outliers Overwhelm Small Groups
·2570 words·13 mins
AI Generated AI Theory Robustness 🏢 ETH Zurich
Outlier-robust mixture learning gets order-optimal error guarantees, even when outliers massively outnumber small groups, via a novel meta-algorithm leveraging mixture structure.
Robust Graph Neural Networks via Unbiased Aggregation
·2885 words·14 mins
AI Theory Robustness 🏢 North Carolina State University
RUNG: a novel GNN architecture boasting superior robustness against adaptive attacks by employing an unbiased aggregation technique.
Robust and Faster Zeroth-Order Minimax Optimization: Complexity and Applications
·1498 words·8 mins
AI Generated AI Theory Optimization 🏢 Peking University
ZO-GDEGA: A unified algorithm achieves faster, more robust zeroth-order minimax optimization with lower complexity and weaker conditions, solving stochastic nonconvex-concave problems.
Revisiting K-mer Profile for Effective and Scalable Genome Representation Learning
·1651 words·8 mins
AI Theory Representation Learning 🏢 Aalborg University
This paper proposes a lightweight and scalable k-mer based model for metagenomic binning, achieving comparable performance to computationally expensive genome foundation models while significantly imp…