AI Theory

Optimal Scalarizations for Sublinear Hypervolume Regret
·1664 words·8 mins
AI Theory Optimization 🏢 Google DeepMind
Optimal multi-objective optimization achieved via hypervolume scalarization, offering sublinear regret bounds and outperforming existing methods.
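For readers new to the idea, the sketch below shows a Chebyshev-style hypervolume scalarization that collapses a vector of objectives into a single score via a random weight direction. It illustrates the general technique only, not the paper's exact construction; the names (`hypervolume_scalarize`, `ref_point`, the toy candidates) are ours.

```python
import numpy as np

def hypervolume_scalarize(objectives, weights, ref_point):
    """Chebyshev-style scalarization for maximization: take the
    smallest weighted gap above the reference point, clip at zero,
    and raise it to the number of objectives. The expectation of
    this score over random weight directions is tied to the
    hypervolume dominated above the reference point."""
    gaps = (np.asarray(objectives) - np.asarray(ref_point)) / np.asarray(weights)
    return max(float(np.min(gaps)), 0.0) ** len(objectives)

# Toy usage: rank two candidate points under one random weight draw.
rng = np.random.default_rng(0)
w = np.abs(rng.normal(size=2))
w /= np.linalg.norm(w)                       # random positive direction
ref = np.zeros(2)                            # reference point z
candidates = [np.array([0.8, 0.3]), np.array([0.5, 0.6])]
best = max(candidates, key=lambda y: hypervolume_scalarize(y, w, ref))
print(best)
```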
Optimal Parallelization of Boosting
·228 words·2 mins
AI Theory Optimization 🏢 Aarhus University
This paper closes the performance gap in parallel boosting algorithms by presenting improved lower bounds and a novel algorithm matching these bounds, settling the parallel complexity of sample-optimal boosting.
Optimal Multiclass U-Calibration Error and Beyond
·366 words·2 mins
AI Generated AI Theory Optimization 🏢 University of Southern California
This paper proves the minimax optimal U-calibration error is Θ(√KT) for online multiclass prediction, resolving an open problem and showing logarithmic error is achievable for specific loss functions.
Optimal Hypothesis Selection in (Almost) Linear Time
·1628 words·8 mins
AI Theory Optimization 🏢 Rice University
This paper presents the first almost linear-time algorithm achieving the optimal accuracy parameter for hypothesis selection, solving a decades-long open problem.
Optimal Classification under Performative Distribution Shift
·1647 words·8 mins
AI Theory Robustness 🏢 Univ. Lille
This paper introduces a novel push-forward model for performative learning, proving the convexity of performative risk under new assumptions and linking performative learning to adversarial robustness…
Optimal Algorithms for Online Convex Optimization with Adversarial Constraints
·1266 words·6 mins
AI Theory Optimization 🏢 Tata Institute of Fundamental Research
Optimal algorithms for online convex optimization with adversarial constraints are developed, achieving O(√T) regret and Õ(√T) constraint violation, a breakthrough in the field.
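For context, the sketch below shows the classical unconstrained baseline: projected online gradient descent with a 1/√t step size, which already attains O(√T) regret for convex losses. It is not the paper's constrained algorithm; the helper names and toy losses are ours.

```python
import numpy as np

def online_gradient_descent(grad_fns, x0, radius):
    """Classic projected online gradient descent on a Euclidean ball.
    With step size radius/sqrt(t) it attains O(sqrt(T)) regret for
    convex losses; the paper's algorithm additionally keeps the
    cumulative violation of adversarial constraints at Õ(sqrt(T))."""
    x = np.array(x0, dtype=float)
    iterates = []
    for t, grad in enumerate(grad_fns, start=1):
        iterates.append(x.copy())
        g = grad(x)                          # gradient revealed at round t
        x = x - (radius / np.sqrt(t)) * g    # descent step
        norm = np.linalg.norm(x)
        if norm > radius:                    # project back onto the ball
            x *= radius / norm
    return iterates

# Toy usage: losses f_t(x) = ||x - c_t||^2 with drifting targets c_t.
targets = [np.array([np.sin(t / 10), np.cos(t / 10)]) for t in range(100)]
grads = [lambda x, c=c: 2 * (x - c) for c in targets]
iterates = online_gradient_descent(grads, x0=[0.0, 0.0], radius=2.0)
```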
Optimal Algorithms for Learning Partitions with Faulty Oracles
·1450 words·7 mins
AI Theory Optimization 🏢 University of Chicago
Optimal algorithms for learning partitions are designed, achieving minimum query complexity even with up to ℓ faulty oracle responses.
Optimal Algorithms for Augmented Testing of Discrete Distributions
·1848 words·9 mins
AI Theory Optimization 🏢 Rice University
Leveraging predictions, this research presents novel algorithms for uniformity, identity, and closeness testing of discrete distributions, achieving information-theoretically optimal sample complexity…
Optimal ablation for interpretability
·3425 words·17 mins
AI Theory Interpretability 🏢 Harvard University
Optimal ablation (OA) improves model interpretability by precisely measuring component importance, outperforming existing methods. OA-based importance shines in circuit discovery, factual recall, and …
Only Strict Saddles in the Energy Landscape of Predictive Coding Networks?
·2012 words·10 mins
AI Theory Optimization 🏢 University of Sussex
Predictive coding networks learn faster than backpropagation by changing the loss landscape’s geometry, making saddles easier to escape and improving robustness to vanishing gradients.
Online Weighted Paging with Unknown Weights
·1583 words·8 mins
AI Theory Optimization 🏢 Tel Aviv University
First algorithm for online weighted paging that learns page weights from samples, achieving optimal O(log k) competitiveness and sublinear regret.
Online Learning of Delayed Choices
·1433 words·7 mins
AI Theory Optimization 🏢 University of Waterloo
New algorithms conquer delayed feedback in online choice modeling, achieving optimal decision-making even with unknown customer preferences and delayed responses.
Online Estimation via Offline Estimation: An Information-Theoretic Framework
·1315 words·7 mins
AI Theory Optimization 🏢 Microsoft Research
This paper introduces a novel information-theoretic framework for efficiently converting offline estimation algorithms into online ones, with implications for interactive decision-making.
Online Convex Optimisation: The Optimal Switching Regret for all Segmentations Simultaneously
·344 words·2 mins
AI Theory Optimization 🏢 Alan Turing Institute
Algorithm RESET achieves optimal switching regret simultaneously across all segmentations, offering efficiency and parameter-free operation.
Online Consistency of the Nearest Neighbor Rule
·1388 words·7 mins
AI Theory Optimization 🏢 UC San Diego
The 1-nearest neighbor rule achieves online consistency under surprisingly broad conditions: measurable label functions and mild assumptions on instance generation in doubling metric spaces.
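The rule itself fits in a few lines; the sketch below implements the online 1-nearest-neighbor protocol under the usual predict-then-reveal setup. The helper names and toy stream are our own choices, not the paper's code.

```python
import numpy as np

def online_one_nn(stream, distance):
    """Run the 1-nearest-neighbor rule online: predict each point's
    label from the closest previously seen point, then observe the
    true label and store the pair. Returns (prediction, truth) pairs;
    the first-round prediction is None since nothing has been seen."""
    seen, results = [], []
    for x, y in stream:
        pred = min(seen, key=lambda p: distance(p[0], x))[1] if seen else None
        results.append((pred, y))
        seen.append((x, y))
    return results

# Toy usage with the Euclidean metric on a small stream.
euclid = lambda a, b: float(np.linalg.norm(np.asarray(a) - np.asarray(b)))
toy = [((0.0, 0.0), 0), ((1.0, 1.0), 1), ((0.9, 1.1), 1)]
print(online_one_nn(toy, euclid))   # [(None, 0), (0, 1), (1, 1)]
```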
Online Composite Optimization Between Stochastic and Adversarial Environments
·1450 words·7 mins
AI Generated AI Theory Optimization 🏢 Nanjing University
Researchers achieve optimal regret bounds in online composite optimization under stochastic and adversarial settings using a novel optimistic composite mirror descent algorithm and a universal strategy.
Online Budgeted Matching with General Bids
·1940 words·10 mins
AI Theory Optimization 🏢 University of Houston
MetaAd, a novel meta-algorithm, achieves provable competitive ratios for online budgeted matching with general bids, removing prior restrictive assumptions.
Online Bayesian Persuasion Without a Clue
·1780 words·9 mins
AI Theory Optimization 🏢 Politecnico Di Milano
Researchers developed a novel online Bayesian persuasion algorithm that achieves sublinear regret without prior knowledge of the receiver or the state distribution, providing tight theoretical guarantees.
One-Layer Transformer Provably Learns One-Nearest Neighbor In Context
·1344 words·7 mins
AI Theory Optimization 🏢 Princeton University
One-layer transformers provably learn the one-nearest neighbor prediction rule, offering theoretical insights into their in-context learning capabilities.
One Sample Fits All: Approximating All Probabilistic Values Simultaneously and Efficiently
·1941 words·10 mins
AI Generated AI Theory Interpretability 🏢 National University of Singapore
One-Sample-Fits-All (OFA) framework efficiently approximates all probabilistic values simultaneously, achieving faster convergence rates than existing methods.
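For contrast with OFA's shared-sample approach, the sketch below shows the standard one-value-at-a-time baseline: a Monte Carlo permutation estimator for the Shapley value, one member of the probabilistic-value family. The function names and toy game are ours, not the paper's.

```python
import random

def shapley_mc(n, value_fn, num_samples=2000, seed=0):
    """Monte Carlo baseline for ONE probabilistic value (the Shapley
    value): sample random player orderings and average each player's
    marginal contribution. Contrast: the OFA idea is to reuse one
    sampling scheme across all probabilistic values at once."""
    rng = random.Random(seed)
    est = [0.0] * n
    for _ in range(num_samples):
        order = rng.sample(range(n), n)      # a random permutation
        coalition, prev = set(), value_fn(set())
        for i in order:
            coalition.add(i)
            cur = value_fn(coalition)
            est[i] += cur - prev             # marginal contribution of i
            prev = cur
    return [v / num_samples for v in est]

# Toy cooperative game: the value of a coalition is its size squared.
print(shapley_mc(3, lambda s: len(s) ** 2))
```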