AI Theory
Piecewise-Stationary Bandits with Knapsacks
·380 words·2 mins
AI Theory
Optimization
🏢 National University of Singapore
A novel inventory reserving algorithm achieves near-optimal performance for bandit problems with knapsacks in piecewise-stationary settings, offering a competitive ratio of O(log(n_max/n_min)).
Persistent Homology for High-dimensional Data Based on Spectral Methods
·8023 words·38 mins
AI Generated
AI Theory
Optimization
🏢 Tübingen AI Center
Spectral distances on k-nearest neighbor graphs enable robust topological analysis of high-dimensional noisy data using persistent homology, overcoming limitations of Euclidean distance.
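For readers who want to try the idea, here is a minimal sketch assuming scikit-learn and SciPy (the k value, the diffusion exponent, and the toy dataset are illustrative choices, not the paper's pipeline): build a k-NN graph, derive a spectral distance from the graph Laplacian, and hand the distance matrix to any persistent-homology tool.

```python
import numpy as np
from scipy.sparse.csgraph import laplacian
from scipy.spatial.distance import pdist, squareform
from sklearn.neighbors import kneighbors_graph

def diffusion_distances(X, k=15, t=8):
    """Pairwise spectral (diffusion-style) distances on a k-NN graph."""
    A = kneighbors_graph(X, n_neighbors=k, mode="connectivity")
    A = 0.5 * (A + A.T)                       # symmetrize the adjacency
    L = laplacian(A, normed=True).toarray()   # normalized graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)
    # Diffusion-map coordinates: eigenvectors damped by (1 - lambda)^t.
    coords = eigvecs[:, 1:] * (1.0 - eigvals[1:]) ** t
    return squareform(pdist(coords))

rng = np.random.default_rng(0)
# A noisy circle in a high-dimensional ambient space, where plain
# Euclidean distances are dominated by the noise.
theta = rng.uniform(0.0, 2.0 * np.pi, 300)
X = np.c_[np.cos(theta), np.sin(theta), 0.3 * rng.normal(size=(300, 48))]
D = diffusion_distances(X)
# D can now be passed to any persistent-homology library that accepts a
# precomputed distance matrix.
```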
Paths to Equilibrium in Games
·265 words·2 mins
AI Theory
Optimization
🏢 University of Toronto
In n-player games, a satisficing path, along which unsatisfied players may switch even to suboptimal strategies while satisfied players stay put, always exists from any initial strategy profile to a Nash equilibrium.
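The satisficing principle is easy to simulate. Below is a toy sketch in a 2x2 coordination game, assuming the simplest possible update rule (best-responding players stay put, unsatisfied players switch uniformly at random); the paper's contribution is the existence proof for general n-player games, which this does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(1)
# Row/column payoffs for a 2x2 coordination game with pure Nash
# equilibria at (0, 0) and (1, 1).
R = np.array([[2, 0], [0, 1]])
C = np.array([[2, 0], [0, 1]])

def satisficing_step(a, b):
    best_a = int(np.argmax(R[:, b]))   # row player's best response to b
    best_b = int(np.argmax(C[a, :]))   # column player's best response to a
    # Satisfied players repeat their strategy; unsatisfied players explore,
    # possibly picking a suboptimal strategy.
    new_a = a if a == best_a else rng.integers(2)
    new_b = b if b == best_b else rng.integers(2)
    return new_a, new_b

a, b = 1, 0  # start from a non-equilibrium profile
path = [(a, b)]
while (a, b) not in [(0, 0), (1, 1)]:
    a, b = satisficing_step(a, b)
    path.append((a, b))
print(path)  # ends at a pure Nash equilibrium, e.g. (0, 0)
```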
Partial Transportability for Domain Generalization
·2485 words·12 mins
AI Theory
Generalization
🏢 Columbia University
This paper introduces a novel technique to bound prediction risks in new domains using causal diagrams, enabling reliable AI performance guarantees.
Partial Structure Discovery is Sufficient for No-regret Learning in Causal Bandits
·1628 words·8 mins
AI Theory
Causality
🏢 Purdue University
Learning optimal interventions in causal bandits with unknown causal graphs is now efficient; this paper identifies the minimal causal knowledge needed and offers a two-stage algorithm with sublinear regret.
Partial observation can induce mechanistic mismatches in data-constrained models of neural dynamics
·1877 words·9 mins
AI Theory
Generalization
🏢 Harvard University
Partially observing neural circuits during experiments can create misleading models, even if single-neuron activity matches; researchers need better validation methods.
Parameterized Approximation Schemes for Fair-Range Clustering
·1307 words·7 mins
AI Theory
Fairness
🏢 School of Advanced Interdisciplinary Studies, Hunan University of Technology and Business
First parameterized approximation schemes for fair-range k-median & k-means in Euclidean spaces are presented, offering faster (1+ε)-approximation algorithms.
Parameter Symmetry and Noise Equilibrium of Stochastic Gradient Descent
·1617 words·8 mins
AI Theory
Optimization
🏢 Massachusetts Institute of Technology
SGD's dynamics are precisely characterized by the interplay of noise and symmetry in loss functions, leading to unique, initialization-independent fixed points.
PAC-Bayes-Chernoff bounds for unbounded losses
·358 words·2 mins
AI Theory
Generalization
🏢 Basque Center for Applied Mathematics (BCAM)
New PAC-Bayes oracle bound extends Cramér-Chernoff to unbounded losses, enabling exact parameter optimization and richer assumptions for tighter generalization bounds.
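For orientation, the classical Cramér-Chernoff tail bound that the paper lifts to the PAC-Bayes setting (a textbook statement, not the paper's new bound):

```latex
% Cramer--Chernoff: for a real random variable X, the tail is controlled
% by the Legendre transform of the log moment generating function psi.
\[
  \psi(\lambda) = \log \mathbb{E}\!\left[e^{\lambda X}\right], \qquad
  \Pr(X \ge t) \;\le\;
  \exp\!\Big(-\sup_{\lambda > 0}\big(\lambda t - \psi(\lambda)\big)\Big).
\]
```

Exact optimization over the free parameter λ, rather than over a fixed grid, is the lever behind the tighter bounds.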
OxonFair: A Flexible Toolkit for Algorithmic Fairness
·3793 words·18 mins
AI Theory
Fairness
🏢 University of Oxford
OxonFair: a new open-source toolkit for enforcing fairness in binary classification, supporting NLP, Computer Vision, and tabular data, optimizing any fairness metric, and minimizing performance degradation.
Overfitting Behaviour of Gaussian Kernel Ridgeless Regression: Varying Bandwidth or Dimensionality
·1931 words·10 mins
AI Generated
AI Theory
Generalization
🏢 University of Chicago
Ridgeless regression, surprisingly, generalizes well even with noisy data if dimension scales sub-polynomially with sample size.
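A minimal sketch of the estimator in question, Gaussian-kernel regression with the ridge penalty sent to (numerically near) zero; the data-generating setup here is illustrative, not the paper's exact regime.

```python
import numpy as np

def gaussian_kernel(A, B, bandwidth):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * bandwidth ** 2))

rng = np.random.default_rng(0)
n, d, bw = 200, 30, 2.0
X = rng.normal(size=(n, d))
y = X[:, 0] + 0.5 * rng.normal(size=n)        # signal plus label noise

K = gaussian_kernel(X, X, bw)
alpha = np.linalg.solve(K + 1e-10 * np.eye(n), y)  # tiny jitter for stability

X_test = rng.normal(size=(1000, d))
y_pred = gaussian_kernel(X_test, X, bw) @ alpha
test_mse = ((y_pred - X_test[:, 0]) ** 2).mean()
train_mse = ((K @ alpha - y) ** 2).mean()      # ~0: the noise is interpolated
print(train_mse, test_mse)
```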
Overcoming Brittleness in Pareto-Optimal Learning Augmented Algorithms
·1976 words·10 mins
AI Theory
Optimization
🏢 Sorbonne University
This research introduces a novel framework that overcomes the brittleness of Pareto-optimal learning-augmented algorithms by enforcing smoothness in performance using user-specified profiles and devel…
Outlier-Robust Distributionally Robust Optimization via Unbalanced Optimal Transport
·2832 words·14 mins
AI Generated
AI Theory
Optimization
🏢 KTH Royal Institute of Technology
Outlier-robust distributionally robust optimization achieved via a novel Unbalanced Optimal Transport (UOT) distance, improving efficiency and accuracy.
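To make the UOT ingredient concrete, here is a small numpy implementation of entropic unbalanced OT with KL-relaxed marginals in the style of Chizat et al., not the paper's method: outliers can be left untransported at the price of a KL penalty, so they barely distort the distance.

```python
import numpy as np

def unbalanced_sinkhorn(a, b, C, eps=0.05, rho=1.0, iters=500):
    """KL-relaxed OT: marginal constraints are penalized, not enforced."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    v = np.ones_like(b)
    power = rho / (rho + eps)   # exponent from the KL marginal relaxation
    for _ in range(iters):
        u = (a / (K @ v)) ** power
        v = (b / (K.T @ u)) ** power
    return u[:, None] * K * v[None, :]   # transport plan

x = np.linspace(0.0, 1.0, 50)
y = np.concatenate([np.linspace(0.0, 1.0, 45), np.full(5, 3.0)])  # 5 outliers
C = (x[:, None] - y[None, :]) ** 2
a = np.full(50, 1 / 50)
b = np.full(50, 1 / 50)
P = unbalanced_sinkhorn(a, b, C)
print(P.sum(axis=0)[-5:])  # outlier columns receive almost no mass
```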
OT4P: Unlocking Effective Orthogonal Group Path for Permutation Relaxation
·2531 words·12 mins
AI Generated
AI Theory
Optimization
🏢 School of Artificial Intelligence, Jilin University
OT4P: a novel temperature-controlled differentiable transformation efficiently relaxes permutation matrices onto the orthogonal group for gradient-based optimization.
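For contrast, a minimal sketch of the commonly used alternative that OT4P departs from: a temperature-controlled Sinkhorn relaxation onto the Birkhoff polytope of doubly stochastic matrices (this is not OT4P's orthogonal-group path).

```python
import numpy as np

def sinkhorn_relax(logits, temperature=0.1, iters=50):
    """Map a score matrix to a doubly stochastic matrix; as temperature -> 0
    the result approaches a hard permutation matrix."""
    P = np.exp(logits / temperature)
    for _ in range(iters):
        P /= P.sum(axis=1, keepdims=True)  # normalize rows
        P /= P.sum(axis=0, keepdims=True)  # normalize columns
    return P

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 4))
for tau in (1.0, 0.1, 0.01):
    P = sinkhorn_relax(logits, temperature=tau)
    print(tau, np.round(P, 2))  # lower temperature -> closer to a permutation
```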
OSLO: One-Shot Label-Only Membership Inference Attacks
·2719 words·13 mins
AI Theory
Privacy
🏢 University of Massachusetts Amherst
One-shot label-only attack (OSLO) achieves high membership inference accuracy with only one query, surpassing existing methods by a large margin.
Ordering-Based Causal Discovery for Linear and Nonlinear Relations
·2689 words·13 mins
AI Generated
AI Theory
Causality
🏢 Central South University
Causal discovery algorithm CaPS efficiently handles mixed linear and nonlinear relationships in observational data, outperforming existing methods on synthetic and real-world datasets.
Oracle-Efficient Differentially Private Learning with Public Data
·293 words·2 mins
AI Theory
Privacy
🏢 MIT
This paper introduces computationally efficient algorithms for differentially private learning by leveraging public data, overcoming previous computational limitations and enabling broader practical applicability.
Optimizing the coalition gain in Online Auctions with Greedy Structured Bandits
·1842 words·9 mins
AI Theory
Optimization
🏢 Department of Statistics, University of Oxford
Two novel algorithms, Local-Greedy and Greedy-Grid, optimize coalition gain in online auctions with limited observations, achieving constant regret and problem-independent guarantees while respecting …
Optimization Can Learn Johnson Lindenstrauss Embeddings
·412 words·2 mins
AI Theory
Optimization
🏢 University of Texas at Austin
Optimization can learn optimal Johnson-Lindenstrauss embeddings, avoiding the limitations of randomized methods and achieving comparable theoretical guarantees.
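A toy version of the idea, not the paper's algorithm or guarantees: directly optimize a projection matrix to preserve the pairwise distances of a fixed point set, using a random JL map only as the initialization.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 60, 100, 20            # n points in R^d, target dimension m
X = rng.normal(size=(n, d))
I, J = np.triu_indices(n, k=1)   # all pairs (i, j) with i < j
V = X[I] - X[J]                  # pairwise difference vectors
sq = (V ** 2).sum(axis=1)        # squared original pairwise distances

def distortion(A):
    """Mean squared relative distortion of pairwise distances under A."""
    r = ((V @ A.T) ** 2).sum(axis=1) / sq - 1.0
    return (r ** 2).mean()

A0 = rng.normal(size=(m, d)) / np.sqrt(m)  # classical random JL map
A = A0.copy()
for _ in range(500):                       # normalized gradient descent
    r = ((V @ A.T) ** 2).sum(axis=1) / sq - 1.0
    G = 4.0 * ((r / sq)[:, None] * (V @ A.T)).T @ V / len(sq)
    A -= 0.02 * G / (np.linalg.norm(G) + 1e-12)
print(distortion(A0), distortion(A))  # the optimized map typically distorts less
```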
Optimization Algorithm Design via Electric Circuits
·3889 words·19 mins
AI Theory
Optimization
🏢 Stanford University
Design provably convergent optimization algorithms swiftly using electric-circuit analogies: a novel methodology that automates discretization for diverse algorithms.
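A toy illustration of the discretization step only, not the paper's automated circuit methodology: a dissipative circuit relaxes toward its minimum-energy state, and forward-Euler discretization of the corresponding gradient flow x' = -grad f(x) recovers plain gradient descent.

```python
import numpy as np

Q = np.array([[3.0, 1.0], [1.0, 2.0]])   # energy f(x) = 0.5 x^T Q x (convex)

def grad_f(x):
    return Q @ x

x = np.array([5.0, -3.0])
h = 0.1                                   # discretization step size
for _ in range(200):
    x = x - h * grad_f(x)                 # forward Euler on the flow
print(x)  # -> approximately [0, 0], the energy minimum
```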