AI Theory
Cost-aware Bayesian Optimization via the Pandora's Box Gittins Index
·2385 words·12 mins·
AI Theory
Optimization
🏢 Cornell University
Cost-aware Bayesian optimization gets a boost with the Pandora’s Box Gittins Index, a novel acquisition function that efficiently balances exploration and exploitation while considering evaluation cos…
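As a rough illustration of the mechanism this summary alludes to, the sketch below computes a Pandora's Box-style Gittins index for each candidate point: the index g solves "expected improvement over g = evaluation cost" under a Gaussian posterior, and the candidate with the largest index is chosen. This is a hedged toy example assuming a Gaussian posterior and a known cost function; the helper names (`expected_improvement`, `pandora_gittins_index`) and the candidate list are illustrative, not the paper's reference implementation.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def expected_improvement(mu, sigma, g):
    """Closed-form E[(f - g)^+] for f ~ N(mu, sigma^2)."""
    z = (mu - g) / sigma
    return (mu - g) * norm.cdf(z) + sigma * norm.pdf(z)

def pandora_gittins_index(mu, sigma, cost):
    """Index g solving EI(g) = cost; EI is decreasing in g, so bracket and root-find."""
    lo, hi = mu - 10 * sigma, mu + 10 * sigma
    while expected_improvement(mu, sigma, hi) > cost:
        hi += 10 * sigma
    while expected_improvement(mu, sigma, lo) < cost:
        lo -= 10 * sigma
    return brentq(lambda g: expected_improvement(mu, sigma, g) - cost, lo, hi)

# Pick the candidate whose index is largest (illustrative (mu, sigma, cost) triples).
candidates = [(0.2, 0.5, 0.01), (0.6, 0.3, 0.05), (0.4, 0.8, 0.10)]
best = max(candidates, key=lambda p: pandora_gittins_index(*p))
```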
Corruption-Robust Linear Bandits: Minimax Optimality and Gap-Dependent Misspecification
·375 words·2 mins·
AI Theory
Robustness
🏢 University of Virginia
This paper presents novel algorithms for linear bandits that are robust to corrupted rewards, achieving minimax optimality and optimal scaling for gap-dependent misspecification, extending to reinforc…
Convergence of No-Swap-Regret Dynamics in Self-Play
·1267 words·6 mins·
AI Theory
Optimization
🏢 Google Research
In symmetric zero-sum games, no-swap-regret dynamics guarantee strong convergence to Nash Equilibrium under symmetric initial conditions, but this advantage disappears when constraints are relaxed.
Convergence of $\log(1/\epsilon)$ for Gradient-Based Algorithms in Zero-Sum Games without the Condition Number: A Smoothed Analysis
·262 words·2 mins·
AI Theory
Optimization
🏢 Carnegie Mellon University
Gradient-based methods for solving large zero-sum games achieve polynomial smoothed complexity, demonstrating efficiency even in high-precision scenarios without condition number dependence.
Controlling Multiple Errors Simultaneously with a PAC-Bayes Bound
·547 words·3 mins·
AI Generated
AI Theory
Generalization
🏢 University College London
New PAC-Bayes bound controls multiple error types simultaneously, providing richer generalization guarantees.
Controlling Counterfactual Harm in Decision Support Systems Based on Prediction Sets
·2473 words·12 mins·
AI Theory
Causality
🏢 Max Planck Institute for Software Systems
AI decision support systems can unintentionally harm users; this paper introduces a novel framework to design systems that minimize this counterfactual harm, balancing accuracy and user well-being.
Contrastive losses as generalized models of global epistasis
·3227 words·16 mins·
AI Generated
AI Theory
Optimization
🏢 Dyno Therapeutics
Contrastive losses unlock efficient fitness function modeling by leveraging the ranking information inherent in global epistasis, significantly improving accuracy and data efficiency in protein engine…
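To make the "ranking information" point concrete, here is a minimal sketch of a pairwise contrastive (Bradley-Terry-style) ranking loss on predicted fitness values: the model is penalized only for getting the ordering of measured fitnesses wrong, not their raw scale. This is a generic illustration of the idea under toy numpy assumptions, not the paper's loss or implementation.

```python
import numpy as np

def pairwise_contrastive_loss(pred, fitness):
    """Mean -log sigmoid(pred_i - pred_j) over all pairs with fitness_i > fitness_j."""
    loss, pairs = 0.0, 0
    for i in range(len(pred)):
        for j in range(len(pred)):
            if fitness[i] > fitness[j]:
                loss += np.log1p(np.exp(-(pred[i] - pred[j])))  # -log sigmoid(diff)
                pairs += 1
    return loss / max(pairs, 1)

# Predictions that respect the measured ranking incur a lower loss.
fitness = np.array([0.1, 0.5, 0.9])
print(pairwise_contrastive_loss(np.array([0.0, 1.0, 2.0]), fitness))  # well-ranked
print(pairwise_contrastive_loss(np.array([2.0, 1.0, 0.0]), fitness))  # mis-ranked
```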
Contracting with a Learning Agent
·2554 words·12 mins·
AI Theory
Optimization
🏢 Google Research
Repeated contracts with learning agents are optimized by a simple dynamic contract: initially linear, then switching to zero-cost, causing the agent’s actions to ‘free-fall’ and yield non-zero rewards…
Continual learning with the neural tangent ensemble
·1983 words·10 mins·
AI Theory
Generalization
🏢 Cold Spring Harbor Laboratory
Neural networks, viewed as Bayesian ensembles of fixed classifiers, enable continual learning without forgetting; posterior updates mirror stochastic gradient descent, offering insights into optimizat…
Continual Counting with Gradual Privacy Expiration
·2038 words·10 mins·
AI Generated
AI Theory
Privacy
🏢 Basic Algorithms Research Copenhagen
Continual counting with gradual privacy expiration: A new algorithm achieves optimal accuracy with exponentially decaying privacy!
Contextual Linear Optimization with Bandit Feedback
·1748 words·9 mins·
AI Theory
Optimization
🏢 Tsinghua University
This paper introduces induced empirical risk minimization for contextual linear optimization with bandit feedback, providing theoretical guarantees and computationally tractable solutions for improved…
Contextual Decision-Making with Knapsacks Beyond the Worst Case
·450 words·3 mins·
AI Theory
Optimization
🏢 Peking University
This work unveils a novel algorithm for contextual decision-making with knapsacks, achieving significantly improved regret bounds beyond worst-case scenarios, thereby offering a more practical and eff…
Constrained Sampling with Primal-Dual Langevin Monte Carlo
·2374 words·12 mins·
AI Theory
Optimization
🏢 University of Stuttgart
Constrained sampling made easy! Primal-Dual Langevin Monte Carlo efficiently samples from complex probability distributions while satisfying statistical constraints.
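The sketch below illustrates the general primal-dual mechanism behind this kind of method: unadjusted Langevin steps on a Lagrangian-tilted potential U(x) + lam * g(x), interleaved with dual ascent on the multiplier so that the empirical constraint E[g(x)] <= 0 is pushed toward feasibility. It is a hedged toy example (the target, constraint, and step sizes are made up), not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_U(x):   # toy target: standard Gaussian, so -log pi(x) = x^2 / 2
    return x

def g(x):        # constraint E[g(x)] <= 0, here encoding E[x] >= 1
    return 1.0 - x

def grad_g(x):
    return -1.0

x, lam = 0.0, 0.0
eta_x, eta_lam = 1e-2, 1e-2
samples = []
for t in range(20000):
    # Primal: Langevin step on the tilted potential U(x) + lam * g(x).
    grad = grad_U(x) + lam * grad_g(x)
    x = x - eta_x * grad + np.sqrt(2 * eta_x) * rng.standard_normal()
    # Dual: projected gradient ascent on the constraint violation.
    lam = max(0.0, lam + eta_lam * g(x))
    samples.append(x)

print("constrained mean ~", np.mean(samples[5000:]))  # drifts toward the feasible value 1
```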
Constrained Binary Decision Making
·1365 words·7 mins·
AI Theory
Optimization
🏢 Czech Technical University in Prague
This paper presents a unified framework for solving binary statistical decision-making problems, enabling efficient derivation of optimal strategies for diverse applications like OOD detection and sel…
Constrained Adaptive Attack: Effective Adversarial Attack Against Deep Neural Networks for Tabular Data
·2605 words·13 mins·
AI Theory
Robustness
🏢 University of Luxembourg
Constrained Adaptive Attack (CAA) significantly improves adversarial attacks on deep learning models for tabular data by combining gradient and search-based methods, achieving up to 96.1% accuracy dro…
Consistency of Neural Causal Partial Identification
·2971 words·14 mins·
AI Generated
AI Theory
Causality
🏢 Stanford University
Neural causal models consistently estimate partial causal effects, even with continuous/categorical variables, thanks to Lipschitz regularization.
Conformal Inverse Optimization
·1650 words·8 mins·
AI Theory
Optimization
🏢 University of Toronto
Conformal inverse optimization learns uncertainty sets for parameters in optimization models, then solves a robust optimization model for high-quality, human-aligned decisions.
Conformal Classification with Equalized Coverage for Adaptively Selected Groups
·7699 words·37 mins·
AI Theory
Fairness
🏢 UC Los Angeles
This paper introduces AFCP, a novel conformal inference method that generates prediction sets with valid coverage conditional on adaptively selected features, achieving a practical balance between eff…
Conditional Outcome Equivalence: A Quantile Alternative to CATE
·2270 words·11 mins·
AI Theory
Causality
🏢 University of Bristol
Researchers introduce the Conditional Quantile Comparator (CQC) for analyzing heterogeneous treatment effects, offering an improved approach by combining the strengths of CATE and CQTE while overcomin…
Conditional Generative Models are Sufficient to Sample from Any Causal Effect Estimand
·3417 words·17 mins·
AI Theory
Causality
🏢 Purdue University
ID-GEN: Sample high-dimensional interventional distributions using any conditional generative model!