Optimization
Derandomizing Multi-Distribution Learning
·204 words·1 min
AI Theory
Optimization
🏢 Aarhus University
Derandomizing multi-distribution learning is computationally hard, but a structural condition allows efficient black-box conversion of randomized predictors to deterministic ones.
Deep linear networks for regression are implicitly regularized towards flat minima
·2602 words·13 mins
AI Generated
AI Theory
Optimization
🏢 Institute of Mathematics
Deep linear networks implicitly regularize towards flat minima, with sharpness (Hessian’s largest eigenvalue) of minimizers linearly increasing with depth but bounded by a constant times the lower bou…
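To see the flavor of this result in the simplest possible case, here is a toy sketch (ours, not the paper's code): for a scalar deep linear network fit to y = ax, the sharpness at the balanced global minimizer has a closed form and grows roughly linearly with depth.

```python
# Toy check: for f(x) = w_L * ... * w_1 * x fit to y = a * x, the sharpness
# (largest Hessian eigenvalue of the loss) at the balanced minimizer grows
# roughly linearly in depth L. Illustration only, not the paper's setting.
import numpy as np

def sharpness(depth, a=2.0):
    w = np.full(depth, a ** (1.0 / depth))     # balanced global minimizer
    # Loss (prod(w) - a)^2 / 2; at the minimum the Hessian reduces to the
    # rank-one matrix H[i, j] = (prod/w[i]) * (prod/w[j]).
    v = np.prod(w) / w
    H = np.outer(v, v)
    return np.linalg.eigvalsh(H)[-1]           # equals depth * a**(2 - 2/depth)

for L in (2, 4, 8, 16):
    print(L, sharpness(L))                     # roughly linear growth in L
```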
Decision-Focused Learning with Directional Gradients
·1724 words·9 mins
AI Theory
Optimization
🏢 UC Los Angeles
New Perturbation Gradient losses connect expected decisions with directional derivatives, enabling Lipschitz continuous surrogates for predict-then-optimize, asymptotically yielding best-in-class poli…
Data subsampling for Poisson regression with pth-root-link
·657 words·4 mins
AI Generated
AI Theory
Optimization
🏢 University of Potsdam
Coresets of sublinear size for Poisson regression are developed, offering (1 ± ε) approximation guarantees, with the complexity analysis driven by a novel parameter and a domain-shifting technique.
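For intuition, here is a generic sensitivity-sampling sketch of how such a coreset is assembled. The scores below are a hypothetical stand-in; the paper's contribution is deriving the right score bounds for Poisson regression with pth-root link.

```python
# Minimal sketch of coreset construction by importance (sensitivity) sampling.
# The `scores` used here are a generic placeholder, not the paper's bounds.
import numpy as np

def build_coreset(X, y, scores, m, seed=None):
    """Sample m points with probability proportional to `scores`, reweighted
    so that weighted sums are unbiased estimates of full-data sums."""
    rng = np.random.default_rng(seed)
    p = scores / scores.sum()                  # sampling distribution
    idx = rng.choice(len(X), size=m, p=p)      # sample with replacement
    w = 1.0 / (m * p[idx])                     # inverse-probability weights
    return X[idx], y[idx], w

# Toy data: any weighted Poisson-regression solver can then be run on
# (X_c, y_c) with per-point weights w instead of the full data set.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))
y = rng.poisson(lam=np.exp(X @ np.full(5, 0.1)))
scores = 1.0 + np.abs(y)                       # hypothetical score choice
X_c, y_c, w = build_coreset(X, y, scores, m=500, seed=1)
```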
CSPG: Crossing Sparse Proximity Graphs for Approximate Nearest Neighbor Search
·2426 words·12 mins
AI Theory
Optimization
🏢 Fudan University
CSPG: a novel framework boosting Approximate Nearest Neighbor Search speed by 1.5-2x, using sparse proximity graphs and efficient two-staged search.
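As background, here is a minimal sketch of greedy best-first search on a proximity graph, the primitive that graph-based ANN methods build on. This is not the paper's two-staged CSPG procedure over sparse subgraphs, just the underlying search routine.

```python
# Greedy best-first search on a proximity graph: `graph` maps node id ->
# list of neighbor ids, `ef` controls the search beam. Generic sketch.
import heapq
import numpy as np

def greedy_search(graph, vectors, query, entry, ef=32):
    dist = lambda i: float(np.linalg.norm(vectors[i] - query))
    visited = {entry}
    candidates = [(dist(entry), entry)]        # min-heap: expansion frontier
    best = [(-dist(entry), entry)]             # max-heap: current top-ef
    while candidates:
        d, u = heapq.heappop(candidates)
        if d > -best[0][0] and len(best) >= ef:
            break                              # frontier can't improve top-ef
        for v in graph[u]:
            if v in visited:
                continue
            visited.add(v)
            dv = dist(v)
            if len(best) < ef or dv < -best[0][0]:
                heapq.heappush(candidates, (dv, v))
                heapq.heappush(best, (-dv, v))
                if len(best) > ef:
                    heapq.heappop(best)
    return sorted((-d, v) for d, v in best)    # (distance, node), ascending

# Tiny demo on random points with a brute-force 8-NN graph:
rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 8))
d2 = ((pts[:, None] - pts[None, :]) ** 2).sum(-1)
graph = {i: list(np.argsort(d2[i])[1:9]) for i in range(len(pts))}
print(greedy_search(graph, pts, rng.normal(size=8), entry=0)[:3])
```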
Cryptographic Hardness of Score Estimation
·386 words·2 mins
AI Generated
AI Theory
Optimization
🏢 University of Washington
Score estimation, crucial for diffusion models, is computationally hard even with polynomial sample complexity unless strong distributional assumptions are made.
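To make the task concrete: score estimation means learning s(x) ≈ ∇ log p(x) from samples. A standard empirical objective is denoising score matching (Vincent, 2011), sketched below on a toy Gaussian; the paper's point is that this task can be computationally hard in general.

```python
# Denoising score matching: regress onto -eps/sigma at noised inputs; the
# minimizer is the score of the sigma-noised density. Toy Gaussian check.
import numpy as np

def dsm_loss(score_fn, x, sigma, eps):
    x_noisy = x + sigma * eps
    target = -eps / sigma                      # regression target
    return np.mean((score_fn(x_noisy) - target) ** 2)

rng = np.random.default_rng(0)
x = rng.normal(size=(100_000, 1))              # samples from N(0, 1)
eps = rng.normal(size=x.shape)
sigma = 1.0
true_score = lambda z: -z / (1 + sigma**2)     # exact score of the noised N(0,1)
off_score = lambda z: -z                       # score of the clean density: biased
print(dsm_loss(true_score, x, sigma, eps))     # ~0.5: irreducible variance
print(dsm_loss(off_score, x, sigma, eps))      # ~1.0: excess loss from the bias
```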
Cost-aware Bayesian Optimization via the Pandora's Box Gittins Index
·2385 words·12 mins
AI Theory
Optimization
🏢 Cornell University
Cost-aware Bayesian optimization gets a boost with the Pandora’s Box Gittins Index, a novel acquisition function that efficiently balances exploration and exploitation while considering evaluation cos…
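The core rule is simple to state: at each candidate point, find the threshold g at which expected improvement over g exactly equals the evaluation cost, then rank candidates by g. A minimal bisection sketch of that rule, assuming a Gaussian posterior (our naming, not the paper's code):

```python
# Pandora's-Box-style index for a GP posterior N(mu, sigma^2) at a point with
# evaluation cost c: the g solving EI(g) = c. Higher index = better candidate.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, g):
    z = (mu - g) / sigma
    return sigma * norm.pdf(z) + (mu - g) * norm.cdf(z)

def gittins_index(mu, sigma, cost, tol=1e-8):
    """Bisection works because EI is continuous and strictly decreasing in g."""
    lo, hi = mu - 10 * sigma, mu + 10 * sigma
    while expected_improvement(mu, sigma, hi) > cost:
        hi += 10 * sigma                       # expand until EI(hi) < cost
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if expected_improvement(mu, sigma, mid) > cost:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(gittins_index(mu=0.0, sigma=1.0, cost=0.1))   # ~0.90
print(gittins_index(mu=0.0, sigma=1.0, cost=0.01))  # ~1.93: cheaper -> higher
```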
Convergence of No-Swap-Regret Dynamics in Self-Play
·1267 words·6 mins
AI Theory
Optimization
🏢 Google Research
In symmetric zero-sum games, no-swap-regret dynamics guarantee strong convergence to Nash Equilibrium under symmetric initial conditions, but this advantage disappears when constraints are relaxed.
Convergence of $\log(1/\epsilon)$ for Gradient-Based Algorithms in Zero-Sum Games without the Condition Number: A Smoothed Analysis
·262 words·2 mins
AI Theory
Optimization
🏢 Carnegie Mellon University
Gradient-based methods for solving large zero-sum games achieve polynomial smoothed complexity, demonstrating efficiency even in high-precision scenarios without condition number dependence.
Contrastive losses as generalized models of global epistasis
·3227 words·16 mins
AI Generated
AI Theory
Optimization
🏢 Dyno Therapeutics
Contrastive losses unlock efficient fitness function modeling by leveraging the ranking information inherent in global epistasis, significantly improving accuracy and data efficiency in protein engine…
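A minimal sketch of the kind of contrastive loss in question: a Bradley-Terry objective on pairs, which depends only on the ordering of fitness labels and is therefore invariant to any monotone (global-epistasis-style) warping of those labels. NumPy stand-in, not the paper's code:

```python
# Bradley-Terry ranking loss over all ordered pairs; only the relative order
# of the labels enters, so a monotone warp of the labels changes nothing.
import numpy as np

def bradley_terry_loss(scores, labels):
    """Mean -log sigmoid(s_i - s_j) over all pairs with labels[i] > labels[j]."""
    i, j = np.where(labels[:, None] > labels[None, :])
    diff = scores[i] - scores[j]
    return np.mean(np.log1p(np.exp(-diff)))    # numerically stable -log sigmoid

rng = np.random.default_rng(0)
labels = rng.normal(size=64)
scores = 0.5 * labels + 0.1 * rng.normal(size=64)
warped = np.tanh(3 * labels)                   # monotone distortion of fitness
print(bradley_terry_loss(scores, labels))
print(bradley_terry_loss(scores, warped))      # identical value
```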
Contracting with a Learning Agent
·2554 words·12 mins
AI Theory
Optimization
🏢 Google Research
Repeated contracts with learning agents are optimized by a simple dynamic contract: initially linear, then switching to zero-cost, causing the agent’s actions to ‘free-fall’ and yield non-zero rewards…
Contextual Linear Optimization with Bandit Feedback
·1748 words·9 mins
AI Theory
Optimization
🏢 Tsinghua University
This paper introduces induced empirical risk minimization for contextual linear optimization with bandit feedback, providing theoretical guarantees and computationally tractable solutions for improved…
Contextual Decision-Making with Knapsacks Beyond the Worst Case
·450 words·3 mins
AI Theory
Optimization
🏢 Peking University
This work unveils a novel algorithm for contextual decision-making with knapsacks, achieving significantly improved regret bounds beyond worst-case scenarios, thereby offering a more practical and eff…
Constrained Sampling with Primal-Dual Langevin Monte Carlo
·2374 words·12 mins
AI Theory
Optimization
🏢 University of Stuttgart
Constrained sampling made easy! Primal-Dual Langevin Monte Carlo efficiently samples from complex probability distributions while satisfying statistical constraints.
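A toy sketch of the primal-dual idea (ours, not the paper's code): Langevin steps on the augmented potential f(x) + λ·g(x), with projected dual ascent on λ to enforce the statistical constraint E[g(x)] ≤ 0.

```python
# Toy instance: base density N(0,1) (f(x) = x^2/2) constrained to have mean
# >= 1, i.e. g(x) = 1 - x with E[g(x)] <= 0. The stationary law is the tilted
# Gaussian N(1, 1), reached at dual variable lam = 1.
import numpy as np

rng = np.random.default_rng(0)
grad_f = lambda x: x           # f(x) = x^2 / 2
g      = lambda x: 1.0 - x     # constraint: E[g(x)] <= 0, i.e. E[x] >= 1
grad_g = lambda x: -1.0

x, lam = 0.0, 0.0
step, dual_step, samples = 1e-2, 1e-2, []
for t in range(100_000):
    noise = np.sqrt(2 * step) * rng.normal()
    x = x - step * (grad_f(x) + lam * grad_g(x)) + noise  # primal: Langevin
    lam = max(0.0, lam + dual_step * g(x))                # dual: projected ascent
    if t > 20_000:
        samples.append(x)

print(np.mean(samples))   # ~1.0: the constraint E[x] >= 1 is active
print(lam)                # ~1.0: the optimal tilt for a unit Gaussian
```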
Constrained Binary Decision Making
·1365 words·7 mins
AI Theory
Optimization
🏢 Czech Technical University in Prague
This paper presents a unified framework for solving binary statistical decision-making problems, enabling efficient derivation of optimal strategies for diverse applications like OOD detection and sel…
Conformal Inverse Optimization
·1650 words·8 mins
AI Theory
Optimization
🏢 University of Toronto
Conformal inverse optimization learns uncertainty sets for parameters in optimization models, then solves a robust optimization model for high-quality, human-aligned decisions.
Communication Efficient Distributed Training with Distributed Lion
·1698 words·8 mins
Machine Learning
Optimization
🏢 University of Texas at Austin
Distributed Lion: Training large AI models efficiently by communicating only binary or low-precision vectors between workers and a server, significantly reducing communication costs and maintaining co…
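A minimal sketch of the communication pattern (simplified from the paper's setup: no weight decay, synthetic quadratic losses): each worker takes a local Lion step, ships only the sign of its update, and the server aggregates by majority vote.

```python
# Distributed Lion, toy version: momentum stays local, only +/-1 vectors
# cross the network in either direction.
import numpy as np

def local_lion_direction(grad, m, beta1=0.9, beta2=0.99):
    """Lion step: the transmitted update is sign-compressed."""
    direction = np.sign(beta1 * m + (1 - beta1) * grad)
    m_new = beta2 * m + (1 - beta2) * grad
    return direction, m_new

rng = np.random.default_rng(0)
dim, n_workers, lr = 10, 8, 1e-2
theta = rng.normal(size=dim)
targets = rng.normal(scale=0.1, size=(n_workers, dim))  # per-worker optima
moments = np.zeros((n_workers, dim))

for step in range(2_000):
    directions = np.empty((n_workers, dim))
    for k in range(n_workers):
        grad = theta - targets[k]                       # local quadratic loss
        directions[k], moments[k] = local_lion_direction(grad, moments[k])
    vote = np.sign(directions.sum(axis=0))              # server: majority vote
    theta = theta - lr * vote                           # broadcast & apply

print(np.abs(theta - targets.mean(axis=0)).max())       # near the consensus optimum
```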
Communication Bounds for the Distributed Experts Problem
·2565 words·13 mins
AI Theory
Optimization
🏢 Carnegie Mellon University
This paper presents communication-efficient protocols for the distributed experts problem, achieving near-optimal regret with theoretical and empirical validation.
Collaboration! Towards Robust Neural Methods for Routing Problems
·2519 words·12 mins
AI Theory
Optimization
🏢 Eindhoven University of Technology
A novel Collaborative Neural Framework (CNF) enhances the robustness of neural vehicle routing methods against adversarial attacks by collaboratively training multiple models and intelligently distrib…
Coherence-free Entrywise Estimation of Eigenvectors in Low-rank Signal-plus-noise Matrix Models
·1535 words·8 mins
AI Theory
Optimization
🏢 University of Wisconsin-Madison
New method for eigenvector estimation achieves optimal rates without coherence dependence, improving low-rank matrix denoising and related tasks.
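A small sketch of the task being improved (plain eigendecomposition shown, not the paper's estimator): recover the top eigenvector of a low-rank signal observed through symmetric noise, and measure entrywise rather than ℓ2 error, the regime where coherence usually enters the bounds.

```python
# Rank-one signal-plus-noise model: estimate the leading eigenvector and
# compare l2 vs entrywise (l-infinity) recovery error.
import numpy as np

rng = np.random.default_rng(0)
n, signal_strength = 1_000, 30.0
u = rng.normal(size=n)
u /= np.linalg.norm(u)                         # ground-truth eigenvector
noise = rng.normal(size=(n, n)) / np.sqrt(n)
W = (noise + noise.T) / np.sqrt(2)             # symmetric noise, O(1) spectrum
A = signal_strength * np.outer(u, u) + W       # signal-plus-noise matrix

vals, vecs = np.linalg.eigh(A)
u_hat = vecs[:, -1] * np.sign(vecs[:, -1] @ u) # fix the sign ambiguity
print(np.linalg.norm(u_hat - u))               # l2 error: small
print(np.abs(u_hat - u).max())                 # entrywise error: much smaller
```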