
Optimization

Matrix Denoising with Doubly Heteroscedastic Noise: Fundamental Limits and Optimal Spectral Methods
·1782 words·9 mins
AI Generated AI Theory Optimization 🏢 Institute of Science and Technology Austria
Optimal matrix denoising with doubly heteroscedastic noise achieved!
Matching the Statistical Query Lower Bound for $k$-Sparse Parity Problems with Sign Stochastic Gradient Descent
·2323 words·11 mins
AI Generated AI Theory Optimization 🏢 UC Los Angeles
Sign Stochastic Gradient Descent (SGD) achieves optimal sample complexity for solving k-sparse parity problems, matching Statistical Query lower bounds.
Lower Bounds of Uniform Stability in Gradient-Based Bilevel Algorithms for Hyperparameter Optimization
·1822 words·9 mins
Machine Learning Optimization 🏢 Gaoling School of Artificial Intelligence, Renmin University of China
This paper establishes tight lower bounds for the uniform stability of gradient-based bilevel programming algorithms used for hyperparameter optimization, resolving a key open problem regarding the ti…
Lower Bounds and Optimal Algorithms for Non-Smooth Convex Decentralized Optimization over Time-Varying Networks
·1231 words·6 mins
Machine Learning Optimization 🏢 Yandex Research
First algorithms for non-smooth convex decentralized optimization over time-varying networks whose complexity matches the corresponding lower bounds, closing the gap between upper and lower complexity estimates.
Low-Rank Optimal Transport through Factor Relaxation with Latent Coupling
·2606 words·13 mins
Machine Learning Optimization 🏢 Princeton University
FRLC: a novel algorithm for low-rank optimal transport using latent coupling, enabling faster computation and better interpretability for diverse applications.
Low Degree Hardness for Broadcasting on Trees
·1545 words·8 mins
AI Theory Optimization 🏢 University of Missouri
Low-degree polynomials fail to efficiently infer roots in broadcasting tree problems below the Kesten-Stigum bound.
Loss Landscape Characterization of Neural Networks without Over-Parametrization
·2958 words·14 mins
AI Theory Optimization 🏢 University of Basel
Deep learning optimization is revolutionized by a new function class, enabling convergence guarantees without over-parameterization and accommodating saddle points.
Looks Too Good To Be True: An Information-Theoretic Analysis of Hallucinations in Generative Restoration Models
·2111 words·10 mins
AI Generated AI Theory Optimization 🏢 Verily AI (Google Life Sciences)
Generative image restoration models face a critical trade-off: higher perceptual quality often leads to increased hallucinations (unreliable predictions).
Lookback Prophet Inequalities
·467 words·3 mins
AI Theory Optimization 🏢 ENSAE, Ecole Polytechnique
This paper enhances prophet inequalities by allowing lookback, improving competitive ratios and providing algorithms for diverse observation orders, thereby bridging theory and real-world online selec…
Log-concave Sampling from a Convex Body with a Barrier: a Robust and Unified Dikin Walk
·1308 words·7 mins
AI Theory Optimization 🏢 New York University
This paper introduces robust Dikin walks for log-concave sampling over convex bodies with barriers, achieving faster mixing times and lower per-iteration costs than existing methods, particularly in high-dimensional settings.
Linear Transformers are Versatile In-Context Learners
·1783 words·9 mins
Machine Learning Optimization 🏢 Google Research
Linear transformers surprisingly learn intricate optimization algorithms, even surpassing baselines on noisy regression problems, showcasing their unexpected learning capabilities.
Light Unbalanced Optimal Transport
·2953 words·14 mins
Machine Learning Optimization 🏢 Skolkovo Institute of Science and Technology
LightUnbalancedOptimalTransport: A fast, theoretically-justified solver for continuous unbalanced optimal transport problems, enabling efficient analysis of large datasets with imbalanced classes.
Learning-Augmented Priority Queues
·2516 words·12 mins
AI Theory Optimization 🏢 ENSAE, Ecole Polytechnique
This paper introduces learning-augmented priority queues that use predictions to speed up operations, achieving significant performance gains over traditional comparison-based methods.
Learning-Augmented Dynamic Submodular Maximization
·387 words·2 mins
AI Theory Optimization 🏢 Indian Institute of Technology Bombay
Leveraging predictions, this paper presents a novel algorithm for dynamic submodular maximization achieving significantly faster update times (O(poly(log n, log w, log k)) amortized) compared to exist…
Learning-Augmented Approximation Algorithms for Maximum Cut and Related Problems
·249 words·2 mins
AI Theory Optimization 🏢 Google Research
This paper shows how noisy predictions about optimal solutions can improve approximation algorithms for NP-hard problems like MAX-CUT, exceeding classical hardness bounds.
Learning-Augmented Algorithms with Explicit Predictors
·3004 words·15 mins
AI Generated AI Theory Optimization 🏢 Bocconi University
This paper introduces a novel framework for learning-augmented algorithms that improves performance by integrating the learning process into the algorithm itself, rather than treating the predictor as…
Learning-Augmented Algorithms for the Bahncard Problem
·3280 words·16 mins
AI Theory Optimization 🏢 Zhejiang University
PFSUM, a novel learning-augmented algorithm, leverages short-term predictions to achieve superior performance in solving the Bahncard problem, outperforming existing methods with improved consistency …
Learning with Fitzpatrick Losses
·2495 words·12 mins
AI Generated AI Theory Optimization 🏢 Ecole Des Ponts
Tighter losses than Fenchel-Young losses are presented, refining Fenchel-Young inequalities using the Fitzpatrick function to improve model accuracy while preserving prediction link functions.
Learning to Solve Quadratic Unconstrained Binary Optimization in a Classification Way
·3329 words·16 mins
AI Theory Optimization 🏢 National University of Defence Technology
Researchers developed the Value Classification Model (VCM), a neural solver that rapidly solves quadratic unconstrained binary optimization (QUBO) problems by directly generating solutions in a classif…
Learning to Mitigate Externalities: the Coase Theorem with Hindsight Rationality
·314 words·2 mins
AI Theory Optimization 🏢 University of California, Berkeley
Players learn to resolve externalities efficiently even without perfect information, maximizing social welfare by combining bargaining with online learning.