
Optimization

Inversion-based Latent Bayesian Optimization
·4093 words·20 mins
AI Generated Machine Learning Optimization 🏢 Korea University
InvBO: Inversion-based Latent Bayesian Optimization solves the misalignment problem in LBO, boosting optimization accuracy and efficiency.
Invariant subspaces and PCA in nearly matrix multiplication time
·336 words·2 mins
AI Theory Optimization 🏢 IBM Research
Generalized eigenvalue problems get solved in nearly matrix multiplication time, providing new, faster PCA algorithms!
Information-theoretic Limits of Online Classification with Noisy Labels
·481 words·3 mins
AI Theory Optimization 🏢 CSOI, Purdue University
This paper unveils the information-theoretic limits of online classification with noisy labels, showing that the minimax risk is tightly characterized by the Hellinger gap of the noisy label distributions.
Inexact Augmented Lagrangian Methods for Conic Optimization: Quadratic Growth and Linear Convergence
·1589 words·8 mins
AI Theory Optimization 🏢 UC San Diego
This paper proves that inexact ALMs applied to SDPs achieve linear convergence for both primal and dual iterates, under only strict complementarity and a bounded solution set.
Incorporating Surrogate Gradient Norm to Improve Offline Optimization Techniques
·2087 words·10 mins
AI Theory Optimization 🏢 Washington State University
IGNITE improves offline optimization by incorporating the surrogate's gradient norm to reduce model sharpness, boosting performance by up to 9.6%.
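
The teaser above only names the mechanism, so here is a minimal, hypothetical PyTorch sketch of the general idea: fit an offline surrogate while penalizing the norm of the loss gradient with respect to the surrogate's weights, a common proxy for model sharpness. This is an illustrative sketch under assumed choices, not the authors' IGNITE implementation; the penalty weight `lam`, the toy model, and the random data are made up.

```python
# Illustrative sketch only (not the paper's code): surrogate training with a
# weight-gradient-norm (sharpness) penalty added to the fitting loss.
import torch
import torch.nn as nn

def sharpness_regularized_step(model, optimizer, x, y, lam=0.1):
    """One step of: MSE fit + lam * ||grad_theta MSE||_2."""
    optimizer.zero_grad()
    mse = nn.functional.mse_loss(model(x), y)
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(mse, params, create_graph=True)
    grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads))
    loss = mse + lam * grad_norm
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage on random data standing in for an offline design benchmark.
model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(128, 8), torch.randn(128, 1)
for _ in range(10):
    sharpness_regularized_step(model, opt, x, y)
```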
Improved Sample Complexity for Multiclass PAC Learning
·258 words·2 mins
Machine Learning Optimization 🏢 Purdue University
This paper significantly improves our understanding of multiclass PAC learning by reducing the sample complexity gap and proposing two novel approaches to fully resolve the optimal sample complexity.
Improved Regret for Bandit Convex Optimization with Delayed Feedback
·324 words·2 mins
AI Theory Optimization 🏢 Zhejiang University
A novel algorithm, D-FTBL, achieves improved regret bounds for bandit convex optimization with delayed feedback, tightly matching existing lower bounds in worst-case scenarios.
Improved Guarantees for Fully Dynamic $k$-Center Clustering with Outliers in General Metric Spaces
·1694 words·8 mins
AI Theory Optimization 🏢 Eindhoven University of Technology
A novel fully dynamic algorithm achieves a (4+Ξ΅)-approximate solution for the k-center clustering problem with outliers in general metric spaces, boasting an efficient update time.
Improved Analysis for Bandit Learning in Matching Markets
·707 words·4 mins
AI Generated AI Theory Optimization 🏢 Shanghai Jiao Tong University
A new algorithm, AOGS, achieves significantly lower regret in two-sided matching markets by carefully interleaving exploration and exploitation, thus removing the dependence on the number of arms (K) in the regret bound.
Improved Algorithms for Contextual Dynamic Pricing
·515 words·3 mins
AI Generated AI Theory Optimization 🏢 CREST, ENSAE
New algorithms achieve optimal regret bounds for contextual dynamic pricing under minimal assumptions, improving revenue management with better price adjustments.
Implicit Regularization of Decentralized Gradient Descent for Sparse Regression
·1887 words·9 mins
AI Generated AI Theory Optimization 🏢 Pennsylvania State University
Decentralized Gradient Descent achieves statistically optimal sparse model learning via implicit regularization, even with communication-efficient truncation.
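
To make the "communication-efficient truncation" idea concrete, here is a rough NumPy sketch of decentralized gradient descent in which each agent gossips with its neighbors through a mixing matrix `W` and hard-truncates its iterate to the `s` largest-magnitude coordinates. The ring network, step size, and truncation rule are assumptions for illustration, not the paper's exact algorithm or tuning.

```python
# Rough sketch (assumed setup, not the paper's exact algorithm): each of m
# agents holds local data (X_i, y_i); every round they average neighbors'
# iterates via a doubly stochastic mixing matrix W, take a local gradient
# step, and keep only the s largest-magnitude coordinates.
import numpy as np

def truncate(v, s):
    """Keep the s largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-s:]
    out[idx] = v[idx]
    return out

def decentralized_gd(Xs, ys, W, s, lr=0.01, rounds=200):
    m, d = len(Xs), Xs[0].shape[1]
    B = np.zeros((m, d))                      # one iterate per agent
    for _ in range(rounds):
        B = W @ B                             # gossip averaging
        for i in range(m):
            grad = Xs[i].T @ (Xs[i] @ B[i] - ys[i]) / len(ys[i])
            B[i] = truncate(B[i] - lr * grad, s)
    return B.mean(axis=0)

# Toy usage: 4 agents on a ring, 20-dimensional model with 3 nonzeros.
rng = np.random.default_rng(0)
beta = np.zeros(20); beta[:3] = 1.0
Xs = [rng.normal(size=(50, 20)) for _ in range(4)]
ys = [X @ beta + 0.1 * rng.normal(size=50) for X in Xs]
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])
print(decentralized_gd(Xs, ys, W, s=3)[:5])
```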
Implicit Bias of Mirror Flow on Separable Data
·1523 words·8 mins
AI Theory Optimization 🏢 EPFL
Mirror descent’s implicit bias on separable data is formally characterized, revealing convergence towards a maximum margin classifier determined by the potential’s ‘horizon function’.
Identifying Equivalent Training Dynamics
·2147 words·11 mins
AI Theory Optimization 🏢 University of California, Santa Barbara
New framework uses Koopman operator theory to identify equivalent training dynamics in deep neural networks, enabling quantitative comparison of different architectures and optimization methods.
Identification of Analytic Nonlinear Dynamical Systems with Non-asymptotic Guarantees
·1307 words·7 mins
AI Theory Optimization 🏢 Coordinated Science Laboratory
This paper proves that non-active exploration suffices for identifying linearly parameterized nonlinear systems with real-analytic features, providing non-asymptotic guarantees for least-squares and set-membership estimators.
How to Boost Any Loss Function
·3432 words·17 mins
AI Generated AI Theory Optimization 🏢 Google Research
This paper proves that boosting, traditionally limited by assumptions about the loss function, can efficiently optimize any loss function, regardless of its differentiability or convexity.
How does PDE order affect the convergence of PINNs?
·2124 words·10 mins
AI Generated AI Theory Optimization 🏢 University of California, Los Angeles
Higher-order PDEs hinder Physics-Informed Neural Network (PINN) convergence; this paper provides a theoretical explanation and proposes variable splitting for improved accuracy.
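
As a generic illustration of what variable splitting does (not necessarily the paper's exact formulation): a fourth-order equation can be split into coupled second-order residuals, so that each term of the PINN loss only requires lower-order derivatives of a network output.

```latex
% Generic variable-splitting example (illustrative, not the paper's formulation):
% the fourth-order residual is replaced by two second-order residuals.
\begin{align}
  \text{original:}\quad & u_{xxxx}(x) = f(x), \\
  \text{split:}\quad    & v(x) = u_{xx}(x), \qquad v_{xx}(x) = f(x), \\
  \text{PINN loss:}\quad & \mathcal{L}(\theta)
      = \big\| v_\theta - \partial_{xx} u_\theta \big\|_{L^2}^2
      + \big\| \partial_{xx} v_\theta - f \big\|_{L^2}^2
      + \text{(boundary terms)}.
\end{align}
```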
How does Gradient Descent Learn Features – A Local Analysis for Regularized Two-Layer Neural Networks
·1326 words·7 mins
AI Theory Optimization 🏢 University of Washington
Neural networks learn features effectively through gradient descent, not just at the beginning, but also at the end of training, even with carefully regularized objectives.
How Does Black-Box Impact the Learning Guarantee of Stochastic Compositional Optimization?
·1242 words·6 mins
AI Theory Optimization 🏢 Huazhong Agricultural University
This study reveals how black-box settings affect the learning guarantees of stochastic compositional optimization, offering sharper generalization bounds and novel learning guarantees for derivative-free methods.
High-probability complexity bounds for stochastic non-convex minimax optimization
·1500 words·8 mins
AI Theory Optimization 🏢 Université Côte D'Azur
First high-probability complexity guarantees for solving stochastic nonconvex minimax problems using a single-loop method are established.
High-dimensional (Group) Adversarial Training in Linear Regression
·1556 words·8 mins
AI Generated Machine Learning Optimization 🏢 Georgia Institute of Technology
Adversarial training achieves minimax-optimal prediction error in high-dimensional linear regression under l∞-perturbation, improving upon existing methods.
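
For background on the objective this result concerns (a standard identity, not a contribution of the paper): under an ℓ∞-bounded input perturbation, the per-sample adversarial squared loss in linear regression has a closed form coupling the absolute residual with an ℓ1 penalty on the coefficients, which is why adversarial training behaves like a square-root-lasso-type estimator.

```latex
% Standard reformulation of the per-sample adversarial squared loss under an
% l_infty-bounded input perturbation (background identity, not paper-specific).
\max_{\|\delta\|_\infty \le \epsilon}
  \bigl(y_i - (x_i + \delta)^\top \beta\bigr)^2
  = \bigl(|y_i - x_i^\top \beta| + \epsilon \|\beta\|_1\bigr)^2 .
```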