
Optimization

Near-Optimal Streaming Heavy-Tailed Statistical Estimation with Clipped SGD
·397 words·2 mins
AI Generated AI Theory Optimization 🏒 Stanford University
Clipped SGD achieves near-optimal sub-Gaussian rates for high-dimensional heavy-tailed statistical estimation in streaming settings, improving upon existing state-of-the-art results.
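The clipping step itself is easy to picture. Below is a minimal sketch of norm-clipped SGD for streaming mean estimation under heavy-tailed noise; the step-size schedule, clipping threshold, and Student-t data are illustrative assumptions, not the paper's exact algorithm or rates.

```python
import numpy as np

def clipped_sgd_mean(stream, dim, clip=1.0, lr=0.5):
    """Streaming mean estimation with norm-clipped SGD updates.

    Generic illustration of gradient clipping; the step-size schedule and
    clipping threshold below are assumptions, not the paper's choices.
    """
    theta = np.zeros(dim)
    for t, x in enumerate(stream, start=1):
        grad = theta - x                      # gradient of 0.5 * ||theta - x||^2
        norm = np.linalg.norm(grad)
        if norm > clip:
            grad *= clip / norm               # clip the gradient to the ball of radius `clip`
        theta -= (lr / np.sqrt(t)) * grad     # decaying step size (assumed schedule)
    return theta

# Heavy-tailed stream: 5-dimensional Student-t noise around a true mean of zero.
rng = np.random.default_rng(0)
stream = (rng.standard_t(df=2.5, size=5) for _ in range(20_000))
print(clipped_sgd_mean(stream, dim=5))
```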
Near-Optimal Distributed Minimax Optimization under the Second-Order Similarity
·1858 words·9 mins
AI Generated Machine Learning Optimization 🏒 School of Data Science, Fudan University
SVOGS achieves near-optimal distributed minimax optimization under second-order similarity, balancing communication and local computation while attaining near-optimal complexity bounds.
Navigable Graphs for High-Dimensional Nearest Neighbor Search: Constructions and Limits
·495 words·3 mins
AI Generated AI Theory Optimization 🏒 New York University
Sparse navigable graphs enable efficient nearest neighbor search, but their construction and limits in high dimensions remain unclear. This paper presents an efficient method to construct navigable graphs.
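For context on what "navigable" means here: greedy routing should reach a nearest neighbor by always hopping to the out-neighbor closest to the query. A minimal sketch of that routing loop follows; the adjacency structure, stopping rule, and toy data are generic assumptions, and none of the paper's constructions or lower bounds are reproduced.

```python
import numpy as np

def greedy_route(query, start, neighbors, points):
    """Greedy routing on a navigable graph: repeatedly hop to the out-neighbor
    closest to the query, stopping when no neighbor improves the distance.

    `neighbors[v]` is the out-adjacency list of node v and `points[v]` its
    vector; how the sparse graph is built (the paper's topic) is not shown.
    """
    def dist(u):
        return np.linalg.norm(points[u] - query)

    current = start
    while True:
        better = [u for u in neighbors[current] if dist(u) < dist(current)]
        if not better:
            return current
        current = min(better, key=dist)

# Tiny example: 4 random points connected in a cycle (purely illustrative).
rng = np.random.default_rng(1)
pts = rng.normal(size=(4, 2))
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(greedy_route(query=pts[2] + 0.01, start=0, neighbors=adj, points=pts))
```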
Nature-Inspired Local Propagation
·1601 words·8 mins
AI Theory Optimization 🏒 IMT School for Advanced Studies
Inspired by nature, researchers introduce a novel spatiotemporal local algorithm for machine learning that outperforms backpropagation in online learning scenarios with limited data or long video streams.
Multiclass Transductive Online Learning
·270 words·2 mins
AI Theory Optimization 🏒 Purdue University
Unbounded label spaces conquered! New algorithm achieves optimal mistake bounds in multiclass transductive online learning.
Multi-Winner Reconfiguration
·1937 words·10 mins
AI Theory Optimization 🏒 TU Wien
This paper introduces a novel model for multi-winner reconfiguration, analyzing the computational complexity of transitioning between committees using four approval-based voting rules, demonstrating b…
Multi-Stage Predict+Optimize for (Mixed Integer) Linear Programs
·2926 words·14 mins
AI Generated Machine Learning Optimization 🏒 Chinese University of Hong Kong
Multi-Stage Predict+Optimize tackles optimization problems where parameters are revealed sequentially, improving predictions and decisions through stage-wise updates.
Multi-Label Learning with Stronger Consistency Guarantees
·239 words·2 mins
Machine Learning Optimization 🏒 Courant Institute
Novel surrogate losses with label-independent H-consistency bounds enable stronger guarantees for multi-label learning.
Motif-oriented influence maximization for viral marketing in large-scale social networks
·1750 words·9 mins
AI Theory Optimization 🏒 Shenzhen University
Motif-oriented influence maximization tackles viral marketing’s challenge of reaching groups, providing a greedy algorithm with a guaranteed approximation ratio and near-linear time complexity.
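As a reference point for the guarantee mentioned above, here is the generic greedy loop for a monotone submodular spread function; the motif-oriented objective and the near-linear-time estimation are abstracted into a hypothetical `influence` oracle, so this is only the textbook baseline, not the paper's method.

```python
def greedy_seed_selection(candidates, k, influence):
    """Generic greedy loop for a monotone submodular spread function.

    `influence(S)` is a hypothetical oracle for the (motif-oriented) spread of
    seed set S; the paper's fast estimation scheme is not reproduced here.
    """
    seeds = frozenset()
    for _ in range(k):
        base = influence(seeds)
        best = max((v for v in candidates if v not in seeds),
                   key=lambda v: influence(seeds | {v}) - base)
        seeds = seeds | {best}
    return set(seeds)

# Toy usage with set coverage as the spread function (coverage is submodular).
groups = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}}
cover = lambda S: len(set().union(*(groups[v] for v in S))) if S else 0
print(greedy_seed_selection(candidates=set(groups), k=2, influence=cover))
```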
Mixed Dynamics In Linear Networks: Unifying the Lazy and Active Regimes
·521 words·3 mins
AI Generated AI Theory Optimization 🏒 Courant Institute
A new formula unifies lazy and active neural network training regimes, revealing a mixed regime that combines their strengths for faster convergence and low-rank bias.
Mirror and Preconditioned Gradient Descent in Wasserstein Space
·1610 words·8 mins
AI Theory Optimization 🏒 CREST, ENSAE, IP Paris
This paper presents novel mirror and preconditioned gradient descent algorithms for optimizing functionals over Wasserstein space, offering improved convergence and efficiency for various machine learning applications.
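A finite-dimensional analogue may help fix intuition: mirror descent with the entropy mirror map (exponentiated gradient) on the probability simplex. The Wasserstein-space version in the paper replaces the simplex with a space of measures and is not reproduced here.

```python
import numpy as np

def mirror_descent_simplex(grad_fn, x0, steps=200, lr=0.1):
    """Mirror descent with the entropy mirror map (exponentiated gradient)
    on the probability simplex -- a finite-dimensional analogue only."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x * np.exp(-lr * grad_fn(x))   # mirror step in the dual (log) coordinates
        x /= x.sum()                       # Bregman projection back onto the simplex
    return x

# Example: minimize the linear functional <c, x>; the iterate concentrates on argmin(c).
c = np.array([0.3, 0.1, 0.5])
print(mirror_descent_simplex(lambda x: c, x0=np.ones(3) / 3))
```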
Minimum Entropy Coupling with Bottleneck
·2823 words·14 mins
AI Theory Optimization 🏒 University of Toronto
A novel lossy compression framework, Minimum Entropy Coupling with Bottleneck (MEC-B), extends existing methods by integrating a bottleneck for controlled stochasticity, enhancing performance in scen…
Minimizing UCB: a Better Local Search Strategy in Local Bayesian Optimization
·1728 words·9 mins
Machine Learning Optimization 🏒 Academy of Mathematics and Systems Science, Chinese Academy of Sciences
MinUCB and LA-MinUCB, novel local Bayesian optimization algorithms, replace gradient descent with UCB minimization for efficient, theoretically sound local search.
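Taking the title at face value, one local step might look like the sketch below: sample candidates near the current point and move to the minimizer of the GP upper confidence bound mu(x) + beta * sigma(x). The uniform candidate sampling, fixed radius, and beta value are assumptions, not the paper's exact acquisition or trust-region rules.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def min_ucb_step(gp, center, radius, beta=2.0, n_candidates=512, rng=None):
    """One local step: sample candidates around `center` and move to the point
    minimizing the GP upper confidence bound mu(x) + beta * sigma(x).

    `gp` is assumed to be a fitted GaussianProcessRegressor; the sampling
    scheme, radius, and beta below are illustrative assumptions.
    """
    rng = np.random.default_rng() if rng is None else rng
    cand = center + radius * rng.uniform(-1.0, 1.0, size=(n_candidates, center.shape[0]))
    mu, sigma = gp.predict(cand, return_std=True)
    return cand[np.argmin(mu + beta * sigma)]

# Usage sketch: fit a GP on observed (X, y), then step from the incumbent.
# gp = GaussianProcessRegressor().fit(X, y)
# x_next = min_ucb_step(gp, center=X[np.argmin(y)], radius=0.2)
```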
MILP-StuDio: MILP Instance Generation via Block Structure Decomposition
·3596 words·17 mins
AI Theory Optimization 🏒 University of Science and Technology of China
MILP-StuDio generates high-quality mixed-integer linear programming instances by preserving crucial block structures, significantly improving learning-based solver performance.
MG-Net: Learn to Customize QAOA with Circuit Depth Awareness
·2515 words·12 mins
AI Theory Optimization 🏒 School of Computer Science, Faculty of Engineering, University of Sydney
MG-Net dynamically designs optimal mixer Hamiltonians for QAOA, overcoming the limitation of fixed-depth quantum circuits and significantly improving approximation ratios.
Metric Transforms and Low Rank Representations of Kernels for Fast Attention
·275 words·2 mins
AI Theory Optimization 🏒 University of California, Berkeley
Researchers unveil novel linear-algebraic tools revealing the limits of fast attention, classifying positive definite kernels for Manhattan distance, and fully characterizing metric transforms for Manhattan distance.
Mechanism design augmented with output advice
·374 words·2 mins
AI Theory Optimization 🏒 Aristotle University of Thessaloniki
Mechanism design enhanced with output advice improves approximation guarantees by using imperfect predictions of the output, not agent types, offering robust, practical solutions.
Mean-Field Langevin Dynamics for Signed Measures via a Bilevel Approach
·350 words·2 mins
AI Theory Optimization 🏒 Γ‰cole Polytechnique FΓ©dΓ©rale De Lausanne
This paper presents a novel bilevel approach to extend mean-field Langevin dynamics to solve convex optimization problems over signed measures, achieving stronger guarantees and faster convergence rates.
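For background, plain (unsigned) mean-field Langevin dynamics on a particle approximation looks like the following; the paper's bilevel construction for signed measures is the novelty and is not shown here. The first-variation gradient `grad_v` and the toy potential are hypothetical stand-ins.

```python
import numpy as np

def mean_field_langevin(particles, grad_v, steps=500, lr=0.01, temperature=0.1, rng=None):
    """Plain (unsigned) mean-field Langevin dynamics on a particle system.

    `grad_v(x, X)` is a hypothetical callable returning the first-variation
    gradient at particle x given the current empirical measure (rows of X);
    the paper's bilevel extension to signed measures is not reproduced.
    """
    rng = np.random.default_rng() if rng is None else rng
    X = np.array(particles, dtype=float)
    for _ in range(steps):
        drift = np.stack([grad_v(x, X) for x in X])       # drift from the first variation
        X = X - lr * drift + np.sqrt(2.0 * lr * temperature) * rng.standard_normal(X.shape)
    return X

# Toy usage: a quadratic potential plus a weak pull toward the particle mean.
grad = lambda x, X: x + 0.1 * (x - X.mean(axis=0))
print(mean_field_langevin(np.random.default_rng(2).normal(size=(8, 2)), grad).round(2))
```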
Mean-Field Analysis for Learning Subspace-Sparse Polynomials with Gaussian Input
·272 words·2 mins
AI Theory Optimization 🏒 MIT
Researchers establish basis-free conditions for SGD learnability in two-layer neural networks learning subspace-sparse polynomials with Gaussian input, offering insights into training dynamics.
Maximizing utility in multi-agent environments by anticipating the behavior of other learners
·1732 words·9 mins
AI Theory Optimization 🏒 MIT
Optimizing against learning agents: New algorithms and computational limits revealed!