Optimization

Freya PAGE: First Optimal Time Complexity for Large-Scale Nonconvex Finite-Sum Optimization with Heterogeneous Asynchronous Computations
1988 words · 10 mins
AI Generated Machine Learning Optimization 🏢 KAUST AIRI
Freya PAGE achieves optimal time complexity for large-scale nonconvex finite-sum optimization using asynchronous and heterogeneous computations, overcoming limitations of prior methods.
First-Order Methods for Linearly Constrained Bilevel Optimization
392 words · 2 mins
AI Theory Optimization 🏢 Weizmann Institute of Science
First-order methods conquer linearly constrained bilevel optimization, achieving near-optimal convergence rates and enhancing high-dimensional applicability.
FERERO: A Flexible Framework for Preference-Guided Multi-Objective Learning
2495 words · 12 mins
AI Theory Optimization 🏢 Rensselaer Polytechnic Institute
FERERO, a novel framework, tackles multi-objective learning by efficiently finding preference-guided Pareto solutions using flexible preference modeling and convergent algorithms.
Feedback control guides credit assignment in recurrent neural networks
1962 words · 10 mins
AI Theory Optimization 🏢 Imperial College London
Brain-inspired recurrent neural networks learn efficiently by using feedback control to approximate optimal gradients, enabling rapid movement corrections and efficient adaptation to persistent errors…
Faster Accelerated First-order Methods for Convex Optimization with Strongly Convex Function Constraints
1492 words · 8 mins
AI Theory Optimization 🏢 Shanghai University of Finance and Economics
Faster primal-dual algorithms achieve order-optimal complexity for convex optimization with strongly convex constraints, improving convergence rates and solving large-scale problems efficiently.
Fast Tree-Field Integrators: From Low Displacement Rank to Topological Transformers
3010 words · 15 mins
AI Generated AI Theory Optimization 🏢 Google DeepMind
Fast Tree-Field Integrators (FTFIs) revolutionize graph processing by enabling polylog-linear time computation for integrating tensor fields on trees, providing significant speedups for various machin…
Fast T2T: Optimization Consistency Speeds Up Diffusion-Based Training-to-Testing Solving for Combinatorial Optimization
2382 words · 12 mins
AI Theory Optimization 🏢 Shanghai Jiao Tong University
Fast T2T: Optimization Consistency Boosts Diffusion-Based Combinatorial Optimization!
Fast Rates in Stochastic Online Convex Optimization by Exploiting the Curvature of Feasible Sets
1343 words · 7 mins
AI Theory Optimization 🏢 University of Tokyo
This paper introduces a novel approach for fast rates in online convex optimization by exploiting the curvature of feasible sets, achieving logarithmic regret bounds under specific conditions.
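As background (a standard definition from the literature on strongly convex sets, not necessarily the paper's exact assumption), this is the curvature notion such results typically exploit:

```latex
% Background definition; the paper may use a refined variant.
% A closed convex set $K \subseteq \mathbb{R}^d$ is $\alpha$-strongly
% convex if for all $x, y \in K$, $\gamma \in [0,1]$, and every unit
% vector $z$,
\[
  \gamma x + (1-\gamma)\,y
  + \frac{\alpha}{2}\,\gamma(1-\gamma)\,\lVert x - y \rVert_2^{2}\, z
  \;\in\; K .
\]
```

Intuitively, a curved boundary pins down the minimizer of a linear loss over the set, which is what can turn worst-case O(√T) regret into logarithmic rates under suitable conditions.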
Fast Last-Iterate Convergence of Learning in Games Requires Forgetful Algorithms
2007 words · 10 mins
AI Generated AI Theory Optimization 🏢 Yale
Forgetful algorithms are essential for fast last-iterate convergence of learning in games; otherwise, even popular methods like OMWU fail.
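A minimal sketch of the OMWU update mentioned above, in plain NumPy; the step size `eta` and the simplex setup are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def omwu_step(x, loss_now, loss_prev, eta=0.1):
    """One optimistic multiplicative-weights (OMWU) step on the simplex.

    The optimistic correction uses 2*loss_now - loss_prev as the
    predicted next loss. Note the update is multiplicative: the iterate
    accumulates the entire loss history through a running product --
    the 'non-forgetful' behavior the paper identifies as an obstacle
    to fast last-iterate convergence.
    """
    w = x * np.exp(-eta * (2.0 * loss_now - loss_prev))
    return w / w.sum()
```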
Fast Iterative Hard Thresholding Methods with Pruning Gradient Computations
1788 words · 9 mins
AI Generated Machine Learning Optimization 🏢 NTT Computer and Data Science Laboratories
Accelerate iterative hard thresholding (IHT) by up to 73x by safely pruning unnecessary gradient computations, with no loss of accuracy.
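To make the baseline concrete, here is a minimal plain-IHT sketch in NumPy; the paper's pruning rule for safely skipping gradient coordinates is its contribution and is not reproduced here:

```python
import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x, zero the rest."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

def iht(A, y, k, eta=None, n_iters=200):
    """Plain IHT for min ||Ax - y||^2 subject to ||x||_0 <= k.

    This baseline computes the full gradient every iteration; the
    paper's speedup comes from pruning gradient computations that
    cannot survive the thresholding step.
    """
    if eta is None:
        eta = 1.0 / np.linalg.norm(A, 2) ** 2  # step from spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)          # full least-squares gradient
        x = hard_threshold(x - eta * grad, k)
    return x
```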
Fast Channel Simulation via Error-Correcting Codes
2814 words · 14 mins
AI Generated AI Theory Optimization 🏢 Cornell University
Polar codes revolutionize channel simulation, offering scalable, high-performance schemes that significantly outperform existing methods.
Exploring Jacobian Inexactness in Second-Order Methods for Variational Inequalities: Lower Bounds, Optimal Algorithms and Quasi-Newton Approximations
349 words · 2 mins
AI Theory Optimization 🏢 Mohamed Bin Zayed University of Artificial Intelligence
VIJI, a novel second-order algorithm, achieves optimal convergence rates for variational inequalities even with inexact Jacobian information, bridging the gap between theory and practice in machine le…
Expectile Regularization for Fast and Accurate Training of Neural Optimal Transport
2426 words · 12 mins
Optimization 🏢 AIRI
ENOT, a new Neural Optimal Transport training method, achieves 3x quality and 10x speed improvements by using expectile regularization to stabilize the learning process.
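For readers unfamiliar with expectiles, a minimal sketch of the generic expectile loss in NumPy; how ENOT wires this penalty into the neural optimal transport objective is specific to the paper and not reproduced here:

```python
import numpy as np

def expectile_loss(residual, tau=0.9):
    """Expectile (asymmetric squared) loss: L_tau(u) = |tau - 1[u<0]| * u^2.

    tau = 0.5 recovers ordinary least squares; tau > 0.5 penalizes
    positive residuals more heavily, biasing the fit toward an upper
    expectile of the target distribution.
    """
    weight = np.where(residual < 0, 1.0 - tau, tau)
    return np.mean(weight * residual ** 2)
```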
Exact, Tractable Gauss-Newton Optimization in Deep Reversible Architectures Reveal Poor Generalization
2136 words · 11 mins
AI Theory Optimization 🏢 MediaTek Research
Exact Gauss-Newton optimization in deep reversible networks surprisingly reveals poor generalization, despite faster training, challenging existing deep learning optimization theories.
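For orientation, a hedged sketch of the generic Gauss-Newton step that the paper computes exactly in reversible networks; the notation is generic, and the paper's reversible-architecture derivation is not reproduced:

```latex
% Generic Gauss-Newton step (background sketch only).
% For residuals $r(\theta)$ with Jacobian $J = \partial r / \partial \theta$:
\[
  \theta_{t+1} \;=\; \theta_t \;-\; \eta\,\bigl(J^{\top} J\bigr)^{-1} J^{\top} r(\theta_t),
\]
% where $J^{\top} J$ is the Gauss-Newton approximation to the Hessian.
```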
Exact Gradients for Stochastic Spiking Neural Networks Driven by Rough Signals
528 words · 3 mins
AI Generated AI Theory Optimization 🏢 University of Copenhagen
New framework uses rough path theory to enable gradient-based training of SSNNs driven by rough signals, allowing for noise in spike timing and network dynamics.
Estimating Generalization Performance Along the Trajectory of Proximal SGD in Robust Regression
1899 words · 9 mins
AI Theory Optimization 🏢 Rutgers University
New consistent estimators precisely track the generalization error along proximal SGD's iterates in robust regression, enabling the choice of a stopping iteration that minimizes error.
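A minimal sketch of the proximal SGD iteration studied along such trajectories, assuming a Huber loss with an l1 penalty; the loss, penalty, and step size here are illustrative assumptions, not the paper's exact setting:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_sgd_step(x, A, y, lam, eta, batch):
    """One proximal SGD step for Huber regression with an l1 penalty.

    Illustrative only: the paper's contribution is estimating the
    generalization error along these iterates, not the iteration itself.
    """
    r = A[batch] @ x - y[batch]
    # Huber gradient: the linear tail clips the influence of outliers
    g = A[batch].T @ np.clip(r, -1.0, 1.0) / len(batch)
    return soft_threshold(x - eta * g, eta * lam)
```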
Error Analysis of Spherically Constrained Least Squares Reformulation in Solving the Stackelberg Prediction Game
344 words · 2 mins
AI Generated Machine Learning Optimization 🏢 School of Computer Science, Wuhan University
This research paper presents a novel theoretical error analysis for the spherically constrained least squares (SCLS) method used to solve Stackelberg prediction games (SPGs). SPGs model strategic int…
Entrywise error bounds for low-rank approximations of kernel matrices
1461 words · 7 mins
AI Theory Optimization 🏢 Imperial College London
This paper provides novel entrywise error bounds for low-rank approximations of kernel matrices, showing how many data points are needed for statistically consistent results.
Entropy testing and its application to testing Bayesian networks
328 words · 2 mins
AI Theory Optimization 🏢 University of Sydney
This paper presents near-optimal algorithms for entropy identity testing, significantly improving Bayesian network testing efficiency.
Energy-Guided Continuous Entropic Barycenter Estimation for General Costs
2659 words · 13 mins
Optimization 🏢 Skolkovo Institute of Science and Technology
New algorithm approximates continuous Entropic Optimal Transport (EOT) barycenters for any cost function, offering quality bounds and seamless EBM integration.