
Optimization

Harnessing Multiple Correlated Networks for Exact Community Recovery
2191 words · 11 mins
AI Generated AI Theory Optimization 🏢 Northwestern University
Unlocking latent community structures from multiple correlated networks is now possible with greater precision, as this research pinpoints the information-theoretic threshold for exact recovery, even …
HardCore Generation: Generating Hard UNSAT Problems for Data Augmentation
2159 words · 11 mins
AI Theory Optimization 🏢 McGill University
HardCore: Fast generation of hard, realistic UNSAT problems for improved SAT solver runtime prediction.
Guided Trajectory Generation with Diffusion Models for Offline Model-based Optimization
3001 words · 15 mins
Machine Learning Optimization 🏢 Korea Advanced Institute of Science and Technology (KAIST)
GTG, a novel conditional generative modeling approach, leverages diffusion models to generate high-scoring design trajectories for offline model-based optimization, outperforming existing methods on b…
Gradient-Variation Online Learning under Generalized Smoothness
271 words · 2 mins
AI Generated AI Theory Optimization 🏢 National Key Laboratory for Novel Software Technology, Nanjing University, China
This paper presents a novel optimistic mirror descent algorithm achieving optimal gradient-variation regret under generalized smoothness, applicable across convex, strongly convex functions, and fast-…
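For intuition, optimistic mirror descent plays each round using a cheap prediction of the coming gradient, so regret scales with the gradient variation rather than the horizon alone. Below is a minimal Euclidean sketch using the previous gradient as the hint; the function names and toy drifting losses are illustrative, not the paper's setup:

```python
import numpy as np

def project_l2_ball(x, radius=1.0):
    """Euclidean projection onto an L2 ball (the feasible set in this sketch)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def optimistic_mirror_descent(grad_fn, dim, T, eta=0.1):
    """Optimistic online mirror descent with the Euclidean mirror map, using
    the previous gradient as the optimistic hint m_t = g_{t-1} (illustrative
    choice). grad_fn(t, x) returns the gradient of the round-t loss at x."""
    y = np.zeros(dim)       # base ("lazy") iterate
    hint = np.zeros(dim)    # optimistic gradient prediction
    plays = []
    for t in range(T):
        x = project_l2_ball(y - eta * hint)  # play using the hint
        g = grad_fn(t, x)                    # observe the true gradient
        y = project_l2_ball(y - eta * g)     # update the base iterate
        hint = g                             # reuse g_t as next round's hint
        plays.append(x)
    return plays

# Toy usage: slowly drifting quadratics f_t(x) = 0.5 * ||x - c_t||^2, where
# small gradient variation makes the hint g_{t-1} informative.
rng = np.random.default_rng(0)
centers = np.cumsum(0.01 * rng.standard_normal((100, 5)), axis=0)
plays = optimistic_mirror_descent(lambda t, x: x - centers[t], dim=5, T=100)
```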
Gradient-Free Methods for Nonconvex Nonsmooth Stochastic Compositional Optimization
355 words · 2 mins
AI Generated Machine Learning Optimization 🏢 Department of Computer Science, National University of Singapore
Gradient-free methods conquer nonconvex nonsmooth stochastic compositional optimization, providing non-asymptotic convergence rates and improved efficiency for real-world applications.
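The core primitive in gradient-free optimization is a finite-difference surrogate built from function values alone. A minimal sketch of the standard two-point estimator (our naming; the paper composes such estimators for nested compositional objectives):

```python
import numpy as np

def two_point_grad_estimate(f, x, mu=1e-4, rng=None):
    """Two-point zeroth-order gradient estimator: probe f along a random
    Gaussian direction u and difference the two function values. This is the
    standard building block of gradient-free methods; the paper composes such
    estimators for nested (compositional) objectives."""
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
```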
Gradient Methods for Online DR-Submodular Maximization with Stochastic Long-Term Constraints
346 words · 2 mins
AI Theory Optimization 🏢 Iowa State University
Novel gradient-based algorithms achieve O(√T) regret and O(T^(3/4)) constraint violation for online DR-submodular maximization with stochastic long-term constraints.
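A common template for long-term constraints is a primal-dual update: ascend the reward while a dual "price" grows whenever the constraint is violated. A schematic sketch over a box feasible set; the paper's DR-submodular algorithms and their exact guarantees are more involved:

```python
import numpy as np

def primal_dual_online(grad_f, g, grad_g, x0, T, eta=0.05):
    """Generic online primal-dual method for rewards f_t under a long-term
    constraint sum_t g(x_t) <= 0: take a Lagrangian ascent step in x and let
    the dual price lam grow while the constraint is violated. A schematic
    sketch over the box [0,1]^d; the paper's DR-submodular algorithms and
    their O(sqrt(T)) regret / O(T^(3/4)) violation bounds are more refined."""
    x, lam = x0.copy(), 0.0
    plays = []
    for t in range(T):
        direction = grad_f(t, x) - lam * grad_g(x)  # Lagrangian gradient in x
        x = np.clip(x + eta * direction, 0.0, 1.0)  # stay inside the box
        lam = max(lam + eta * g(x), 0.0)            # raise the price if violated
        plays.append(x.copy())
    return plays
```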
Gradient Guidance for Diffusion Models: An Optimization Perspective
2233 words · 11 mins
AI Theory Optimization 🏢 Princeton University
This paper provides a novel optimization framework for guided diffusion models, proving Õ(1/K) convergence for concave objective functions and demonstrating structure-preserving guidance.
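Schematically, gradient guidance adds the objective's gradient to the learned score during reverse sampling. A minimal Langevin-style sketch with illustrative names (score_fn, objective_grad, scale are ours); the paper's analysis concerns when iterating such steps converges at Õ(1/K) while respecting the learned structure:

```python
import numpy as np

def guided_reverse_step(x, t, score_fn, objective_grad, scale, step, rng):
    """One Langevin-style reverse-diffusion step with gradient guidance: drift
    along the learned score plus a scaled gradient of the external objective,
    then inject noise. score_fn, objective_grad and scale are illustrative
    names; the paper studies when iterating such steps improves the objective
    at an O~(1/K) rate while preserving the learned data structure."""
    drift = score_fn(x, t) + scale * objective_grad(x)
    return x + step * drift + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
```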
Globally Q-linear Gauss-Newton Method for Overparameterized Non-convex Matrix Sensing
1454 words · 7 mins
Machine Learning Optimization 🏢 School of Mathematics and Statistics, Xidian University
A globally Q-linearly converging Gauss-Newton method (AGN) is introduced for overparameterized non-convex low-rank matrix sensing, significantly improving convergence over existing gradient descent methods.
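For a feel of the method, a Gauss-Newton step for matrix sensing linearizes the residual ⟨A_i, UUᵀ⟩ − y_i in U and solves a least-squares subproblem. A bare-bones sketch; AGN's specific step and the safeguards behind its global Q-linear rate are in the paper:

```python
import numpy as np

def gauss_newton_matrix_sensing(As, y, U0, iters=20):
    """Gauss-Newton for overparameterized matrix sensing: fit U (n x k) so
    that <A_i, U U^T> matches y_i. Each iteration linearizes the residual in
    U and solves the resulting least-squares subproblem. A bare-bones sketch;
    the paper's AGN method adds the ingredients behind its global Q-linear
    convergence guarantee."""
    U = U0.copy()
    n, k = U.shape
    for _ in range(iters):
        r = np.array([np.vdot(A, U @ U.T) for A in As]) - y  # residuals
        # Row i of the Jacobian: d<A_i, UU^T>/dU = (A_i + A_i^T) U, flattened.
        J = np.stack([((A + A.T) @ U).ravel() for A in As])
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)        # GN direction
        U = U + step.reshape(n, k)
    return U
```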
Globally Convergent Variational Inference
2036 words · 10 mins
AI Generated AI Theory Optimization 🏢 University of Michigan
Researchers achieve globally convergent variational inference by minimizing the expected forward KL divergence, overcoming the limitations of traditional methods.
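Unlike the reverse KL used in standard VI, the forward KL KL(p‖q_θ) is an expectation under the target, so its gradient is typically estimated by self-normalized importance sampling. A minimal diagonal-Gaussian sketch (all names ours; the paper's estimator and convergence argument differ in detail):

```python
import numpy as np

def forward_kl_step(mu, log_sigma, log_p, rng, n=512, lr=0.05):
    """One stochastic gradient step on the forward KL, KL(p || q_theta), for a
    diagonal Gaussian q_theta = N(mu, diag(sigma^2)). Minimizing it means
    maximizing E_p[log q_theta]; since p cannot be sampled directly, the
    gradient is estimated by self-normalized importance sampling with q_theta
    itself as the proposal. log_p may be unnormalized."""
    sigma = np.exp(log_sigma)
    z = mu + sigma * rng.standard_normal((n, mu.size))  # z ~ q_theta
    log_q = -0.5 * np.sum(((z - mu) / sigma) ** 2 + 2.0 * log_sigma, axis=1)
    lw = log_p(z) - log_q                                # log importance weights
    w = np.exp(lw - lw.max())
    w /= w.sum()                                         # self-normalize
    g_mu = w @ ((z - mu) / sigma**2)                     # d log q / d mu
    g_ls = w @ (((z - mu) / sigma) ** 2 - 1.0)           # d log q / d log_sigma
    return mu + lr * g_mu, log_sigma + lr * g_ls         # ascend E_p[log q]
```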
Global Convergence in Training Large-Scale Transformers
398 words · 2 mins
AI Generated AI Theory Optimization 🏢 Princeton University
Large-scale Transformer training’s global convergence is proven using weight decay regularization and a refined mean-field analysis, bridging theory and practice.
GLinSAT: The General Linear Satisfiability Neural Network Layer By Accelerated Gradient Descent
1911 words · 9 mins
AI Theory Optimization 🏢 Tsinghua University
GLinSAT: A novel neural network layer efficiently solves general linear constraint satisfaction problems via accelerated gradient descent, enabling differentiable backpropagation and improved GPU performance.
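One way such a layer can work: Euclidean projection onto {x : Ax ≤ b} is a QP whose smooth concave dual can be solved by accelerated projected gradient ascent, keeping every step differentiable. A sketch under these assumptions; GLinSAT's actual formulation and solver details may differ:

```python
import numpy as np

def project_polyhedron(y, A, b, iters=200):
    """Euclidean projection of y onto {x : A x <= b} via Nesterov-accelerated
    projected gradient ascent on the smooth concave Lagrange dual
    g(lam) = -0.5 ||A^T lam||^2 + lam^T (A y - b), lam >= 0. Every step is
    differentiable, which is what makes such a layer trainable end to end."""
    lam = np.zeros(A.shape[0])            # one dual variable per constraint
    lam_prev = lam.copy()
    L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of grad g
    for k in range(1, iters + 1):
        mom = lam + (k - 1.0) / (k + 2.0) * (lam - lam_prev)  # momentum
        grad = A @ (y - A.T @ mom) - b    # gradient of the dual at mom
        lam_prev = lam
        lam = np.maximum(mom + grad / L, 0.0)                 # keep duals >= 0
    return y - A.T @ lam                  # recover the primal projection
```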
Gliding over the Pareto Front with Uniform Designs
2556 words · 12 mins
AI Theory Optimization 🏢 Computer Science, City University of Hong Kong
UMOD: a novel multi-objective optimization algorithm efficiently generates uniformly distributed Pareto-optimal solutions by maximizing minimal pairwise distances, providing high-quality representations.
Get rich quick: exact solutions reveal how unbalanced initializations promote rapid feature learning
2253 words · 11 mins
AI Theory Optimization 🏢 Stanford University
Unbalanced initializations dramatically accelerate neural network feature learning by modifying the geometry of learning trajectories, enabling faster feature extraction and improved generalization.
Generative Adversarial Model-Based Optimization via Source Critic Regularization
2282 words · 11 mins
Machine Learning Optimization 🏢 University of Pennsylvania
Generative adversarial model-based optimization via adaptive source critic regularization (aSCR) significantly boosts offline optimization accuracy.
Generalization Bound and Learning Methods for Data-Driven Projections in Linear Programming
1748 words · 9 mins
AI Generated AI Theory Optimization 🏢 University of Tokyo
Learn to project, solve faster! This paper introduces data-driven projections for solving high-dimensional linear programs, proving theoretical guarantees and demonstrating significant improvements in…
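The idea in miniature: restrict a high-dimensional LP to x = Pz for a tall matrix P learned from past instances, solve the small LP, and lift the solution back. A sketch using scipy.optimize.linprog; learning P and bounding the quality loss is the paper's contribution:

```python
import numpy as np
from scipy.optimize import linprog

def solve_projected_lp(c, A, b, P):
    """Solve  min c^T x  s.t.  A x <= b, x >= 0  restricted to x = P z, where
    P (n x k, k << n) is a projection matrix learned from past instances.
    The reduced LP lives in k dimensions; its solution is lifted back to x.
    Sketch only: learning P and bounding the optimality gap is the paper's
    subject."""
    A_red = np.vstack([A @ P, -P])        # A x <= b and x >= 0, in z-space
    b_red = np.concatenate([b, np.zeros(P.shape[0])])
    res = linprog(c @ P, A_ub=A_red, b_ub=b_red, bounds=(None, None))
    return P @ res.x                      # lift the reduced solution back
```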
General bounds on the quality of Bayesian coresets
1364 words · 7 mins
AI Theory Optimization 🏢 University of British Columbia
New theoretical bounds on Bayesian coreset approximation errors enable efficient large-scale Bayesian inference, overcoming prior limitations and improving coreset construction methods.
Fundamental Convergence Analysis of Sharpness-Aware Minimization
2234 words · 11 mins
AI Theory Optimization 🏢 Ho Chi Minh City University of Education
This research establishes fundamental convergence properties for the widely-used SAM optimization algorithm, significantly advancing our theoretical understanding and practical applications.
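For reference, the SAM update the analysis concerns perturbs the weights to an approximate worst case within a ρ-ball and descends with the gradient taken there. A minimal NumPy sketch of that standard two-step update:

```python
import numpy as np

def sam_step(params, grad_fn, lr=0.1, rho=0.05):
    """One Sharpness-Aware Minimization (SAM) step: perturb the weights to an
    approximate worst case within a rho-ball, then descend using the gradient
    taken at the perturbed point. This is the standard two-step SAM update
    whose convergence properties the paper establishes."""
    g = grad_fn(params)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascent direction, normalized
    return params - lr * grad_fn(params + eps)   # descend from the sharp point
```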
Functionally Constrained Algorithm Solves Convex Simple Bilevel Problem
310 words · 2 mins
AI Theory Optimization 🏢 Tsinghua University
This work solves convex simple bilevel problems by reformulating them as functionally constrained problems, achieving near-optimal convergence rates.
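The reformulation in question is, schematically, the value-function trick: replace the lower-level argmin constraint with a functional constraint on the lower-level optimal value. A sketch in standard form (the paper's precise constrained formulation and algorithms go further):

```latex
% Schematic value-function reformulation of a simple bilevel problem;
% the paper's functionally constrained form and algorithms go further.
\min_{x} \; f(x) \quad \text{s.t.} \quad x \in \arg\min_{z} g(z)
\qquad\Longleftrightarrow\qquad
\min_{x} \; f(x) \quad \text{s.t.} \quad g(x) \le g^{*},
\quad g^{*} := \min_{z} g(z).
```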
FUGAL: Feature-fortified Unrestricted Graph Alignment
2409 words · 12 mins
AI Theory Optimization 🏢 IIT Delhi
FUGAL: a groundbreaking graph alignment method surpassing state-of-the-art accuracy without compromising efficiency by directly aligning adjacency matrices.
From Linear to Linearizable Optimization: A Novel Framework with Applications to Stationary and Non-stationary DR-submodular Optimization
1591 words · 8 mins
AI Theory Optimization 🏢 McGill University
A novel framework extends optimization algorithms from linear/quadratic functions to a broader class of ‘upper-linearizable’ functions, providing a unified approach for concave and DR-submodular optimization.