Optimization
Emergence of heavy tails in homogenized stochastic gradient descent
·1472 words·7 mins·
AI Generated
AI Theory
Optimization
🏢 Northwestern Polytechnical University
Homogenized SGD reveals heavy-tailed neural network parameters, offering quantifiable bounds on tail-index and showcasing the interplay between optimization hyperparameters and model generalization.
Efficient Streaming Algorithms for Graphlet Sampling
·1741 words·9 mins·
AI Generated
AI Theory
Optimization
🏢 Saarland University
STREAM-UGS: a novel semi-streaming algorithm for efficient graphlet sampling, enabling fast analysis of massive graphs with limited memory.
Efficient Sign-Based Optimization: Accelerating Convergence via Variance Reduction
·1787 words·9 mins·
Machine Learning
Optimization
🏢 National Key Laboratory for Novel Software Technology, Nanjing University
Sign-based optimization gets a speed boost! This paper introduces new algorithms that significantly accelerate convergence in distributed optimization by cleverly using variance reduction and enhanced…
Efficient Graph Matching for Correlated Stochastic Block Models
·2079 words·10 mins·
AI Theory
Optimization
🏢 Northwestern University
Efficient algorithm achieves near-perfect graph matching in correlated stochastic block models, resolving a key open problem and enabling improved community detection.
Efficient Combinatorial Optimization via Heat Diffusion
·2280 words·11 mins·
AI Theory
Optimization
🏢 Fudan University
Heat Diffusion Optimization (HeO) framework efficiently solves combinatorial optimization problems by enabling information propagation through heat diffusion, outperforming existing methods.
Efficient $\Phi$-Regret Minimization with Low-Degree Swap Deviations in Extensive-Form Games
·570 words·3 mins·
AI Generated
AI Theory
Optimization
🏢 Carnegie Mellon University
New efficient algorithms minimize regret in extensive-form games by cleverly using low-degree swap deviations and a relaxed fixed-point concept, improving correlated equilibrium computation.
Efficiency of the First-Price Auction in the Autobidding World
·465 words·3 mins·
AI Theory
Optimization
🏢 Google Research
First-price auction efficiency in autobidding plummets to 45.7% with mixed bidders, but machine-learned advice restores optimality.
Dynamic Service Fee Pricing under Strategic Behavior: Actions as Instruments and Phase Transition
·2204 words·11 mins·
AI Generated
AI Theory
Optimization
🏢 MIT
This research introduces novel algorithms to dynamically price third-party platform service fees under strategic buyer behavior, achieving optimal revenue with a theoretically proven regret bound.
Dual Lagrangian Learning for Conic Optimization
·2010 words·10 mins·
AI Generated
AI Theory
Optimization
🏢 String
Dual Lagrangian Learning (DLL) revolutionizes conic optimization by leveraging machine learning to efficiently learn high-quality dual-feasible solutions, achieving 1000x speedups over traditional sol…
Drago: Primal-Dual Coupled Variance Reduction for Faster Distributionally Robust Optimization
·1908 words·9 mins·
AI Theory
Optimization
🏢 University of Washington
DRAGO: A novel primal-dual algorithm delivers faster, state-of-the-art convergence for distributionally robust optimization.
DistrictNet: Decision-aware learning for geographical districting
·2460 words·12 mins·
AI Theory
Optimization
🏢 Polytechnique Montreal
DISTRICTNET: A novel decision-aware learning approach drastically cuts geographical districting costs by integrating combinatorial optimization and graph neural networks.
Distributionally Robust Performative Prediction
·2341 words·11 mins·
AI Generated
AI Theory
Optimization
🏢 University of Michigan
This research introduces distributionally robust performative prediction, offering a new solution concept (DRPO) that minimizes performative risk even with misspecified distribution maps, ensuring rob…
Distributional regression: CRPS-error bounds for model fitting, model selection and convex aggregation
·348 words·2 mins·
AI Theory
Optimization
🏢 University of Franche-Comté
This paper provides the first statistical learning guarantees for distributional regression using CRPS, offering concentration bounds for model fitting, selection, and convex aggregation, applicable t…
Distribution Learning with Valid Outputs Beyond the Worst-Case
·320 words·2 mins·
AI Theory
Optimization
🏢 UC San Diego
Generative models often produce invalid outputs; this work shows that ensuring validity is easier than expected when using log-loss and carefully selecting model classes and data distributions.
Distributed Least Squares in Small Space via Sketching and Bias Reduction
·1322 words·7 mins·
Machine Learning
Optimization
🏢 University of Michigan
Researchers developed a novel sparse sketching method for distributed least squares regression, achieving near-unbiased estimates with optimal space and time complexity.
Discretely beyond $1/e$: Guided Combinatorial Algorithms for Submodular Maximization
·3091 words·15 mins·
AI Generated
AI Theory
Optimization
🏢 Texas A&M University
Researchers surpass the 1/e barrier in submodular maximization with novel combinatorial algorithms!
Directional Smoothness and Gradient Methods: Convergence and Adaptivity
·1502 words·8 mins·
loading
·
loading
AI Generated
AI Theory
Optimization
🏢 Stanford University
New sub-optimality bounds for gradient descent leverage directional smoothness, a localized gradient variation measure, achieving tighter convergence guarantees and adapting to optimization paths.
Direct Preference-Based Evolutionary Multi-Objective Optimization with Dueling Bandits
·5743 words·27 mins·
AI Generated
AI Theory
Optimization
🏢 School of Computer Science and Engineering, University of Electronic Science and Technology of China
D-PBEMO: A novel framework for preference-based multi-objective optimization using clustering-based stochastic dueling bandits to directly leverage human feedback, improving efficiency and managing co…
Diffeomorphic interpolation for efficient persistence-based topological optimization
·2900 words·14 mins·
AI Generated
AI Theory
Optimization
🏢 DataShape
Diffeomorphic interpolation boosts topological optimization by transforming sparse gradients into smooth vector fields, enabling efficient large-scale point cloud optimization and black-box autoencode…
Derivatives of Stochastic Gradient Descent in parametric optimization
·1733 words·9 mins·
AI Generated
AI Theory
Optimization
🏢 Université Paul Sabatier
Stochastic gradient descent’s derivatives, crucial for hyperparameter optimization, converge to the solution mapping derivative; rates depend on step size, exhibiting O(log(k)²/k) convergence with van…