Optimization
Randomized Strategic Facility Location with Predictions
·1312 words·7 mins·
AI Theory
Optimization
🏢 Columbia University
Randomized strategies improve truthful learning-augmented mechanisms for strategic facility location, achieving better approximations than deterministic methods.
Random Function Descent
·1682 words·8 mins·
AI Theory
Optimization
🏢 University of Mannheim
Random Function Descent (RFD) replaces the classical convex function framework with a random function approach, providing a scalable gradient descent method with inherent scale invariance and a theoretical…
Random Cycle Coding: Lossless Compression of Cluster Assignments via Bits-Back Coding
·1446 words·7 mins·
AI Theory
Optimization
🏢 University of Toronto
Random Cycle Coding (RCC) optimally compresses cluster assignments in large datasets, saving up to 70% storage in vector databases by eliminating the need for integer IDs.
Queueing Matching Bandits with Preference Feedback
·1365 words·7 mins·
AI Generated
AI Theory
Optimization
🏢 Seoul National University
Novel algorithms stabilize multi-server queueing systems with unknown service rates, achieving sublinear regret by learning server preferences via preference feedback.
Query-Efficient Correlation Clustering with Noisy Oracle
·1484 words·7 mins·
AI Theory
Optimization
🏢 CENTAI Institute
Novel algorithms for query-efficient correlation clustering with noisy oracles achieve a balance between query complexity and solution quality, offering theoretical guarantees and outperforming baselines.
Quantum Algorithms for Non-smooth Non-convex Optimization
·360 words·2 mins·
AI Theory
Optimization
🏢 Chinese University of Hong Kong
Quantum algorithms achieve speedups in non-smooth, non-convex optimization, outperforming classical methods by a factor of ε⁻²/³ in query complexity for finding (δ,ε)-Goldstein stationary points.
Quantum algorithm for large-scale market equilibrium computation
·643 words·4 mins·
AI Generated
AI Theory
Optimization
🏢 Centre for Quantum Technologies, National University of Singapore
Quantum speedup achieved for large-scale market equilibrium computation!
Quantitative Convergences of Lie Group Momentum Optimizers
·1602 words·8 mins·
loading
·
loading
Machine Learning
Optimization
🏢 Georgia Institute of Technology
Accelerated Lie group optimization achieved via a novel momentum algorithm (Lie NAG-SC) with proven convergence rates, surpassing existing methods in efficiency.
Quadratic Quantum Variational Monte Carlo
·1669 words·8 mins·
AI Theory
Optimization
🏢 University of Texas at Austin
Q2VMC, a novel quantum chemistry algorithm, drastically boosts the efficiency and accuracy of solving the Schrödinger equation using a quadratic update mechanism and neural network ansatzes.
Putting Gale & Shapley to Work: Guaranteeing Stability Through Learning
·1809 words·9 mins·
AI Theory
Optimization
🏢 Penn State University
Researchers improve two-sided matching markets by prioritizing stability through novel bandit-learning algorithms, providing theoretical bounds on sample complexity and demonstrating intriguing…
Proving Theorems Recursively
·2409 words·12 mins·
AI Theory
Optimization
🏢 University of Edinburgh
POETRY: a recursive neural theorem prover achieving a 5.1% higher success rate and solving substantially longer proofs.
Provably Faster Algorithms for Bilevel Optimization via Without-Replacement Sampling
·1398 words·7 mins·
Machine Learning
Optimization
🏢 University of Maryland College Park
Faster bilevel optimization is achieved via without-replacement sampling, improving convergence rates compared to independent sampling methods.
Provable Benefits of Complex Parameterizations for Structured State Space Models
·1827 words·9 mins·
AI Generated
AI Theory
Optimization
🏢 Tel Aviv University
Complex numbers boost neural network performance! This study proves that complex parameterizations in structured state space models (SSMs) enable more efficient and practical learning of complex mappings.
Provable Acceleration of Nesterov's Accelerated Gradient for Asymmetric Matrix Factorization and Linear Neural Networks
·1572 words·8 mins·
AI Theory
Optimization
🏢 Georgia Institute of Technology
This paper proves Nesterov’s Accelerated Gradient achieves faster convergence for rectangular matrix factorization and linear neural networks, using a novel unbalanced initialization.
Progressive Entropic Optimal Transport Solvers
·4169 words·20 mins·
AI Generated
Machine Learning
Optimization
🏢 Apple
Progressive Entropic Optimal Transport (PROGOT) solvers efficiently and robustly compute optimal transport plans and maps, even at large scales, by progressively scheduling parameters.
PRODuctive bandits: Importance Weighting No More
·229 words·2 mins·
AI Generated
AI Theory
Optimization
🏢 Google Research
Prod-family algorithms achieve optimal regret in adversarial multi-armed bandits, disproving prior suboptimality conjectures.
Private Algorithms for Stochastic Saddle Points and Variational Inequalities: Beyond Euclidean Geometry
·315 words·2 mins·
AI Generated
AI Theory
Optimization
🏢 Ohio State University
This paper presents novel, privacy-preserving algorithms achieving near-optimal rates for solving stochastic saddle point problems and variational inequalities in non-Euclidean geometries.
Principled Bayesian Optimization in Collaboration with Human Experts
·2248 words·11 mins·
AI Theory
Optimization
🏢 University of Oxford
COBOL: a novel Bayesian Optimization algorithm that leverages human expert advice via binary labels, achieving both fast convergence and robustness to noisy input while guaranteeing minimal expert effort.
Pretrained Optimization Model for Zero-Shot Black Box Optimization
·4005 words·19 mins·
Machine Learning
Optimization
🏢 Xidian University
Pretrained Optimization Model (POM) excels at zero-shot black-box optimization, outperforming existing methods, especially in high dimensions, through direct application or few-shot fine-tuning.
Precise asymptotics of reweighted least-squares algorithms for linear diagonal networks
·1447 words·7 mins·
Machine Learning
Optimization
🏢 Georgia Institute of Technology
New analysis reveals how reweighted least-squares algorithms for linear diagonal networks achieve favorable performance in high-dimensional settings, improving upon existing theoretical guarantees.