Optimization

Sample-efficient Bayesian Optimisation Using Known Invariances
·1812 words·9 mins
AI Theory Optimization 🏢 University College London
Boost Bayesian Optimization’s efficiency by leveraging known invariances in objective functions for faster, more effective solutions.
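A common way to exploit a known invariance in Bayesian optimisation is to make the Gaussian-process surrogate itself invariant, e.g. by averaging a base kernel over the symmetry group. The sketch below is a minimal illustration of that group-averaged-kernel construction (plain NumPy, with a hypothetical finite `group` of transformations), not the specific algorithm analysed in the paper.

```python
import numpy as np

def rbf(x, y, lengthscale=1.0):
    """Standard RBF base kernel."""
    d = np.linalg.norm(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def invariant_kernel(x, y, group, base=rbf):
    """Average the base kernel over a finite symmetry group.

    `group` is a list of functions g: R^d -> R^d under which the objective
    is assumed invariant (e.g. sign flips, coordinate permutations).  The
    averaged kernel is invariant in both arguments, so the GP surrogate only
    has to model one representative per orbit.
    """
    return np.mean([base(g1(x), g2(y)) for g1 in group for g2 in group])

# Example: objective assumed symmetric under a sign flip of the input.
group = [lambda z: np.asarray(z, dtype=float), lambda z: -np.asarray(z, dtype=float)]
x, y = [0.3, -0.8], [-0.3, 0.8]
# x and y lie in the same orbit, so the invariant kernel scores them
# exactly like identical points.
print(invariant_kernel(x, y, group), invariant_kernel(x, x, group))
```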
Sample Complexity of Posted Pricing for a Single Item
·273 words·2 mins
AI Theory Optimization 🏢 Cornell University
This paper reveals how many buyer samples are needed to set near-optimal posted prices for a single item, resolving a fundamental problem in online markets and offering both theoretical and practical …
Sample Complexity of Algorithm Selection Using Neural Networks and Its Applications to Branch-and-Cut
·428 words·3 mins
AI Theory Optimization 🏢 Johns Hopkins University
Neural networks enhance algorithm selection in branch-and-cut, significantly reducing tree sizes and improving efficiency for mixed-integer optimization, as proven by rigorous theoretical bounds and e…
SAMPa: Sharpness-aware Minimization Parallelized
·2453 words·12 mins
Machine Learning Optimization 🏢 EPFL
SAMPa: Parallelizing gradient computations in Sharpness-Aware Minimization (SAM) achieves a 2x speedup and superior generalization.
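For context on this entry: vanilla SAM needs two sequential gradient evaluations per step, one to find the worst-case weight perturbation and one at the perturbed weights, and that sequential dependency is what SAMPa targets. A minimal PyTorch sketch of the sequential baseline is below; SAMPa's parallelisation itself is the paper's contribution and is not reproduced here.

```python
import torch

def sam_step(model, loss_fn, x, y, optimizer, rho=0.05):
    """One step of vanilla SAM: two *sequential* gradient evaluations.

    (1) gradient at the current weights gives the ascent direction;
    (2) gradient at the perturbed weights drives the actual update.
    This sketch shows only the sequential baseline that SAMPa speeds up.
    """
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()                      # pass 1: current weights
    grad_norm = torch.sqrt(sum((p.grad ** 2).sum()
                               for p in model.parameters() if p.grad is not None))
    eps = []
    with torch.no_grad():                                # climb to the worst-case point
        for p in model.parameters():
            e = rho * p.grad / (grad_norm + 1e-12) if p.grad is not None else None
            if e is not None:
                p.add_(e)
            eps.append(e)
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()                      # pass 2: perturbed weights
    with torch.no_grad():                                # undo the perturbation
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    optimizer.step()                                     # apply the sharpness-aware gradient
```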
Safe Exploitative Play with Untrusted Type Beliefs
·1930 words·10 mins
AI Theory Optimization 🏢 School of Data Science, the Chinese University of Hong Kong, Shenzhen
This paper characterizes the fundamental tradeoff between trusting and distrusting learned type beliefs in games, establishing upper and lower bounds for optimal strategies in both normal-form and sto…
Safe and Sparse Newton Method for Entropic-Regularized Optimal Transport
·2040 words·10 mins
AI Generated AI Theory Optimization 🏢 Shanghai University of Finance and Economics
A novel safe & sparse Newton method (SSNS) for entropic-regularized optimal transport boasts strict error control, avoids singularity, needs no hyperparameter tuning, and offers rigorous convergence a…
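As background for this entry: entropic-regularized optimal transport is conventionally solved with Sinkhorn's matrix-scaling iterations, the baseline that Newton-type solvers such as SSNS aim to improve on. A minimal NumPy Sinkhorn sketch is below; the SSNS algorithm itself is not reproduced here.

```python
import numpy as np

def sinkhorn(C, a, b, reg=0.1, n_iter=500):
    """Sinkhorn iterations for entropic-regularized OT.

    Alternately rescales the Gibbs kernel K = exp(-C / reg) so the
    transport plan matches the marginals a and b.  This is the classical
    baseline against which Newton-type solvers are positioned.
    """
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]   # transport plan

rng = np.random.default_rng(0)
x, y = rng.random((5, 2)), rng.random((6, 2))
C = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1) ** 2   # squared-Euclidean cost
a, b = np.full(5, 1 / 5), np.full(6, 1 / 6)                       # uniform marginals
P = sinkhorn(C, a, b)
print(P.sum(axis=1))   # matches a
```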
S-SOS: Stochastic Sum-Of-Squares for Parametric Polynomial Optimization
·1441 words·7 mins
AI Theory Optimization 🏢 University of Chicago
S-SOS: A new algorithm solves complex, parameterized polynomial problems with provable convergence, enabling efficient solutions for high-dimensional applications like sensor network localization.
RoPINN: Region Optimized Physics-Informed Neural Networks
·2557 words·13 mins
AI Theory Optimization 🏢 Tsinghua University
RoPINN: Revolutionizing Physics-Informed Neural Networks with Region Optimization
Robust and Faster Zeroth-Order Minimax Optimization: Complexity and Applications
·1498 words·8 mins
AI Generated AI Theory Optimization 🏢 Peking University
ZO-GDEGA: A unified algorithm achieves faster, more robust zeroth-order minimax optimization with lower complexity and weaker conditions, solving stochastic nonconvex-concave problems.
Rethinking the Capacity of Graph Neural Networks for Branching Strategy
·1678 words·8 mins
AI Generated AI Theory Optimization 🏢 MIT
This paper proves that higher-order GNNs can universally approximate strong branching in MILP solvers, whereas simpler GNNs can approximate it accurately only for a restricted class of problems.
Rethinking Parity Check Enhanced Symmetry-Preserving Ansatz
·2377 words·12 mins
AI Theory Optimization 🏢 Shanghai Jiao Tong University
Enhanced VQAs via a Hamming-weight-preserving ansatz and parity checks achieve superior performance on quantum chemistry and combinatorial problems, showcasing quantum-advantage potential in the NISQ era.
Reshuffling Resampling Splits Can Improve Generalization of Hyperparameter Optimization
·4058 words·20 mins
AI Theory Optimization 🏢 Munich Center for Machine Learning (MCML)
Reshuffling data splits during hyperparameter optimization surprisingly improves model generalization, offering a computationally cheaper alternative to standard methods.
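Operationally, the reshuffling protocol is simple: rather than scoring every hyperparameter candidate on one fixed train/validation split, draw a fresh random split for each evaluation. A minimal scikit-learn sketch is below (a hypothetical grid of candidates and holdout splitting in place of full cross-validation), meant only to illustrate the protocol.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, random_state=0)

candidates = [{"C": c} for c in [0.01, 0.1, 1.0, 10.0]]

def evaluate(params, seed):
    # Reshuffled protocol: a *fresh* holdout split per evaluation,
    # instead of reusing one fixed split for all candidates.
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=seed)
    model = LogisticRegression(max_iter=1000, **params).fit(X_tr, y_tr)
    return model.score(X_val, y_val)

scores = [evaluate(p, seed=int(rng.integers(1_000_000))) for p in candidates]
best = candidates[int(np.argmax(scores))]
print(best, max(scores))
```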
Replicable Uniformity Testing
·268 words·2 mins
AI Generated AI Theory Optimization 🏢 UC San Diego
This paper presents the first replicable uniformity tester with nearly linear dependence on the replicability parameter, enhancing the reliability of scientific studies using distribution testing algo…
Replicability in Learning: Geometric Partitions and KKM-Sperner Lemma
·301 words·2 mins
AI Theory Optimization 🏢 Sandia National Laboratories
This paper reveals near-optimal relationships between geometric partitions and replicability in machine learning, establishing the optimality of existing algorithms and introducing a new neighborhood …
Remove that Square Root: A New Efficient Scale-Invariant Version of AdaGrad
·1874 words·9 mins
AI Generated Machine Learning Optimization 🏢 MBZUAI
KATE: A new scale-invariant AdaGrad variant achieves state-of-the-art convergence without square roots, outperforming AdaGrad and matching/exceeding Adam’s performance.
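For reference: classical diagonal AdaGrad divides each gradient coordinate by the square root of its accumulated squared gradients, and that square root is precisely what the KATE paper removes. The sketch below shows only the classical AdaGrad update as a baseline; KATE's actual square-root-free rule is not reproduced here.

```python
import numpy as np

def adagrad(grad_fn, x0, lr=0.1, eps=1e-8, steps=200):
    """Classical diagonal AdaGrad: x <- x - lr * g / sqrt(accumulated g^2 + eps).

    The square root in the denominator is the operation that KATE
    ("Remove that Square Root") replaces with a scale-invariant alternative.
    """
    x = np.asarray(x0, dtype=float)
    accum = np.zeros_like(x)
    for _ in range(steps):
        g = grad_fn(x)
        accum += g ** 2
        x -= lr * g / np.sqrt(accum + eps)
    return x

# Toy quadratic with badly scaled coordinates.
grad = lambda x: np.array([2.0 * x[0], 200.0 * x[1]])
print(adagrad(grad, [1.0, 1.0]))
```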
ReLIZO: Sample Reusable Linear Interpolation-based Zeroth-order Optimization
·2192 words·11 mins
AI Theory Optimization 🏢 Shanghai Jiao Tong University
ReLIZO boosts zeroth-order optimization by cleverly reusing past queries, drastically cutting computation costs while maintaining gradient estimation accuracy.
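For background on this entry: linear-interpolation-style zeroth-order methods recover a gradient estimate by fitting a linear model to function values at nearby query points, so values queried at earlier iterations can in principle be reused instead of re-evaluated. The sketch below is a generic least-squares estimator with a simple query cache, meant only to illustrate that idea; it is not ReLIZO's algorithm.

```python
import numpy as np

def ls_gradient_estimate(f, x, cache, n_new=3, radius=0.1, rng=None):
    """Least-squares (linear-interpolation) zeroth-order gradient estimate.

    `cache` maps point tuples to previously computed function values;
    cached points near `x` are reused, and only `n_new` fresh
    evaluations are made per call.
    """
    rng = rng or np.random.default_rng()
    x = np.asarray(x, dtype=float)
    # Reuse cached queries that fall inside the trust radius.
    pts = [np.array(p) for p in cache if np.linalg.norm(np.array(p) - x) <= radius]
    # Top up with fresh random queries around x.
    for _ in range(n_new):
        p = x + radius * rng.standard_normal(x.size)
        cache[tuple(p)] = f(p)
        pts.append(p)
    fx = cache.setdefault(tuple(x), f(x))
    # Fit f(p) - f(x) ~ g . (p - x) in the least-squares sense.
    A = np.stack([p - x for p in pts])
    b = np.array([cache[tuple(p)] - fx for p in pts])
    g, *_ = np.linalg.lstsq(A, b, rcond=None)
    return g

f = lambda z: float(np.sum(z ** 2))
cache = {}
print(ls_gradient_estimate(f, np.array([1.0, -2.0]), cache))  # roughly [2, -4]
```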
Reliable Learning of Halfspaces under Gaussian Marginals
·265 words·2 mins
AI Theory Optimization 🏢 University of Wisconsin-Madison
New algorithm reliably learns Gaussian halfspaces with significantly improved sample and computational complexity compared to existing methods, offering strong computational separation from standard a…
ReEvo: Large Language Models as Hyper-Heuristics with Reflective Evolution
·3978 words·19 mins
AI Theory Optimization 🏢 Peking University
ReEvo, a novel integration of evolutionary search and LLM reflections, generates state-of-the-art heuristics for combinatorial optimization problems, demonstrating superior sample efficiency.
Recurrent neural networks: vanishing and exploding gradients are not the end of the story
·2602 words·13 mins
AI Theory Optimization 🏢 ETH Zurich
Recurrent neural networks struggle with long-term memory due to a newly identified ‘curse of memory’: parameter sensitivity grows as the memory horizon lengthens. This work provides insights into RNN optimiza…
Randomized Truthful Auctions with Learning Agents
·324 words·2 mins
AI Generated AI Theory Optimization 🏢 Google Research
Randomized truthful auctions outperform deterministic ones when bidders employ learning algorithms, maximizing revenue in repeated interactions.