
Optimization

OptEx: Expediting First-Order Optimization with Approximately Parallelized Iterations
·2570 words·13 mins
AI Generated Machine Learning Optimization 🏢 School of Information Technology, Carleton University
OptEx significantly speeds up first-order optimization by cleverly parallelizing iterations, enabling faster convergence for complex tasks.
Only Strict Saddles in the Energy Landscape of Predictive Coding Networks?
·2012 words·10 mins
AI Theory Optimization 🏢 University of Sussex
Predictive coding networks learn faster than backpropagation by changing the loss landscape’s geometry, making saddles easier to escape and improving robustness to vanishing gradients.
Online Weighted Paging with Unknown Weights
·1583 words·8 mins
AI Theory Optimization 🏢 Tel Aviv University
First algorithm for online weighted paging that learns page weights from samples, achieving optimal O(log k) competitiveness and sublinear regret.
Online Learning of Delayed Choices
·1433 words·7 mins
AI Theory Optimization 🏢 University of Waterloo
New algorithms conquer delayed feedback in online choice modeling, achieving optimal decision-making even with unknown customer preferences and delayed responses.
Online Estimation via Offline Estimation: An Information-Theoretic Framework
·1315 words·7 mins
AI Theory Optimization 🏢 Microsoft Research
This paper introduces a novel information-theoretic framework for efficiently converting offline estimation algorithms into online ones, with implications for interactive decision-making.
Online Convex Optimisation: The Optimal Switching Regret for all Segmentations Simultaneously
·344 words·2 mins
AI Theory Optimization 🏢 Alan Turing Institute
Algorithm RESET achieves optimal switching regret simultaneously across all segmentations, offering efficiency and parameter-free operation.
Online Consistency of the Nearest Neighbor Rule
·1388 words·7 mins
AI Theory Optimization 🏢 UC San Diego
The 1-nearest neighbor rule achieves online consistency under surprisingly broad conditions: measurable label functions and mild assumptions on instance generation in doubling metric spaces.
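The online setting studied here can be made concrete with a minimal sketch (not from the paper; the Euclidean metric, the `online_1nn` helper, and the toy stream are illustrative assumptions): the learner predicts each arriving point's label from the closest previously seen point, then observes the true label.

```python
from math import dist  # Euclidean distance (Python 3.8+)

def online_1nn(stream):
    """Run the 1-nearest-neighbor rule online: predict each label
    from the closest previously seen point, then reveal the truth.
    Returns the total number of prediction mistakes."""
    seen = []      # (point, label) pairs observed so far
    mistakes = 0
    for x, y in stream:
        if seen:   # no prediction is scored on the very first point
            _, pred = min(seen, key=lambda p: dist(p[0], x))
            mistakes += (pred != y)
        seen.append((x, y))
    return mistakes

# Two well-separated clusters: after one example of each class,
# the rule predicts the remaining points correctly.
stream = [((0.0, 0.0), 0), ((10.0, 10.0), 1),
          ((0.5, 0.1), 0), ((9.8, 10.2), 1)]
```

Online consistency asks that the fraction of mistakes on such a stream vanish over time; the paper's contribution is showing this holds for merely measurable label functions under mild conditions on how instances are generated.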
Online Composite Optimization Between Stochastic and Adversarial Environments
·1450 words·7 mins
AI Generated AI Theory Optimization 🏢 Nanjing University
Researchers achieve optimal regret bounds in online composite optimization under stochastic and adversarial settings using a novel optimistic composite mirror descent algorithm and a universal strategy.
Online Budgeted Matching with General Bids
·1940 words·10 mins
AI Theory Optimization 🏢 University of Houston
MetaAd, a novel meta-algorithm, achieves provable competitive ratios for online budgeted matching with general bids, removing prior restrictive assumptions.
Online Bayesian Persuasion Without a Clue
·1780 words·9 mins
AI Theory Optimization 🏢 Politecnico Di Milano
Researchers developed a novel online Bayesian persuasion algorithm that achieves sublinear regret without prior knowledge of the receiver or the state distribution, providing tight theoretical guarantees.
One-Layer Transformer Provably Learns One-Nearest Neighbor In Context
·1344 words·7 mins
AI Theory Optimization 🏢 Princeton University
One-layer transformers provably learn the one-nearest neighbor prediction rule, offering theoretical insights into their in-context learning capabilities.
On Weak Regret Analysis for Dueling Bandits
·1775 words·9 mins
AI Generated AI Theory Optimization 🏢 KAUST
New algorithms achieve optimal weak regret in K-armed dueling bandits by leveraging the full problem structure, improving upon state-of-the-art methods.
On Tractable $\Phi$-Equilibria in Non-Concave Games
·1428 words·7 mins
AI Theory Optimization 🏢 Yale University
This paper presents efficient algorithms for approximating equilibria in non-concave games, focusing on tractable Φ-equilibria and addressing computational challenges posed by infinite strategy sets.
On the Sparsity of the Strong Lottery Ticket Hypothesis
·1303 words·7 mins
AI Theory Optimization 🏢 Université Côte D'Azur
Researchers rigorously prove the Strong Lottery Ticket Hypothesis, offering the first theoretical guarantees on the sparsity of winning neural network subnetworks.
On the Power of Small-size Graph Neural Networks for Linear Programming
·2361 words·12 mins
AI Generated AI Theory Optimization 🏢 Peking University
Small-size graph neural networks are provably expressive enough to solve linear programs.
On the Optimality of Dilated Entropy and Lower Bounds for Online Learning in Extensive-Form Games
·1661 words·8 mins
AI Generated AI Theory Optimization 🏢 MIT
Researchers discover Dilated Entropy is the optimal distance-generating function for solving extensive-form games using first-order methods, achieving near-optimal regret bounds.
On the Optimal Time Complexities in Decentralized Stochastic Asynchronous Optimization
·2010 words·10 mins
Machine Learning Optimization 🏢 KAUST, AIRI
Fragile SGD & Amelie SGD achieve near-optimal speed in decentralized asynchronous optimization, handling diverse worker & communication speeds.
On the Expressive Power of Tree-Structured Probabilistic Circuits
·1425 words·7 mins
AI Theory Optimization 🏢 University of Illinois Urbana-Champaign
Tree-structured probabilistic circuits are surprisingly efficient: this paper proves a quasi-polynomial upper bound on their size, showing they’re almost as expressive as more complex DAG structures.
On the Computational Landscape of Replicable Learning
·348 words·2 mins
AI Theory Optimization 🏢 Yale University
This paper reveals surprising computational connections between algorithmic replicability and other learning paradigms, offering novel algorithms and demonstrating separations between replicability and other learning paradigms.
On the cohesion and separability of average-link for hierarchical agglomerative clustering
·1663 words·8 mins
AI Theory Optimization 🏢 Departamento de Informática, PUC-Rio
Average-link hierarchical clustering gets a comprehensive evaluation using new criteria, showing it outperforms other methods when both cohesion and separability matter.