Optimization
Accelerating Nash Equilibrium Convergence in Monte Carlo Settings Through Counterfactual Value Based Fictitious Play
·1841 words·9 mins·
AI Theory
Optimization
🏢 Huazhong University of Science and Technology
MCCFVFP, a novel Monte Carlo-based algorithm, accelerates Nash equilibrium convergence in large-scale games by combining CFR’s counterfactual value calculations with fictitious play’s best-response strategy updates.
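To make the recipe concrete, here is a minimal sketch of the fictitious-play half of the idea on a toy zero-sum matrix game. MCCFVFP replaces the exact action values computed below with sampled counterfactual-value estimates, so treat this as an illustration of the update pattern, not the paper's algorithm.

```python
import numpy as np

# Fictitious play on rock-paper-scissors: row player maximizes x^T A y,
# column player minimizes it. Each round, both best-respond to the
# opponent's empirical average strategy.
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]], dtype=float)

counts_x = np.ones(3)   # action counts, seeded uniformly
counts_y = np.ones(3)

for t in range(10_000):
    avg_y = counts_y / counts_y.sum()
    avg_x = counts_x / counts_x.sum()
    # Exact action values vs. the average opponent; MCCFVFP would
    # replace these with Monte Carlo counterfactual-value estimates.
    counts_x[np.argmax(A @ avg_y)] += 1
    counts_y[np.argmin(avg_x @ A)] += 1

print(counts_x / counts_x.sum())   # -> approx. [1/3, 1/3, 1/3]
```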
Accelerating Matroid Optimization through Fast Imprecise Oracles
·485 words·3 mins·
AI Theory
Optimization
🏢 Technical University of Berlin
Fast imprecise oracles drastically reduce query times in matroid optimization, achieving near-optimal performance with few accurate queries.
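The two-oracle pattern is easy to sketch: run standard greedy, ask the cheap imprecise oracle first, and pay for the exact oracle only when the cheap answer is inconclusive. The abstaining-oracle interface below is an assumption made for illustration; the paper's error model and query-complexity guarantees are more refined.

```python
from typing import Callable, Iterable, Optional, Set, Tuple

def greedy_max_weight(
    elements: Iterable[Tuple[str, float]],             # (name, weight) pairs
    fast_indep: Callable[[Set[str]], Optional[bool]],  # cheap; may return None ("unsure")
    exact_indep: Callable[[Set[str]], bool],           # expensive, always correct
) -> Set[str]:
    """Greedy max-weight basis, consulting the exact oracle only on demand."""
    basis: Set[str] = set()
    exact_queries = 0
    for name, _weight in sorted(elements, key=lambda e: -e[1]):
        candidate = basis | {name}
        answer = fast_indep(candidate)
        if answer is None:                    # imprecise oracle abstained:
            answer = exact_indep(candidate)   # fall back to the slow oracle
            exact_queries += 1
        if answer:
            basis = candidate
    print(f"exact oracle queries: {exact_queries}")
    return basis

# Toy run on a rank-2 uniform matroid (independent iff |S| <= 2):
items = [("a", 5.0), ("b", 3.0), ("c", 2.0)]
fast = lambda S: True if len(S) <= 1 else None   # cheap, unsure near the rank
exact = lambda S: len(S) <= 2
print(greedy_max_weight(items, fast, exact))     # -> {'a', 'b'}
```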
Accelerating ERM for data-driven algorithm design using output-sensitive techniques
·366 words·2 mins·
AI Theory
Optimization
🏢 Carnegie Mellon University
Accelerating ERM for data-driven algorithm design using output-sensitive techniques achieves computationally efficient learning by scaling with the actual number of pieces in the dual loss function, not the worst-case bound.
Accelerated Regularized Learning in Finite N-Person Games
·1352 words·7 mins·
AI Theory
Optimization
🏢 Stanford University
Accelerated learning in games achieved! FTXL algorithm exponentially speeds up convergence to Nash equilibria in finite N-person games, even under limited feedback.
Abductive Reasoning in Logical Credal Networks
·1676 words·8 mins·
AI Generated
AI Theory
Optimization
🏢 IBM Research
This paper presents efficient algorithms for abductive reasoning in Logical Credal Networks (LCNs), addressing the MAP and Marginal MAP inference tasks to enable scalable solutions for complex real-world applications.
A Universal Growth Rate for Learning with Smooth Surrogate Losses
·1364 words·7 mins·
AI Generated
AI Theory
Optimization
🏢 Courant Institute
This paper reveals a universal square-root growth rate for H-consistency bounds of smooth surrogate losses in classification, significantly advancing our understanding of loss function selection.
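Schematically, with the theorem's constants and minimizability gaps folded into a single C (a simplification, not the paper's exact statement), the square-root growth reads as below, where R_{01} is the zero-one risk and R_ℓ the smooth surrogate risk:

```latex
\[
  R_{01}(h) \;-\; \inf_{h' \in H} R_{01}(h')
  \;\le\;
  C \sqrt{\, R_\ell(h) \;-\; \inf_{h' \in H} R_\ell(h') \,}
  \qquad \text{for small surrogate excess risk.}
\]
```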
A Theory of Optimistically Universal Online Learnability for General Concept Classes
·411 words·2 mins·
AI Generated
AI Theory
Optimization
🏢 Purdue University
This paper fully characterizes concept classes optimistically universally learnable online, introducing novel algorithms and revealing equivalences between agnostic and realizable settings.
A Simple and Optimal Approach for Universal Online Learning with Gradient Variations
·244 words·2 mins·
AI Theory
Optimization
🏢 Nanjing University
A novel universal online learning algorithm achieves optimal gradient-variation regret across diverse function curvatures, boasting efficiency with only one gradient query per round.
A Simple and Adaptive Learning Rate for FTRL in Online Learning with Minimax Regret of Θ(T^{2/3}) and its Application to Best-of-Both-Worlds
·334 words·2 mins·
AI Theory
Optimization
🏢 University of Tokyo
A new adaptive learning rate for FTRL achieves minimax regret of O(T^{2/3}) in online learning, improving existing best-of-both-worlds algorithms for various hard problems.
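For intuition, here is plain FTRL over the simplex (equivalently, exponential weights on cumulative losses) with a simplified t^{-1/3} schedule, the kind of rate that produces T^{2/3}-type regret in the hard settings the paper targets. The paper's actual learning rate adapts to the observed losses; this sketch does not reproduce that.

```python
import numpy as np

rng = np.random.default_rng(0)
K, T = 5, 3000
cum_loss = np.zeros(K)            # cumulative loss of each of K experts
alg_loss = 0.0

for t in range(1, T + 1):
    eta = t ** (-1 / 3)           # simplified schedule; the paper's rate
                                  # is adaptive, not a fixed power of t
    logits = -eta * cum_loss      # FTRL with negative-entropy regularizer
    p = np.exp(logits - logits.max())
    p /= p.sum()
    loss = rng.uniform(size=K)    # stand-in for adversarial losses
    alg_loss += p @ loss
    cum_loss += loss

print(f"regret: {alg_loss - cum_loss.min():.1f}  vs  T^(2/3) = {T ** (2/3):.1f}")
```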
A Separation in Heavy-Tailed Sampling: Gaussian vs. Stable Oracles for Proximal Samplers
·1758 words·9 mins·
AI Theory
Optimization
🏢 Georgia Institute of Technology
Stable oracles outperform Gaussian oracles in high-accuracy heavy-tailed sampling, overcoming limitations of Gaussian-based proximal samplers.
A provable control of sensitivity of neural networks through a direct parameterization of the overall bi-Lipschitzness
·4589 words·22 mins·
AI Theory
Optimization
🏢 University of Tokyo
New framework directly controls neural network sensitivity by precisely parameterizing overall bi-Lipschitzness, offering improved robustness and generalization.
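The paper constructs its own direct parameterization; as a simpler, well-known illustration of the concept (not the paper's method), a residual block x + α·g(x) with Lip(g) ≤ 1 and α < 1 is bi-Lipschitz with constants (1 − α, 1 + α):

```python
import numpy as np

rng = np.random.default_rng(1)

def residual_block(x, W, alpha=0.5):
    """y = x + alpha * tanh(W_hat @ x) with spectral norm of W_hat <= 1.

    tanh is 1-Lipschitz, so the inner map has Lipschitz constant <= 1 and
    (1 - alpha)|x - x'| <= |y - y'| <= (1 + alpha)|x - x'| holds.
    """
    W_hat = W / max(np.linalg.norm(W, 2), 1.0)   # scale spectral norm to <= 1
    return x + alpha * np.tanh(W_hat @ x)

W = rng.normal(size=(8, 8))
ratios = []
for _ in range(1000):                            # empirical sanity check
    x, xp = rng.normal(size=8), rng.normal(size=8)
    r = np.linalg.norm(residual_block(x, W) - residual_block(xp, W))
    ratios.append(r / np.linalg.norm(x - xp))
print(f"observed: [{min(ratios):.3f}, {max(ratios):.3f}]  bounds: [0.500, 1.500]")
```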
A Primal-Dual-Assisted Penalty Approach to Bilevel Optimization with Coupled Constraints
·2217 words·11 mins·
AI Theory
Optimization
🏢 Rensselaer Polytechnic Institute
BLOCC, a novel first-order algorithm, efficiently solves bilevel optimization problems with coupled constraints, offering improved scalability and convergence for machine learning applications.
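The penalty idea behind such methods shows up already on a toy bilevel problem: replace lower-level optimality by a penalized value-function gap and run gradient descent on the single-level surrogate. Everything below (problem, penalty weight, step size) is made up for illustration; BLOCC's primal-dual treatment of coupled constraints is not shown.

```python
# Toy bilevel problem:
#   min_x  (x - 1)^2 + (y*(x) - 1)^2   s.t.  y*(x) = argmin_y (y - x)^2
# Since min_y (y - x)^2 = 0, the penalized single-level surrogate is
#   min_{x,y}  (x - 1)^2 + (y - 1)^2 + rho * (y - x)^2,
# whose minimizer here is exactly the bilevel solution x = y = 1.
rho, lr = 50.0, 5e-3
x, y = 3.0, -2.0
for _ in range(20_000):
    gx = 2 * (x - 1) - 2 * rho * (y - x)   # d/dx of the surrogate
    gy = 2 * (y - 1) + 2 * rho * (y - x)   # d/dy of the surrogate
    x, y = x - lr * gx, y - lr * gy
print(f"x = {x:.3f}, y = {y:.3f}   (bilevel optimum: x = y = 1)")
```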
A Neural Network Approach for Efficiently Answering Most Probable Explanation Queries in Probabilistic Models
·11719 words·56 mins·
AI Theory
Optimization
🏢 University of Texas at Dallas
A novel neural network efficiently answers arbitrary Most Probable Explanation (MPE) queries in large probabilistic models, eliminating the need for slow inference algorithms.
A Fast Convoluted Story: Scaling Probabilistic Inference for Integer Arithmetics
·2476 words·12 mins·
AI Generated
AI Theory
Optimization
🏢 KU Leuven
Revolutionizing probabilistic inference, PLIA♭ uses tensor operations and FFTs to scale integer arithmetic, achieving orders-of-magnitude speedups in inference and learning times.
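The core trick is classical: the PMF of a sum of independent integer random variables is the convolution of their PMFs, which an FFT computes in O(n log n) instead of O(n²). A minimal numpy illustration (not the paper's tensorised pipeline):

```python
import numpy as np

def add_rvs(p: np.ndarray, q: np.ndarray) -> np.ndarray:
    """PMF of X + Y for independent integer rvs supported on 0..len-1."""
    n = len(p) + len(q) - 1                  # support size of the sum
    r = np.fft.irfft(np.fft.rfft(p, n) * np.fft.rfft(q, n), n)
    return np.clip(r, 0.0, None)             # clamp FFT round-off below zero

die = np.zeros(7)
die[1:] = 1 / 6                              # fair die: values 1..6
two_dice = add_rvs(die, die)
print(f"P(sum = 7) = {two_dice[7]:.4f}")     # -> 0.1667 (= 6/36)
```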
A Combinatorial Algorithm for the Semi-Discrete Optimal Transport Problem
·1938 words·10 mins·
loading
·
loading
AI Theory
Optimization
🏢 Duke University
A new combinatorial algorithm dramatically speeds up semi-discrete optimal transport calculations, offering an efficient solution for large datasets and higher dimensions.
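For context: semi-discrete OT moves a continuous source onto finitely many target points, and the optimal plan is induced by Laguerre cells, sending x to argmin_j ‖x − y_j‖² − w_j for dual weights w. The sketch below fits such weights by a naive fixed-point iteration on cell masses (sample count, step size, and iterations are arbitrary choices); the paper's combinatorial algorithm solves the same problem far more efficiently and scales to higher dimensions.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(size=(10_000, 2))     # samples approximating the source
Y = rng.uniform(size=(5, 2))          # discrete target points
nu = np.full(5, 1 / 5)                # prescribed mass per target

w = np.zeros(5)                       # one dual weight per target
for _ in range(300):
    # Laguerre-cell assignment: each sample goes to argmin_j |x-y_j|^2 - w_j.
    scores = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1) - w
    cells = scores.argmin(axis=1)
    mass = np.bincount(cells, minlength=5) / len(X)
    w += 0.5 * (nu - mass)            # raise w_j to grow under-filled cells

cost = np.mean(((X - Y[cells]) ** 2).sum(-1))
print("cell masses:", np.round(mass, 3), " transport cost:", round(cost, 4))
```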
A Boosting-Type Convergence Result for AdaBoost.MH with Factorized Multi-Class Classifiers
·358 words·2 mins·
AI Generated
AI Theory
Optimization
🏢 Wuhan University
Solved a long-standing open problem: Factorized AdaBoost.MH now has a proven convergence rate!
4+3 Phases of Compute-Optimal Neural Scaling Laws
·3282 words·16 mins·
AI Theory
Optimization
🏢 McGill University
Researchers discovered four distinct compute-optimal phases for training neural networks, offering new predictions for resource-efficient large model training.