TL;DR#
Many algorithms struggle to solve large zero-sum games efficiently, especially when high accuracy is required. This is because their performance is typically analyzed in worst-case scenarios, which are rarely representative and lead to overly pessimistic estimates. A key problem is the dependence on condition-number-like quantities, which can be exponentially large in the size of the game.
This paper tackles these issues through smoothed analysis, a more realistic way to evaluate algorithms that considers small random perturbations of the game. The authors show that several gradient-based algorithms (OGDA, EGDA, and iterative smoothing) have polynomial smoothed complexity: their iteration count grows polynomially in the game size, the inverse of the target accuracy, and the inverse of the perturbation magnitude. This makes them practical even for high-accuracy solutions, contrary to what worst-case analysis suggests. The study also connects the convergence rate to the stability of the game’s equilibrium under perturbations.
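To make the setting concrete, here is a minimal, self-contained sketch (not the authors’ experiments) of one of the algorithms mentioned above: projected OGDA running on a smoothed matrix game, i.e., a fixed payoff matrix plus small Gaussian perturbations, with the duality gap as the accuracy measure. The helper names (`project_simplex`, `ogda_smoothed_game`) and the step size, game size, and perturbation level are illustrative choices, not values from the paper.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def duality_gap(A, x, y):
    """Exploitability of (x, y) in the zero-sum game min_x max_y x^T A y."""
    return np.max(A.T @ x) - np.min(A @ y)

def ogda_smoothed_game(A0, sigma=0.1, eta=0.05, iters=5000, seed=0):
    """Projected Optimistic Gradient Descent Ascent (OGDA) on a smoothed
    instance A = A0 + sigma * Gaussian noise (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    A = A0 + sigma * rng.standard_normal(A0.shape)   # smoothed perturbation
    n, m = A.shape
    x, y = np.ones(n) / n, np.ones(m) / m             # uniform starting strategies
    gx_prev, gy_prev = A @ y, A.T @ x                 # past gradients for the optimistic step
    for _ in range(iters):
        gx, gy = A @ y, A.T @ x                       # current gradients of x^T A y
        x = project_simplex(x - eta * (2 * gx - gx_prev))  # min player descends
        y = project_simplex(y + eta * (2 * gy - gy_prev))  # max player ascends
        gx_prev, gy_prev = gx, gy
    return x, y, duality_gap(A, x, y)

if __name__ == "__main__":
    A0 = np.array([[ 0.0,  1.0, -1.0],
                   [-1.0,  0.0,  1.0],
                   [ 1.0, -1.0,  0.0]])               # rock-paper-scissors payoffs
    x, y, gap = ogda_smoothed_game(A0)
    print("final duality gap:", gap)
```

In the smoothed-analysis view, the number of iterations needed to drive this duality gap below a target accuracy is bounded in expectation over the random perturbation, rather than for a single worst-case payoff matrix.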
Key Takeaways#
Why does it matter?#
This paper addresses a central limitation of existing gradient-based algorithms for solving large zero-sum games: in high-precision regimes their guarantees degrade because of their dependence on condition numbers. By using smoothed analysis, the authors provide a more realistic and practical assessment of these algorithms’ performance and offer insight into their convergence rates. This paves the way for more efficient and robust algorithms for large-scale zero-sum games.