Part of the series: The Multifaceted Complexity of Machine Learning

On Min-Max Optimization and Halpern Iteration

Jelena Diakonikolas, University of Wisconsin–Madison

Wednesday, April 14, 2021



Abstract: Min-max optimization constitutes a fundamental class of problems with a range of applications in areas such as game theory, engineering, finance, and the training of adversarial models in machine learning. In this talk, I will provide an overview of smooth convex-concave min-max optimization. I will then discuss connections between smooth min-max optimization and the problem of finding a fixed point of a nonexpansive (1-Lipschitz) map. This connection allows us to use variants of Halpern iteration, a classical method for finding fixed points of nonexpansive maps, to solve broad classes of min-max optimization problems. The resulting algorithms attain near-optimal convergence rates for different classes of smooth min-max problems, are parameter-free, and provide guarantees both for the associated optimality gap and for an appropriate notion of stationarity (in unconstrained optimization, an approximate stationary point is a point with small gradient). These are the first results to achieve all of the aforementioned guarantees simultaneously and, up to logarithmic factors, they completely resolve the complexity of smooth min-max optimization.
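
To make the fixed-point connection concrete, below is a minimal sketch (not the talk's exact method) of classical Halpern iteration. It is applied to the resolvent of the monotone saddle operator F(x, y) = (y, -x) arising from the bilinear min-max problem min_x max_y xy; the operator, the step size eta, and the 1/(k+2) anchoring schedule are illustrative choices, not necessarily the variants analyzed in the talk.

import numpy as np

# Illustrative example: Halpern iteration on the resolvent of the monotone
# saddle operator F(x, y) = (y, -x), which arises from the bilinear min-max
# problem min_x max_y x*y with unique saddle point (0, 0).
eta = 1.0
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])               # F(z) = A z: monotone and 1-Lipschitz
J = np.linalg.inv(np.eye(2) + eta * A)    # resolvent (I + eta*F)^{-1}; resolvents
                                          # of monotone operators are nonexpansive

def T(z):
    return J @ z

z0 = np.array([1.0, 1.0])                 # anchor point x_0
z = z0.copy()
for k in range(1000):
    lam = 1.0 / (k + 2)                   # classical Halpern anchoring schedule
    z = lam * z0 + (1.0 - lam) * T(z)     # x_{k+1} = lam*x_0 + (1 - lam)*T(x_k)

print("iterate:", z)                                      # approaches (0, 0)
print("fixed-point residual:", np.linalg.norm(z - T(z)))  # decays as O(1/k)

The anchoring toward x_0 is what distinguishes Halpern iteration from plain fixed-point iteration x_{k+1} = T(x_k), which can fail to converge for merely nonexpansive maps (e.g., rotations); with the 1/(k+2) schedule, the fixed-point residual ||x_k - T(x_k)|| is known to decay at an O(1/k) rate, which is the kind of near-optimal guarantee the abstract refers to.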