Beyond the Golden Ratio for Variational Inequality Algorithms
Ahmet Alacaoglu, Axel Böhm, Yura Malitsky; 24(172):1−33, 2023.
Abstract
This study aims to enhance the understanding of the golden ratio algorithm for solving monotone variational inequalities (VI) and convex-concave min-max problems. The algorithm stands out for its ability to adapt its step sizes to the local Lipschitz constants of the operator, removing the need both for hyperparameter tuning and for global Lipschitz continuity. We show that, in the unconstrained case with constant step sizes, this algorithm is equivalent to popular VI methods such as reflected gradient, Popov, or optimistic gradient descent-ascent (OGDA). We then extend our analysis to the constrained setting, enabling larger step sizes and bridging the gap between the golden ratio algorithm and existing algorithms in the literature. In doing so, we eliminate the link between the golden ratio (1+√5)/2 and the algorithm. Furthermore, we improve the adaptive version of the algorithm by removing its maximum step size hyperparameter and by extending it to nonmonotone problems with weak Minty solutions, resulting in superior empirical performance.
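To make the method concrete, the following is a minimal Python sketch of the golden ratio algorithm with a constant step size, applied to an unconstrained bilinear min-max problem. The toy problem, the step size choice lam = phi/(2L), and all names are illustrative assumptions, not the paper's exact setup or code.

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2  # the golden ratio (1+sqrt(5))/2

rng = np.random.default_rng(0)
n = 10
A = rng.standard_normal((n, n))
L = np.linalg.norm(A, 2)  # Lipschitz constant of the operator F below

def F(z):
    # VI operator of the saddle problem min_x max_y x^T A y:
    # F(x, y) = (A y, -A^T x), a monotone (skew-symmetric) map.
    x, y = z[:n], z[n:]
    return np.concatenate([A @ y, -A.T @ x])

lam = phi / (2 * L)              # constant step size (assumed admissible)
z = rng.standard_normal(2 * n)   # iterate z_k
z_bar = z.copy()                 # auxiliary averaged point z_bar_k

for _ in range(5000):
    # convex combination of the current iterate and the previous z_bar
    z_bar = ((phi - 1) * z + z_bar) / phi
    # forward step from z_bar_k using F evaluated at z_k; in the
    # constrained case, a projection onto the feasible set would go here
    z = z_bar - lam * F(z)

print("residual ||F(z)||:", np.linalg.norm(F(z)))
```

In the adaptive variant mentioned above, the constant lam is replaced by a step size estimated from local changes of F; the paper's contribution removes the maximum step size hyperparameter from that rule and extends the analysis to nonmonotone problems with weak Minty solutions.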