Online Stochastic Gradient Descent with Arbitrary Initialization Solves Non-smooth, Non-convex Phase Retrieval

Authors: Yan Shuo Tan, Roman Vershynin; Published in 24(58):1−47, 2023.

Abstract

The problem of phase retrieval can be solved using a two-step procedure. First, a spectral technique is used to obtain a constant-accuracy initial estimate. Then, the estimate is refined to arbitrary precision via first-order optimization of a non-convex loss function. However, numerical experiments suggest that running iterative schemes from a random initialization can also lead to convergence, although with slightly higher sample complexity. In this paper, we prove that constant step size online stochastic gradient descent (SGD) can indeed converge from arbitrary initializations for the non-smooth, non-convex amplitude squared loss objective. In this setting, online SGD is equivalent to the randomized Kaczmarz algorithm from numerical analysis. Our analysis can be extended to other single index models and incorporates new ideas from stochastic process theory, including the concept of a summary state space, which we believe will be valuable for the broader field of non-convex optimization.
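To illustrate the method discussed in the abstract, the sketch below implements online SGD on the amplitude squared loss, which for step size 1/||a_i||^2 coincides with the randomized Kaczmarz update for phase retrieval. This is a minimal, illustrative NumPy example, not the paper's reference implementation: it assumes real Gaussian measurements y_i = |<a_i, x>|, a random (arbitrary) initialization, and the function and parameter names are chosen here for exposition.

```python
import numpy as np

def amplitude_sgd(A, y, n_iter=20000, rng=None):
    """Online SGD on the amplitude squared loss 0.5*(|<a_i, x>| - y_i)^2,
    equivalently the randomized Kaczmarz update for phase retrieval:
    recover x (up to global sign) from y_i = |<a_i, x>|."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    x = rng.standard_normal(n)          # arbitrary (random) initialization
    for _ in range(n_iter):
        i = rng.integers(m)             # sample one measurement uniformly
        a, inner = A[i], A[i] @ x
        # Project x toward the hyperplane <a, x> = sign(<a, x>) * y_i,
        # i.e. an SGD step with step size 1 / ||a||^2.
        x += (np.sign(inner) * y[i] - inner) * a / (a @ a)
    return x

# Toy experiment with real Gaussian measurements (illustrative only).
rng = np.random.default_rng(0)
n, m = 50, 400
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))
y = np.abs(A @ x_true)
x_hat = amplitude_sgd(A, y, rng=1)
err = min(np.linalg.norm(x_hat - x_true), np.linalg.norm(x_hat + x_true))
print(f"relative error (up to sign): {err / np.linalg.norm(x_true):.3e}")
```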
