Global Convergence of Sub-gradient Method for Robust Matrix Recovery: Small Initialization, Noisy Measurements, and Over-parameterization

Jianhao Ma, Salar Fattahi; 24(96):1−84, 2023.

Abstract

This study examines the performance of the sub-gradient method (SubGM) on a natural nonconvex and nonsmooth formulation of low-rank matrix recovery with an l1-loss, where the goal is to recover a low-rank matrix from a limited number of measurements, some of which may be corrupted with noise. The study focuses on the scenario where the rank of the true solution is unknown and overestimated, leading to an over-parameterized model, and asks whether SubGM can still perform well despite the over-parameterization and the noise. It is proven that a simple SubGM with small initialization is agnostic to both over-parameterization and noise in the measurements: small initialization nullifies the effect of over-parameterization on the performance of SubGM, leading to an exponential improvement in its convergence rate. Furthermore, the study provides a unified framework for analyzing the behavior of SubGM under both outlier and Gaussian noise models, showing that SubGM converges to the true solution even in the presence of arbitrarily large and dense noise, and even when the globally optimal solutions do not correspond to the ground truth. The key to these results is a robust variant of the restricted isometry property, called Sign-RIP, which controls the deviation of the sub-differential of the l1-loss from that of an ideal expected loss. Finally, the study considers a subclass of robust low-rank matrix recovery with Gaussian measurements and shows that the number of samples required to guarantee the global convergence of SubGM is independent of the over-parameterized rank.
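To make the setup concrete, the following is a minimal illustrative sketch (not the authors' code) of SubGM with small initialization on the over-parameterized, l1-loss factorized objective described above, using synthetic Gaussian measurements with a fraction of grossly corrupted entries; all dimensions, the outlier fraction, and the geometrically decaying step-size schedule are assumptions made for illustration.

```python
# Sketch of SubGM for robust matrix recovery:
#   minimize over U (n x r_search):  f(U) = (1/m) * sum_k | <A_k, U U^T> - y_k |
# with r_search larger than the true rank (over-parameterization) and a small
# random initialization. All constants below are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n, r_true, r_search, m = 20, 2, 5, 600   # search rank r_search > true rank r_true

# Ground-truth low-rank matrix and Gaussian measurement matrices
X_star = rng.standard_normal((n, r_true))
M_star = X_star @ X_star.T
A = rng.standard_normal((m, n, n))
y = np.einsum('kij,ij->k', A, M_star)

# Corrupt a fraction of the measurements with large outliers
mask = rng.random(m) < 0.2
y[mask] += 50.0 * rng.standard_normal(mask.sum())

# Small initialization (scale alpha << 1), as emphasized in the abstract
alpha = 1e-3
U = alpha * rng.standard_normal((n, r_search))

step = 0.05
for t in range(5000):
    residual = np.einsum('kij,ij->k', A, U @ U.T) - y
    # Subgradient of the l1-loss: residuals are replaced by their signs
    G = np.einsum('k,kij->ij', np.sign(residual), A) / m
    U = U - step * (G + G.T) @ U         # chain rule through the factorization U U^T
    step *= 0.999                        # geometrically decaying step size (illustrative)

print("relative error:", np.linalg.norm(U @ U.T - M_star) / np.linalg.norm(M_star))
```

The decaying step-size schedule is a common choice for subgradient methods and is used here only to keep the sketch self-contained; it is not presented as the schedule analyzed in the paper.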
