Entropic Fictitious Play for Mean Field Optimization Problem
By Fan Chen, Zhenjie Ren, and Songbo Wang; Volume 24, Issue 211, Pages 1-36, 2023.
Abstract
This study focuses on the mean field limit of two-layer neural networks, where the number of neurons tends to infinity. In this regime, optimizing the neuron parameters becomes an optimization problem over probability measures. By introducing an entropic regularizer, we identify the minimizer of the regularized problem as a fixed point. To recover this fixed point, we propose a novel training algorithm called entropic fictitious play, inspired by the classical fictitious play in game theory for learning Nash equilibria. The entropic fictitious play algorithm has a two-loop iteration structure. In this paper, we prove its exponential convergence and validate the theoretical results through simple numerical examples.
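The abstract describes a two-loop scheme: an inner loop that approximates a Gibbs measure induced by the current measure and the entropic regularizer, and an outer loop that mixes the current measure toward it, as in fictitious play. Below is a minimal, hedged sketch of such a scheme for a mean-field two-layer ReLU network on a toy 1-D regression task, assuming a squared loss, Langevin sampling for the inner loop, and particle replacement for the outer-loop convex combination; all names, step sizes, and constants (sigma, alpha, langevin_steps, etc.) are illustrative assumptions, not the paper's exact algorithm or parameters.

```python
# Sketch of an entropic-fictitious-play-style training loop (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = sin(2*pi*x) on [-1, 1]
X = rng.uniform(-1.0, 1.0, size=200)
Y = np.sin(2.0 * np.pi * X)

N = 500              # number of particles representing the measure m (assumed)
sigma = 0.05         # entropic regularization strength (assumed)
lam = 0.1            # Gaussian reference / weight-decay term (assumed)
alpha = 0.3          # outer-loop mixing rate: m <- (1 - alpha) m + alpha m_hat
eta = 1e-3           # Langevin step size for the inner loop (assumed)
langevin_steps = 200
outer_iters = 50

# Each particle is theta = (a, w, b); the network output is the mean of a * relu(w x + b).
theta = rng.normal(size=(N, 3))

def predict(theta, x):
    a, w, b = theta[:, 0], theta[:, 1], theta[:, 2]
    pre = np.outer(x, w) + b                 # shape (n_data, N)
    return (np.maximum(pre, 0.0) * a).mean(axis=1)

def flat_derivative_grad(theta_cur, particles):
    """Gradient in theta of the linear functional derivative dF/dm(m, theta),
    with m frozen at the particle cloud `theta_cur` (squared loss)."""
    resid = predict(theta_cur, X) - Y        # shape (n_data,)
    a, w, b = particles[:, 0], particles[:, 1], particles[:, 2]
    pre = np.outer(X, w) + b
    act = np.maximum(pre, 0.0)
    ind = (pre > 0.0).astype(float)
    g_a = resid @ act / len(X)
    g_w = resid @ (ind * a * X[:, None]) / len(X)
    g_b = resid @ (ind * a) / len(X)
    return np.stack([g_a, g_w, g_b], axis=1)

for t in range(outer_iters):
    # Inner loop: Langevin sampling from the Gibbs measure
    #   m_hat(dtheta) ∝ exp(-(dF/dm(m_t, theta) + (lam/2)|theta|^2) / sigma)
    samples = theta.copy()
    for _ in range(langevin_steps):
        drift = (flat_derivative_grad(theta, samples) + lam * samples) / sigma
        samples += -eta * drift + np.sqrt(2.0 * eta) * rng.normal(size=samples.shape)

    # Outer loop: fictitious-play convex combination m <- (1 - alpha) m + alpha m_hat,
    # implemented here by replacing a random alpha-fraction of particles.
    mask = rng.random(N) < alpha
    theta[mask] = samples[mask]

    mse = np.mean((predict(theta, X) - Y) ** 2)
    print(f"outer iter {t:3d}  mse {mse:.4f}")
```

The particle replacement in the outer loop is one Monte Carlo way to realize the convex combination of measures; the Langevin inner loop is likewise only one possible sampler for the Gibbs measure.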