Quasi-Equivalence between Width and Depth of Neural Networks

Fenglei Fan, Rongjie Lai, Ge Wang; 24(183):1−22, 2023.

Abstract

Classic studies have shown that sufficiently wide networks are universal approximators, while recent advances in deep learning have demonstrated the power of deep networks. In this study, we investigate whether artificial neural network design should have a directional preference, and explore the interplay between the width and depth of a network. Drawing inspiration from the De Morgan law, we address this fundamental question by establishing a quasi-equivalence between the width and depth of ReLU networks. We introduce two transforms that map any ReLU network to a wide ReLU network or a deep ReLU network, respectively, while preserving the essential capabilities of the original network. Our findings reveal that a deep network has an equivalent wide network, and vice versa, up to a negligible error.
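The abstract does not spell out the paper's two transforms, but the width-depth equivalence it claims can be illustrated on a toy ReLU example: a depth-d composition of two-unit "hat" layers computes a sawtooth on [0, 1], and the same piecewise-linear function can be written exactly as a one-hidden-layer ReLU network with 2^d units. The sketch below is a standard piecewise-linear rewriting, not the paper's construction; the names deep_net and wide_net are hypothetical, chosen for illustration.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def deep_net(x, depth):
    """Deep ReLU net: compose the 'hat' map `depth` times.
    Each layer uses 2 hidden ReLU units: hat(x) = relu(2x) - relu(4x - 2),
    which equals 2x on [0, 0.5] and 2 - 2x on [0.5, 1]."""
    y = x
    for _ in range(depth):
        y = relu(2.0 * y) - relu(4.0 * y - 2.0)
    return y

def wide_net(x, depth):
    """One-hidden-layer ReLU net computing the same sawtooth on [0, 1].
    The sawtooth has breakpoints at k / 2**depth and slopes +-2**depth,
    so it is a sum of shifted ReLUs weighted by the slope changes."""
    n = 2 ** depth                     # number of linear pieces
    breaks = np.arange(n) / n          # breakpoints 0, 1/n, ..., (n-1)/n
    slopes = np.where(np.arange(n) % 2 == 0, float(n), -float(n))
    weights = np.diff(slopes, prepend=0.0)  # slope change at each breakpoint
    return sum(w * relu(x - b) for w, b in zip(weights, breaks))

x = np.linspace(0.0, 1.0, 1001)
for d in (1, 2, 3, 4):
    assert np.allclose(deep_net(x, d), wide_net(x, d)), d
print("deep net (2 units/layer, depth d) matches wide net (2**d units, depth 1)")
```

This naive rewriting pays an exponential width cost for removing depth; it shows only that the same function admits both a deep and a wide ReLU realization, whereas the paper's transforms are designed to preserve the original network's capabilities up to a negligible error.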
