Learning Curves for Deep Neural Networks: A Gaussian Field Theory Perspective

12 Jun 2019 · Omry Cohen, Or Malka, Zohar Ringel

In the past decade, deep neural networks (DNNs) have come to the fore as the leading machine learning algorithms for a variety of tasks. Their rise was founded on market needs and engineering craftsmanship, the latter based more on trial and error than on theory. While still far behind the application forefront, the theoretical study of DNNs has recently made important advances in analyzing the highly over-parameterized regime, where some exact results have been obtained. Leveraging these ideas and adopting a more physics-like approach, we construct here a versatile field-theory formalism for supervised deep learning, involving renormalization group, Feynman diagrams, and replicas. In particular, we show that our approach leads to highly accurate predictions of learning curves of truly deep DNNs trained on polynomial regression tasks, and that these predictions can be used for efficient hyper-parameter optimization. In addition, they explain how DNNs generalize well despite being highly over-parameterized, owing to an entropic bias towards simple functions which, for fully-connected DNNs with data sampled on the hypersphere, are low-order polynomials in the input vector. Since DNNs are complex interacting systems of artificial neurons, we believe that such tools and methodologies borrowed from condensed matter physics will prove essential for obtaining an accurate quantitative understanding of deep learning.
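The experimental setting the abstract refers to, learning curves of fully-connected networks on polynomial regression with inputs on the hypersphere, can be illustrated with a minimal empirical sketch. This is not the paper's code or its field-theory prediction; the input dimension, network widths, polynomial target, and sample sizes below are hypothetical choices made only to show how such a learning curve (test error versus training-set size) would be measured.

```python
# Minimal sketch (assumed setup, not the paper's implementation):
# empirical learning curve of a fully-connected network trained on a
# low-order polynomial target with inputs sampled on the unit hypersphere.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
d = 10  # input dimension (hypothetical choice)

def sample_sphere(n, d):
    """Sample n points uniformly on the unit hypersphere S^{d-1}."""
    x = rng.standard_normal((n, d))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def target(x):
    """Low-order polynomial target: a linear plus a quadratic term."""
    return x[:, 0] + 0.5 * x[:, 1] * x[:, 2]

# Fixed test set for estimating generalization error.
x_test = sample_sphere(2000, d)
y_test = target(x_test)

# Sweep the training-set size to trace out an empirical learning curve.
for n_train in [50, 100, 200, 400, 800]:
    x_train = sample_sphere(n_train, d)
    y_train = target(x_train)
    net = MLPRegressor(hidden_layer_sizes=(256, 256), activation='relu',
                       max_iter=5000, tol=1e-6, random_state=0)
    net.fit(x_train, y_train)
    mse = np.mean((net.predict(x_test) - y_test) ** 2)
    print(f"n_train={n_train:4d}  test MSE={mse:.4f}")
```

Under the entropic-bias picture described above, one would expect the test error on such low-order polynomial targets to fall quickly with training-set size even though the network is heavily over-parameterized.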
