no code implementations • 28 Jan 2023 • Tomer Galanti, Mengjia Xu, Liane Galanti, Tomaso Poggio
In this paper, we investigate the Rademacher complexity of deep sparse neural networks, where each neuron receives a small number of inputs.
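The sparsity setting mentioned in the abstract, where each neuron receives only a small number of inputs, can be sketched with a masked linear layer; the mask construction below is a hypothetical illustration of such bounded fan-in, not the paper's method:

```python
import numpy as np

def sparse_linear(x, weights, mask):
    """Linear layer whose connectivity is restricted by a binary mask."""
    return x @ (weights * mask)

rng = np.random.default_rng(0)
d_in, d_out, k = 16, 8, 3  # each output neuron reads only k of the 16 inputs

# Give every output neuron exactly k incoming connections
mask = np.zeros((d_in, d_out))
for j in range(d_out):
    mask[rng.choice(d_in, size=k, replace=False), j] = 1.0

weights = rng.standard_normal((d_in, d_out))
x = rng.standard_normal((1, d_in))
y = sparse_linear(x, weights, mask)

print(mask.sum(axis=0))  # every column sums to k, i.e. fan-in of 3
```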
no code implementations • 18 Feb 2022 • Tomer Galanti, Liane Galanti, Ido Ben-Shaul
Finally, we show empirically that the effective depth of a trained neural network increases monotonically with the number of random labels in the data.