1 code implementation • 28 Jan 2023 • Matteo Gamba, Hossein Azizpour, Mårten Björkman
Existing bounds on the generalization error of deep networks assume some form of smooth or bounded dependence on the input variable, but fall short of investigating the mechanisms that control such factors in practice.
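The "smooth dependence on the input" that such bounds assume is typically quantified by the norm of the network's input gradient. A minimal sketch of measuring this quantity empirically, assuming PyTorch; the architecture and data batch are illustrative stand-ins, not the paper's setup:

```python
import torch
import torch.nn as nn

# Toy ReLU network; stands in for any trained model (hypothetical architecture).
model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))

def input_gradient_norm(model, x):
    """Empirical smoothness proxy: L2 norm of d(output)/d(input) per sample."""
    x = x.clone().requires_grad_(True)
    out = model(x).sum()
    (grad,) = torch.autograd.grad(out, x)
    return grad.norm(dim=1)

x = torch.randn(32, 10)  # stand-in data batch
print(input_gradient_norm(model, x).mean())  # average local Lipschitz estimate
```

Bounds of this kind control the test error through such gradient (or Lipschitz) norms; measuring them on trained networks is what connects the assumption to practice.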
1 code implementation • 21 Sep 2022 • Matteo Gamba, Erik Englesson, Mårten Björkman, Hossein Azizpour
The ability of overparameterized deep networks to interpolate noisy data while at the same time showing good generalization performance has recently been characterized in terms of the double descent curve for the test error.
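The double descent curve can be reproduced in a few lines: as model width grows past the interpolation threshold (where parameters match training samples), test error first peaks and then descends again. A minimal sketch using random ReLU features and min-norm least squares, assuming NumPy; the dimensions and noise level are illustrative, not the paper's experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 100, 1000, 50
w_true = rng.normal(size=d)

def make_data(n):
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.5 * rng.normal(size=n)  # noisy labels
    return X, y

Xtr, ytr = make_data(n_train)
Xte, yte = make_data(n_test)

# Random-feature model of varying width p, fit by min-norm least squares.
for p in [20, 50, 90, 100, 110, 200, 500]:
    W = rng.normal(size=(d, p)) / np.sqrt(d)
    Ftr, Fte = np.maximum(Xtr @ W, 0), np.maximum(Xte @ W, 0)  # ReLU features
    beta = np.linalg.pinv(Ftr) @ ytr  # interpolates once p >= n_train
    err = np.mean((Fte @ beta - yte) ** 2)
    print(f"width {p:4d}: test MSE {err:.3f}")
```

Test error typically spikes near p ≈ n_train (the interpolation threshold) and falls again at larger widths, tracing the second descent.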
1 code implementation • 23 Feb 2022 • Matteo Gamba, Adrian Chmielewski-Anders, Josephine Sullivan, Hossein Azizpour, Mårten Björkman
The number of linear regions has been studied as a proxy for the complexity of ReLU networks.
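A ReLU network computes a piecewise linear function, and each distinct on/off pattern of its units identifies one linear region. A minimal sketch of the standard proxy, counting activation patterns sampled along a line through input space (a lower bound on the regions crossed), assuming PyTorch; the architecture and segment are hypothetical:

```python
import torch
import torch.nn as nn

# Small ReLU net; architecture is illustrative.
model = nn.Sequential(
    nn.Linear(2, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

def regions_on_segment(model, x0, x1, steps=10_000):
    """Count distinct ReLU activation patterns along the segment from x0 to x1;
    each pattern identifies one linear region crossed by the segment."""
    t = torch.linspace(0, 1, steps).unsqueeze(1)
    pts = (1 - t) * x0 + t * x1  # (steps, 2) points on the segment
    h, signs = pts, []
    for layer in model:
        h = layer(h)
        if isinstance(layer, nn.ReLU):
            signs.append(h > 0)  # unit on/off pattern at each point
    code = torch.cat(signs, dim=1).to(torch.int8)
    return torch.unique(code, dim=0).shape[0]

x0, x1 = torch.tensor([[-2.0, -2.0]]), torch.tensor([[2.0, 2.0]])
print(regions_on_segment(model, x0, x1))
```

Densifying the sampling (larger `steps`) tightens the count toward the true number of regions the segment crosses.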
1 code implementation • 17 Mar 2020 • Matteo Gamba, Stefan Carlsson, Hossein Azizpour, Mårten Björkman
We investigate the geometric properties of the functions learned by trained ConvNets in the preactivation space of their convolutional layers, by performing an empirical study of hyperplane arrangements induced by a convolutional layer.
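Concretely, each convolutional filter with its bias defines a hyperplane in the space of flattened input patches, and the layer's ReLU activation pattern records which side of each hyperplane a patch falls on. A minimal sketch of extracting this arrangement, assuming PyTorch; the layer shape and input are illustrative, not the paper's models:

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=3)  # illustrative layer

# Each filter w_k with bias b_k defines a hyperplane {z : <w_k, z> + b_k = 0}
# in the space of flattened 3x3x3 input patches.
W = conv.weight.view(conv.out_channels, -1)  # (8, 27) hyperplane normals
b = conv.bias                                # (8,) offsets

x = torch.randn(1, 3, 16, 16)  # stand-in image
patches = nn.functional.unfold(x, kernel_size=3).squeeze(0).T  # (196, 27)

# Sign of each patch w.r.t. each filter hyperplane = the layer's
# preactivation sign pattern, i.e. the cell of the arrangement it occupies.
signs = (patches @ W.T + b) > 0
print(signs.shape)  # (n_patches, n_filters)
```

The matrix product here reproduces the convolution's preactivations patch by patch, so the sign pattern per patch is exactly the cell of the hyperplane arrangement that the patch lies in.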