no code implementations • 26 May 2024 • Samuel Lippl, Kim Stachenfeld
To address this gap, we present a general theory of compositional generalization in kernel models with fixed, potentially nonlinear representations (which also applies to neural networks in the "lazy regime").
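To make the setting concrete, below is a minimal sketch of a kernel model with a fixed, nonlinear representation, in the spirit of the lazy regime the abstract mentions. The random tanh feature map, the additive target, and the train/test split over factor combinations are all illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Compositional inputs: each item is a pair of one-hot factors (assumed setup).
n_a, n_b, d_hidden = 5, 5, 512

def encode(a, b):
    """One-hot concatenation of the two factors."""
    x = np.zeros(n_a + n_b)
    x[a] = 1.0
    x[n_a + b] = 1.0
    return x

# Fixed, nonlinear random-feature map: a stand-in for a lazy-regime network,
# whose representation does not change during training.
W = rng.normal(size=(d_hidden, n_a + n_b)) / np.sqrt(n_a + n_b)
phi = lambda x: np.tanh(W @ x)

# Train on a subset of factor combinations, test on held-out combinations:
# compositional generalization means predicting well on unseen pairs.
train = [(a, b) for a in range(n_a) for b in range(n_b) if (a + b) % 2 == 0]
test = [(a, b) for a in range(n_a) for b in range(n_b) if (a + b) % 2 == 1]

f = lambda a, b: float(a) - float(b)  # hypothetical additive target

X = np.stack([phi(encode(a, b)) for a, b in train])
y = np.array([f(a, b) for a, b in train])

# Ridge regression on the fixed features (primal form of kernel regression).
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(d_hidden), X.T @ y)

X_test = np.stack([phi(encode(a, b)) for a, b in test])
y_test = np.array([f(a, b) for a, b in test])
print("held-out MSE:", np.mean((X_test @ w - y_test) ** 2))
```

Because the feature map stays fixed, everything about generalization here is determined by the induced kernel and the training combinations, which is what makes a general theory of this setting tractable.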
no code implementations • 11 Apr 2024 • Samuel Lippl, Raphael Gerraty, John Morrison, Nikolaus Kriegeskorte
As animals interact with their environments, they must infer properties of their surroundings.
no code implementations • 3 Oct 2023 • Jack W. Lindsey, Samuel Lippl
Our findings hold qualitatively for a deep architecture trained on image classification tasks, and our characterization of the nested feature selection regime motivates a modification to PT+FT (pretraining followed by fine-tuning) that we find empirically improves performance.
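For readers unfamiliar with the PT+FT pipeline the abstract refers to, here is a minimal sketch: pretrain a network on an auxiliary task, then continue training the same weights on the target task. The toy regression tasks, the shared-feature structure between them, and all hyperparameters are illustrative assumptions; this is not the paper's image-classification setup or its proposed modification.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

d = 20

def make_data(n, w_true):
    """Linear-teacher regression data (hypothetical toy tasks)."""
    X = torch.randn(n, d)
    return X, X @ w_true + 0.1 * torch.randn(n)

# The fine-tuning task reuses some, but not all, pretraining features.
w_pre = torch.zeros(d); w_pre[:4] = 1.0
w_ft = torch.zeros(d); w_ft[:2] = 1.0; w_ft[4] = 1.0

net = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 1))

def train(net, X, y, steps=2000, lr=1e-2):
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(X).squeeze(-1) - y) ** 2).mean()
        loss.backward()
        opt.step()

# PT: pretrain on the plentiful auxiliary task.
X_pre, y_pre = make_data(200, w_pre)
train(net, X_pre, y_pre)

# FT: fine-tune the same weights on the scarce target task.
X_ft, y_ft = make_data(50, w_ft)
train(net, X_ft, y_ft)

X_eval, y_eval = make_data(1000, w_ft)
with torch.no_grad():
    mse = ((net(X_eval).squeeze(-1) - y_eval) ** 2).mean()
print("target-task test MSE after PT+FT:", mse.item())
```

The interesting question the abstract points at is what implicit bias this two-stage procedure imposes, i.e., which pretrained features the fine-tuned solution reuses versus discards.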
1 code implementation • 5 Feb 2022 • Samuel Lippl, L. F. Abbott, SueYeon Chung
Understanding the asymptotic behavior of gradient-descent training of deep neural networks is essential for revealing inductive biases and improving network performance.
no code implementations • 1 Jan 2021 • Samuel Lippl, Benjamin Peters, Nikolaus Kriegeskorte
To test this hypothesis, we manipulate the degree of weight sharing across layers in ResNets using soft gradient coupling.
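One plausible reading of "soft gradient coupling" is sketched below: after each backward pass, every layer's gradient is interpolated toward the mean gradient across layers, so that a coupling coefficient of 0 leaves the layers independent and 1 enforces fully shared, recurrent-style updates. The `TinyResNet`, the interpolation rule, and the coefficient name `alpha` are assumptions for illustration, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class TinyResNet(nn.Module):
    """Stack of residual blocks with identically shaped layer weights."""
    def __init__(self, d=32, n_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList([nn.Linear(d, d) for _ in range(n_blocks)])
        self.head = nn.Linear(d, 10)

    def forward(self, x):
        for blk in self.blocks:
            x = x + torch.relu(blk(x))
        return self.head(x)

def couple_gradients(blocks, alpha):
    """Interpolate each block's gradient toward the cross-block mean.
    alpha=0: independent layers; alpha=1: fully shared (tied) updates."""
    for name in ("weight", "bias"):
        grads = [getattr(blk, name).grad for blk in blocks]
        mean = torch.stack(grads).mean(dim=0)
        for g in grads:
            g.mul_(1 - alpha).add_(alpha * mean)

net = TinyResNet()
opt = torch.optim.SGD(net.parameters(), lr=1e-2)
x, y = torch.randn(64, 32), torch.randint(0, 10, (64,))

for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(net(x), y)
    loss.backward()
    couple_gradients(net.blocks, alpha=0.5)  # partway between shared and unshared
    opt.step()
print("final loss:", loss.item())
```

Sweeping `alpha` between 0 and 1 is what lets one manipulate the degree of weight sharing continuously, rather than comparing only the fully tied and fully untied extremes.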