no code implementations • 3 Oct 2023 • Shirui Chen, Stefano Recanatesi, Eric Shea-Brown
The generalization capacity of deep neural networks has been studied in a variety of ways, including at least two distinct categories of approach: one based on the shape of the loss landscape in parameter space, and the other based on the structure of the representation manifold in feature space (that is, the space of unit activities).
no code implementations • NeurIPS Workshop Neuro_AI 2019 • Matthew Farrell, Stefano Recanatesi, Guillaume Lajoie, Eric Shea-Brown
What determines the dimensionality of activity in neural circuits?
no code implementations • 2 Jun 2019 • Stefano Recanatesi, Matthew Farrell, Madhu Advani, Timothy Moore, Guillaume Lajoie, Eric Shea-Brown
Datasets such as images, text, or movies are embedded in high-dimensional spaces.
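A standard way to quantify the effective dimensionality of such activity or data is the participation ratio of the covariance eigenvalues, a measure commonly used in this literature. The sketch below is illustrative only (the function name and the synthetic data are our own, not taken from the papers above): data embedded in a 100-dimensional space but generated from 3 latent directions yields a participation ratio close to 3.

```python
import numpy as np

def participation_ratio(X):
    """Effective dimensionality of X (samples x features):
    PR = (sum_i lambda_i)^2 / sum_i lambda_i^2,
    where lambda_i are eigenvalues of the covariance matrix."""
    X = X - X.mean(axis=0)                      # center the data
    cov = X.T @ X / (X.shape[0] - 1)            # sample covariance
    eig = np.clip(np.linalg.eigvalsh(cov), 0, None)
    return eig.sum() ** 2 / (eig ** 2).sum()

rng = np.random.default_rng(0)
# 500 points with 3 latent dimensions, randomly embedded in 100-d space
latent = rng.normal(size=(500, 3))
embedded = latent @ rng.normal(size=(3, 100))
pr = participation_ratio(embedded)              # approximately 3
```

The participation ratio equals the ambient dimension for isotropic data and drops toward 1 as variance concentrates along a single direction, which makes it a convenient scalar summary of how "spread out" activity is across dimensions.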