Search Results for author: Francisco Acosta

Found 2 papers, 1 paper with code

Identifying Interpretable Visual Features in Artificial and Biological Neural Systems

no code implementations • 17 Oct 2023 • David Klindt, Sophia Sanborn, Francisco Acosta, Frédéric Poitevin, Nina Miolane

Single neurons in neural networks are often interpretable in that they represent individual, intuitively meaningful features.

Disentanglement

Quantifying Extrinsic Curvature in Neural Manifolds

1 code implementation • 20 Dec 2022 • Francisco Acosta, Sophia Sanborn, Khanh Dao Duc, Manu Madhav, Nina Miolane

The neural manifold hypothesis postulates that the activity of a neural population forms a low-dimensional manifold whose structure reflects that of the encoded task variables.

Dimensionality Reduction • Topological Data Analysis
