1 code implementation • 13 Dec 2023 • Giovanni Luca Marchetti, Christopher Hillar, Danica Kragic, Sophia Sanborn
In this work, we formally prove that, under certain conditions, if a neural network is invariant to a finite group, then its weights recover the Fourier transform on that group.
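To ground the setting, here is a minimal sketch (not the paper's construction) of the Fourier transform on the simplest finite group, the cyclic group Z/n: it is the DFT matrix, whose rows are the group's characters, and it diagonalizes the group action by cyclic shifts.

```python
import numpy as np

n = 6  # hypothetical group order for illustration

# Fourier transform on the cyclic group Z/n: the character table, i.e. the DFT matrix.
F = np.array([[np.exp(-2j * np.pi * j * k / n) for k in range(n)]
              for j in range(n)])

# The group acts on signals by cyclic shifts; S is the shift operator.
S = np.roll(np.eye(n), 1, axis=0)

# The shift is diagonal in the Fourier basis: F S F^{-1} is a diagonal matrix.
D = F @ S @ np.linalg.inv(F)
print(np.allclose(D, np.diag(np.diag(D))))  # True
```

Invariance to the group action is therefore naturally expressed in this basis, which is the sense in which Fourier structure can appear in the weights of an invariant network.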
1 code implementation • 7 Sep 2022 • Sophia Sanborn, Christian Shewmake, Bruno Olshausen, Christopher Hillar
We present a neural network architecture, Bispectral Neural Networks (BNNs), for learning representations that are invariant to the actions of compact commutative groups on the space over which a signal is defined.
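The invariant at the heart of this work is the bispectrum. As a hedged illustration (the network itself is not shown), the sketch below computes the bispectrum of a 1D signal, B[k1, k2] = F[k1] F[k2] conj(F[k1+k2]), and checks that it is unchanged when the cyclic group acts by circular shifts: each shift contributes only Fourier phases, which cancel in the triple product.

```python
import numpy as np

def bispectrum(x):
    # B[k1, k2] = F[k1] * F[k2] * conj(F[k1 + k2]); a circular shift of x
    # multiplies F[k] by a phase e^{-i theta k}, and the three phases cancel.
    F = np.fft.fft(x)
    n = len(x)
    k1, k2 = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return F[k1] * F[k2] * np.conj(F[(k1 + k2) % n])

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
B1 = bispectrum(x)
B2 = bispectrum(np.roll(x, 3))  # same signal, acted on by a cyclic shift
print(np.allclose(B1, B2))  # True: the bispectrum is shift-invariant
```

Unlike the power spectrum, the bispectrum also retains phase information, so it is a complete invariant for generic signals.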
no code implementations • 25 Nov 2019 • Zuozhu Liu, Thiparat Chotibut, Christopher Hillar, Shaowei Lin
Motivated by the celebrated discrete-time model of nervous activity outlined by McCulloch and Pitts in 1943, we propose a novel continuous-time model, the McCulloch-Pitts network (MPN), for sequence learning in spiking neural networks.
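For context, the 1943 McCulloch-Pitts unit that motivates the continuous-time MPN is a discrete-time binary threshold neuron: it fires exactly when the weighted sum of its binary inputs reaches a threshold. A minimal sketch (the continuous-time MPN itself is not shown here):

```python
import numpy as np

def mcculloch_pitts(inputs, weights, threshold):
    # Classic discrete-time unit: output 1 iff the weighted input sum
    # reaches the threshold, else 0.
    return int(np.dot(inputs, weights) >= threshold)

# With unit weights and threshold 2, the unit implements logical AND.
print(mcculloch_pitts([1, 1], [1, 1], 2))  # 1
print(mcculloch_pitts([1, 0], [1, 1], 2))  # 0
```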
no code implementations • NeurIPS 2010 • Guy Isely, Christopher Hillar, Fritz Sommer
A new algorithm is proposed for a) unsupervised learning of sparse representations from subsampled measurements and b) estimating the parameters required for linearly reconstructing signals from the sparse codes.
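Part (b), estimating the linear reconstruction from sparse codes, can be illustrated with a toy least-squares fit (all sizes, seeds, and the random dictionary are hypothetical, not the paper's algorithm): given paired signals and sparse codes generated by a linear dictionary, the reconstruction operator is recovered by regressing signals on codes.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, T = 16, 32, 200  # hypothetical signal dim, code dim, sample count
D = rng.standard_normal((n, k))  # ground-truth linear generative dictionary

# Sparse codes: 3 active atoms per sample.
A = np.zeros((k, T))
for t in range(T):
    idx = rng.choice(k, size=3, replace=False)
    A[idx, t] = rng.standard_normal(3)
X = D @ A  # signals linearly reconstructed from their codes

# Least-squares estimate of the reconstruction operator from (code, signal) pairs.
W = X @ np.linalg.pinv(A)
print(np.allclose(W, D))  # True: the linear reconstruction is recovered
```

In the paper's setting the learner only sees subsampled measurements of the signals, which is what makes the joint estimation of codes and reconstruction parameters nontrivial.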