no code implementations • 20 Feb 2023 • Marc W. Howard, Zahra G. Esfahani, Bao Le, Per B. Sederberg
Spiking across populations of neurons in many regions of the mammalian brain maintains a robust temporal memory, a neural timeline of the recent past.
1 code implementation • 9 Jul 2021 • Brandon G. Jacques, Zoran Tiganj, Aakash Sarkar, Marc W. Howard, Per B. Sederberg
This property, inspired by findings from contemporary neuroscience and consistent with findings from cognitive psychology, may enable networks that learn with fewer training examples and fewer weights, and that generalize more robustly to out-of-sample data.
1 code implementation • NeurIPS 2021 • Brandon Jacques, Zoran Tiganj, Marc W. Howard, Per B. Sederberg
SITH modules respond to their inputs with a geometrically-spaced set of time constants, enabling the DeepSITH network to learn problems along a continuum of time-scales.
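The geometric spacing of time constants described here can be sketched in a few lines. This is an illustrative construction only — the function name, parameter names, and default values below are assumptions, not the DeepSITH implementation:

```python
import numpy as np

def geometric_time_constants(tau_min=1.0, tau_max=100.0, n=8):
    """Return n time constants geometrically spaced between tau_min and
    tau_max (illustrative parameters; not the paper's actual settings)."""
    return tau_min * (tau_max / tau_min) ** (np.arange(n) / (n - 1))

taus = geometric_time_constants()
# Adjacent time constants share a fixed ratio, so the set covers a
# continuum of time-scales with uniform density in log-time.
```

Because the ratio between adjacent constants is fixed, each module adds the same relative resolution at its scale, which is what lets the network address problems spanning several orders of magnitude in time.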
no code implementations • 18 Feb 2018 • Zoran Tiganj, Samuel J. Gershman, Per B. Sederberg, Marc W. Howard
Widely used reinforcement learning algorithms discretize continuous time and estimate either transition functions from one step to the next (model-based algorithms) or a scalar value of exponentially-discounted future reward using the Bellman equation (model-free algorithms).
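The scalar quantity that model-free algorithms estimate is the exponentially-discounted sum of future rewards. A minimal sketch, assuming a discrete reward sequence and a hypothetical discount factor `gamma` (names are illustrative):

```python
def discounted_return(rewards, gamma=0.9):
    """Exponentially-discounted return: sum over t of gamma**t * r_t.
    This is the scalar value that model-free RL estimates recursively
    via the Bellman equation (sketch; names are illustrative)."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

# Example: rewards [1, 1, 1] with gamma = 0.5
# gives 1 + 0.5 + 0.25 = 1.75
value = discounted_return([1, 1, 1], gamma=0.5)
```

Note how a single discount factor collapses the entire future into one number, which is exactly the step-discretized, single-timescale assumption the paper contrasts with a continuous-time representation.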
no code implementations • 19 Dec 2017 • Tyler A. Spears, Brandon G. Jacques, Marc W. Howard, Per B. Sederberg
In both the human brain and any general artificial intelligence (AI), a representation of the past is necessary to predict the future.