Search Results for author: Per B. Sederberg

Found 5 papers, 2 papers with code

Foundations of a temporal RL

no code implementations • 20 Feb 2023 • Marc W. Howard, Zahra G. Esfahani, Bao Le, Per B. Sederberg

Spiking across populations of neurons in many regions of the mammalian brain maintains a robust temporal memory, a neural timeline of the recent past.

A deep convolutional neural network that is invariant to time rescaling

1 code implementation • 9 Jul 2021 • Brandon G. Jacques, Zoran Tiganj, Aakash Sarkar, Marc W. Howard, Per B. Sederberg

This property, inspired by findings from contemporary neuroscience and consistent with findings from cognitive psychology, may enable networks that learn with fewer training examples and fewer weights, and that generalize more robustly to out-of-sample data.

Time Series, Time Series Analysis, +1

DeepSITH: Efficient Learning via Decomposition of What and When Across Time Scales

1 code implementation • NeurIPS 2021 • Brandon Jacques, Zoran Tiganj, Marc W. Howard, Per B. Sederberg

SITH modules respond to their inputs with a geometrically-spaced set of time constants, enabling the DeepSITH network to learn problems along a continuum of time-scales.

Time Series, Time Series Prediction
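The geometric spacing of time constants described in the DeepSITH snippet above can be sketched in a few lines; the tau range and count here are illustrative assumptions, not values taken from the paper or its released code.

```python
import numpy as np

# Assumed example values: the paper does not specify these here.
tau_min, tau_max, n_taus = 1.0, 100.0, 8

# Geometrically spaced time constants: adjacent values differ by a
# constant ratio, so the set tiles time scales evenly on a log axis,
# letting a network cover a continuum of time scales with few units.
taus = np.geomspace(tau_min, tau_max, n_taus)

ratios = taus[1:] / taus[:-1]
assert np.allclose(ratios, ratios[0])  # constant geometric spacing
```

Because the spacing is constant in log-time rather than linear time, doubling all the time scales of a signal shifts its representation along the tau axis instead of distorting it, which is what makes this kind of basis natural for scale-invariant models.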

Estimating scale-invariant future in continuous time

no code implementations • 18 Feb 2018 • Zoran Tiganj, Samuel J. Gershman, Per B. Sederberg, Marc W. Howard

Widely used reinforcement learning algorithms discretize continuous time and estimate either transition functions from one step to the next (model-based algorithms) or a scalar value of exponentially-discounted future reward using the Bellman equation (model-free algorithms).

Reinforcement Learning (RL)
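The model-free case the abstract above contrasts with scale-invariant prediction, estimating a scalar exponentially discounted value with the Bellman equation, can be sketched with tabular TD(0) on a toy chain; the chain MDP and all parameters are assumptions for demonstration only, not the paper's setup.

```python
import numpy as np

def td_value_chain(n_states=5, gamma=0.9, alpha=0.1, episodes=2000):
    """Tabular TD(0) on a deterministic right-moving chain.

    The agent starts in state 0, steps right each time, and receives
    reward 1 on entering the final state. Each transition applies the
    Bellman-style update V(s) += alpha * (r + gamma * V(s') - V(s)).
    """
    V = np.zeros(n_states)
    terminal = n_states - 1
    for _ in range(episodes):
        s = 0
        while s != terminal:
            s_next = s + 1
            r = 1.0 if s_next == terminal else 0.0
            V[s] += alpha * (r + gamma * V[s_next] - V[s])
            s = s_next
    return V

V = td_value_chain()
# V[s] converges to gamma ** (n_states - 2 - s): value falls off
# geometrically with distance to reward (exponential discounting).
```

The geometric fall-off of V with temporal distance is exactly the fixed-timescale discounting that the paper's scale-invariant future estimate is meant to replace.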

Scale-invariant temporal history (SITH): optimal slicing of the past in an uncertain world

no code implementations • 19 Dec 2017 • Tyler A. Spears, Brandon G. Jacques, Marc W. Howard, Per B. Sederberg

In both the human brain and any general artificial intelligence (AI), a representation of the past is necessary to predict the future.

Q-Learning
