Consequences of Slow Neural Dynamics for Incremental Learning

12 Dec 2020 · Shima Rahimi Moghaddam, Fanjun Bu, Christopher J. Honey

In the human brain, internal states are often correlated over time (due to local recurrence and other intrinsic circuit properties), punctuated by abrupt transitions. At first glance, temporal smoothness of internal states presents a problem for learning input-output mappings (e.g. category labels for images), because the internal representation of the input will contain a mixture of current input and prior inputs. However, when training with naturalistic data (e.g. movies) there is also temporal autocorrelation in the input. How does the temporal "smoothness" of internal states affect the efficiency of learning when the training data are also temporally smooth? How does it affect the kinds of representations that are learned? We found that, when trained with temporally smooth data, "slow" neural networks (equipped with linear recurrence and gating mechanisms) learned to categorize more efficiently than feedforward networks. Furthermore, networks with linear recurrence and multi-timescale gating could learn internal representations that "un-mixed" quickly-varying and slowly-varying data sources. Together, these findings demonstrate how a fundamental property of cortical dynamics (their temporal autocorrelation) can serve as an inductive bias, leading to more efficient category learning and to the representational separation of fast and slow sources in the environment.
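To make the architecture described above concrete, below is a minimal sketch of a recurrent unit with linear (leaky) recurrence and per-unit gating, the kind of "slow" dynamics the abstract refers to. This is an illustrative reconstruction, not the authors' implementation: the leaky-integration form h_t = α · h_{t−1} + (1 − α) · f(W x_t), the learnable per-unit gate, and all class and variable names are assumptions made for this example.

```python
import torch
import torch.nn as nn


class SlowRecurrentLayer(nn.Module):
    """Sketch of a layer with linear recurrence and multi-timescale gating.

    Each hidden unit mixes its previous state with the current feedforward
    drive through a gate alpha in (0, 1). Learning a separate alpha per unit
    yields a range of timescales, so some units integrate slowly while others
    track fast changes in the input.
    """

    def __init__(self, in_features: int, hidden_features: int):
        super().__init__()
        self.proj = nn.Linear(in_features, hidden_features)
        # One learnable gate logit per unit; sigmoid maps it into (0, 1).
        self.gate_logit = nn.Parameter(torch.zeros(hidden_features))

    def forward(self, x_seq: torch.Tensor) -> torch.Tensor:
        # x_seq: (time, batch, in_features)
        alpha = torch.sigmoid(self.gate_logit)       # per-unit timescale gate
        h = torch.zeros(x_seq.shape[1], self.proj.out_features,
                        device=x_seq.device)
        outputs = []
        for x_t in x_seq:
            drive = torch.relu(self.proj(x_t))       # feedforward drive
            h = alpha * h + (1.0 - alpha) * drive    # linear (leaky) recurrence
            outputs.append(h)
        return torch.stack(outputs)                  # (time, batch, hidden)


# Usage sketch: a classifier reads out category labels from the slow layer's
# final state, trained on temporally autocorrelated (e.g. movie-like) input.
layer = SlowRecurrentLayer(in_features=64, hidden_features=128)
x = torch.randn(10, 8, 64)                           # 10 time steps, batch of 8
h_final = layer(x)[-1]                               # (batch, hidden)
```

Under this formulation, gates near 1 give slowly varying units and gates near 0 give fast units, which is one simple way a network could learn to separate slow and fast sources in its input.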
