no code implementations • 16 Oct 2021 • Lindsay M. Smith, Jason Z. Kim, Zhixin Lu, Dani S. Bassett
Neural systems are well known for their ability to learn and store information as memories.
no code implementations • 3 May 2020 • Jason Z. Kim, Zhixin Lu, Erfan Nozari, George J. Pappas, Danielle S. Bassett
Here we demonstrate that a recurrent neural network (RNN) can learn to modify its representation of complex information using only examples, and we explain the associated learning mechanism with new theory.
no code implementations • 19 Oct 2017 • Jaideep Pathak, Zhixin Lu, Brian R. Hunt, Michelle Girvan, Edward Ott
For the case of the KS equation, we note that as the system's spatial size is increased, the number of Lyapunov exponents increases, thus yielding an increasingly challenging test of our method, which we find it passes successfully.
Chaotic Dynamics
no code implementations • Chaos 27, 041102 (2017) • Zhixin Lu, Jaideep Pathak, Brian Hunt, Michelle Girvan, Roger Brockett, Edward Ott
A scheme that accomplishes this is called an “observer.” We consider the case in which a model of the system is unavailable or insufficiently accurate, but “training” time series data of the desired state variables are available for a short period of time, and a limited number of other system variables are continually measured.