1 code implementation • 8 Nov 2023 • Antonios Georgiou, Tankut Can, Mikhail Katkov, Misha Tsodyks
One of the most impressive achievements of the AI revolution is the development of large language models that can generate meaningful text and respond to instructions in plain English with no additional training necessary.
1 code implementation • 12 Jul 2023 • Timothy Doyeon Kim, Tankut Can, Kamesh Krishnamurthy
Using this measure, we explore how the phase-space dimension of the nODEs and the complexity of the function modeling the flow field contribute to expressivity.
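As a rough illustration of the two knobs mentioned above, here is a minimal toy neural ODE, my own construction rather than the paper's model: the phase-space dimension is the length of the state vector, and the hidden width of the network defining the flow field is one simple proxy for the complexity of that function. Integration is plain forward Euler; all names and parameter values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy neural ODE  dx/dt = W2 @ tanh(W1 @ x).
# dim   : phase-space dimension of the nODE state
# width : hidden width of the flow-field network (a crude complexity knob)
dim, width = 3, 16
W1 = rng.normal(0, 1 / np.sqrt(dim), (width, dim))
W2 = rng.normal(0, 1 / np.sqrt(width), (dim, width))

def flow(x):
    """Flow field modeled by a one-hidden-layer tanh network."""
    return W2 @ np.tanh(W1 @ x)

def integrate(x0, dt=0.01, steps=1000):
    """Forward-Euler integration of the nODE from initial state x0."""
    x = x0.copy()
    for _ in range(steps):
        x = x + dt * flow(x)
    return x

x_final = integrate(rng.normal(size=dim))
```

Enlarging `dim` or `width` (or deepening `flow`) is the kind of change whose effect on expressivity the paper's measure is designed to quantify.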
no code implementations • 28 Oct 2022 • Aditya Cowsik, Tankut Can, Paolo Glorioso
Commonly used optimization algorithms often show a trade-off between good generalization and fast training times.
no code implementations • 8 Sep 2021 • Tankut Can, Kamesh Krishnamurthy
The ability to store continuous variables in the state of a biological system (e.g., a neural network) is critical for many behaviours.
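One standard way such storage is pictured is a continuous ("line") attractor. The sketch below is my own minimal construction, not the paper's model: a linear network with one eigenvalue exactly at 1 holds a continuous value along that eigendirection indefinitely, while activity along the other direction decays.

```python
import numpy as np

# Minimal line-attractor sketch (illustrative, not the paper's model):
# the unit eigenvalue stores a continuous value; the 0.5 mode forgets.
W = np.array([[1.0, 0.0],
              [0.0, 0.5]])

def run(x0, steps=50):
    """Iterate the linear dynamics x <- W x."""
    x = x0.copy()
    for _ in range(steps):
        x = W @ x
    return x

x = run(np.array([0.7, 1.0]))
# x[0] stays at 0.7 (stored variable); x[1] decays toward 0
```

Any perturbation of the unit eigenvalue away from 1 makes the stored value drift or decay, which is why robust continuous storage is a nontrivial demand on a biological system.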
no code implementations • 29 Jul 2020 • Kamesh Krishnamurthy, Tankut Can, David J. Schwab
The gate modulating the dimensionality can induce a novel, discontinuous chaotic transition, where inputs push a stable system to strong chaotic activity, in contrast to the typically stabilizing effect of inputs.
no code implementations • 31 Jan 2020 • Tankut Can, Kamesh Krishnamurthy, David J. Schwab
Here, we take the perspective of studying randomly initialized LSTMs and GRUs as dynamical systems, and ask how the salient dynamical properties are shaped by the gates.
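The "randomly initialized GRU as a dynamical system" viewpoint can be sketched concretely: draw random weights, run the zero-input update, and watch how the gates shape the activity. This is a minimal sketch under assumed conventions (network size, weight scale, no biases or inputs), not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
N, g = 100, 2.0  # network size and weight scale (assumed values)

def random_matrix():
    # Classic random-RNN scaling: entries ~ N(0, g^2 / N)
    return rng.normal(0, g / np.sqrt(N), (N, N))

Wz, Wr, Wh = random_matrix(), random_matrix(), random_matrix()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h):
    """One zero-input GRU update; gates z and r modulate the dynamics."""
    z = sigmoid(Wz @ h)          # update gate: sets the effective timescale
    r = sigmoid(Wr @ h)          # reset gate: modulates the recurrent drive
    h_tilde = np.tanh(Wh @ (r * h))
    return (1 - z) * h + z * h_tilde

h = rng.normal(0, 1, N)
activity = []
for _ in range(200):
    h = gru_step(h)
    activity.append(np.linalg.norm(h) / np.sqrt(N))
```

Sweeping the weight scale `g` (or biasing the gates) and tracking quantities like the per-neuron activity norm above is the kind of experiment the dynamical-systems perspective suggests.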