1 code implementation • 28 Apr 2024 • Prashant Bhat, Bharath Renjith, Elahe Arani, Bahram Zonooz
To further leverage the global relationship between representations learned using contrastive representation learning (CRL), we propose a regularization strategy that guides the classifier toward the activation correlations on the unit hypersphere learned by CRL.
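As a rough illustration of such a regularizer (not the paper's exact formulation), the sketch below aligns the pairwise cosine-similarity structure of the classifier's activations with that of the L2-normalized CRL projections; `correlation_alignment_loss` and both feature names are hypothetical.

```python
import torch
import torch.nn.functional as F

def correlation_alignment_loss(crl_feats, clf_feats):
    """Hypothetical regularizer: pull the classifier's pairwise similarity
    structure toward that of the L2-normalized CRL projections.

    crl_feats: (B, D1) projections from the contrastive (CRL) head
    clf_feats: (B, D2) pre-logit activations from the classifier head
    """
    # Normalize onto the unit hypersphere so dot products are cosine similarities.
    z = F.normalize(crl_feats, dim=1)
    c = F.normalize(clf_feats, dim=1)
    sim_crl = z @ z.t()   # (B, B) activation correlations of the CRL head
    sim_clf = c @ c.t()   # (B, B) activation correlations of the classifier
    # Guide the classifier toward the CRL correlations (CRL side detached).
    return F.mse_loss(sim_clf, sim_crl.detach())

# e.g., total = ce_loss + contrastive_loss + lam * correlation_alignment_loss(z, h)
```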
1 code implementation • NeurIPS 2023 • Preetha Vijayan, Prashant Bhat, Elahe Arani, Bahram Zonooz
Continual learning (CL) has remained a persistent challenge for deep neural networks due to catastrophic forgetting (CF) of previously learned tasks.
1 code implementation • 8 May 2023 • Kishaan Jeeveswaran, Prashant Bhat, Bahram Zonooz, Elahe Arani
The ability of deep neural networks to continually learn and adapt to a sequence of tasks has remained challenging due to catastrophic forgetting of previously learned tasks.
1 code implementation • 14 Feb 2023 • Prashant Bhat, Bahram Zonooz, Elahe Arani
Thus, inspired by the Global Workspace Theory of conscious information access in the brain, we propose TAMiL, a continual learning method that employs task-attention modules to capture task-specific information from the common representation space.
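One minimal way to realize a task-attention module, assuming it acts as a lightweight per-task gate over the shared representation (the class name and bottleneck size below are illustrative, not TAMiL's actual architecture):

```python
import torch
import torch.nn as nn

class TaskAttention(nn.Module):
    """Hypothetical task-attention module: a lightweight bottleneck that
    gates the shared (common) representation for one task."""
    def __init__(self, dim, bottleneck=64):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(dim, bottleneck),
            nn.ReLU(inplace=True),
            nn.Linear(bottleneck, dim),
            nn.Sigmoid(),  # per-feature attention weights in [0, 1]
        )

    def forward(self, shared_repr):
        return shared_repr * self.gate(shared_repr)

# One module per task; the backbone representation stays common across tasks.
backbone_dim = 512
task_modules = nn.ModuleList([TaskAttention(backbone_dim) for _ in range(5)])
feats = torch.randn(8, backbone_dim)   # shared representation from the backbone
task_feats = task_modules[2](feats)    # task-specific view for task 2
```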
1 code implementation • 13 Jul 2022 • Prashant Bhat, Bahram Zonooz, Elahe Arani
Furthermore, the domain shift between the pre-training data distribution and the downstream task distribution reduces the generalizability of the learned representations.
1 code implementation • 11 Jul 2022 • Prashant Bhat, Bahram Zonooz, Elahe Arani
Therefore, we examine the role of consistency regularization in the experience replay (ER) framework under various continual learning scenarios.
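A rough sketch of experience replay with one possible consistency term, assuming a buffer that stores past inputs, labels, and the logits observed when each sample was added (`buffer.sample`, `buffer.add`, and the MSE form of the consistency loss are all assumptions, not the paper's exact recipe):

```python
import torch
import torch.nn.functional as F

def er_step(model, opt, batch, buffer, lam=0.5):
    """One hypothetical ER training step with consistency regularization."""
    x, y = batch
    logits = model(x)
    loss = F.cross_entropy(logits, y)

    if len(buffer) > 0:
        bx, by, b_logits = buffer.sample(x.size(0))   # assumed buffer API
        b_out = model(bx)
        loss = loss + F.cross_entropy(b_out, by)      # standard replay term
        # Consistency term: keep predictions on replayed samples close to
        # the soft targets recorded when the samples entered the buffer.
        loss = loss + lam * F.mse_loss(b_out, b_logits)

    opt.zero_grad()
    loss.backward()
    opt.step()
    buffer.add(x.detach(), y, logits.detach())        # assumed buffer API
```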
1 code implementation • 20 Apr 2021 • Prashant Bhat, Elahe Arani, Bahram Zonooz
To address the issue of self-supervised pre-training of smaller models, we propose Distill-on-the-Go (DoGo), a self-supervised learning paradigm using single-stage online knowledge distillation to improve the representation quality of the smaller models.
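A minimal sketch of the online-distillation idea, under the assumption that two peer models train simultaneously on the same views and each distills the other's in-batch similarity distribution (the function name, temperature, and KL form are illustrative, not DoGo's exact objective):

```python
import torch
import torch.nn.functional as F

def peer_distillation_loss(z_student, z_peer, tau=0.1):
    """Hypothetical single-stage online KD term: the student mimics the
    peer's distribution over in-batch pairwise similarities; no frozen,
    pre-trained teacher is involved.

    z_student, z_peer: (B, D) L2-normalized projections of the same views.
    """
    log_p_student = F.log_softmax(z_student @ z_student.t() / tau, dim=1)
    p_peer = F.softmax(z_peer @ z_peer.t() / tau, dim=1)
    # Gradients flow only into the student; the peer is treated as a target.
    return F.kl_div(log_p_student, p_peer.detach(), reduction="batchmean")

# Each model optimizes: its own contrastive loss + lam * peer_distillation_loss,
# with the roles swapped, so both the small and large model learn in one stage.
```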