Search Results for author: Prashant Bhat

Found 7 papers, 7 papers with code

IMEX-Reg: Implicit-Explicit Regularization in the Function Space for Continual Learning

1 code implementation · 28 Apr 2024 · Prashant Bhat, Bharath Renjith, Elahe Arani, Bahram Zonooz

To further leverage the global relationship between representations learned using CRL, we propose a regularization strategy that guides the classifier toward the activation correlations CRL induces on the unit hypersphere.

Continual Learning · Representation Learning
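
The excerpt above describes aligning the classifier's activation correlations with those of the contrastive (CRL) head on the unit hypersphere. Below is a minimal sketch of one way such a regularizer could look; the function name `correlation_alignment_loss` and the MSE-between-similarity-matrices form are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: guide the classifier branch toward the relational structure
# (pairwise cosine similarities) of the contrastive (CRL) branch.
import torch
import torch.nn.functional as F

def correlation_alignment_loss(clf_feats: torch.Tensor,
                               crl_feats: torch.Tensor) -> torch.Tensor:
    """clf_feats: (B, D1) classifier-branch activations.
    crl_feats: (B, D2) projections from the CRL branch."""
    z_clf = F.normalize(clf_feats, dim=1)   # project onto unit hypersphere
    z_crl = F.normalize(crl_feats, dim=1)
    sim_clf = z_clf @ z_clf.T               # (B, B) classifier correlations
    sim_crl = z_crl @ z_crl.T               # (B, B) CRL correlations
    # Penalize divergence between the two relational structures; the CRL
    # similarities act as a detached teacher signal.
    return F.mse_loss(sim_clf, sim_crl.detach())

# Usage: total_loss = ce_loss + alpha * correlation_alignment_loss(h, z)
```

Matching relational structure rather than raw features keeps the two heads' dimensionalities independent, which is one reason similarity-matrix alignment is a natural fit here.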

TriRE: A Multi-Mechanism Learning Paradigm for Continual Knowledge Retention and Promotion

1 code implementation · NeurIPS 2023 · Preetha Vijayan, Prashant Bhat, Elahe Arani, Bahram Zonooz

Continual learning (CL) has remained a persistent challenge for deep neural networks due to catastrophic forgetting (CF) of previously learned tasks.

Continual Learning

BiRT: Bio-inspired Replay in Vision Transformers for Continual Learning

1 code implementation · 8 May 2023 · Kishaan Jeeveswaran, Prashant Bhat, Bahram Zonooz, Elahe Arani

The ability of deep neural networks to continually learn and adapt to a sequence of tasks has remained challenging due to catastrophic forgetting of previously learned tasks.

Continual Learning

Task-Aware Information Routing from Common Representation Space in Lifelong Learning

1 code implementation · 14 Feb 2023 · Prashant Bhat, Bahram Zonooz, Elahe Arani

Thus, inspired by the Global Workspace Theory of conscious information access in the brain, we propose TAMiL, a continual learning method that entails task-attention modules to capture task-specific information from the common representation space.

Continual Learning
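
A rough sketch of how per-task attention over a common representation space might be wired, in the spirit of the TAMiL description above; the bottleneck-with-sigmoid gating, class names, and dimensions are assumptions for illustration, not the paper's implementation.

```python
# Sketch: a shared encoder provides a common representation space; a
# lightweight per-task attention module gates task-relevant dimensions.
import torch
import torch.nn as nn

class TaskAttention(nn.Module):
    """Bottleneck MLP producing a soft mask over shared features."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(dim, bottleneck), nn.ReLU(),
            nn.Linear(bottleneck, dim), nn.Sigmoid(),
        )

    def forward(self, shared: torch.Tensor) -> torch.Tensor:
        return shared * self.gate(shared)   # route task-specific information

class TAMiLSketch(nn.Module):
    def __init__(self, encoder: nn.Module, dim: int, n_tasks: int, n_classes: int):
        super().__init__()
        self.encoder = encoder  # common representation space
        self.attn = nn.ModuleList([TaskAttention(dim) for _ in range(n_tasks)])
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        shared = self.encoder(x)
        return self.classifier(self.attn[task_id](shared))
```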

Task Agnostic Representation Consolidation: a Self-supervised based Continual Learning Approach

1 code implementation · 13 Jul 2022 · Prashant Bhat, Bahram Zonooz, Elahe Arani

Furthermore, the domain shift between the pre-training data distribution and the task distribution reduces the generalizability of the learned representations.

Continual Learning

Consistency is the key to further mitigating catastrophic forgetting in continual learning

1 code implementation · 11 Jul 2022 · Prashant Bhat, Bahram Zonooz, Elahe Arani

Therefore, we examine the role of consistency regularization in the ER framework under various continual learning scenarios.

Continual Learning · Self-Supervised Learning
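
A common way to add consistency regularization to experience replay (ER) is to store the model's soft outputs alongside buffered samples and penalize prediction drift on replay; the sketch below follows that pattern. The `buffer` object with `sample`/`add` methods is hypothetical, and the MSE penalty is only one of the consistency forms such a study could examine.

```python
# Sketch: one ER training step with a consistency term on replayed samples.
import torch
import torch.nn.functional as F

def er_consistency_step(model, opt, x, y, buffer, alpha=0.5):
    """CE on the current batch + CE and consistency on a replayed batch."""
    opt.zero_grad()
    loss = F.cross_entropy(model(x), y)
    if len(buffer) > 0:
        bx, by, b_logits = buffer.sample()           # past inputs, labels, stored logits
        out = model(bx)
        loss = loss + F.cross_entropy(out, by)       # standard ER replay term
        loss = loss + alpha * F.mse_loss(out, b_logits)  # consistency term
    loss.backward()
    opt.step()
    buffer.add(x, y, model(x).detach())              # store current soft targets
```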

Distill on the Go: Online knowledge distillation in self-supervised learning

1 code implementation · 20 Apr 2021 · Prashant Bhat, Elahe Arani, Bahram Zonooz

To address the issue of self-supervised pre-training of smaller models, we propose Distill-on-the-Go (DoGo), a self-supervised learning paradigm using single-stage online knowledge distillation to improve the representation quality of the smaller models.

Knowledge Distillation · Self-Supervised Learning
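
A minimal sketch of the single-stage, online flavor of distillation the excerpt describes: two peer models train simultaneously and distill softened predictions into each other, rather than learning from a frozen pre-trained teacher. The symmetric KL objective and temperature below are illustrative assumptions, not necessarily DoGo's exact recipe.

```python
# Sketch: mutual online distillation between two jointly trained peers.
import torch
import torch.nn.functional as F

def mutual_distillation_loss(logits_a: torch.Tensor,
                             logits_b: torch.Tensor,
                             tau: float = 4.0) -> torch.Tensor:
    """Symmetric KL between temperature-softened peer predictions."""
    pa = F.log_softmax(logits_a / tau, dim=1)
    pb = F.log_softmax(logits_b / tau, dim=1)
    # Each model treats the other's (detached) prediction as a soft target.
    kl_ab = F.kl_div(pa, pb.exp().detach(), reduction="batchmean")
    kl_ba = F.kl_div(pb, pa.exp().detach(), reduction="batchmean")
    return (tau ** 2) * 0.5 * (kl_ab + kl_ba)

# Usage: add this term to each peer's task loss in a single training loop,
# so the smaller model's representation quality improves online.
```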
