Negotiated Representations to Prevent Forgetting in Machine Learning Applications

30 Nov 2023 · Nuri Korhan, Ceren Öner

Catastrophic forgetting is a significant challenge in machine learning, particularly for neural networks. When a neural network learns to perform well on a new task, it often loses previously acquired knowledge. This happens because the network adjusts its weights and connections to minimize the loss on the new task, which can inadvertently overwrite or disrupt the representations that were crucial for earlier tasks. As a result, the performance of the network on earlier tasks deteriorates, limiting its ability to learn a sequence of tasks. In this paper, we propose a novel method for preventing catastrophic forgetting in machine learning applications, focusing on neural networks. Our approach aims to preserve the knowledge of the network across multiple tasks while still allowing it to learn new information effectively. We demonstrate the effectiveness of our method through experiments on several benchmark datasets: Split MNIST, Split CIFAR-10, Split Fashion-MNIST, and Split CIFAR-100. These datasets are created by dividing the original datasets into separate, non-overlapping tasks, simulating a continual learning scenario in which the model must learn multiple tasks sequentially without forgetting the previous ones. Our proposed method tackles catastrophic forgetting by incorporating negotiated representations into the learning process, which allows the model to balance retaining past experiences against adapting to new tasks. By evaluating our method on these challenging datasets, we aim to showcase its potential for addressing catastrophic forgetting and improving the performance of neural networks in continual learning settings.
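The split benchmarks above are built by partitioning a dataset's classes into disjoint groups, one group per task (e.g. Split MNIST assigns digit pairs 0-1, 2-3, ..., 8-9 to five tasks). The sketch below shows this construction on synthetic labels; `make_split_tasks` is an illustrative helper, not code from the paper.

```python
def make_split_tasks(labels, n_tasks=5):
    """Partition sample indices into non-overlapping tasks by class.

    Each task receives an equal, consecutive slice of the sorted
    class set, so with 10 classes and 5 tasks, task t holds classes
    (2t, 2t+1) -- the standard Split MNIST / Split CIFAR-10 layout.
    """
    classes = sorted(set(labels))
    per_task = len(classes) // n_tasks
    tasks = []
    for t in range(n_tasks):
        # Classes assigned to this task (disjoint from all other tasks).
        task_classes = set(classes[t * per_task:(t + 1) * per_task])
        # Indices of all samples whose label falls in this task.
        tasks.append([i for i, y in enumerate(labels) if y in task_classes])
    return tasks

# Synthetic labels standing in for MNIST's 10 classes, 100 samples each.
labels = [c for c in range(10) for _ in range(100)]
tasks = make_split_tasks(labels, n_tasks=5)
```

In a continual-learning run, the model is then trained on `tasks[0]`, `tasks[1]`, ... in order, and average accuracy is measured over all tasks seen so far.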



Results from the Paper


Task                  Dataset              Model                            Metric                             Value   Rank
Image Classification  Split CIFAR-10       Model with negotiation paradigm  Average accuracy over 5 tasks (%)  46.5    #1
Image Classification  Split CIFAR-100      Model with negotiation paradigm  Average accuracy over 5 tasks (%)  34.9    #1
Image Classification  Split Fashion-MNIST  Model with negotiation paradigm  Average accuracy over 5 tasks (%)  54.8    #1
Image Classification  Split MNIST          Model with negotiation paradigm  Average accuracy over 5 tasks (%)  82.3    #1
