Search Results for author: Amanda Rios

Found 4 papers, 0 papers with code

Lifelong Learning Without a Task Oracle

no code implementations · 9 Nov 2020 · Amanda Rios, Laurent Itti

Supervised deep neural networks are known to undergo a sharp decline in the accuracy of older tasks when new tasks are learned, termed "catastrophic forgetting".

Continual Learning

Beneficial Perturbations Network for Defending Adversarial Examples

no code implementations · 27 Sep 2020 · Shixian Wen, Amanda Rios, Laurent Itti

Neural networks are vulnerable to adversarial examples because they fail to accommodate the distribution drift of the input data caused by adversarial perturbations.
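A minimal sketch of how such a drift can be produced, using the standard one-step signed-gradient (FGSM) attack rather than anything from this paper; the toy model, synthetic data, and perturbation size are illustrative assumptions:

```python
# Illustrative only: a tiny classifier is fit to synthetic data, then a
# one-step signed-gradient (FGSM-style) perturbation of the inputs
# shifts them off the fitted distribution and accuracy drops.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
x = torch.randn(400, 20)
y = (x[:, 0] > 0).long()          # simple rule the model can learn

opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
for _ in range(300):              # fit the clean distribution
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

# One signed-gradient step on the *inputs* increases the loss: the
# perturbed points follow a shifted distribution the model never saw.
x.requires_grad_(True)
loss_fn(model(x), y).backward()
x_adv = (x + 0.5 * x.grad.sign()).detach()

def acc(inputs):
    with torch.no_grad():
        return (model(inputs).argmax(1) == y).float().mean().item()

print("clean accuracy:      ", acc(x))      # close to 1.0
print("adversarial accuracy:", acc(x_adv))  # markedly lower
```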

Beneficial Perturbation Network for designing general adaptive artificial intelligence systems

no code implementations · 27 Sep 2020 · Shixian Wen, Amanda Rios, Yunhao Ge, Laurent Itti

Continual learning of multiple tasks in artificial neural networks using gradient descent leads to catastrophic forgetting, whereby a previously learned mapping of an old task is erased when learning new mappings for new tasks.

Continual Learning
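The failure mode this entry describes is easy to reproduce. A minimal sketch with synthetic tasks and a toy MLP as illustrative assumptions (it is not the Beneficial Perturbation Network itself): train on task A, then on task B with plain gradient descent, and task A accuracy collapses.

```python
# Illustrative only: plain sequential gradient descent on two tasks.
# Training on task B overwrites the mapping learned for task A.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(shift):
    # Inputs drawn around `shift`; the label is whether the first
    # coordinate exceeds the task's own mean.
    x = torch.randn(400, 20) + shift
    y = (x[:, 0] > shift).long()
    return x, y

def train(model, x, y, steps=300):
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

def acc(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(1) == y).float().mean().item()

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
xa, ya = make_task(0.0)           # task A
xb, yb = make_task(3.0)           # task B: shifted distribution, new rule

train(model, xa, ya)
print("task A after learning A:", acc(model, xa, ya))  # close to 1.0
train(model, xb, yb)              # no task A data available any more
print("task A after learning B:", acc(model, xa, ya))  # typically drops hard
```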

Closed-Loop Memory GAN for Continual Learning

no code implementations · 3 Nov 2018 · Amanda Rios, Laurent Itti

Sequential learning of tasks using gradient descent leads to an unremitting decline in the accuracy of tasks for which training data is no longer available, termed catastrophic forgetting.

Continual Learning
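Replay-based methods like this one counter forgetting by mixing old-task data (in the paper, generated by a GAN) into new-task updates. A minimal sketch of the idea, substituting a raw buffer of stored task A samples for the generator; this is an illustrative simplification, not the paper's closed-loop memory GAN:

```python
# Illustrative only: naive replay. A buffer of old-task samples is mixed
# into every new-task update, so the old mapping keeps being rehearsed.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(shift):
    x = torch.randn(400, 20) + shift
    y = (x[:, 0] > shift).long()
    return x, y

def acc(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(1) == y).float().mean().item()

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

xa, ya = make_task(0.0)                    # task A
xb, yb = make_task(3.0)                    # task B

for _ in range(300):                       # learn task A
    opt.zero_grad()
    loss_fn(model(xa), ya).backward()
    opt.step()

buf_x, buf_y = xa[:40], ya[:40]            # small replay buffer from task A

for _ in range(300):                       # learn task B while rehearsing A
    opt.zero_grad()
    loss = loss_fn(model(xb), yb) + loss_fn(model(buf_x), buf_y)
    loss.backward()
    opt.step()

print("task A after replayed B:", acc(model, xa, ya))  # retained far better
```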
