Piggyback: Adapting a Single Network to Multiple Tasks by Learning to Mask Weights

ECCV 2018 · Arun Mallya, Dillon Davis, Svetlana Lazebnik

This work presents a method for adapting a single, fixed deep neural network to multiple tasks without affecting performance on already learned tasks. By building upon ideas from network quantization and pruning, we learn binary masks that "piggyback" on an existing network, i.e., are applied to the unmodified weights of that network to provide good performance on a new task. These masks are learned in an end-to-end differentiable fashion, and incur a low overhead of 1 bit per network parameter, per task. Even though the underlying network is fixed, the ability to mask individual weights allows for the learning of a large number of filters. We show performance comparable to dedicated fine-tuned networks for a variety of classification tasks, including those with large domain shifts from the initial task (ImageNet), and a variety of network architectures. Unlike prior work, we do not suffer from catastrophic forgetting or competition between tasks, and our performance is agnostic to task ordering. Code available at https://github.com/arunmallya/piggyback.
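The mechanism described in the abstract is straightforward to sketch: keep the pretrained weights frozen, learn a real-valued score per weight, threshold the scores into a binary mask in the forward pass, and let gradients flow straight through to the scores in the backward pass. Below is a minimal PyTorch-style sketch of that idea; the class names, threshold value, and initialization scale are illustrative assumptions and not the API of the linked repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Binarizer(torch.autograd.Function):
    """Hard threshold in the forward pass, straight-through gradient in backward."""

    @staticmethod
    def forward(ctx, scores, threshold):
        return (scores >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: gradients pass unchanged to the real-valued scores.
        return grad_output, None


class MaskedConv2d(nn.Module):
    """Conv layer with frozen pretrained weights and a learnable binary mask (hypothetical wrapper)."""

    def __init__(self, conv: nn.Conv2d, threshold: float = 5e-3, init_scale: float = 1e-2):
        super().__init__()
        self.conv = conv
        for p in self.conv.parameters():
            p.requires_grad_(False)  # the backbone weights stay fixed
        # One real-valued score per weight; only the thresholded (1-bit) mask is stored per task.
        self.scores = nn.Parameter(torch.full_like(conv.weight, init_scale))
        self.threshold = threshold

    def forward(self, x):
        mask = Binarizer.apply(self.scores, self.threshold)
        w = self.conv.weight * mask  # elementwise masking of the fixed weights
        return F.conv2d(x, w, self.conv.bias, self.conv.stride,
                        self.conv.padding, self.conv.dilation, self.conv.groups)
```

In use, each convolution of a frozen ImageNet-pretrained backbone would be wrapped this way, and only the mask scores plus a new task-specific classifier head are trained; at deployment, the thresholded mask is all that needs to be stored per task, i.e., one bit per weight.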

| Task | Dataset | Model | Metric | Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Continual Learning | CUBS (Fine-grained 6 Tasks) | Piggyback | Accuracy | 80.5 | #4 |
| Continual Learning | Flowers (Fine-grained 6 Tasks) | Piggyback | Accuracy | 94.77 | #4 |
| Continual Learning | ImageNet (Fine-grained 6 Tasks) | Piggyback | Accuracy | 76.16 | #1 |
| Continual Learning | Sketch (Fine-grained 6 Tasks) | Piggyback | Accuracy | 79.91 | #3 |
| Continual Learning | Stanford Cars (Fine-grained 6 Tasks) | Piggyback | Accuracy | 89.62 | #4 |
| Continual Learning | Visual Domain Decathlon (10 tasks) | Piggyback | Decathlon discipline (Score) | 2838 | #8 |
| Continual Learning | Visual Domain Decathlon (10 tasks) | Piggyback | Avg. Accuracy | 76.60 | #5 |
| Continual Learning | Wikiart (Fine-grained 6 Tasks) | Piggyback | Accuracy | 71.33 | #5 |

Methods


No methods listed for this paper.