Distributed Deep Transfer Learning by Basic Probability Assignment

20 Oct 2017 · Arash Shahriari

Transfer learning is a popular practice for deep neural networks, but fine-tuning a large number of parameters is difficult because of the complex wiring of neurons between splitting layers and the imbalanced data distributions of the pretrained and transferred domains. Reconstructing the original wiring for the target domain is a heavy burden due to the sheer number of interconnections across neurons. We propose a distributed scheme that tunes the convolutional filters individually while backpropagating them jointly by means of basic probability assignment. Recent advances in evidence theory show that, across a wide variety of imbalanced regimes, optimizing suitable objective functions derived from contingency matrices prevents bias towards high-prior class distributions. The original filters are therefore transferred gradually, according to their individual contributions to the overall performance on the target domain. This considerably reduces the expected complexity of transfer learning while markedly improving precision. Our experiments on standard benchmarks and scenarios confirm the consistent improvement achieved by our distributed deep transfer learning strategy.
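
The core idea is to weight how each convolutional filter is transferred using a basic probability assignment (BPA) computed from a contingency (confusion) matrix on the target domain. The sketch below is only a plausible illustration of that idea, not the paper's exact formulation: it assumes each class mass is proportional to its recall times its precision, normalized to sum to one, and the helper names (bpa_from_confusion, weight_filter_updates, filter_class) are hypothetical.

    import numpy as np

    def bpa_from_confusion(conf):
        """Basic probability assignment from a confusion matrix.
        Rows are true classes, columns are predictions.  Masses are taken
        proportional to per-class recall * precision and normalized, so no
        single high-prior class can dominate the assignment (an assumption,
        not necessarily the paper's exact definition)."""
        conf = np.asarray(conf, dtype=float)
        eps = 1e-12
        recall = np.diag(conf) / (conf.sum(axis=1) + eps)     # TP / (TP + FN)
        precision = np.diag(conf) / (conf.sum(axis=0) + eps)  # TP / (TP + FP)
        mass = recall * precision
        return mass / (mass.sum() + eps)

    def weight_filter_updates(filter_grads, class_masses, filter_class):
        """Hypothetical joint-backpropagation step: scale each filter's
        gradient by (1 - mass) of the class it contributes evidence to,
        so filters tied to poorly transferred classes are tuned harder."""
        weights = 1.0 - class_masses[filter_class]
        return [g * w for g, w in zip(filter_grads, weights)]

    # Toy usage on a 3-class validation confusion matrix.
    conf = np.array([[50,  5,  5],
                     [10, 30, 10],
                     [ 2,  3, 45]])
    masses = bpa_from_confusion(conf)          # one mass per class, sums to ~1
    grads = [np.random.randn(3, 3) for _ in range(4)]
    filter_class = np.array([0, 1, 2, 1])      # hypothetical filter-to-class map
    scaled = weight_filter_updates(grads, masses, filter_class)
    print(masses, len(scaled))

Normalizing the masses makes the per-class evidence comparable regardless of class priors, which is why an assignment of this kind can counteract bias towards frequent classes.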
