Transfer Learning

2850 papers with code • 7 benchmarks • 15 datasets

Transfer Learning is a machine learning technique in which a model trained on one task is repurposed and fine-tuned for a related but different task. The idea is to leverage the knowledge a pre-trained model has already acquired to solve a new, related problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.

(Image credit: Subodh Malgonde)
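To make the definition concrete, here is a minimal sketch of the standard fine-tuning recipe in PyTorch. The specifics are illustrative assumptions, not taken from the page: torchvision's ImageNet pre-trained ResNet-18 as the source model, a hypothetical 10-class target task, and dummy tensors standing in for a real dataset.

# A minimal transfer-learning sketch: freeze a pre-trained backbone,
# replace the head, and fine-tune only the head on the new task.
import torch
import torch.nn as nn
from torchvision import models

# Load a model pre-trained on the source task (ImageNet classification).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the backbone so its learned features are reused unchanged.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head for the new, related task.
num_classes = 10  # hypothetical target task
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Fine-tune: only the new head's parameters receive gradient updates.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)          # dummy image batch
labels = torch.randint(0, num_classes, (8,))  # dummy target labels
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()

Freezing the backbone is the simplest variant; with more target data, it is common to unfreeze some or all layers and fine-tune them at a lower learning rate.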

Latest papers with no code

Comparison of self-supervised in-domain and supervised out-domain transfer learning for bird species recognition

no code yet • 26 Apr 2024

Transferring the weights of a pre-trained model to assist another task has become a crucial part of modern deep learning, particularly in data-scarce scenarios.
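A common mechanical form of this weight transfer, sketched below under the assumption of PyTorch state dicts, is to copy every pre-trained parameter into a new model and skip the layers whose shapes differ (e.g., a task-specific head). The models and class count are illustrative.

from torchvision import models

# Source: pre-trained on ImageNet; target: same architecture with a
# 2-class head for a hypothetical new task.
source = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
target = models.resnet18(num_classes=2)

# Copy every pre-trained weight except the final classifier, whose
# shape differs; strict=False tolerates the missing keys.
state = {k: v for k, v in source.state_dict().items()
         if not k.startswith("fc.")}
missing, unexpected = target.load_state_dict(state, strict=False)
print(missing)  # only the new head ('fc.weight', 'fc.bias') is untrained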

Self-supervised visual learning in the low-data regime: a comparative evaluation

no code yet • 26 Apr 2024

Self-Supervised Learning (SSL) is a valuable and robust training methodology for contemporary Deep Neural Networks (DNNs), enabling unsupervised pretraining on a "pretext task" that does not require ground-truth labels/annotations.
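For illustration, the sketch below implements one classic pretext task, rotation prediction (RotNet-style), where the labels are rotation indices generated from the images themselves. The backbone choice and dummy data are assumptions; the paper under discussion may evaluate different pretext tasks.

import torch
import torch.nn as nn
from torchvision import models

# 4 pretext classes: rotations of 0, 90, 180, and 270 degrees.
model = models.resnet18(num_classes=4)

images = torch.randn(8, 3, 224, 224)    # stand-in for unlabeled images
rotations = torch.randint(0, 4, (8,))   # self-generated pretext labels
rotated = torch.stack([torch.rot90(img, k=int(k), dims=(1, 2))
                       for img, k in zip(images, rotations)])

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Train the network to predict which rotation was applied; no human
# annotation is involved at any point.
optimizer.zero_grad()
loss = criterion(model(rotated), rotations)
loss.backward()
optimizer.step()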

Federated Transfer Component Analysis Towards Effective VNF Profiling

no code yet • 26 Apr 2024

The increasing concerns of knowledge transfer and data privacy challenge the traditional gather-and-analyse paradigm in networks.

Knowledge Transfer for Cross-Domain Reinforcement Learning: A Systematic Review

no code yet • 26 Apr 2024

Reinforcement Learning (RL) provides a framework in which agents can be trained, via trial and error, to solve complex decision-making problems.
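As a reminder of what trial and error means operationally, here is a minimal tabular Q-learning update, the textbook instance of that framework. The state/action encoding and hyperparameters are illustrative and are not the review's method.

import random

# Q-table mapping (state, action) pairs to estimated returns.
q = {}
alpha, gamma, epsilon = 0.1, 0.99, 0.1  # illustrative hyperparameters

def choose_action(state, actions):
    # Epsilon-greedy: mostly exploit current estimates, sometimes
    # explore at random (the "trial" in trial and error).
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q.get((state, a), 0.0))

def q_update(state, action, reward, next_state, actions):
    # Temporal-difference update: move the estimate toward the observed
    # reward plus the discounted best next value (correcting the "error").
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)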

Meta-Transfer Derm-Diagnosis: Exploring Few-Shot Learning and Transfer Learning for Skin Disease Classification in Long-Tail Distribution

no code yet • 25 Apr 2024

Our experiments, ranging from 2-way to 5-way classification with up to 10 examples, showed a growing success rate for traditional transfer learning methods as the number of examples increased.
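For readers unfamiliar with the terminology, an "N-way, K-shot" episode samples N classes and K labeled examples per class for the model to adapt to. The sketch below shows that sampling with toy data, under the assumption that "up to 10 examples" means up to 10 per class.

import random

def sample_episode(data_by_class, n_way, k_shot):
    """Sample an N-way, K-shot support set from {class: [examples]}."""
    classes = random.sample(sorted(data_by_class), n_way)
    return {c: random.sample(data_by_class[c], k_shot) for c in classes}

# Toy stand-in for a long-tailed skin-disease dataset: 8 classes with
# 20 examples each (a real long-tailed dataset would vary per class).
toy = {f"class_{i}": [f"img_{i}_{j}" for j in range(20)] for i in range(8)}
support = sample_episode(toy, n_way=5, k_shot=10)  # a 5-way, 10-shot episode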

OpenDlign: Enhancing Open-World 3D Learning with Depth-Aligned Images

no code yet • 25 Apr 2024

The limited color and texture variations in CAD images, however, can compromise alignment robustness.

Probabilistic Multi-Layer Perceptrons for Wind Farm Condition Monitoring

no code yet • 25 Apr 2024

Its advantages are that (i) it can be trained with at least a few years of SCADA data, (ii) it can incorporate the SCADA data of all wind turbines in a wind farm as features, (iii) it assumes the output power follows a normal density with heteroscedastic variance, and (iv) it can predict the output of one wind turbine by borrowing strength from the data of all other turbines in the farm.
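Point (iii) corresponds to training the network with a Gaussian negative log-likelihood in which both the mean and the variance of the output power are predicted from the inputs. A minimal PyTorch sketch of such a heteroscedastic head follows; the layer sizes, feature count, and data are illustrative assumptions.

import torch
import torch.nn as nn

class ProbMLP(nn.Module):
    """MLP predicting a per-input (heteroscedastic) Gaussian over power."""
    def __init__(self, n_features):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.mean_head = nn.Linear(64, 1)    # predicted output power
        self.logvar_head = nn.Linear(64, 1)  # input-dependent variance

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h).exp()

model = ProbMLP(n_features=16)  # e.g., 16 SCADA features
x = torch.randn(32, 16)         # dummy SCADA batch
y = torch.randn(32, 1)          # dummy power measurements

mean, var = model(x)
loss = nn.GaussianNLLLoss()(mean, y, var)  # heteroscedastic Gaussian NLL
loss.backward()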

MDDD: Manifold-based Domain Adaptation with Dynamic Distribution for Non-Deep Transfer Learning in Cross-subject and Cross-session EEG-based Emotion Recognition

no code yet • 24 Apr 2024

The proposed MDDD includes four main modules: manifold feature transformation, dynamic distribution alignment, classifier learning, and ensemble learning.

No Train but Gain: Language Arithmetic for training-free Language Adapters enhancement

no code yet • 24 Apr 2024

Modular deep learning is the state-of-the-art solution for lifting the curse of multilinguality, mitigating negative interference and enabling cross-lingual performance in Multilingual Pre-trained Language Models.
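While the paper's exact procedure is not shown here, "language arithmetic" of the kind the title suggests can be sketched as training-free, element-wise arithmetic on adapter weights. The interpolation below is an assumed, illustrative form, applied to toy adapter state dicts.

import torch

def combine_adapters(adapter_a, adapter_b, alpha=0.5):
    # Training-free arithmetic: element-wise interpolation of two
    # adapters' weights with matching keys and shapes.
    return {k: alpha * adapter_a[k] + (1.0 - alpha) * adapter_b[k]
            for k in adapter_a}

# Toy bottleneck-adapter weights for two hypothetical languages.
adapter_a = {"down.weight": torch.randn(8, 32), "up.weight": torch.randn(32, 8)}
adapter_b = {k: torch.randn_like(v) for k, v in adapter_a.items()}
merged = combine_adapters(adapter_a, adapter_b, alpha=0.5)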

How we Learn Concepts: A Review of Relevant Advances Since 2010 and Its Inspirations for Teaching

no code yet • 23 Apr 2024

This article reviews psychological and neuroscience advances in concept learning since 2010 from the perspectives of individual and social learning, and discusses several related issues, including how machine learning can assist the study of concept learning.