Transfer Learning

2819 papers with code • 7 benchmarks • 14 datasets

Transfer Learning is a machine learning technique in which a model trained on one task is re-purposed and fine-tuned for a related but different task. The idea is to leverage the knowledge captured by a pre-trained model to solve a new, related problem. This is useful when there is too little data to train a new model from scratch, or when the new task is similar enough to the original that the pre-trained model can be adapted with only minor modifications.

(Image credit: Subodh Malgonde)
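The definition above can be sketched in a few lines of NumPy. This is an illustrative toy, not a real pre-trained network: a frozen random projection stands in for a feature extractor learned on a large source task, and only a small new classification head is trained on the limited target-task data.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained" feature extractor: a frozen random projection standing in
# for a network trained on a large source task (illustrative assumption).
W_frozen = rng.normal(size=(10, 4))

def extract_features(x):
    # Frozen layer: these weights are never updated during fine-tuning.
    return np.tanh(x @ W_frozen)

# Small target-task dataset: 20 samples, binary labels.
X = rng.normal(size=(20, 10))
y = (X[:, 0] > 0).astype(float)

# New task head: only these parameters are trained (logistic regression
# on top of the frozen features).
w = np.zeros(4)
b = 0.0
lr = 0.5
feats = extract_features(X)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid
    grad = p - y                                # dLoss/dlogits
    w -= lr * feats.T @ grad / len(y)
    b -= lr * grad.mean()

acc = ((feats @ w + b > 0) == (y > 0.5)).mean()
print(f"training accuracy of fine-tuned head: {acc:.2f}")
```

In a realistic setting the frozen extractor would be an actual pre-trained network (e.g., a torchvision model with its classifier layer replaced), but the pattern is the same: freeze the transferred weights, train only the new head.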


Latest papers with no code

Metric Learning for 3D Point Clouds Using Optimal Transport

no code yet • Winter Conference on Applications of Computer Vision (WACV) 2024

Learning embeddings of any data largely depends on the ability of the target space to capture semantic relations.

sEMG-based Fine-grained Gesture Recognition via Improved LightGBM Model

no code yet • 18 Apr 2024

Compared with a model trained directly on the small-sample data, transfer learning significantly improved the recognition rate from 60.35% to 78.54%, effectively addressing the problem of insufficient data and demonstrating the applicability and advantages of transfer learning in fine-grained gesture recognition tasks for disabled people.

Feature Corrective Transfer Learning: End-to-End Solutions to Object Detection in Non-Ideal Visual Conditions

no code yet • 17 Apr 2024

Our study introduces "Feature Corrective Transfer Learning", a novel approach that leverages transfer learning and a bespoke loss function to facilitate the end-to-end detection of objects in these challenging scenarios without the need to convert non-ideal images into their RGB counterparts.

Explainable Lung Disease Classification from Chest X-Ray Images Utilizing Deep Learning and XAI

no code yet • 17 Apr 2024

Lung diseases remain a critical global health concern, and it's crucial to have accurate and quick ways to diagnose them.

Supervised Contrastive Vision Transformer for Breast Histopathological Image Classification

no code yet • 17 Apr 2024

We present a novel approach, Supervised Contrastive Vision Transformer (SupCon-ViT), for improving the classification of invasive ductal carcinoma in terms of accuracy and generalization by leveraging the inherent strengths and advantages of both transfer learning, i.e., a pre-trained vision transformer, and supervised contrastive learning.

GenFighter: A Generative and Evolutive Textual Attack Removal

no code yet • 17 Apr 2024

Adversarial attacks pose significant challenges to deep neural networks (DNNs) such as Transformer models in natural language processing (NLP).

Control Theoretic Approach to Fine-Tuning and Transfer Learning

no code yet • 17 Apr 2024

Given a training set in the form of a paired $(\mathcal{X},\mathcal{Y})$, we say that the control system $\dot{x} = f(x, u)$ has learned the paired set via the control $u^*$ if the system steers each point of $\mathcal{X}$ to its corresponding target in $\mathcal{Y}$.
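This definition can be made concrete with the simplest possible system. Assuming the single-integrator dynamics $\dot{x} = u$ (an illustrative choice, not the paper's model), the constant control $u^* = (y - x_0)/T$ steers each point $x_0 \in \mathcal{X}$ to its paired target $y \in \mathcal{Y}$ in time $T$:

```python
import numpy as np

# Toy instance of the control-theoretic definition (assumed dynamics):
# the single-integrator system dx/dt = u, with constant control
# u* = (y - x0) / T that moves x0 to its paired target y over [0, T].
T = 1.0
dt = 0.001

def steer(x0, target):
    u_star = (target - x0) / T
    x = x0.copy()
    for _ in range(int(T / dt)):  # forward Euler integration
        x = x + dt * u_star
    return x

X_pairs = [np.array([0.0, 1.0]), np.array([2.0, -1.0])]
Y_pairs = [np.array([1.0, 1.0]), np.array([0.0, 0.0])]

for x0, y in zip(X_pairs, Y_pairs):
    x_final = steer(x0, y)
    print(x0, "->", x_final)  # lands (approximately) on the paired target
```

In the paper's terms, the system has "learned" the paired set $(\mathcal{X}, \mathcal{Y})$ via this $u^*$; fine-tuning then corresponds to adjusting the control for a new paired set.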

Neuron Specialization: Leveraging intrinsic task modularity for multilingual machine translation

no code yet • 17 Apr 2024

Training a unified multilingual model promotes knowledge transfer but inevitably introduces negative interference.

Lighter, Better, Faster Multi-Source Domain Adaptation with Gaussian Mixture Models and Optimal Transport

no code yet • 16 Apr 2024

Based on this novel algorithm, we propose two new strategies for MSDA: GMM-WBT and GMM-DaDiL.

Privacy-Preserving Training-as-a-Service for On-Device Intelligence: Concept, Architectural Scheme, and Open Problems

no code yet • 16 Apr 2024

On-device intelligence (ODI) enables artificial intelligence (AI) applications to run on end devices, providing real-time and customized AI services without relying on remote servers.