Transfer learning is a methodology in which weights from a model trained on one task are reused, either (a) to construct a fixed feature extractor or (b) as an initialization for fine-tuning on a new task.
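The two reuse modes can be sketched with a toy numpy model (hypothetical weights and shapes; a real pipeline would load weights pretrained on a source task such as ImageNet):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "pretrained" layer (toy random weights; a real workflow would
# load weights learned on a source task).
W_pre = rng.normal(size=(8, 4))   # maps 8 inputs -> 4 features

def extract_features(x):
    """(a) Fixed feature extractor: W_pre is never updated."""
    return np.maximum(x @ W_pre, 0.0)          # frozen ReLU features

def train_head(X, y, lr=0.01, epochs=500):
    """Train only a new linear head on the frozen features."""
    F = extract_features(X)                    # W_pre stays fixed
    w = np.zeros(F.shape[1])
    for _ in range(epochs):
        grad = F.T @ (F @ w - y) / len(y)      # squared-error gradient
        w -= lr * grad
    return w

# (b) Fine-tuning: copy the pretrained weights as the initialization and
# let gradients update them as well (the update step itself is omitted here).
W_finetune = W_pre.copy()
```

In mode (a) only the small head is trained, which is cheap and resists overfitting on small target datasets; mode (b) adapts all weights and usually wins when more target data is available.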
The approach combines, in a black-box fashion, multiple models trained with disjoint datasets, such as records from different subsets of users.
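One simple black-box combination is majority voting over per-model predictions, sketched below (an illustrative assumption; the paper's actual aggregation mechanism may differ):

```python
from collections import Counter

def black_box_ensemble(models, x):
    """Combine models trained on disjoint data subsets by majority vote.
    Each model is used only through its prediction interface (black box),
    so no single model's training data needs to be shared."""
    votes = [m(x) for m in models]
    return Counter(votes).most_common(1)[0][0]

# Toy stand-in models: each "trained" on a different user subset,
# here reduced to constant predictors for illustration.
models = [lambda x: "spam", lambda x: "spam", lambda x: "ham"]
```

Because only predictions cross the boundary, each contributing model's internals and raw records stay private to its data subset.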
Training large-scale question answering systems is complicated because training sources usually cover only a small portion of the range of possible questions.
SOTA for Question Answering on WebQuestions
In this paper, we propose a practically Easy Transfer Learning (EasyTL) approach that requires neither model selection nor hyperparameter tuning while achieving competitive performance.
SOTA for Transfer Learning on ImageCLEF-DA
Clone a voice in 5 seconds to generate arbitrary speech in real-time
SOTA for Text-To-Speech Synthesis on LJSpeech (using extra training data)
We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be re-purposed to novel generic tasks.
Much of the recent progress made in image classification research can be credited to training procedure refinements, such as changes in data augmentations and optimization methods.
#46 best model for Image Classification on ImageNet
In this work, we propose to train CNNs from images annotated with multiple tags, to enhance the quality of the visual representations learned by the trained CNN model.
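Multi-tag supervision typically replaces a single-label softmax with independent per-tag sigmoids. A minimal numpy sketch of such a multi-label binary cross-entropy (illustrative only, not the paper's exact loss):

```python
import numpy as np

def multilabel_bce(logits, tags):
    """Binary cross-entropy over independent tags: each image may carry
    several labels at once, unlike single-label softmax classification."""
    p = 1.0 / (1.0 + np.exp(-logits))          # per-tag sigmoid probability
    return -np.mean(tags * np.log(p) + (1 - tags) * np.log(1 - p))

# One image annotated with 2 of 4 hypothetical tags, e.g. {beach, sunset}.
logits = np.array([3.0, 2.5, -2.0, -3.0])      # model's per-tag scores
tags   = np.array([1.0, 1.0,  0.0,  0.0])      # ground-truth tag vector
```

Each tag contributes its own loss term, so the network gets a richer supervision signal per image than a single class label provides.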
Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available.
SOTA for Image Classification on Stanford Cars (using extra training data)
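A common way to do this joint scaling is EfficientNet-style compound scaling, where a single coefficient phi grows depth, width, and resolution together; the base coefficients below are the ones reported in the EfficientNet paper:

```python
# Compound scaling: depth *= alpha**phi, width *= beta**phi,
# resolution *= gamma**phi, with alpha * beta**2 * gamma**2 ~= 2 so that
# each increment of phi roughly doubles the FLOPs budget.
alpha, beta, gamma = 1.2, 1.1, 1.15

def compound_scale(depth, width, resolution, phi):
    """Scale a base network's dimensions by the compound coefficient phi."""
    return (round(depth * alpha ** phi),
            round(width * beta ** phi),
            round(resolution * gamma ** phi))
```

For example, a hypothetical base network with 18 layers, 64 channels, and 224-pixel inputs is left unchanged at phi=0 and grown in all three dimensions at phi=1.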
Though designed for decaNLP, MQAN also achieves state-of-the-art results on the WikiSQL semantic parsing task in the single-task setting.
Tasks: Domain Adaptation, Machine Translation, Named Entity Recognition, Natural Language Inference, Question Answering, Relation Extraction, Semantic Parsing, Semantic Role Labeling, Sentiment Analysis, Text Classification, Transfer Learning
In this work, we present a simple, effective multi-task learning framework for sentence representations that combines the inductive biases of diverse training objectives in a single model.