Multi-Task Learning
1098 papers with code • 6 benchmarks • 55 datasets
Multi-task learning aims to learn multiple tasks simultaneously within a single model, sharing representations across tasks so that performance improves on one or all of them.
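As a concrete illustration, the most common setup is "hard parameter sharing": a shared trunk feeds several task-specific heads, and training minimizes the sum of the per-task losses. The sketch below is a minimal, self-contained example in NumPy; all names and shapes are illustrative, not taken from any specific library or paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared trunk and two task-specific heads (both toy regression tasks).
W_shared = rng.normal(size=(8, 4))   # parameters reused by every task
W_task_a = rng.normal(size=(4, 1))   # head seen only by task A
W_task_b = rng.normal(size=(4, 1))   # head seen only by task B

def forward(x):
    h = np.tanh(x @ W_shared)        # shared representation
    return h @ W_task_a, h @ W_task_b

def multitask_loss(x, y_a, y_b):
    pred_a, pred_b = forward(x)
    # Joint objective: summing losses means gradients from both tasks
    # flow into W_shared, while each head only receives its own signal.
    return np.mean((pred_a - y_a) ** 2) + np.mean((pred_b - y_b) ** 2)

x = rng.normal(size=(16, 8))
y_a = rng.normal(size=(16, 1))
y_b = rng.normal(size=(16, 1))
print(multitask_loss(x, y_a, y_b))
```

In practice the per-task losses are often weighted, and choosing those weights (or learning them) is itself an active research question.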
(Image credit: Cross-stitch Networks for Multi-task Learning)
Libraries
Use these libraries to find Multi-Task Learning models and implementations
Most implemented papers
Progressive Layered Extraction (PLE): A Novel Multi-Task Learning (MTL) Model for Personalized Recommendations
Moreover, through extensive experiments across SOTA MTL models, we have observed an interesting seesaw phenomenon: the performance of one task is often improved by hurting the performance of some other tasks.
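The gating idea PLE uses to mitigate this seesaw effect can be sketched roughly: each task combines shared experts with its own task-specific experts via a learned softmax gate, so tasks share capacity without fully entangling their parameters. The snippet below is a toy NumPy sketch under assumed shapes and names ("ctr"/"cvr" tasks, two shared experts, one expert per task), not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=(4, 8))                       # toy input batch

def expert(seed):
    # Each expert is a small nonlinear transform of the input.
    w = np.random.default_rng(seed).normal(size=(8, 8))
    return np.tanh(x @ w)

shared_experts = [expert(10), expert(11)]         # shared across tasks
task_experts = {"ctr": [expert(20)], "cvr": [expert(30)]}

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# One gate per task over (shared experts + that task's own experts).
gate_params = {t: rng.normal(size=(8, 3)) for t in task_experts}

def task_output(task):
    experts = np.stack(shared_experts + task_experts[task], axis=1)  # (B, E, D)
    gate = softmax(x @ gate_params[task])                            # (B, E)
    # Weighted combination: each task reads the experts differently.
    return np.einsum("be,bed->bd", gate, experts)

print(task_output("ctr").shape)
```

Because each task has experts the others never touch, improving one task's gate does not directly overwrite another task's routing, which is the intuition behind reducing the seesaw trade-off.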
An Overview of Multi-Task Learning in Deep Neural Networks
Multi-task learning (MTL) has led to successes in many applications of machine learning, from natural language processing and speech recognition to computer vision and drug discovery.
Multi-task Learning with Sample Re-weighting for Machine Reading Comprehension
We propose a multi-task learning framework to learn a joint Machine Reading Comprehension (MRC) model that can be applied to a wide range of MRC tasks in different domains.
YOLOP: You Only Look Once for Panoptic Driving Perception
A panoptic driving perception system is an essential part of autonomous driving.
CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation
We present CodeT5, a unified pre-trained encoder-decoder Transformer model that better leverages the code semantics conveyed from the developer-assigned identifiers.
On the Automatic Generation of Medical Imaging Reports
To cope with these challenges, we (1) build a multi-task learning framework which jointly performs the prediction of tags and the generation of paragraphs, (2) propose a co-attention mechanism to localize regions containing abnormalities and generate narrations for them, and (3) develop a hierarchical LSTM model to generate long paragraphs.
Stacked Conditional Generative Adversarial Networks for Jointly Learning Shadow Detection and Shadow Removal
Specifically, a shadow image is fed into the first generator which produces a shadow detection mask.
End-to-End Multi-Task Learning with Attention
Our design, the Multi-Task Attention Network (MTAN), consists of a single shared network containing a global feature pool, together with a soft-attention module for each task.
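The core mechanism here, a per-task soft-attention mask over a shared feature pool, can be sketched as follows. This is a rough NumPy illustration under assumed shapes and task names ("seg", "depth"), not MTAN's actual code.

```python
import numpy as np

rng = np.random.default_rng(1)

shared_features = rng.normal(size=(16, 32))        # global feature pool

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Each task learns its own attention parameters over the shared pool.
attn_params = {t: rng.normal(size=(32, 32)) for t in ("seg", "depth")}

def task_features(task):
    # A soft mask in [0, 1] gates the shared features element-wise,
    # so tasks share computation but read out different feature subsets.
    mask = sigmoid(shared_features @ attn_params[task])
    return shared_features * mask

f_seg = task_features("seg")
f_depth = task_features("depth")
```

Both tasks see features of the same shape, but the learned masks select different parts of the shared pool, which is what lets a single backbone serve many tasks.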
Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning
In this work, we present a simple, effective multi-task learning framework for sentence representations that combines the inductive biases of diverse training objectives in a single model.
Self-Supervised Generalisation with Meta Auxiliary Learning
The loss for the label-generation network incorporates the loss of the multi-task network, and so this interaction between the two networks can be seen as a form of meta learning with a double gradient.