Multi-Task Learning
1101 papers with code • 6 benchmarks • 55 datasets
Multi-task learning aims to learn multiple tasks simultaneously while maximizing performance on one or all of them.
(Image credit: Cross-stitch Networks for Multi-task Learning)
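In the common hard-parameter-sharing setup, all tasks share a trunk network and each task gets its own lightweight head, with the joint objective a weighted sum of per-task losses. Below is a minimal PyTorch sketch of that pattern; the layer sizes, the two example tasks, and the equal loss weighting are illustrative assumptions, not taken from any paper listed on this page.

```python
# Minimal sketch of hard parameter sharing: a shared trunk with one
# lightweight head per task. Sizes and tasks are illustrative.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, in_dim=128, hidden=256, n_classes=10):
        super().__init__()
        self.trunk = nn.Sequential(          # parameters shared by all tasks
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.cls_head = nn.Linear(hidden, n_classes)  # task 1: classification
        self.reg_head = nn.Linear(hidden, 1)          # task 2: regression

    def forward(self, x):
        h = self.trunk(x)
        return self.cls_head(h), self.reg_head(h)

model = MultiTaskNet()
x = torch.randn(4, 128)
logits, value = model(x)
# Joint objective: a (here equally) weighted sum of per-task losses.
loss = nn.functional.cross_entropy(logits, torch.randint(0, 10, (4,))) \
     + nn.functional.mse_loss(value.squeeze(-1), torch.randn(4))
loss.backward()
```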
Libraries
Use these libraries to find Multi-Task Learning models and implementations.
Latest papers
How does Multi-Task Training Affect Transformer In-Context Capabilities? Investigations with Function Classes
Large language models (LLMs) have recently shown an extraordinary ability to perform unseen tasks based on few-shot examples provided as text, also known as in-context learning (ICL).
Multi-Granularity Guided Fusion-in-Decoder
In Open-domain Question Answering (ODQA), it is essential to discern relevant contexts as evidence and avoid spurious ones among retrieved results.
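This line of work builds on Fusion-in-Decoder (FiD), where each (question, passage) pair is encoded independently and the encoder outputs are concatenated so a single decoder can attend across all retrieved contexts at once. A minimal sketch of that pattern follows; the toy encoder/decoder layers and dimensions are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch of the Fusion-in-Decoder (FiD) pattern: encode each
# (question, passage) pair separately, concatenate encoder states,
# and let one decoder attend over all of them. Sizes are illustrative.
import torch
import torch.nn as nn

d_model, n_passages, seq_len = 64, 4, 16
encoder = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
decoder = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)

# One embedded (question + passage) sequence per retrieved context.
pairs = torch.randn(n_passages, seq_len, d_model)
encoded = encoder(pairs)                                   # encode each pair separately
fused = encoded.reshape(1, n_passages * seq_len, d_model)  # concatenate contexts

tgt = torch.randn(1, 8, d_model)   # decoder input embeddings
out = decoder(tgt, fused)          # decoder attends over all contexts jointly
```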
Large Language Models for Expansion of Spoken Language Understanding Systems to New Languages
In the on-device scenario (a tiny, non-pretrained SLU model), our method improves Overall Accuracy from 5.31% to 22.06% over the baseline Global-Local Contrastive Learning Framework (GL-CLeF) method.
EGTR: Extracting Graph from Transformer for Scene Graph Generation
We propose a lightweight one-stage SGG model that extracts the relation graph from the various relationships learned in the multi-head self-attention layers of the DETR decoder.
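As a rough illustration of reading a relation graph out of self-attention, the sketch below pools query-to-query attention weights over heads and thresholds them into edges. The pooling rule, the threshold value, and the relation_graph helper are illustrative assumptions, not EGTR's actual relation extraction head.

```python
# Hedged sketch: treat query-to-query self-attention in a decoder as
# pairwise relatedness between object queries. Aggregation is illustrative.
import torch

def relation_graph(attn, threshold=0.1):
    """attn: [n_heads, n_queries, n_queries] self-attention weights."""
    scores = attn.mean(dim=0)               # pool heads into pairwise scores
    edges = (scores > threshold).nonzero()  # (subject, object) index pairs
    return scores, edges

attn = torch.softmax(torch.randn(8, 5, 5), dim=-1)  # toy attention weights
scores, edges = relation_graph(attn)
```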
Joint-Task Regularization for Partially Labeled Multi-Task Learning
Multi-task learning has become increasingly popular in the machine learning field, but its practicality is hindered by the need for large, labeled datasets.
Joint Training on Multiple Datasets With Inconsistent Labeling Criteria for Facial Expression Recognition
In this study, we propose a method for jointly training an FER model on multiple FER datasets.
MTLoRA: A Low-Rank Adaptation Approach for Efficient Multi-Task Learning
Adapting models pre-trained on large-scale datasets to a variety of downstream tasks is a common strategy in deep learning.
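MTLoRA builds on low-rank adaptation (LoRA), where pretrained weights are frozen and a small trainable low-rank update is learned instead. Here is a minimal sketch of a LoRA-wrapped linear layer; the rank, scaling factor, and initialization are illustrative assumptions rather than MTLoRA's task-specific design.

```python
# Minimal LoRA sketch: freeze a pretrained weight W and learn a
# low-rank update B @ A. Rank and scaling are illustrative assumptions.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank=4, alpha=8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Base output plus the trainable low-rank correction.
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(nn.Linear(128, 128))
y = layer(torch.randn(2, 128))
```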
JIST: Joint Image and Sequence Training for Sequential Visual Place Recognition
To mitigate this problem, we propose a novel Joint Image and Sequence Training protocol (JIST) that leverages large uncurated sets of images through a multi-task learning framework.
SYNCS: Synthetic Data and Contrastive Self-Supervised Training for Central Sulcus Segmentation
Identifying risk markers early is crucial for understanding disease progression and enabling preventive measures.
Volumetric Environment Representation for Vision-Language Navigation
To achieve a comprehensive 3D representation with fine-grained details, we introduce a Volumetric Environment Representation (VER), which voxelizes the physical world into structured 3D cells.
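As a rough sketch of the voxelization step underlying such a representation, the snippet below bins 3D points into a fixed occupancy-count grid. The grid bounds, resolution, and simple counting (rather than VER's learned feature aggregation) are illustrative assumptions.

```python
# Hedged sketch of voxelizing points into structured 3D cells.
# Bounds and resolution are illustrative assumptions.
import numpy as np

def voxelize(points, bounds=(-5.0, 5.0), resolution=32):
    lo, hi = bounds
    cell = (hi - lo) / resolution
    idx = np.clip(((points - lo) / cell).astype(int), 0, resolution - 1)
    grid = np.zeros((resolution,) * 3, dtype=np.int32)
    np.add.at(grid, (idx[:, 0], idx[:, 1], idx[:, 2]), 1)  # occupancy counts
    return grid

pts = np.random.uniform(-5, 5, size=(1000, 3))
grid = voxelize(pts)  # 32x32x32 grid of point counts per cell
```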