Multi-Task Learning

1101 papers with code • 6 benchmarks • 55 datasets

Multi-task learning aims to learn multiple tasks simultaneously with a shared model, maximizing performance on one or all of the tasks.

( Image credit: Cross-stitch Networks for Multi-task Learning )
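The task description above can be sketched with the most common multi-task setup, hard parameter sharing: a shared trunk feeds several task-specific heads. This is a minimal illustrative sketch; the layer sizes and task names are placeholders, not from any paper listed below.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Shared trunk: one hidden layer reused by every task.
W_shared = rng.normal(size=(16, 8))

# Task-specific heads: one output layer per task.
heads = {
    "task_a": rng.normal(size=(8, 3)),   # e.g. 3-way classification
    "task_b": rng.normal(size=(8, 1)),   # e.g. scalar regression
}

def forward(x):
    h = relu(x @ W_shared)               # shared representation
    return {name: h @ W for name, W in heads.items()}

x = rng.normal(size=(4, 16))             # batch of 4 inputs
outputs = forward(x)
print(outputs["task_a"].shape)           # (4, 3)
print(outputs["task_b"].shape)           # (4, 1)
```

Training would sum per-task losses over these heads; the shared trunk is where transfer between tasks happens.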


How does Multi-Task Training Affect Transformer In-Context Capabilities? Investigations with Function Classes

harmonbhasin/curriculum_learning_icl 4 Apr 2024

Large language models (LLMs) have recently shown an extraordinary ability to perform unseen tasks from few-shot examples provided as text, a capability known as in-context learning (ICL).


Multi-Granularity Guided Fusion-in-Decoder

eunseongc/mgfid 3 Apr 2024

In Open-domain Question Answering (ODQA), it is essential to discern relevant contexts as evidence and avoid spurious ones among retrieved results.


Large Language Models for Expansion of Spoken Language Understanding Systems to New Languages

samsung/mt-llm-nlu 3 Apr 2024

In the on-device scenario (tiny and not pretrained SLU), our method improved the Overall Accuracy from 5.31% to 22.06% over the baseline Global-Local Contrastive Learning Framework (GL-CLeF) method.


EGTR: Extracting Graph from Transformer for Scene Graph Generation

naver-ai/egtr 2 Apr 2024

We propose a lightweight one-stage SGG model that extracts the relation graph from the various relationships learned in the multi-head self-attention layers of the DETR decoder.
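The general idea EGTR names — reading pairwise relation evidence out of multi-head self-attention weights over object queries — can be sketched as follows. The head pooling and threshold here are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)

num_heads, num_queries = 4, 5
# Attention weights from a decoder layer: one (N x N) map per head.
attn = rng.random((num_heads, num_queries, num_queries))
attn /= attn.sum(axis=-1, keepdims=True)   # normalize each row

# Pool heads into a single pairwise relation-score matrix.
relation_scores = attn.mean(axis=0)

# Keep off-diagonal entries above a threshold as graph edges.
mask = ~np.eye(num_queries, dtype=bool)
edges = np.argwhere((relation_scores > 0.2) & mask)
print(relation_scores.shape)  # (5, 5)
```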


Joint-Task Regularization for Partially Labeled Multi-Task Learning

kentonishi/jtr-cvpr-2024 2 Apr 2024

Multi-task learning has become increasingly popular in the machine learning field, but its practicality is hindered by the need for large, labeled datasets.


MTLoRA: A Low-Rank Adaptation Approach for Efficient Multi-Task Learning

scale-lab/mtlora 29 Mar 2024

Adapting models pre-trained on large-scale datasets to a variety of downstream tasks is a common strategy in deep learning.
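Low-rank adaptation, which MTLoRA builds on, can be sketched in a few lines: a frozen pretrained weight plus a trainable low-rank update B @ A, so only r*(d_in + d_out) parameters adapt per layer. Shapes and initialization follow the usual LoRA convention; MTLoRA's task-specific modules are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
d_in, d_out, r = 32, 16, 4

W = rng.normal(size=(d_in, d_out))      # frozen pretrained weight
A = rng.normal(size=(d_in, r)) * 0.01   # low-rank down-projection
B = np.zeros((r, d_out))                # low-rank up-projection, zero init

def adapted_forward(x):
    # Base path plus low-rank delta; with B = 0 this equals the base model.
    return x @ W + (x @ A) @ B

x = rng.normal(size=(3, d_in))
print(adapted_forward(x).shape)                # (3, 16)
print(np.allclose(adapted_forward(x), x @ W))  # True before training
```

The zero-initialized B guarantees the adapted model starts identical to the pretrained one, so training only ever moves it away from a known-good starting point.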


JIST: Joint Image and Sequence Training for Sequential Visual Place Recognition

ga1i13o/jist 28 Mar 2024

To mitigate this problem, we propose a novel Joint Image and Sequence Training protocol (JIST) that leverages large uncurated sets of images through a multi-task learning framework.


SYNCS: Synthetic Data and Contrastive Self-Supervised Training for Central Sulcus Segmentation

vivikar/central-sulcus-analysis 22 Mar 2024

Identifying risk markers early is crucial for understanding disease progression and enabling preventive measures.


Volumetric Environment Representation for Vision-Language Navigation

defaultrui/vln-ver 21 Mar 2024

To achieve a comprehensive 3D representation with fine-grained details, we introduce a Volumetric Environment Representation (VER), which voxelizes the physical world into structured 3D cells.
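Voxelizing points into structured 3D cells, the basic operation behind a volumetric representation like VER, can be sketched as follows. The grid bounds and resolution are illustrative, and real systems would aggregate learned features rather than raw point counts.

```python
import numpy as np

rng = np.random.default_rng(3)
points = rng.uniform(-1.0, 1.0, size=(100, 3))  # random 3D points

grid_min, grid_max, res = -1.0, 1.0, 8          # 8 x 8 x 8 voxel grid
cell = (grid_max - grid_min) / res

# Map each point to an integer voxel index along each axis.
idx = np.clip(((points - grid_min) / cell).astype(int), 0, res - 1)

# Count points per voxel (a simple occupancy-style feature).
occupancy = np.zeros((res, res, res), dtype=int)
np.add.at(occupancy, (idx[:, 0], idx[:, 1], idx[:, 2]), 1)

print(occupancy.shape)  # (8, 8, 8)
print(occupancy.sum())  # 100
```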
