Auxiliary Learning
25 papers with code • 0 benchmarks • 0 datasets
Auxiliary learning aims to find or design auxiliary tasks that improve performance on one or more primary tasks.
(Image credit: Self-Supervised Generalisation with Meta Auxiliary Learning)
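The core setup can be illustrated with a minimal sketch. This is illustrative only, not taken from any listed paper: the toy data, the shared linear map, and the fixed weighting `lam` are all assumptions. Shared parameters are trained on a primary loss plus a weighted auxiliary loss, so gradients from the auxiliary task also shape the shared representation.

```python
import numpy as np

# Toy auxiliary learning: one shared linear map trained on a primary loss
# plus a weighted auxiliary loss (all data and names are hypothetical).
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))            # shared parameters
x = rng.normal(size=(8, 4))            # a small input batch
y_primary = rng.normal(size=(8, 3))    # primary-task targets
y_aux = rng.normal(size=(8, 3))        # auxiliary-task targets

lam = 0.3   # auxiliary weighting; fixed here, learned in some methods

def primary_loss(W):
    return np.mean((x @ W - y_primary) ** 2)

def combined_grad(W):
    pred = x @ W
    n = pred.size
    g_p = 2 * x.T @ (pred - y_primary) / n   # gradient of primary MSE
    g_a = 2 * x.T @ (pred - y_aux) / n       # gradient of auxiliary MSE
    return g_p + lam * g_a                   # auxiliary term shapes W too

loss_before = primary_loss(W)
for _ in range(200):
    W -= 0.05 * combined_grad(W)             # plain gradient descent
loss_after = primary_loss(W)
```

Whether the auxiliary term helps or hurts the primary task depends on how related the two tasks are; much of the work listed below is about choosing or weighting auxiliary tasks so that it helps.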
Latest papers
Improving CTC-based speech recognition via knowledge transferring from pre-trained language models
Recently, end-to-end automatic speech recognition models based on connectionist temporal classification (CTC) have achieved impressive results, especially when fine-tuned from wav2vec 2.0 models.
Auto-Lambda: Disentangling Dynamic Task Relationships
Unlike previous methods, which assume task relationships are fixed, Auto-Lambda is a gradient-based meta-learning framework that explores continuous, dynamic task relationships via task-specific weightings. It can optimise any combination of tasks through a meta-loss, in which the validation loss automatically influences the task weightings throughout training.
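A heavily simplified sketch of this idea follows. It is not the authors' implementation: the toy tasks, the deliberately harmful auxiliary targets, and the finite-difference stand-in for the meta-gradient are all assumptions. The weightings `lam` are updated so that a one-step model update under those weightings lowers a validation loss.

```python
import numpy as np

# Toy dynamic task weighting: lam holds per-task weights, updated so that a
# one-step model update under those weights lowers the *validation* loss.
rng = np.random.default_rng(1)
W = rng.normal(size=(4, 2))          # model parameters
x_tr = rng.normal(size=(16, 4))
x_val = rng.normal(size=(16, 4))
W_true = np.ones((4, 2))
y_tr = x_tr @ W_true                 # primary-task training targets
y_val = x_val @ W_true               # validation targets
y_aux = -y_tr                        # deliberately harmful auxiliary targets

lam = np.array([1.0, 1.0])           # task weightings: [primary, auxiliary]
lr_w, lr_lam, eps = 0.05, 0.5, 1e-3

def train_grad(W, lam):
    pred = x_tr @ W
    g_p = 2 * x_tr.T @ (pred - y_tr) / pred.size
    g_a = 2 * x_tr.T @ (pred - y_aux) / pred.size
    return lam[0] * g_p + lam[1] * g_a

def val_loss(W):
    return np.mean((x_val @ W - y_val) ** 2)

for _ in range(50):
    # finite-difference stand-in for the meta-gradient d val_loss / d lam
    g_lam = np.zeros_like(lam)
    for i in range(len(lam)):
        for sign in (1.0, -1.0):
            lam_pert = lam.copy()
            lam_pert[i] += sign * eps
            g_lam[i] += sign * val_loss(W - lr_w * train_grad(W, lam_pert)) / (2 * eps)
    lam = np.clip(lam - lr_lam * g_lam, 0.0, None)  # keep weights non-negative
    W -= lr_w * train_grad(W, lam)                  # ordinary training step
```

In this toy run the harmful auxiliary task ends up with a much smaller weighting than the primary task, which is the qualitative behaviour such validation-driven weighting schemes aim for; the actual method differentiates through the training step rather than using finite differences.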
On Exploring Pose Estimation as an Auxiliary Learning Task for Visible-Infrared Person Re-identification
Visible-infrared person re-identification (VI-ReID) has been challenging due to the existence of large discrepancies between visible and infrared modalities.
Auxiliary Learning for Self-Supervised Video Representation via Similarity-based Knowledge Distillation
Our experimental results surpass the state of the art on both the UCF101 and HMDB51 datasets when pretraining on K100, in apples-to-apples comparisons.
Boost-RS: Boosted Embeddings for Recommender Systems and its Application to Enzyme-Substrate Interaction Prediction
We show that each of our auxiliary tasks boosts learning of the embedding vectors, and that contrastive learning using Boost-RS outperforms attribute concatenation and multi-label learning.
Leveraging Auxiliary Tasks with Affinity Learning for Weakly Supervised Semantic Segmentation
Motivated by the significant inter-task correlation, we propose a novel weakly supervised multi-task framework, termed AuxSegNet, which leverages saliency detection and multi-label image classification as auxiliary tasks to improve the primary task of semantic segmentation using only image-level ground-truth labels.
Auxiliary Tasks and Exploration Enable ObjectNav
We instead re-enable a generic learned agent by adding auxiliary learning tasks and an exploration reward.
Self-supervised Auxiliary Learning for Graph Neural Networks via Meta-Learning
Our method learns to learn a primary task alongside various auxiliary tasks to improve generalization performance.
Self-supervised Auxiliary Learning with Meta-paths for Heterogeneous Graphs
Our proposed method learns to learn a primary task by predicting meta-paths as auxiliary tasks.
Auxiliary Learning by Implicit Differentiation
Two main challenges arise in this multi-task learning setting: (i) designing useful auxiliary tasks; and (ii) combining auxiliary tasks into a single coherent loss.