Multi-Label Learning
82 papers with code • 1 benchmark • 8 datasets
Multi-label learning (MLL) generalizes binary and multi-class classification: it tags a data instance with several possible class labels simultaneously [1]. Each assigned label conveys a specific semantic relationship with the multi-label data instance [2, 3]. Multi-label learning continues to attract substantial research interest owing to its practical applications in many real-world problems, such as recommender systems [4], image annotation [5], and text classification [6].
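As a minimal illustration of the setting above (a hedged sketch, not tied to any specific paper listed here), the following example assigns several labels per instance using scikit-learn's binary-relevance wrapper, which trains one binary classifier per label:

```python
# Illustrative multi-label classification sketch (binary relevance strategy).
from sklearn.datasets import make_multilabel_classification
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import hamming_loss

# Synthetic data: each instance carries a subset of 5 possible labels,
# so Y is a binary indicator matrix rather than a single class column.
X, Y = make_multilabel_classification(
    n_samples=200, n_features=20, n_classes=5, random_state=0
)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

# One independent binary classifier per label.
clf = MultiOutputClassifier(LogisticRegression(max_iter=1000))
clf.fit(X_tr, Y_tr)
Y_pred = clf.predict(X_te)

# Hamming loss: fraction of individual label assignments that are wrong.
print("Hamming loss:", hamming_loss(Y_te, Y_pred))
```

Binary relevance ignores correlations between labels; much of the literature below (label correlation analysis, label subspace transformations) is about modelling exactly those correlations.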
References:
1. Kumar S, Rastogi R (2022) Low rank label subspace transformation for multi-label learning with missing labels. Inf Sci 596:53–72
2. Zhang M-L, Zhou Z-H (2013) A review on multi-label learning algorithms. IEEE Trans Knowl Data Eng 26(8):1819–1837
3. Gibaja E, Ventura S (2015) A tutorial on multilabel learning. ACM Comput Surv 47(3):1–38
4. Bogaert M, Lootens J, Van den Poel D, Ballings M (2019) Evaluating multi-label classifiers and recommender systems in the financial service sector. Eur J Oper Res 279(2):620–634
5. Jing L, Shen C, Yang L, Yu J, Ng MK (2017) Multi-label classification by semi-supervised singular value decomposition. IEEE Trans Image Process 26(10):4612–4625
6. Chen Z, Ren J (2021) Multi-label text classification with latent word-wise label information. Appl Intell 51(2):966–979
Most implemented papers
Semi-supervised multi-label feature selection via label correlation analysis with l1-norm graph embedding
Compared with previous works, our algorithm has two advantages: (1) manifold learning, which leverages the underlying geometric structure of the training data, is imposed to utilize both labeled and unlabeled data.
Deep Extreme Multi-label Learning
Extreme multi-label learning (XML) or classification has been a practical and important problem since the boom of big data.
Tips, guidelines and tools for managing multi-label datasets: the mldr.datasets R package and the Cometa data repository
New proposals in the field of multi-label learning algorithms have been growing in number steadily over the last few years.
Incremental Sparse Bayesian Ordinal Regression
Ordinal Regression (OR) aims to model the ordering information between different data categories, which is a crucial topic in multi-label learning.
Learning a Compressed Sensing Measurement Matrix via Gradient Unrolling
Our experiments show that there is indeed additional structure beyond sparsity in the real datasets; our method is able to discover it and exploit it to create excellent reconstructions with fewer measurements (by a factor of 1.1–3x) compared to the previous state-of-the-art methods.
Few-Shot and Zero-Shot Multi-Label Learning for Structured Label Spaces
Furthermore, we develop few- and zero-shot methods for multi-label text classification when there is a known structure over the label space, and evaluate them on two publicly available medical text datasets: MIMIC II and MIMIC III.
Attention-Based Capsule Networks with Dynamic Routing for Relation Extraction
A capsule is a group of neurons, whose activity vector represents the instantiation parameters of a specific type of entity.
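The activity-vector idea above can be sketched with the "squashing" nonlinearity commonly used in capsule networks (a hedged illustration of the general mechanism, not of this paper's specific routing scheme): the vector's direction encodes instantiation parameters, while its length is mapped into [0, 1) so it can act as an existence probability.

```python
import numpy as np

def squash(v, eps=1e-8):
    """Squashing nonlinearity: preserve direction, bound length below 1."""
    norm = np.linalg.norm(v)
    return (norm ** 2 / (1.0 + norm ** 2)) * (v / (norm + eps))

v = np.array([3.0, 4.0])          # ||v|| = 5
s = squash(v)
print(np.linalg.norm(s))          # length close to 25/26 ≈ 0.96
```

Short vectors are shrunk toward zero (weak evidence the entity exists), while long vectors saturate just below unit length (strong evidence), which is what lets capsule lengths be read as probabilities.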
Pedestrian Attribute Recognition: A Survey
We also review some popular network architectures which have been widely applied in the deep learning community.
Variational Autoencoders for Sparse and Overdispersed Discrete Data
Many applications, such as text modelling, high-throughput sequencing, and recommender systems, require analysing sparse, high-dimensional, and overdispersed discrete (count-valued or binary) data.
Atlas of Digital Pathology: A Generalized Hierarchical Histological Tissue Type-Annotated Database for Deep Learning
Quantitative results support the visual consistency of our data, and we demonstrate a tissue-type-based visual attention aid as a sample tool that could be developed from our database.