HILL: Hierarchy-aware Information Lossless Contrastive Learning for Hierarchical Text Classification

26 Mar 2024 · He Zhu, Junran Wu, Ruomei Liu, Yue Hou, Ze Yuan, Shangzhe Li, YiCheng Pan, Ke Xu

Existing self-supervised methods in natural language processing (NLP), especially for hierarchical text classification (HTC), mainly focus on self-supervised contrastive learning and rely heavily on human-designed augmentation rules to generate contrastive samples, which can corrupt or distort the original information. In this paper, we investigate the feasibility of a contrastive learning scheme in which the semantic and syntactic information inherent in the input sample is fully preserved in the contrastive samples and fused during the learning process. Specifically, we propose an information lossless contrastive learning strategy for HTC, namely Hierarchy-aware Information Lossless contrastive Learning (HILL), which consists of a text encoder representing the input document and a structure encoder directly generating the positive sample. The structure encoder takes the document embedding as input, extracts the essential syntactic information inherent in the label hierarchy under the principle of structural entropy minimization, and injects this syntactic information into the text representation via hierarchical representation learning. Experiments on three common datasets verify the superiority of HILL.
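The abstract describes a two-encoder contrastive setup: a text encoder produces a document embedding, a structure encoder conditioned on the label hierarchy produces the positive view, and the two views are pulled together without any input augmentation. Below is a minimal, hypothetical PyTorch sketch of that objective. All class names, dimensions, and the simplified structure encoder are illustrative assumptions, not the authors' released code; in particular, HILL's structure encoder is built on structural entropy minimization over the label hierarchy, which is abstracted here as a single message-passing step over a fixed hierarchy adjacency matrix.

```python
# Hypothetical sketch of a HILL-style contrastive objective (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class StructureEncoder(nn.Module):
    """Stand-in structure encoder: conditions label nodes on the document
    embedding, propagates along hierarchy edges, and pools into one
    positive-sample vector per document. (The actual HILL encoder derives
    its tree via structural entropy minimization; a one-step GCN-style
    propagation is used here for brevity.)"""

    def __init__(self, hidden: int, num_labels: int, hierarchy_adj: torch.Tensor):
        super().__init__()
        self.register_buffer("adj", hierarchy_adj)       # (num_labels, num_labels)
        self.label_emb = nn.Embedding(num_labels, hidden)
        self.proj = nn.Linear(hidden, hidden)

    def forward(self, doc_emb: torch.Tensor) -> torch.Tensor:
        # (batch, num_labels, hidden): each label node sees the document.
        nodes = self.label_emb.weight.unsqueeze(0) + doc_emb.unsqueeze(1)
        nodes = torch.relu(self.proj(self.adj @ nodes))  # propagate over hierarchy
        return nodes.mean(dim=1)                         # (batch, hidden)


def info_nce(z_text: torch.Tensor, z_struct: torch.Tensor, tau: float = 0.1):
    """InfoNCE between the text view and the structure-encoded positive view.
    No augmentation of the input is applied, matching the 'information
    lossless' idea: the positive sample is generated by the structure encoder."""
    z1 = F.normalize(z_text, dim=-1)
    z2 = F.normalize(z_struct, dim=-1)
    logits = z1 @ z2.t() / tau                           # (batch, batch) similarities
    targets = torch.arange(z1.size(0), device=z1.device) # diagonal = positives
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    batch, hidden, num_labels = 4, 768, 10
    adj = torch.eye(num_labels)                          # placeholder hierarchy
    struct_enc = StructureEncoder(hidden, num_labels, adj)
    doc_emb = torch.randn(batch, hidden)                 # e.g. a BERT [CLS] vector
    loss = info_nce(doc_emb, struct_enc(doc_emb))
    print(loss.item())
```

In practice the document embedding would come from a pretrained text encoder such as BERT, and this contrastive term would be trained jointly with the hierarchical classification loss; those details are omitted here.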

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Hierarchical Multi-label Classification | New York Times Annotated Corpus | HILL | Micro F1 | 80.47 | #1 |
| Hierarchical Multi-label Classification | New York Times Annotated Corpus | HILL | Macro F1 | 69.96 | #1 |
| Hierarchical Multi-label Classification | RCV1-v2 | HILL | Micro F1 | 87.31 | #1 |
| Hierarchical Multi-label Classification | RCV1-v2 | HILL | Macro F1 | 70.12 | #1 |
| Hierarchical Multi-label Classification | WOS | HILL | Micro F1 | 87.28 | #1 |
| Hierarchical Multi-label Classification | WOS | HILL | Macro F1 | 81.77 | #1 |
