Co-attention network with label embedding for text classification

Most existing methods for text classification focus on extracting a highly discriminative text representation, which, however, is typically computationally inefficient. To alleviate this issue, label embedding frameworks adopt label-to-text attention, which directly uses label information to construct the text representation for more efficient text classification. Although these label embedding methods have achieved promising results, there is still much room for exploring how to use the label information more effectively. In this paper, we seek to exploit the label information by further constructing a text-attended label representation with text-to-label attention. To this end, we propose a Co-attention Network with Label Embedding (CNLE) that jointly encodes the text and labels into their mutually attended representations. In this way, the model is able to attend to the relevant parts of both. Experiments show that our approach achieves competitive results compared with previous state-of-the-art methods on 7 multi-class classification benchmarks and 2 multi-label classification benchmarks.
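The core idea of co-attention between text tokens and label embeddings can be illustrated with a small numeric sketch. This is not the paper's exact CNLE architecture (whose encoders and fusion layers are not specified here); it only shows the two attention directions the abstract describes — label-to-text attention producing a label-attended text representation, and text-to-label attention producing a text-attended label representation — with illustrative names and dimensions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(text, labels):
    """Mutually attend text tokens and label embeddings.

    text:   (T, d) token representations for T tokens
    labels: (C, d) embeddings for C class labels
    Returns the label-attended text (T, d) and text-attended labels (C, d).
    """
    # Affinity between every token and every label: (T, C)
    affinity = text @ labels.T / np.sqrt(text.shape[1])
    # Label-to-text attention: each token aggregates label information
    text_attended = softmax(affinity, axis=1) @ labels     # (T, d)
    # Text-to-label attention: each label aggregates token information
    labels_attended = softmax(affinity.T, axis=1) @ text   # (C, d)
    return text_attended, labels_attended

rng = np.random.default_rng(0)
text = rng.normal(size=(6, 16))    # 6 tokens, 16-dim (illustrative sizes)
labels = rng.normal(size=(4, 16))  # 4 classes
t_att, l_att = co_attention(text, labels)
print(t_att.shape, l_att.shape)  # (6, 16) (4, 16)
```

In a full model, the two attended representations would be pooled and fed to a classifier; the single shared affinity matrix is what makes the two attention directions "co-attention" rather than two independent attention modules.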

Results from the Paper


Task                            Dataset        Model  Metric    Value  Global Rank
Multi-Label Text Classification AAPD           CNLE   Micro-F1  71.7   # 2
Multi-Label Text Classification Reuters-21578  CNLE   Micro-F1  89.9   # 4