Interpretable Neural Embeddings with Sparse Self-Representation

25 Jun 2023 · Minxue Xia, Hao Zhu

Interpretability benefits the theoretical understanding of representations. Existing word embeddings are generally dense representations, so the meaning of their latent dimensions is difficult to interpret. This makes word embeddings behave like a black box and prevents them from being human-readable and further manipulated. Many methods employ sparse representations to learn more interpretable word embeddings, but they suffer from the instability of grouped variable selection in $\ell_1$ and online dictionary learning and therefore tend to yield different results each time. To alleviate this issue, we propose a novel method that couples data self-representation with a shallow neural network to learn expressive, interpretable word embeddings. In experiments, the resulting word embeddings achieve comparable and even slightly better interpretability than baseline embeddings. Our approach also performs competitively on all downstream tasks and outperforms the benchmark embeddings on a majority of them.
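
The abstract does not give implementation details, so the following is only a minimal sketch of the general sparse self-representation idea: reconstructing each dense word vector as a sparse combination of the other vectors, with an $\ell_1$ penalty on the codes. The function name, hyperparameters, and optimization setup below are illustrative assumptions, not the authors' exact model.

```python
# Hypothetical sketch: sparse self-representation of pretrained embeddings.
# X (V x d) holds dense word vectors; a self-expressive code matrix C is
# learned so that X ≈ C X, with an L1 penalty for sparsity and a zero
# diagonal so no word trivially reconstructs itself.
import torch

def sparse_self_representation(X, l1_weight=0.1, lr=1e-2, steps=500):
    V, d = X.shape
    C = torch.zeros(V, V, requires_grad=True)      # self-representation codes
    opt = torch.optim.Adam([C], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        C_off = C - torch.diag(torch.diagonal(C))  # enforce zero diagonal
        recon = C_off @ X                          # rebuild each vector from the others
        loss = ((recon - X) ** 2).mean() + l1_weight * C_off.abs().mean()
        loss.backward()
        opt.step()
    return C.detach()

# Usage with a small random stand-in for real embeddings:
X = torch.randn(100, 50)             # e.g. 100 words, 50-dim dense vectors
C = sparse_self_representation(X)    # rows of C are sparse codes over other words
```

The rows of `C` can then be read as sparse, word-anchored descriptions of each embedding; scaling this beyond a toy vocabulary would require batching or a parametric (network-based) encoder rather than a full V x V matrix.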
