WS 2018 • Valentin Trifonov, Octavian-Eugen Ganea, Anna Potapenko, Thomas Hofmann
Previous research on word embeddings has shown that sparse representations, which can either be learned on top of existing dense embeddings or be obtained through sparsity constraints imposed during training, offer improved interpretability: to some degree, each dimension can be understood by a human and associated with a recognizable feature in the data.
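As a minimal sketch of the first variant mentioned above (learning sparse codes on top of existing dense embeddings), not the authors' method, one can apply standard dictionary learning to a matrix of pre-trained word vectors. The matrix `X`, its dimensions, and the sparsity level below are illustrative assumptions; in practice `X` would hold rows of, e.g., GloVe or word2vec vectors.

```python
# Sparse coding over dense word embeddings with scikit-learn (illustrative only).
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 50))   # stand-in for 500 words x 50-d dense vectors

# Learn an overcomplete dictionary; each word is re-expressed as a sparse
# combination of atoms, so its few active dimensions can be inspected one by one.
dl = DictionaryLearning(
    n_components=100,                 # overcomplete code size (assumption)
    transform_algorithm="omp",        # hard sparsity at transform time
    transform_n_nonzero_coefs=5,      # at most 5 active dimensions per word
    max_iter=10,
    random_state=0,
)
codes = dl.fit_transform(X)           # shape (500, 100), ~5 nonzeros per row

# Interpretability probe: list the active dimensions of one word's code.
word_idx = 0
active = np.nonzero(codes[word_idx])[0]
print("active dimensions:", active)
print("values:", codes[word_idx][active])
```

The point of the sketch is the inspection step at the end: because each word activates only a handful of dimensions, a human can examine which other words share a given active dimension and try to name the feature it captures.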