NMT-based Cross-lingual Document Embeddings

This paper investigates a cross-lingual document embedding method that improves on the current Neural Machine Translation framework based Document Vector (NTDV, or simply NV). NV is built with a self-attention mechanism under the neural machine translation (NMT) framework. In NV, each pair of parallel documents in different languages is projected onto the same shared layer of the model. However, the two NV embeddings of a parallel pair are not guaranteed to be similar. This paper adds a distance constraint to the NV training objective so that the two embeddings of a parallel document pair are required to be as close as possible. The new method is called constrained NV (cNV). In a cross-lingual document classification task, cNV performs as well as NV and outperforms other published methods that require forward-pass decoding. Unlike the previous NV, cNV needs no translator during testing, which makes the method lighter and more flexible.
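The constrained objective described above can be sketched as a standard NMT loss plus a weighted distance penalty between the two embeddings of a parallel pair. This is a minimal illustration, not the paper's implementation: the function names (`distance_penalty`, `cnv_loss`), the squared-Euclidean distance, and the weight `lam` are assumptions, since the abstract does not specify the distance metric or weighting.

```python
import math

def distance_penalty(emb_src, emb_tgt):
    """Squared Euclidean distance between the source- and target-side
    document embeddings of one parallel pair (assumed metric)."""
    return sum((a - b) ** 2 for a, b in zip(emb_src, emb_tgt))

def cnv_loss(nmt_loss, emb_src, emb_tgt, lam=1.0):
    """Hypothetical cNV objective: the usual NMT translation loss plus a
    distance constraint pulling the two embeddings of a parallel document
    pair together. lam trades off translation quality vs. embedding
    closeness."""
    return nmt_loss + lam * distance_penalty(emb_src, emb_tgt)

# Toy usage: orthogonal 2-d embeddings incur a penalty of 2.0,
# so with lam=0.5 the combined loss is 2.0 + 0.5 * 2.0 = 3.0.
loss = cnv_loss(2.0, [1.0, 0.0], [0.0, 1.0], lam=0.5)
```

As the pair's embeddings converge, the penalty vanishes and the objective reduces to the plain NMT loss, which is how the constraint encourages a shared cross-lingual embedding space without a separate alignment step.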
