Cross-Lingual Natural Language Inference
16 papers with code • 4 benchmarks • 2 datasets
Using data and models from a language with ample resources (e.g., English) to solve a natural language inference task in another, typically lower-resource, language.
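The task setup above can be illustrated with a small XNLI-style sketch: an English premise paired with a hypothesis in another language, labeled as entailment, neutral, or contradiction. The sentence pairs below are invented for illustration and are not drawn from any actual dataset:

```python
# Illustrative cross-lingual NLI examples (invented, not from XNLI):
# the premise is in a high-resource language, the hypothesis in another.
examples = [
    {
        "premise": "A man is playing a guitar on stage.",   # English
        "hypothesis": "Un hombre está tocando música.",     # Spanish: "A man is playing music."
        "label": "entailment",
    },
    {
        "premise": "A man is playing a guitar on stage.",
        "hypothesis": "El hombre está durmiendo.",          # Spanish: "The man is sleeping."
        "label": "contradiction",
    },
]

LABELS = ("entailment", "neutral", "contradiction")
assert all(ex["label"] in LABELS for ex in examples)
```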
Libraries
Use these libraries to find Cross-Lingual Natural Language Inference models and implementations.
Latest papers with no code
Robust Unsupervised Cross-Lingual Word Embedding using Domain Flow Interpolation
Further experiments on the downstream task of Cross-Lingual Natural Language Inference show that the proposed model achieves significant performance improvements for distant language pairs compared to state-of-the-art adversarial and non-adversarial models.
Alexa Teacher Model: Pretraining and Distilling Multi-Billion-Parameter Encoders for Natural Language Understanding Systems
We present results from a large-scale experiment on pretraining encoders with non-embedding parameter counts ranging from 700M to 9.3B, their subsequent distillation into smaller models ranging from 17M to 170M parameters, and their application to the Natural Language Understanding (NLU) component of a virtual assistant system.
Data Augmentation with Adversarial Training for Cross-Lingual NLI
Due to recent pretrained multilingual representation models, it has become feasible to exploit labeled data from one language to train a cross-lingual model that can then be applied to multiple new languages.
Soft Layer Selection with Meta-Learning for Zero-Shot Cross-Lingual Transfer
Multilingual pre-trained contextual embedding models (Devlin et al., 2019) have achieved impressive performance on zero-shot cross-lingual transfer tasks.
SILT: Efficient transformer training for inter-lingual inference
In this paper, we propose a new architecture called Siamese Inter-Lingual Transformer (SILT), to efficiently align multilingual embeddings for Natural Language Inference, allowing for unmatched language pairs to be processed.
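The "Siamese" idea named in the title can be sketched as a single shared encoder applied to sentences from either language, with the pair classified from the two resulting vectors. Note this toy linear encoder and the `[u; v; |u-v|; u*v]` feature scheme are generic NLI conventions assumed for illustration, not SILT's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_enc = 8, 4
W_enc = rng.normal(size=(d_in, d_enc))   # shared ("Siamese") encoder weights, toy linear encoder
W_cls = rng.normal(size=(4 * d_enc, 3))  # classifier over concatenated pair features

def encode(x):
    """The same shared weights embed inputs from either language."""
    return np.tanh(x @ W_enc)

def classify_pair(x1, x2):
    u, v = encode(x1), encode(x2)
    # Common NLI pair features: both vectors, their difference, their product.
    feats = np.concatenate([u, v, np.abs(u - v), u * v])
    logits = feats @ W_cls
    return int(np.argmax(logits))  # toy labels: 0=entailment, 1=neutral, 2=contradiction

pred = classify_pair(rng.normal(size=d_in), rng.normal(size=d_in))
assert pred in (0, 1, 2)
```

Because the encoder is shared, the two inputs need not be in the same language, which is what allows unmatched language pairs to be processed.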
Meta-Learning with MAML on Trees
We show that TreeMAML improves state-of-the-art results for cross-lingual Natural Language Inference.
On Learning Universal Representations Across Languages
Recent studies have demonstrated the overwhelming advantage of cross-lingual pre-trained models (PTMs), such as multilingual BERT and XLM, on cross-lingual NLP tasks.
Meemi: A Simple Method for Post-processing and Integrating Cross-lingual Word Embeddings
While monolingual word embeddings encode information about words in the context of a particular language, cross-lingual embeddings define a multilingual space where word embeddings from two or more languages are integrated together.
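One standard way to build such a multilingual space is to learn an orthogonal mapping between two monolingual embedding matrices over a seed dictionary, which has a closed-form SVD solution (orthogonal Procrustes). This is a generic alignment sketch on toy data, not the Meemi post-processing method itself:

```python
import numpy as np

def procrustes_align(X, Y):
    """Orthogonal W minimizing ||X @ W - Y||_F over aligned seed pairs.

    X: (n, d) source-language embeddings for n dictionary pairs.
    Y: (n, d) target-language embeddings for the same pairs.
    Closed-form solution: W = U @ Vt, where U, Vt come from the SVD of X.T @ Y.
    """
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Toy check: if Y is an exact rotation of X, alignment recovers the rotation.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))  # random orthogonal "true" mapping
Y = X @ Q
W = procrustes_align(X, Y)
assert np.allclose(X @ W, Y, atol=1e-8)
```

The orthogonality constraint preserves distances within the source space, so monolingual structure survives the projection into the shared space.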
Unicoder: A Universal Language Encoder by Pre-training with Multiple Cross-lingual Tasks
On XNLI, a 1.8% averaged accuracy improvement (on 15 languages) is obtained.
XLDA: Cross-Lingual Data Augmentation for Natural Language Inference and Question Answering
XLDA is in contrast to, and performs markedly better than, a more naive approach that aggregates examples in various languages in a way that each example is solely in one language.
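The contrast drawn above can be sketched as follows: rather than keeping each training example in a single language, one segment (here the hypothesis) is swapped for its translation, so a single example mixes languages. The pre-translated dictionary below is a hypothetical stand-in for a real machine-translation system:

```python
def xlda_augment(premise, hypothesis_by_lang, label):
    """XLDA-style mixing: pair the premise with the hypothesis rendered in
    every available language, instead of keeping both segments monolingual."""
    return [
        {"premise": premise, "hypothesis": hyp, "hyp_lang": lang, "label": label}
        for lang, hyp in hypothesis_by_lang.items()
    ]

# Hypothetical pre-translated hypotheses (a real setup would call an MT system).
hyps = {
    "en": "The man is asleep.",
    "de": "Der Mann schläft.",
    "es": "El hombre está dormido.",
}
augmented = xlda_augment("A man sleeps on a bench.", hyps, "entailment")
assert len(augmented) == 3
```

Each augmented example keeps its original label, since translation is assumed to preserve the entailment relation.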