Cross-Lingual Natural Language Inference

16 papers with code • 4 benchmarks • 2 datasets

Using data and models from a language for which ample resources are available (e.g., English) to solve a natural language inference task in another, typically lower-resource, language.
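
As a concrete illustration, here is a minimal inference sketch using the Hugging Face transformers library. The checkpoint name is an assumption: joeddav/xlm-roberta-large-xnli is one publicly released XLM-R model fine-tuned for XNLI; any multilingual NLI model would work. It classifies a premise/hypothesis pair in a language other than the one that supplied most of the NLI supervision:

```python
# Minimal sketch of cross-lingual NLI inference with Hugging Face transformers.
# The model name is an assumption: joeddav/xlm-roberta-large-xnli is one public
# XLM-R checkpoint fine-tuned on XNLI; any multilingual NLI model works.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "joeddav/xlm-roberta-large-xnli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# German premise/hypothesis pair, even though NLI supervision was largely English.
premise = "Der Hund schläft auf dem Sofa."   # "The dog sleeps on the sofa."
hypothesis = "Ein Tier ruht sich aus."       # "An animal is resting."

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1).squeeze()

# Label order follows the model's own config (contradiction/neutral/entailment).
for label_id, label in model.config.id2label.items():
    print(f"{label}: {probs[label_id]:.3f}")
```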


Latest papers with no code

Robust Unsupervised Cross-Lingual Word Embedding using Domain Flow Interpolation

no code yet • 7 Oct 2022

Further experiments on the downstream task of cross-lingual Natural Language Inference show that the proposed model achieves significant performance improvements for distant language pairs compared to state-of-the-art adversarial and non-adversarial models.

Alexa Teacher Model: Pretraining and Distilling Multi-Billion-Parameter Encoders for Natural Language Understanding Systems

no code yet • 15 Jun 2022

We present results from a large-scale experiment on pretraining encoders with non-embedding parameter counts ranging from 700M to 9.3B, their subsequent distillation into smaller models ranging from 17M to 170M parameters, and their application to the Natural Language Understanding (NLU) component of a virtual assistant system.
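
The excerpt does not spell out the distillation step itself; the following is a generic sketch of logit-level knowledge distillation (temperature-scaled soft targets plus cross-entropy on gold labels), not the paper's specific recipe. The teacher/student logits and labels are hypothetical placeholders:

```python
# Generic sketch of logit distillation from a large teacher encoder into a
# smaller student; NOT the Alexa Teacher Model's exact recipe. All inputs
# (teacher_logits, student_logits, labels) are hypothetical placeholders.
import torch.nn.functional as F

def distillation_loss(teacher_logits, student_logits, labels, T=2.0, alpha=0.5):
    # Soft-target term: KL divergence between temperature-scaled distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # standard temperature-squared scaling
    # Hard-target term: ordinary cross-entropy on the gold labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```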

Data Augmentation with Adversarial Training for Cross-Lingual NLI

no code yet • ACL 2021

Thanks to recent pretrained multilingual representation models, it has become feasible to exploit labeled data from one language to train a cross-lingual model that can then be applied to multiple new languages.
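
A minimal sketch of that transfer recipe, separate from the paper's adversarial augmentation: fine-tune a multilingual encoder on English XNLI, then evaluate it directly on another language. The model and dataset identifiers below are common public choices assumed for illustration, not the paper's setup:

```python
# Sketch: fine-tune xlm-roberta-base on English XNLI, then evaluate on French
# with no French training data. Identifiers are assumptions, not the paper's.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("xlm-roberta-base")

def encode(batch):
    return tok(batch["premise"], batch["hypothesis"], truncation=True)

train = load_dataset("xnli", "en", split="train").map(encode, batched=True)
test_fr = load_dataset("xnli", "fr", split="test").map(encode, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=3)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="xnli-en",
                           per_device_train_batch_size=32,
                           num_train_epochs=2),
    train_dataset=train,
    tokenizer=tok,  # enables default dynamic padding of the batches
)
trainer.train()
# Reports eval loss on French; add compute_metrics for zero-shot accuracy.
print(trainer.evaluate(eval_dataset=test_fr))
```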

Soft Layer Selection with Meta-Learning for Zero-Shot Cross-Lingual Transfer

no code yet • ACL (MetaNLP) 2021

Multilingual pre-trained contextual embedding models (Devlin et al., 2019) have achieved impressive performance on zero-shot cross-lingual transfer tasks.

SILT: Efficient transformer training for inter-lingual inference

no code yet • 17 Mar 2021

In this paper, we propose a new architecture called Siamese Inter-Lingual Transformer (SILT) to efficiently align multilingual embeddings for Natural Language Inference, allowing unmatched language pairs to be processed.
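
The excerpt names the architecture but not its wiring. Below is a generic siamese sentence-pair classifier in the same spirit (one shared encoder embeds premise and hypothesis independently, and a small head classifies the combined representation); this is an illustration of the siamese pattern, not SILT's exact design:

```python
# Generic siamese sentence-pair NLI head: a single shared multilingual encoder
# embeds premise and hypothesis independently; a linear head classifies the
# combined representation. An illustration, not SILT's exact design.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SiameseNLI(nn.Module):
    def __init__(self, encoder_name="xlm-roberta-base", num_labels=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)  # shared weights
        hidden = self.encoder.config.hidden_size
        # Classic [u; v; |u - v|] combination of the two sentence embeddings.
        self.classifier = nn.Linear(3 * hidden, num_labels)

    def embed(self, enc):
        out = self.encoder(**enc).last_hidden_state
        mask = enc["attention_mask"].unsqueeze(-1)
        return (out * mask).sum(1) / mask.sum(1)  # mean pooling over tokens

    def forward(self, premise_enc, hypothesis_enc):
        u, v = self.embed(premise_enc), self.embed(hypothesis_enc)
        return self.classifier(torch.cat([u, v, (u - v).abs()], dim=-1))

tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = SiameseNLI()
p = tok("A dog sleeps on the sofa.", return_tensors="pt")
h = tok("Un animal se repose.", return_tensors="pt")  # hypothesis in French
print(model(p, h).shape)  # torch.Size([1, 3])
```

Encoding the two sentences separately is what lets unmatched language pairs be fed in: nothing in the forward pass requires premise and hypothesis to share a language.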

Meta-Learning with MAML on Trees

no code yet • 8 Mar 2021

We show that TreeMAML improves the state of the art results for cross-lingual Natural Language Inference.

On Learning Universal Representations Across Languages

no code yet • ICLR 2021

Recent studies have demonstrated the overwhelming advantage of cross-lingual pre-trained models (PTMs), such as multilingual BERT and XLM, on cross-lingual NLP tasks.

Meemi: A Simple Method for Post-processing and Integrating Cross-lingual Word Embeddings

no code yet • 16 Oct 2019

While monolingual word embeddings encode information about words in the context of a particular language, cross-lingual embeddings define a multilingual space where word embeddings from two or more languages are integrated together.
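
As context for what "defining a multilingual space" means in practice, here is the standard linear-mapping step (orthogonal Procrustes over a seed dictionary) that maps one language's embeddings into another's. This is the classic alignment baseline that post-processing methods such as Meemi refine, not Meemi itself; X and Y are toy placeholder matrices:

```python
# Classic orthogonal Procrustes alignment of two monolingual embedding spaces
# via a seed dictionary of translation pairs; the baseline that methods like
# Meemi post-process, not Meemi itself. X and Y are hypothetical toy matrices.
import numpy as np

rng = np.random.default_rng(0)
d, n_pairs = 300, 5000
X = rng.standard_normal((n_pairs, d))  # source-language vectors (e.g., es)
Y = rng.standard_normal((n_pairs, d))  # their translations' vectors (e.g., en)

# W* = argmin over orthogonal W of ||X W - Y||_F, solved in closed form by SVD.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

aligned = X @ W  # source vectors mapped into the target space
# Nearest-neighbour search between `aligned` and the target vocabulary then
# yields word translations in the shared multilingual space.
```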

Unicoder: A Universal Language Encoder by Pre-training with Multiple Cross-lingual Tasks

no code yet • IJCNLP 2019

On XNLI, 1. 8% averaged accuracy improvement (on 15 languages) is obtained.

XLDA: Cross-Lingual Data Augmentation for Natural Language Inference and Question Answering

no code yet • ICLR 2020

XLDA stands in contrast to, and performs markedly better than, a more naive approach that aggregates examples across languages such that each example is entirely in a single language.
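
A toy sketch of that contrast: the naive baseline pools fully monolingual examples, while XLDA-style augmentation swaps one segment of an example (here the hypothesis) into another language. The translations below are hardcoded placeholders; the paper uses machine-translated segments:

```python
# Toy contrast between naive multilingual pooling and XLDA-style cross-lingual
# augmentation. Translations are hardcoded placeholders, not real MT output.
en = {"premise": "A dog sleeps on the sofa.",
      "hypothesis": "An animal is resting.", "label": "entailment"}
de = {"premise": "Ein Hund schläft auf dem Sofa.",
      "hypothesis": "Ein Tier ruht sich aus.", "label": "entailment"}

# Naive aggregation: every training example stays in a single language.
naive = [en, de]

# XLDA-style augmentation: mix languages *within* one example by swapping in
# a translated segment (English premise paired with a German hypothesis).
xlda = [
    en,
    {"premise": en["premise"], "hypothesis": de["hypothesis"],
     "label": en["label"]},
]

for ex in xlda:
    print(ex["premise"], "||", ex["hypothesis"], "->", ex["label"])
```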