Few-Shot NLI

6 papers with code • 3 benchmarks • 2 datasets

Few-shot natural language inference (NLI) is the task of deciding whether a hypothesis is entailed by, contradicts, or is neutral with respect to a premise when only a handful of labeled training examples are available.

Most implemented papers

Zero-Shot Cross-Lingual Transfer with Meta Learning

copenlu/X-MAML EMNLP 2020

We show that this challenging setup (zero-shot cross-lingual transfer) can be approached using meta-learning, where, in addition to training a source language model, another model learns to select which training instances are the most beneficial to the first.
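
As a rough illustration of the meta-learning idea only (not the X-MAML implementation), the sketch below runs a first-order MAML-style loop on a toy classifier: adapt on a support batch, then let the query loss drive the meta-update. The toy linear model, batch shapes, learning rates, and the use of torch.func (recent PyTorch) are all assumptions made for the example.

```python
import torch
import torch.nn as nn
from torch.func import functional_call

model = nn.Linear(16, 3)                      # stand-in for an NLI encoder + classification head
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
inner_lr = 0.1

def adapt(params, x, y):
    """One inner-loop gradient step on a support batch."""
    loss = loss_fn(functional_call(model, params, (x,)), y)
    grads = torch.autograd.grad(loss, list(params.values()))
    return {k: p - inner_lr * g for (k, p), g in zip(params.items(), grads)}

for _ in range(100):
    # Random tensors stand in for (auxiliary-language) support and query NLI batches.
    xs, ys = torch.randn(8, 16), torch.randint(0, 3, (8,))
    xq, yq = torch.randn(8, 16), torch.randint(0, 3, (8,))
    adapted = adapt(dict(model.named_parameters()), xs, ys)
    # Outer step: the query loss under the adapted parameters updates the shared model.
    meta_loss = loss_fn(functional_call(model, adapted, (xq,)), yq)
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()
```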

Language Models for Lexical Inference in Context

mnschmit/lm-lexical-inference EACL 2021

Lexical inference in context (LIiC) is the task of recognizing textual entailment between two very similar sentences, i.e., sentences that only differ in one expression.
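
To make the task concrete, here is a minimal sketch that scores one such minimally different sentence pair with an off-the-shelf NLI checkpoint from Hugging Face; the model choice, the example pair, and the assumed label order are for illustration and are not the paper's setup.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

premise = "Google acquired YouTube in 2006."
hypothesis = "Google purchased YouTube in 2006."   # differs in a single expression

inputs = tok(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)[0]

# Assumed label order for this checkpoint: contradiction, neutral, entailment.
print(dict(zip(["contradiction", "neutral", "entailment"], probs.tolist())))
```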

Continuous Entailment Patterns for Lexical Inference in Context

mnschmit/conan EMNLP 2021

If we allow for tokens outside the PLM's vocabulary, patterns can be adapted more flexibly to a PLM's idiosyncrasies.
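
A minimal sketch of what pattern tokens outside the PLM's vocabulary can mean in practice: the pattern positions are trainable embedding vectors prepended to the ordinary token embeddings. The model name, pattern length, and initialization below are assumptions, and accessing `.embeddings.word_embeddings` presumes a RoBERTa-style model; this is not the CONAN implementation.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-base")
plm = AutoModel.from_pretrained("roberta-base")
hidden = plm.config.hidden_size

n_pattern = 4                                            # length of the continuous pattern
soft_pattern = nn.Parameter(torch.randn(n_pattern, hidden) * 0.02)

enc = tok("Google acquired YouTube.", return_tensors="pt")
tok_embeds = plm.embeddings.word_embeddings(enc["input_ids"])            # (1, T, H)
inputs_embeds = torch.cat([soft_pattern.unsqueeze(0), tok_embeds], dim=1)
attn_mask = torch.cat(
    [torch.ones(1, n_pattern, dtype=torch.long), enc["attention_mask"]], dim=1
)

out = plm(inputs_embeds=inputs_embeds, attention_mask=attn_mask)
# During training, `soft_pattern` (and optionally the PLM) is optimized end to end.
```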

STraTA: Self-Training with Task Augmentation for Better Few-shot Learning

google-research/google-research EMNLP 2021

Despite their recent successes in tackling many NLP tasks, large-scale pre-trained language models do not perform as well in few-shot settings where only a handful of training examples are available.
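
The self-training half of the recipe can be summarized as a schematic loop: a few-shot model pseudo-labels unlabeled text, and the model is retrained on the confident pseudo-labels. The sketch below uses a TF-IDF plus logistic-regression stand-in for the pre-trained model, and the threshold and number of rounds are arbitrary; none of this is the STraTA code.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def self_train(texts, labels, unlabeled_texts, rounds=3, threshold=0.9):
    """Schematic self-training: pseudo-label unlabeled text, retrain on confident labels."""
    unlabeled = np.array(unlabeled_texts, dtype=object)
    pseudo_texts, pseudo_labels = [], []
    for _ in range(rounds):
        # (Re)train on the gold few-shot examples plus the current pseudo-labels.
        model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
        model.fit(list(texts) + pseudo_texts, list(labels) + pseudo_labels)
        # Pseudo-label the unlabeled pool and keep only confident predictions.
        probs = model.predict_proba(unlabeled)               # shape (N, num_labels)
        conf, preds = probs.max(axis=1), probs.argmax(axis=1)
        keep = conf >= threshold
        pseudo_texts, pseudo_labels = list(unlabeled[keep]), list(preds[keep])
    return model
```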

Investigating the Effect of Natural Language Explanations on Out-of-Distribution Generalization in Few-shot NLI

chicagohai/hans-explanations EMNLP (insights) 2021

Although neural models have shown strong performance in datasets such as SNLI, they lack the ability to generalize out-of-distribution (OOD).

Instructive Decoding: Instruction-Tuned Large Language Models are Self-Refiner from Noisy Instructions

joonkeekim/Instructive-Decoding 1 Nov 2023

Notably, utilizing 'opposite' as the noisy instruction in Instructive Decoding (ID), which exhibits the maximum divergence from the original instruction, consistently produces the most significant performance gains across multiple models and tasks.
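
A hedged sketch of the core contrast: next-token logits under the original instruction are offset by the logits obtained under the noisy ('opposite') instruction. The prompt formatting, the epsilon value, and the function name are illustrative assumptions, not the released implementation.

```python
import torch

def instructive_decoding_logits(model, tokenizer, instruction, noisy_instruction,
                                x, epsilon=0.3):
    """Return contrast-adjusted next-token logits: original minus epsilon * noisy."""
    def next_token_logits(instr):
        ids = tokenizer(instr + "\n" + x, return_tensors="pt").input_ids
        with torch.no_grad():
            return model(ids).logits[:, -1, :]           # logits for the next token

    base = next_token_logits(instruction)                 # original instruction
    noisy = next_token_logits(noisy_instruction)          # e.g. the 'opposite' instruction
    return base - epsilon * noisy                         # contrastive adjustment
```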