SWAG (Situations With Adversarial Generations)

Introduced by Zellers et al. in SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference

Given a partial description like "she opened the hood of the car," humans can reason about the situation and anticipate what might come next ("then, she examined the engine"). SWAG (Situations With Adversarial Generations) is a large-scale dataset for this task of grounded commonsense inference, unifying natural language inference and physically grounded reasoning.

The dataset consists of 113k multiple-choice questions about grounded situations. Each question is a video caption from LSMDC or ActivityNet Captions, paired with four answer choices about what might happen next in the scene. The correct answer is the real video caption for the next event in the video; the three incorrect answers are adversarially generated and human-verified so that they fool machines but not humans. The authors intend SWAG to serve as a benchmark for evaluating grounded commonsense NLI and for learning representations.
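The structure described above can be sketched as a small data model: each item pairs a context caption with four candidate endings, exactly one of which is the real next-event caption. This is an illustrative sketch only; the field names (`context`, `endings`, `label`) are assumptions, not the official release schema, and the example text below paraphrases the paper's running example rather than quoting an actual dataset row.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SwagExample:
    """One SWAG-style item: a grounded context plus four candidate endings.

    Field names are illustrative and may differ from the released files."""
    context: str         # video-caption description of the situation
    endings: List[str]   # four candidate continuations
    label: int           # index (0-3) of the real next-event caption

def accuracy(examples: List[SwagExample], predictions: List[int]) -> float:
    """Fraction of items where the predicted ending index matches the gold label."""
    correct = sum(1 for ex, pred in zip(examples, predictions) if ex.label == pred)
    return correct / len(examples)

# A hypothetical item in the spirit of the paper's running example.
example = SwagExample(
    context="She opened the hood of the car.",
    endings=[
        "Then, she examined the engine.",   # real next caption (gold)
        "Then, she closed the umbrella.",   # adversarial distractor
        "Then, she baked a cake.",          # adversarial distractor
        "Then, she started singing.",       # adversarial distractor
    ],
    label=0,
)
```

Evaluation on this task reduces to classification accuracy over the four choices, as `accuracy` shows; chance performance is 25%.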

