Testing Pre-trained Language Models' Understanding of Distributivity via Causal Mediation Analysis

11 Sep 2022 · Pangbo Ban, Yifan Jiang, Tianran Liu, Shane Steinert-Threlkeld

To what extent do pre-trained language models grasp semantic knowledge regarding the phenomenon of distributivity? In this paper, we introduce DistNLI, a new diagnostic dataset for natural language inference that targets the semantic difference arising from distributivity, and employ the causal mediation analysis framework to quantify the model behavior and explore the underlying mechanism in this semantically-related task. We find that the extent of models' understanding is associated with model size and vocabulary size. We also provide insights into how models encode such high-level semantic knowledge.
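The causal mediation analysis framework mentioned in the abstract can be illustrated with a toy sketch. This is not the paper's code: the model, the sentence pair, and the activation values are all hypothetical stand-ins. The idea is to compare a model's output on a distributivity minimal pair (total effect) with its output when only an internal component's activation is swapped to its counterfactual value (indirect effect):

```python
# Toy sketch of causal mediation analysis (hypothetical, not the paper's code).
# Total effect (TE): how much swapping a distributivity marker (e.g.
# "each" vs. "together") changes the model's entailment probability.
# Indirect effect (IE): the change when only one internal component's
# activation is patched to its counterfactual value.

def toy_model(sentence, patched_hidden=None):
    """Stand-in for an NLI model: input -> mediator activation -> entailment prob."""
    hidden = 0.8 if "each" in sentence else 0.3  # hypothetical mediator activation
    if patched_hidden is not None:
        hidden = patched_hidden                  # intervene on the mediator
    return 0.2 + 0.7 * hidden                    # toy entailment probability

base = "The boys each lifted a piano."
alt = "The boys lifted a piano together."

p_base = toy_model(base)
p_alt = toy_model(alt)
total_effect = p_alt / p_base - 1  # relative change in output probability

# Indirect effect: run the base input, but patch the mediator with the
# activation it would take under the alternate input.
alt_hidden = toy_model.__defaults__ and 0.3  # activation for the alt sentence
p_patched = toy_model(base, patched_hidden=alt_hidden)
indirect_effect = p_patched / p_base - 1

print(total_effect, indirect_effect)
```

In this toy model the mediator carries the entire effect, so the indirect effect equals the total effect; in a real transformer, the indirect effects of individual neurons or heads sum only approximately to the total effect, and their relative sizes indicate where the distributivity distinction is encoded.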


Datasets


Introduced in the Paper:

DistNLI

Used in the Paper:

MultiNLI

