Search Results for author: Emily McMilin

Found 3 papers, 2 papers with code

Underspecification in Language Modeling Tasks: A Causality-Informed Study of Gendered Pronoun Resolution

2 code implementations • 30 Sep 2022 • Emily McMilin

Modern language modeling tasks are often underspecified: for a given token prediction, many words may satisfy the user's intent of producing natural language at inference time, yet only one word will minimize the task's loss function at training time.

Language Modelling • Selection bias
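The abstract's claim that many acceptable words compete with a single loss-minimizing word can be made concrete with a minimal sketch of one-hot cross-entropy training. The tokens and probabilities below are hypothetical illustrations, not material from the paper.

```python
import math

# Minimal sketch (hypothetical example): a model's next-token distribution for
# "The doctor said ___ would be late". Several pronouns read as natural
# language, so any of them could satisfy the user's intent at inference time.
probs = {"she": 0.45, "he": 0.45, "they": 0.10}

# At training time, cross-entropy is computed against the single token that
# happened to appear in the training example, assumed here to be "he".
observed = "he"
print(f"loss with spread-out distribution: {-math.log(probs[observed]):.3f}")

# Probability mass placed on the other acceptable tokens only raises the loss,
# so the loss-minimizing prediction concentrates on the one observed word
# rather than the full set of acceptable words: the task is underspecified.
peaked = {"she": 0.01, "he": 0.98, "they": 0.01}
print(f"loss with peaked distribution:    {-math.log(peaked[observed]):.3f}")
```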

Selection Collider Bias in Large Language Models

1 code implementation • 22 Aug 2022 • Emily McMilin

In this paper we motivate the causal mechanisms behind sample selection induced collider bias (selection collider bias) that can cause Large Language Models (LLMs) to learn unconditional dependence between entities that are unconditionally independent in the real world.
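The collider mechanism named in the abstract can be illustrated with a toy selection simulation; the variables, selection threshold, and noise level below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two traits that are unconditionally independent in the "real world"
# (hypothetical variables, not the paper's dataset).
x = rng.normal(size=n)
y = rng.normal(size=n)

# Sample selection acts like conditioning on a collider: an example enters the
# training corpus only if x + y (plus noise) clears a threshold.
selected = (x + y + rng.normal(scale=0.5, size=n)) > 1.0

print("correlation in the full population:", round(np.corrcoef(x, y)[0, 1], 3))
print("correlation in the selected sample:",
      round(np.corrcoef(x[selected], y[selected])[0, 1], 3))

# The selected sample shows a clear negative correlation, so a model trained
# only on the selected data can learn a dependence between variables that are
# independent in the underlying population.
```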

Selection Bias Induced Spurious Correlations in Large Language Models

No code implementations • 18 Jul 2022 • Emily McMilin

In this work we show how large language models (LLMs) can learn statistical dependencies between otherwise unconditionally independent variables due to dataset selection bias.

Selection bias
