Search Results for author: Victoria Lin

Found 8 papers, 4 papers with code

Optimizing Language Models for Human Preferences is a Causal Inference Problem

no code implementations · 22 Feb 2024 · Victoria Lin, Eli Ben-Michael, Louis-Philippe Morency

In this paper, we present an initial exploration of language model optimization for human preferences from direct outcome datasets, where each sample consists of a text and an associated numerical outcome measuring the reader's response.

Causal Inference · Language Modelling +1

Text-Transport: Toward Learning Causal Effects of Natural Language

1 code implementation · 31 Oct 2023 · Victoria Lin, Louis-Philippe Morency, Eli Ben-Michael

To address this issue, we leverage the notion of distribution shift to describe an estimator that transports causal effects between domains, bypassing the need for strong assumptions in the target domain.

Attribute · Causal Inference +1

SenteCon: Leveraging Lexicons to Learn Human-Interpretable Language Representations

1 code implementation · 24 May 2023 · Victoria Lin, Louis-Philippe Morency

Moreover, we find that SenteCon outperforms existing interpretable language representations with respect to both its downstream performance and its agreement with human characterizations of the text.

Decision Making

Counterfactual Augmentation for Multimodal Learning Under Presentation Bias

1 code implementation · 23 May 2023 · Victoria Lin, Louis-Philippe Morency, Dimitrios Dimitriadis, Srinagesh Sharma

In real-world machine learning systems, labels are often derived from user behaviors that the system wishes to encourage.

Counterfactual

SeedBERT: Recovering Annotator Rating Distributions from an Aggregated Label

no code implementations · 23 Nov 2022 · Aneesha Sampath, Victoria Lin, Louis-Philippe Morency

However, machine learning datasets commonly have just one "ground truth" label for each sample, so models trained on these labels may not perform well on tasks that are subjective in nature.

Stage-wise Fine-tuning for Graph-to-Text Generation

1 code implementation · ACL 2021 · Qingyun Wang, Semih Yavuz, Victoria Lin, Heng Ji, Nazneen Rajani

Graph-to-text generation has benefited from pre-trained language models (PLMs) in achieving better performance than structured graph encoders.

Ranked #3 on Data-to-Text Generation on WebNLG (using extra training data)

Data-to-Text Generation · KB-to-Language Generation +2

Context-Dependent Models for Predicting and Characterizing Facial Expressiveness

no code implementations · 10 Dec 2019 · Victoria Lin, Jeffrey M. Girard, Louis-Philippe Morency

In recent years, extensive research has emerged in affective computing on topics like automatic emotion recognition and determining the signals that characterize individual emotions.

Emotion Recognition
