no code implementations • 22 Feb 2024 • Victoria Lin, Eli Ben-Michael, Louis-Philippe Morency
In this paper, we present an initial exploration of language model optimization for human preferences from direct outcome datasets, where each sample consists of a text and an associated numerical outcome measuring the reader's response.
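The sample structure described above (a text paired with a numerical reader-response outcome) can be sketched minimally as follows; the class and field names are illustrative assumptions, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class OutcomeSample:
    """One sample from a direct outcome dataset: a text paired with a
    numerical outcome measuring the reader's response (e.g. a rating).
    Names are hypothetical, for illustration only."""
    text: str
    outcome: float

# A toy dataset of (text, outcome) pairs.
dataset = [
    OutcomeSample("Thanks so much for your help!", 4.5),
    OutcomeSample("This answer is wrong.", 1.0),
]

# Mean outcome: the kind of aggregate a preference objective might target.
mean_outcome = sum(s.outcome for s in dataset) / len(dataset)
```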
1 code implementation • 31 Oct 2023 • Victoria Lin, Louis-Philippe Morency, Eli Ben-Michael
To address this issue, we leverage the notion of distribution shift to describe an estimator that transports causal effects between domains, bypassing the need for strong assumptions in the target domain.
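One generic way to operationalize effect transport under distribution shift is importance weighting: reweight source-domain samples by the density ratio between target and source covariate distributions. The sketch below illustrates that general idea with toy numbers; it is not the paper's estimator, and all values are assumptions:

```python
def transported_effect(effects, weights):
    """Self-normalized importance-weighted mean of per-sample effects.

    effects: per-sample effect estimates observed in the source domain.
    weights: density ratios w(x) = p_target(x) / p_source(x), toy values here.
    """
    total_w = sum(weights)
    return sum(e * w for e, w in zip(effects, weights)) / total_w

# Toy per-sample effects from the source domain...
effects = [1.0, 2.0, 3.0]
# ...and hypothetical density ratios shifting mass toward the third sample.
weights = [0.5, 1.0, 1.5]

est = transported_effect(effects, weights)
```

Because the weights up-weight samples more typical of the target domain, the transported estimate differs from the unweighted source mean.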
1 code implementation • 24 May 2023 • Victoria Lin, Louis-Philippe Morency
Moreover, we find that SenteCon outperforms existing interpretable language representations in both downstream performance and agreement with human characterizations of the text.
1 code implementation • 23 May 2023 • Victoria Lin, Louis-Philippe Morency, Dimitrios Dimitriadis, Srinagesh Sharma
In real-world machine learning systems, labels are often derived from user behaviors that the system wishes to encourage.
no code implementations • 23 Nov 2022 • Aneesha Sampath, Victoria Lin, Louis-Philippe Morency
However, machine learning datasets commonly have just one "ground truth" label for each sample, so models trained on these labels may not perform well on tasks that are subjective in nature.
1 code implementation • ACL 2021 • Qingyun Wang, Semih Yavuz, Victoria Lin, Heng Ji, Nazneen Rajani
Graph-to-text generation has benefited from pre-trained language models (PLMs) in achieving better performance than structured graph encoders.
Ranked #3 on Data-to-Text Generation on WebNLG (using extra training data)
no code implementations • 10 Dec 2019 • Victoria Lin, Jeffrey M. Girard, Louis-Philippe Morency
In recent years, extensive research has emerged in affective computing on topics such as automatic emotion recognition and identifying the signals that characterize individual emotions.