Search Results for author: Jillian Fisher

Found 6 papers, 4 papers with code

JAMDEC: Unsupervised Authorship Obfuscation using Constrained Decoding over Small Language Models

1 code implementation • 13 Feb 2024 • Jillian Fisher, Ximing Lu, JaeHun Jung, Liwei Jiang, Zaid Harchaoui, Yejin Choi

The permanence of online content, combined with enhanced authorship identification techniques, calls for stronger computational methods to protect the identity and privacy of online authors when needed, e.g., in blind reviews of scientific papers, anonymous online reviews, or anonymous interactions in mental health forums.
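
To make the title's idea of constrained decoding over a small language model more concrete, below is a minimal sketch of logit masking during greedy decoding, using GPT-2 as a stand-in small LM and a hypothetical list of banned style-marker words. It illustrates only the general constrained-decoding mechanism, not the JAMDEC algorithm itself.

```python
# Minimal sketch: greedy decoding with a hard lexical constraint (banned tokens),
# using GPT-2 as a stand-in "small language model". Illustrative only; this is
# not the JAMDEC method, which uses a richer constrained search with quality checks.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

banned_words = ["obviously", "basically"]  # hypothetical style markers to avoid
banned_ids = {tid for w in banned_words for tid in tokenizer.encode(" " + w)}

prompt = "The results of the study suggest that"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    for _ in range(20):
        logits = model(input_ids).logits[0, -1]       # next-token scores
        logits[list(banned_ids)] = float("-inf")      # mask constrained tokens
        next_id = torch.argmax(logits).view(1, 1)     # greedy pick among allowed tokens
        input_ids = torch.cat([input_ids, next_id], dim=1)

print(tokenizer.decode(input_ids[0]))
```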

A Roadmap to Pluralistic Alignment

1 code implementation • 7 Feb 2024 • Taylor Sorensen, Jared Moore, Jillian Fisher, Mitchell Gordon, Niloofar Mireshghallah, Christopher Michael Rytting, Andre Ye, Liwei Jiang, Ximing Lu, Nouha Dziri, Tim Althoff, Yejin Choi

We identify and formalize three possible ways to define and operationalize pluralism in AI systems: 1) Overton pluralistic models that present a spectrum of reasonable responses; 2) Steerably pluralistic models that can steer to reflect certain perspectives; and 3) Distributionally pluralistic models that are well-calibrated to a given population in distribution.

The Generative AI Paradox: "What It Can Create, It May Not Understand"

no code implementations • 31 Oct 2023 • Peter West, Ximing Lu, Nouha Dziri, Faeze Brahman, Linjie Li, Jena D. Hwang, Liwei Jiang, Jillian Fisher, Abhilasha Ravichander, Khyathi Chandu, Benjamin Newman, Pang Wei Koh, Allyson Ettinger, Yejin Choi

Specifically, we propose and test the Generative AI Paradox hypothesis: generative models, having been trained directly to reproduce expert-like outputs, acquire generative capabilities that are not contingent upon -- and can therefore exceed -- their ability to understand those same types of outputs.

Impossible Distillation: from Low-Quality Model to High-Quality Dataset & Model for Summarization and Paraphrasing

no code implementations • 26 May 2023 • JaeHun Jung, Peter West, Liwei Jiang, Faeze Brahman, Ximing Lu, Jillian Fisher, Taylor Sorensen, Yejin Choi

We present Impossible Distillation, a novel framework for paraphrasing and sentence summarization that distills a high-quality dataset and model from a low-quality teacher that itself cannot perform these tasks.

Paraphrase Generation · Sentence · +1
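
As a rough, hedged illustration of the generate-then-filter idea behind distilling paraphrase data from a small off-the-shelf LM, the sketch below samples two continuations of the same context from GPT-2 and keeps the pair only if a crude word-overlap heuristic judges them paraphrase-like. The model choice, the threshold, and the filter itself are assumptions for illustration; the paper's actual filtering criteria and the subsequent fine-tuning of a student model are not shown.

```python
# Minimal sketch of a generate-then-filter loop: sample two continuations of the
# same context from a small off-the-shelf LM and keep the pair only if a crude
# filter judges them paraphrase-like. The overlap filter is a hypothetical
# stand-in, not the framework's real filtering criteria.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

context = "In short, the committee decided that"
input_ids = tokenizer.encode(context, return_tensors="pt")

def sample_continuation(seed: int) -> str:
    torch.manual_seed(seed)
    out = model.generate(input_ids, do_sample=True, top_p=0.9,
                         max_new_tokens=15, pad_token_id=tokenizer.eos_token_id)
    return tokenizer.decode(out[0, input_ids.shape[1]:]).strip()

def paraphrase_like(a: str, b: str) -> bool:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    overlap = len(wa & wb) / max(1, len(wa | wb))
    return a != b and overlap > 0.4  # crude stand-in for real quality filters

pairs = []
for seed in range(0, 10, 2):
    s1, s2 = sample_continuation(seed), sample_continuation(seed + 1)
    if paraphrase_like(s1, s2):
        pairs.append((s1, s2))

print(f"kept {len(pairs)} candidate pairs")
```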

Inference-Time Policy Adapters (IPA): Tailoring Extreme-Scale LMs without Fine-tuning

1 code implementation • 24 May 2023 • Ximing Lu, Faeze Brahman, Peter West, JaeHun Jung, Khyathi Chandu, Abhilasha Ravichander, Lianhui Qin, Prithviraj Ammanabrolu, Liwei Jiang, Sahana Ramnath, Nouha Dziri, Jillian Fisher, Bill Yuchen Lin, Skyler Hallinan, Xiang Ren, Sean Welleck, Yejin Choi

While extreme-scale language models have demonstrated exceptional performance on a variety of language tasks, the degree of control over these language models through pure prompting can often be limited.

GPT-4 · Language Modelling · +2
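
The entry above is about steering an extreme-scale model without fine-tuning it. One general way to do this is to adjust the frozen model's next-token logits with those of a much smaller policy model at decoding time; the sketch below shows only that combination step, using gpt2 and distilgpt2 (which share a vocabulary) as hypothetical stand-ins and an arbitrary mixing weight. The actual IPA method additionally trains the adapter with reinforcement learning, which is not shown here.

```python
# Minimal sketch of decode-time logit combination: a frozen "base" LM's next-token
# logits are adjusted by a second, smaller policy model that shares its vocabulary.
# gpt2 / distilgpt2 and the mixing weight are assumptions for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
base = AutoModelForCausalLM.from_pretrained("gpt2").eval()            # frozen base LM
adapter = AutoModelForCausalLM.from_pretrained("distilgpt2").eval()   # small policy model

alpha = 0.5  # hypothetical mixing weight for the adapter's contribution
input_ids = tokenizer.encode("Write a polite product review:", return_tensors="pt")

with torch.no_grad():
    for _ in range(20):
        base_logits = base(input_ids).logits[0, -1]
        adapter_logits = adapter(input_ids).logits[0, -1]
        combined = base_logits + alpha * adapter_logits   # decode-time combination
        next_id = torch.argmax(combined).view(1, 1)
        input_ids = torch.cat([input_ids, next_id], dim=1)

print(tokenizer.decode(input_ids[0]))
```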

Statistical and Computational Guarantees for Influence Diagnostics

1 code implementation • 8 Dec 2022 • Jillian Fisher, Lang Liu, Krishna Pillutla, Yejin Choi, Zaid Harchaoui

Influence diagnostics such as influence functions and approximate maximum influence perturbations are popular in machine learning and in AI domain applications.
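
For context, the sketch below shows the classical first-order influence-function approximation to the effect of deleting one training point in ordinary least squares, compared against exact leave-one-out refitting. The toy data and the least-squares setting are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal sketch: the first-order influence-function approximation to the
# leave-one-out parameter change in least-squares regression, compared against
# exact refitting. Illustrative toy setup only.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.normal(size=(n, d))
theta_true = np.array([1.0, -2.0, 0.5])
y = X @ theta_true + rng.normal(scale=0.1, size=n)

theta_hat = np.linalg.lstsq(X, y, rcond=None)[0]   # full-data fit
H = X.T @ X / n                                    # Hessian of the average squared loss

i = 7                                              # index of the point to remove
grad_i = -(y[i] - X[i] @ theta_hat) * X[i]         # gradient of the loss at point i

# First-order approximation: theta_{-i} - theta_hat ≈ (1/n) H^{-1} grad_i
approx_change = np.linalg.solve(H, grad_i) / n

# Exact leave-one-out refit for comparison
mask = np.arange(n) != i
theta_loo = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]

print("approximate change:", approx_change)
print("exact change:      ", theta_loo - theta_hat)
```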
