Search Results for author: Navita Goyal

Found 9 papers, 1 paper with code

CaM-Gen: Causally Aware Metric-Guided Text Generation

no code implementations Findings (ACL) 2022 Navita Goyal, Roodram Paneri, Ayush Agarwal, Udit Kalani, Abhilasha Sancheti, Niyati Chhaya

We leverage causal inference techniques to identify causally significant aspects of a text that lead to the target metric, and then explicitly guide generative models towards these aspects through a feedback mechanism.

Causal Inference • Text Generation

DynamicTOC: Persona-based Table of Contents for Consumption of Long Documents

no code implementations NAACL 2022 Himanshu Maheshwari, Nethraa Sivakumar, Shelly Jain, Tanvi Karandikar, Vinay Aggarwal, Navita Goyal, Sumit Shekhar

Consuming these documents linearly (via scrolling or navigating the default table of contents) is time-consuming and challenging.

Large Language Models Help Humans Verify Truthfulness -- Except When They Are Convincingly Wrong

no code implementations 19 Oct 2023 Chenglei Si, Navita Goyal, Sherry Tongshuang Wu, Chen Zhao, Shi Feng, Hal Daumé III, Jordan Boyd-Graber

To reduce over-reliance on LLMs, we ask LLMs to provide contrastive information: explain both why the claim is true and why it is false, and then we present both sides of the explanation to users.

Fact Checking • Information Retrieval

The Impact of Explanations on Fairness in Human-AI Decision-Making: Protected vs Proxy Features

no code implementations 12 Oct 2023 Navita Goyal, Connor Baumler, Tin Nguyen, Hal Daumé III

In this work, we study the effect of the presence of protected and proxy features on participants' perception of model fairness and their ability to improve demographic parity over an AI alone.

Decision Making • Fairness

Personalized Detection of Cognitive Biases in Actions of Users from Their Logs: Anchoring and Recency Biases

no code implementations 30 Jun 2022 Atanu R Sinha, Navita Goyal, Sunny Dhamnani, Tanay Asija, Raja K Dubey, M V Kaarthik Raja, Georgios Theocharous

The recognition of cognitive bias in computer science is largely in the domain of information retrieval, and bias is identified at an aggregate level with the help of annotated data.

Bias Detection • Ethics +3

CaM-Gen: Causally-aware Metric-guided Text Generation

no code implementations 24 Oct 2020 Navita Goyal, Roodram Paneri, Ayush Agarwal, Udit Kalani, Abhilasha Sancheti, Niyati Chhaya

We leverage causal inference techniques to identify causally significant aspects of a text that lead to the target metric, and then explicitly guide generative models towards these aspects through a feedback mechanism.

Causal Inference • Text Generation

Multi-Style Transfer with Discriminative Feedback on Disjoint Corpus

no code implementations NAACL 2021 Navita Goyal, Balaji Vasan Srinivasan, Anandhavelu Natarajan, Abhilasha Sancheti

Style transfer has been widely explored in natural language generation with non-parallel corpora by directly or indirectly extracting a notion of style from source and target domain corpora.

Language Modelling • Style Transfer +1
