Search Results for author: Tuhin Chakrabarty

Found 34 papers, 23 papers with code

Identifying Self-Disclosures of Use, Misuse and Addiction in Community-based Social Media Posts

1 code implementation15 Nov 2023 Chenghao Yang, Tuhin Chakrabarty, Karli R Hochstatter, Melissa N Slavin, Nabila El-Bassel, Smaranda Muresan

In the last decade, the United States has lost more than 500,000 people to overdoses involving prescription and illicit opioids, making it a national public health emergency (USDHHS, 2017).

Learning to Follow Object-Centric Image Editing Instructions Faithfully

1 code implementation29 Oct 2023 Tuhin Chakrabarty, Kanishk Singh, Arkadiy Saakyan, Smaranda Muresan

Current approaches focusing on image editing with natural language instructions rely on automatically generated paired data, which, as shown in our investigation, is noisy and sometimes nonsensical, exacerbating the above issues.

Object Question Answering

Art or Artifice? Large Language Models and the False Promise of Creativity

no code implementations25 Sep 2023 Tuhin Chakrabarty, Philippe Laban, Divyansh Agarwal, Smaranda Muresan, Chien-Sheng Wu

Inspired by the Torrance Test of Creative Thinking (TTCT), which measures creativity as a process, we use the Consensual Assessment Technique [3] and propose the Torrance Test of Creative Writing (TTCW) to evaluate creativity as a product.

Creativity Support in the Age of Large Language Models: An Empirical Study Involving Emerging Writers

no code implementations22 Sep 2023 Tuhin Chakrabarty, Vishakh Padmakumar, Faeze Brahman, Smaranda Muresan

The development of large language models (LLMs) capable of following instructions and engaging in conversational interactions sparked increased interest in their utilization across various support tools.

I Spy a Metaphor: Large Language Models and Diffusion Models Co-Create Visual Metaphors

1 code implementation24 May 2023 Tuhin Chakrabarty, Arkadiy Saakyan, Olivia Winn, Artemis Panagopoulou, Yue Yang, Marianna Apidianaki, Smaranda Muresan

We propose to solve the task through the collaboration between Large Language Models (LLMs) and Diffusion Models: Instruct GPT-3 (davinci-002) with Chain-of-Thought prompting generates text that represents a visual elaboration of the linguistic metaphor containing the implicit meaning and relevant objects, which is then used as input to the diffusion-based text-to-image models. Using a human-AI collaboration framework, where humans interact both with the LLM and the top-performing diffusion model, we create a high-quality dataset containing 6,476 visual metaphors for 1,540 linguistic metaphors and their associated visual elaborations.

Visual Entailment

Multitask Instruction-based Prompting for Fallacy Recognition

no code implementations24 Jan 2023 Tariq Alhindi, Tuhin Chakrabarty, Elena Musi, Smaranda Muresan

To move towards solving the fallacy recognition task, we treat these differences across datasets as multiple tasks and show how instruction-based prompting in a multitask setup based on the T5 model improves results over approaches built for a specific dataset, such as T5, BERT, or GPT-3.

Sentence

Help me write a poem: Instruction Tuning as a Vehicle for Collaborative Poetry Writing

1 code implementation25 Oct 2022 Tuhin Chakrabarty, Vishakh Padmakumar, He He

The core component of our system is a language model fine-tuned on a diverse collection of instructions for poetry writing.

Language Modelling Sentence

CONSISTENT: Open-Ended Question Generation From News Articles

1 code implementation20 Oct 2022 Tuhin Chakrabarty, Justin Lewis, Smaranda Muresan

Recent work on question generation has largely focused on factoid questions such as who, what, where, when about basic facts.

Question Generation

Fine-tuned Language Models are Continual Learners

1 code implementation24 May 2022 Thomas Scialom, Tuhin Chakrabarty, Smaranda Muresan

In spite of the limited success of Continual Learning, we show that Language Models can be continual learners.

Continual Learning

FLUTE: Figurative Language Understanding through Textual Explanations

1 code implementation24 May 2022 Tuhin Chakrabarty, Arkadiy Saakyan, Debanjan Ghosh, Smaranda Muresan

Figurative language understanding has recently been framed as a recognizing textual entailment (RTE) task (a.k.a. natural language inference).

Natural Language Inference RTE

Don't Go Far Off: An Empirical Study on Neural Poetry Translation

1 code implementation7 Sep 2021 Tuhin Chakrabarty, Arkadiy Saakyan, Smaranda Muresan

Moreover, multilingual fine-tuning on poetic data outperforms bilingual fine-tuning on poetic data.

Machine Translation

Metaphor Generation with Conceptual Mappings

1 code implementation ACL 2021 Kevin Stowe, Tuhin Chakrabarty, Nanyun Peng, Smaranda Muresan, Iryna Gurevych

Guided by conceptual metaphor theory, we propose to control the generation process by encoding conceptual mappings between cognitive domains to generate meaningful metaphoric expressions.

Sentence

ENTRUST: Argument Reframing with Language Models and Entailment

no code implementations NAACL 2021 Tuhin Chakrabarty, Christopher Hidey, Smaranda Muresan

Framing involves the positive or negative presentation of an argument or issue depending on the audience and goal of the speaker (Entman 1983).

Text Generation

MERMAID: Metaphor Generation with Symbolism and Discriminative Decoding

1 code implementation NAACL 2021 Tuhin Chakrabarty, Xurui Zhang, Smaranda Muresan, Nanyun Peng

Generating metaphors is a challenging task as it requires a proper understanding of abstract concepts, making connections between unrelated concepts, and deviating from the literal meaning.

Language Modelling Masked Language Modeling

Content Planning for Neural Story Generation with Aristotelian Rescoring

1 code implementation EMNLP 2020 Seraphina Goldfarb-Tarrant, Tuhin Chakrabarty, Ralph Weischedel, Nanyun Peng

Long-form narrative text generated from large language models manages a fluent impersonation of human writing, but only at the local sentence level, and lacks structure or global cohesion.

Language Modelling Sentence

Generating similes effortlessly like a Pro: A Style Transfer Approach for Simile Generation

1 code implementation EMNLP 2020 Tuhin Chakrabarty, Smaranda Muresan, Nanyun Peng

We also show how replacing literal sentences with similes from our best model in machine generated stories improves evocativeness and leads to better acceptance by human judges.

Common Sense Reasoning Sentence

AMPERSAND: Argument Mining for PERSuAsive oNline Discussions

1 code implementation IJCNLP 2019 Tuhin Chakrabarty, Christopher Hidey, Smaranda Muresan, Kathy Mckeown, Alyssa Hwang

Our approach for relation prediction uses contextual information in terms of fine-tuning a pre-trained language model and leveraging discourse relations based on Rhetorical Structure Theory.

Argument Mining Language Modelling

DeSePtion: Dual Sequence Prediction and Adversarial Examples for Improved Fact-Checking

1 code implementation ACL 2020 Christopher Hidey, Tuhin Chakrabarty, Tariq Alhindi, Siddharth Varia, Kriste Krstovski, Mona Diab, Smaranda Muresan

The increased focus on misinformation has spurred development of data and systems for detecting the veracity of a claim as well as retrieving authoritative evidence.

Fact Checking Misinformation

Pay "Attention" to your Context when Classifying Abusive Language

1 code implementation WS 2019 Tuhin Chakrabarty, Kilol Gupta, Smaranda Muresan

The goal of any social media platform is to facilitate healthy and meaningful interactions among its users.

Abuse Detection

ColumbiaNLP at SemEval-2019 Task 8: The Answer is Language Model Fine-tuning

no code implementations SEMEVAL 2019 Tuhin Chakrabarty, Smaranda Muresan

Community Question Answering forums are very popular nowadays, as they represent effective means for communities to share information around particular topics.

Community Question Answering Fact Checking

Robust Document Retrieval and Individual Evidence Modeling for Fact Extraction and Verification

1 code implementation WS 2018 Tuhin Chakrabarty, Tariq Alhindi, Smaranda Muresan

Our team finished 6th out of 24 teams on the leaderboard based on the preliminary results, with a FEVER score of 49.06 on the blind test set compared to 27.45 for the baseline system.

Natural Language Inference Retrieval

Context-Aware Attention for Understanding Twitter Abuse

no code implementations24 Sep 2018 Tuhin Chakrabarty, Kilol Gupta

The original goal of any social media platform is to enable users to engage in healthy and meaningful conversations.

Abuse Detection
