no code implementations • LREC 2022 • Kokil Jaidka
This paper motivates and presents the Twitter Deliberative Politics dataset, a corpus of political tweets labeled for their deliberative characteristics.
no code implementations • 4 Mar 2024 • Fiona Anting Tan, Gerard Christopher Yeo, Fanyou Wu, Weijie Xu, Vinija Jain, Aman Chadha, Kokil Jaidka, Yang Liu, See-Kiong Ng
Drawing inspiration from psychological research on the links between certain personality traits and Theory-of-Mind (ToM) reasoning, and from prompt engineering research on the hyper-sensitivity of prompts in affecting LLM capabilities, this study investigates how inducing personalities in LLMs via prompting affects their ToM reasoning capabilities.
no code implementations • 13 Feb 2024 • Preetika Verma, Kokil Jaidka, Svetlana Churina
We audited large language models (LLMs) for their ability to create evidence-based and stylistic counter-arguments to posts from the Reddit ChangeMyView dataset.
1 code implementation • 15 Nov 2023 • Kokil Jaidka, Hansin Ahuja, Lynnette Ng
We annotated a dataset of over 10,000 chat messages for different negotiation strategies and empirically examined their importance in predicting long- and short-term game outcomes.
no code implementations • 29 Oct 2023 • Ahmad Nasir, Aadish Sharma, Kokil Jaidka
To answer (2), we assessed the performance of 288 out-of-domain classifiers for a given end-domain dataset.
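An out-of-domain evaluation of this kind is typically a grid over source domains, feature sets, and model families, each scored on the held-out end domain. A minimal sketch of that grid structure, with invented domain/feature/model names and a dummy scoring function (not the paper's actual setup):

```python
# Hypothetical sketch of a cross-domain evaluation grid: each configuration
# would be trained on a source domain and scored on the end-domain dataset.
# All names and the score returned here are illustrative placeholders.
from itertools import product

domains = ["news", "forums", "tweets"]    # example source domains
features = ["bow", "embeddings"]          # example feature sets
models = ["logreg", "svm"]                # example model families


def evaluate(source, feats, model, end_domain="reddit"):
    """Placeholder: a real run would train on `source` and test on `end_domain`."""
    return 0.5  # dummy score


grid = list(product(domains, features, models))
scores = {cfg: evaluate(*cfg) for cfg in grid}
```

Here the toy grid yields 12 configurations; the paper's 288 classifiers would come from a larger product of such factors.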
no code implementations • 27 Jan 2023 • Francielle Vargas, Kokil Jaidka, Thiago A. S. Pardo, Fabrício Benevenuto
Automated news credibility and fact-checking at scale require accurately predicting news factuality and media bias.
no code implementations • 31 Dec 2021 • Hansin Ahuja, Lynnette Hui Xian Ng, Kokil Jaidka
We developed a two-tier approach that first encodes sociolinguistic behavior as linguistic features, then uses reinforcement learning to estimate the advantage afforded to any player.
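The two-tier idea can be sketched in miniature: tier one turns a chat turn into coarse linguistic features, and tier two computes an RL-style advantage as the gap between an observed return and a value baseline. Everything below (feature names, numbers) is invented for illustration, not the paper's implementation:

```python
# Hypothetical two-tier sketch: (1) encode chat turns as toy sociolinguistic
# features, (2) compute advantage = return - baseline, as in standard
# policy-gradient RL (A(s, a) = Q(s, a) - V(s)). Data is invented.

def encode_turn(text):
    """Tier 1: toy features — word count and first-person pronoun use."""
    words = text.lower().split()
    return {
        "n_words": len(words),
        "first_person": sum(w in {"i", "me", "my", "we"} for w in words),
    }


def advantage(returns, baseline):
    """Tier 2: observed returns minus a value-function baseline."""
    return [r - baseline for r in returns]


turns = ["I think we should ally", "give me your resources"]
feats = [encode_turn(t) for t in turns]
adv = advantage([1.0, 0.2], baseline=0.5)
```

A positive advantage for a player's turn would indicate that the turn outperformed the baseline expectation for that state.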
1 code implementation • 19 Oct 2021 • Jesse Cui, Tingdan Zhang, Kokil Jaidka, Dandan Pang, Garrick Sherman, Vinit Jakhetiya, Lyle Ungar, Sharath Chandra Guntuku
This paper studies linguistic differences in the experiences and expressions of stress in urban-rural China from Weibo posts from over 65,000 users across 329 counties using hierarchical mixed-effects models.
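The core idea behind a hierarchical (random-intercept) mixed-effects model is partial pooling: each county's estimate is shrunk toward the overall mean rather than computed in isolation. A minimal illustration with invented numbers (not the paper's data or its actual estimator):

```python
# Toy illustration of partial pooling behind random-intercept mixed-effects
# models: a county's intercept is a blend of its own mean and the grand mean.
# County names, scores, and the fixed blend weight are all made up.
from statistics import mean

posts = {  # county -> toy stress scores
    "urban_a": [3.0, 3.4, 2.8],
    "rural_b": [4.2, 4.6],
}
grand = mean(s for scores in posts.values() for s in scores)


def shrunken_intercept(scores, weight=0.5):
    """Partial pooling: blend the county mean with the grand mean.
    Real mixed-effects software derives the weight from variance components."""
    return weight * mean(scores) + (1 - weight) * grand
```

In a fitted model the pooling weight depends on within- and between-county variance and the county's sample size, which is what makes estimates for sparsely observed counties more stable.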
no code implementations • NAACL 2021 • Kokil Jaidka, Andrea Ceolin, Iknoor Singh, Niyati Chhaya, Lyle Ungar
We show how the data supports the classic understanding of style matching, where positive emotion and the use of first-person pronouns predict a positive emotional change in a Wikipedia contributor.
1 code implementation • 2 Sep 2019 • Kokil Jaidka, Michihiro Yasunaga, Muthu Kumar Chandrasekaran, Dragomir Radev, Min-Yen Kan
This overview describes the official results of the CL-SciSumm Shared Task 2018 -- the first medium-scale shared task on scientific document summarization in the computational linguistics (CL) domain.
1 code implementation • 19 Nov 2018 • Sharath Chandra Guntuku, Anneke Buffone, Kokil Jaidka, Johannes Eichstaedt, Lyle Ungar
In this paper, we explore the language of psychological stress with a dataset of 601 social media users, who answered the Perceived Stress Scale questionnaire and also consented to share their Facebook and Twitter data.
no code implementations • EMNLP 2018 • Masoud Rouhizadeh, Kokil Jaidka, Laura Smith, H. Andrew Schwartz, Anneke Buffone, Lyle Ungar
Individuals express their locus of control, or "control", in their language when they identify whether or not they are in control of their circumstances.
no code implementations • ACL 2018 • Kokil Jaidka, Niyati Chhaya, Lyle Ungar
It asks the question: given that the social media platform and its users remain the same, how is language changing over time?
no code implementations • ICLR 2018 • Kushal Chawla, Sopan Khosla, Niyati Chhaya, Kokil Jaidka
Our work addresses the question: can affect lexica improve the word representations learnt from a corpus?
no code implementations • IJCNLP 2017 • Daniel Rieman, Kokil Jaidka, H. Andrew Schwartz, Lyle Ungar
Several studies have demonstrated how language models of user attributes, such as personality, can be built by using the Facebook language of social media users in conjunction with their responses to psychology questionnaires.