1 code implementation • NLPerspectives (LREC) 2022 • Laura Biester, Vanita Sharma, Ashkan Kazemi, Naihao Deng, Steven Wilson, Rada Mihalcea
Recent studies have shown that for subjective annotation tasks, the demographics, lived experiences, and identity of annotators can have a large impact on how items are labeled.
no code implementations • 21 Jun 2023 • Ashkan Kazemi, Rada Mihalcea
Social media feed algorithms are designed to optimize online social engagements for the purpose of maximizing advertising profits, and therefore have an incentive to promote controversial posts including misinformation.
no code implementations • 21 May 2023 • Oana Ignat, Zhijing Jin, Artem Abzaliev, Laura Biester, Santiago Castro, Naihao Deng, Xinyi Gao, Aylin Gunal, Jacky He, Ashkan Kazemi, Muhammad Khalifa, Namho Koh, Andrew Lee, Siyang Liu, Do June Min, Shinka Mori, Joan Nwatu, Veronica Perez-Rosas, Siqi Shen, Zekun Wang, Winston Wu, Rada Mihalcea
Not surprisingly, this has in turn made many NLP researchers, especially those at the beginning of their careers, worry about which NLP research area they should focus on.
no code implementations • 14 Oct 2022 • Ashkan Kazemi, Artem Abzaliev, Naihao Deng, Rui Hou, Scott A. Hale, Verónica Pérez-Rosas, Rada Mihalcea
We propose a novel system to help fact-checkers formulate search queries for known misinformation claims and effectively search across multiple social media platforms.
no code implementations • 14 Feb 2022 • Ashkan Kazemi, Zehua Li, Verónica Pérez-Rosas, Scott A. Hale, Rada Mihalcea
We conduct both classification and retrieval experiments in monolingual (English only), multilingual (Spanish, Portuguese), and cross-lingual (Hindi-English) settings, using multilingual transformer models such as XLM-RoBERTa and multilingual embeddings such as LaBSE and SBERT.
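The retrieval side of such experiments typically reduces to cosine-similarity search over sentence embeddings: encode the query claim and the candidate claims, then rank candidates by similarity. The sketch below illustrates that step with small placeholder vectors; in the paper's setting the vectors would come from a multilingual encoder such as LaBSE or SBERT, and the toy 4-dimensional embeddings here are assumptions for illustration only.

```python
import numpy as np

def cosine_retrieval(query_vec, corpus_vecs, top_k=3):
    """Rank corpus items by cosine similarity to the query embedding.

    In practice, query_vec and corpus_vecs would be outputs of a
    multilingual encoder (e.g., LaBSE or SBERT); the vectors used in
    the example below are placeholders.
    """
    q = query_vec / np.linalg.norm(query_vec)
    c = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
    scores = c @ q                      # cosine similarity per corpus item
    order = np.argsort(-scores)[:top_k] # highest similarity first
    return [(int(i), float(scores[i])) for i in order]

# Toy 4-d "embeddings" standing in for encoder outputs.
claims = np.array([
    [0.90, 0.10, 0.00, 0.10],  # claim 0
    [0.10, 0.80, 0.20, 0.00],  # claim 1 (unrelated)
    [0.85, 0.20, 0.05, 0.10],  # claim 2 (near-duplicate of claim 0)
])
query = np.array([0.88, 0.15, 0.02, 0.10])

print(cosine_retrieval(query, claims, top_k=2))
```

Because cosine similarity is computed on normalized vectors, the same routine works unchanged whether the query and corpus are in the same language or different ones, as long as the encoder maps both into a shared embedding space.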
no code implementations • 8 Jun 2021 • Ashkan Kazemi, Kiran Garimella, Gautam Kishore Shahi, Devin Gaffney, Scott A. Hale
There is currently no easy way to fact-check content on WhatsApp and other end-to-end encrypted platforms at scale.
no code implementations • ACL 2021 • Ashkan Kazemi, Kiran Garimella, Devin Gaffney, Scott A. Hale
We train our own embedding model using knowledge distillation and a high-quality "teacher" model in order to address the imbalance in embedding quality between the low- and high-resource languages in our dataset.
no code implementations • NAACL (NLP4IF) 2021 • Ashkan Kazemi, Zehua Li, Verónica Pérez-Rosas, Rada Mihalcea
In this paper, we explore the construction of natural language explanations for news claims, with the goal of assisting fact-checking and news evaluation applications.
no code implementations • COLING 2020 • Ashkan Kazemi, Verónica Pérez-Rosas, Rada Mihalcea
We introduce Biased TextRank, a graph-based content extraction method inspired by the popular TextRank algorithm; it ranks text spans according to both their importance for language processing tasks and their relevance to an input "focus."
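One natural way to realize this idea is personalized PageRank over a span-similarity graph, where the random-restart distribution is weighted by each span's similarity to the focus. The sketch below is an illustrative reconstruction under that assumption, not the authors' exact implementation; the similarity matrix, focus scores, and damping value are all toy choices.

```python
import numpy as np

def biased_textrank(sim_matrix, focus_sim, damping=0.85, iters=100):
    """Rank nodes (e.g., sentences) with a focus-biased PageRank.

    sim_matrix[i, j]: similarity between text spans i and j (graph edges).
    focus_sim[i]:     similarity of span i to the input "focus"; it biases
                      the restart distribution so focus-relevant spans
                      accumulate more score. Illustrative sketch only.
    """
    n = sim_matrix.shape[0]
    # Column-normalize edge weights into a transition matrix.
    M = sim_matrix / sim_matrix.sum(axis=0, keepdims=True)
    # Restart distribution proportional to focus similarity.
    bias = focus_sim / focus_sim.sum()
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):
        scores = (1 - damping) * bias + damping * (M @ scores)
    return scores

# Toy graph of 4 spans: spans 0 and 1 are similar to each other,
# while span 2 is the most similar to the focus.
sim = np.array([
    [0.0, 0.9, 0.1, 0.1],
    [0.9, 0.0, 0.1, 0.1],
    [0.1, 0.1, 0.0, 0.3],
    [0.1, 0.1, 0.3, 0.0],
])
focus = np.array([0.1, 0.1, 0.9, 0.2])

# A lower damping factor gives the focus bias more weight via restarts.
scores = biased_textrank(sim, focus, damping=0.7)
print(np.argsort(-scores))
```

With focus bias, span 2 outranks the mutually similar pair (0, 1); with an unbiased (uniform) restart distribution, the same graph would instead favor that pair, which is exactly the difference between TextRank and its biased variant.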