Search Results for author: Ashkan Kazemi

Found 9 papers, 1 paper with code

Analyzing the Effects of Annotator Gender across NLP Tasks

1 code implementation • NLPerspectives (LREC) 2022 • Laura Biester, Vanita Sharma, Ashkan Kazemi, Naihao Deng, Steven Wilson, Rada Mihalcea

Recent studies have shown that for subjective annotation tasks, the demographics, lived experiences, and identity of annotators can have a large impact on how items are labeled.

Natural Language Inference

Misinformation as Information Pollution

no code implementations • 21 Jun 2023 • Ashkan Kazemi, Rada Mihalcea

Social media feed algorithms are designed to optimize online social engagements for the purpose of maximizing advertising profits, and therefore have an incentive to promote controversial posts including misinformation.

Misinformation

Query Rewriting for Effective Misinformation Discovery

no code implementations • 14 Oct 2022 • Ashkan Kazemi, Artem Abzaliev, Naihao Deng, Rui Hou, Scott A. Hale, Verónica Pérez-Rosas, Rada Mihalcea

We propose a novel system to help fact-checkers formulate search queries for known misinformation claims and effectively search across multiple social media platforms.

Misinformation • Reinforcement Learning +2

Matching Tweets With Applicable Fact-Checks Across Languages

no code implementations • 14 Feb 2022 • Ashkan Kazemi, Zehua Li, Verónica Pérez-Rosas, Scott A. Hale, Rada Mihalcea

We conduct both classification and retrieval experiments, in monolingual (English only), multilingual (Spanish, Portuguese), and cross-lingual (Hindi-English) settings using multilingual transformer models such as XLM-RoBERTa and multilingual embeddings such as LaBSE and SBERT.

Fact Checking • Retrieval
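
The cross-lingual retrieval setup described above can be illustrated with multilingual sentence embeddings. The sketch below is not the paper's code; it assumes the sentence-transformers LaBSE checkpoint and uses hypothetical tweet and fact-check texts.

```python
# Minimal sketch (assumed setup, not the paper's code): match a tweet to the
# most similar fact-check using LaBSE multilingual sentence embeddings.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/LaBSE")  # multilingual encoder

tweets = ["El nuevo medicamento cura la gripe en un día."]  # hypothetical tweet (Spanish)
fact_checks = [  # hypothetical fact-check summaries (English)
    "Claim that the new drug cures the flu in one day is false.",
    "The city did not ban bicycles downtown.",
]

tweet_emb = model.encode(tweets, convert_to_tensor=True)
check_emb = model.encode(fact_checks, convert_to_tensor=True)

# Cosine similarity between each tweet and each fact-check; keep the best match.
scores = util.cos_sim(tweet_emb, check_emb)
for i, tweet in enumerate(tweets):
    best = int(scores[i].argmax())
    print(f"{tweet}\n  -> {fact_checks[best]} (score={float(scores[i][best]):.3f})")
```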

Claim Matching Beyond English to Scale Global Fact-Checking

no code implementations • ACL 2021 • Ashkan Kazemi, Kiran Garimella, Devin Gaffney, Scott A. Hale

We train our own embedding model using knowledge distillation and a high-quality "teacher" model in order to address the imbalance in embedding quality between the low- and high-resource languages in our dataset.

Fact Checking • Knowledge Distillation
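
The distillation idea, stated roughly, is to train a multilingual student encoder to reproduce a strong teacher's embeddings. The sketch below assumes the sentence-transformers training API, an arbitrary teacher/student pair, and hypothetical sentences; it is not the authors' training code.

```python
# Minimal sketch (assumed setup, not the authors' training code): distill a
# high-quality "teacher" sentence encoder into a multilingual student by
# regressing the student's embeddings onto the teacher's (MSE loss).
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

teacher = SentenceTransformer("sentence-transformers/paraphrase-mpnet-base-v2")  # assumed teacher
student = SentenceTransformer("xlm-roberta-base")  # assumed multilingual student

# Hypothetical training sentences; the teacher's embeddings are the regression targets.
sentences = [
    "The vaccine does not alter human DNA.",
    "Drinking hot water does not cure the virus.",
]
targets = teacher.encode(sentences)
examples = [InputExample(texts=[s], label=t) for s, t in zip(sentences, targets)]

loader = DataLoader(examples, batch_size=2, shuffle=True)
distill_loss = losses.MSELoss(model=student)  # student embedding -> teacher embedding
student.fit(train_objectives=[(loader, distill_loss)], epochs=1, warmup_steps=10)
```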

Extractive and Abstractive Explanations for Fact-Checking and Evaluation of News

no code implementations • NAACL (NLP4IF) 2021 • Ashkan Kazemi, Zehua Li, Verónica Pérez-Rosas, Rada Mihalcea

In this paper, we explore the construction of natural language explanations for news claims, with the goal of assisting fact-checking and news evaluation applications.

Fact Checking • Language Modelling +1

Biased TextRank: Unsupervised Graph-Based Content Extraction

no code implementations • COLING 2020 • Ashkan Kazemi, Verónica Pérez-Rosas, Rada Mihalcea

We introduce Biased TextRank, a graph-based content extraction method inspired by the popular TextRank algorithm that ranks text spans according to their importance for language processing tasks and according to their relevance to an input "focus."
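
The description suggests a personalized (biased) random walk over a sentence-similarity graph. The sketch below is an illustration of that idea, not the authors' implementation; it assumes networkx PageRank with a personalization vector derived from focus similarity and an arbitrary embedding model.

```python
# Minimal sketch (not the authors' implementation): a biased-TextRank-style
# ranking where the PageRank restart distribution is weighted by each
# sentence's similarity to an input "focus".
import networkx as nx
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder

sentences = [  # hypothetical document sentences
    "The senator claimed unemployment doubled last year.",
    "Official statistics show unemployment fell by two points.",
    "The weather was unusually warm in March.",
]
focus = "Did unemployment double last year?"

emb = model.encode(sentences, convert_to_tensor=True)
focus_emb = model.encode(focus, convert_to_tensor=True)

# Build a graph whose edges are weighted by sentence-sentence similarity.
sim = util.cos_sim(emb, emb)
G = nx.Graph()
for i in range(len(sentences)):
    for j in range(i + 1, len(sentences)):
        G.add_edge(i, j, weight=float(sim[i][j]))

# Bias the random-walk restart toward sentences similar to the focus.
focus_sim = util.cos_sim(focus_emb, emb)[0]
bias = {i: max(float(s), 0.0) + 1e-6 for i, s in enumerate(focus_sim)}
scores = nx.pagerank(G, personalization=bias, weight="weight")

for i in sorted(scores, key=scores.get, reverse=True):
    print(f"{scores[i]:.3f}  {sentences[i]}")
```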
