no code implementations • 8 May 2024 • Amrita Bhattacharjee, Raha Moraffah, Joshua Garland, Huan Liu
Counterfactual examples are frequently used for model development and evaluation in many natural language processing (NLP) tasks.
no code implementations • 17 Apr 2024 • Paras Sheth, Tharindu Kumarage, Raha Moraffah, Aman Chadha, Huan Liu
Content moderation faces a challenging task as social media's ability to spread hate speech contrasts with its role in promoting global connectivity.
no code implementations • 23 Mar 2024 • Amrita Bhattacharjee, Raha Moraffah, Joshua Garland, Huan Liu
With the advancement in capabilities of Large Language Models (LLMs), one major step in the responsible and safe use of such LLMs is to be able to detect text generated by these models.
no code implementations • 2 Mar 2024 • Tharindu Kumarage, Garima Agrawal, Paras Sheth, Raha Moraffah, Aman Chadha, Joshua Garland, Huan Liu
We have witnessed lately a rapid proliferation of advanced Large Language Models (LLMs) capable of generating high-quality text.
1 code implementation • 20 Feb 2024 • Zhen Tan, Chengshuai Zhao, Raha Moraffah, YiFan Li, Yu Kong, Tianlong Chen, Huan Liu
Unlike direct harmful output generation for MLLMs, our research demonstrates how a single MLLM agent can be subtly influenced to generate prompts that, in turn, induce other MLLM agents in the society to output malicious content.
no code implementations • 5 Feb 2024 • Raha Moraffah, Huan Liu
In contrast to the discriminative approach, we propose a generative surrogate that learns the distribution of samples residing on or close to the target's decision boundaries.
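As a rough illustration of the boundary-distribution idea (a toy numpy sketch with an invented linear "target" classifier, not the paper's surrogate architecture), one can collect points whose predicted probability sits near 0.5 and treat them as draws from the boundary-proximal distribution a generative surrogate would be fit to:

```python
import numpy as np

# Toy black-box target: a fixed linear classifier with a sigmoid score.
# (Illustrative stand-in; the actual target and surrogate are assumptions here.)
w = np.array([1.5, -2.0])
b = 0.3

def target_prob(x):
    """Probability of class 1 under the toy target classifier."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# Sample candidates and keep those scored near 0.5 -- points residing on
# or close to the target's decision boundary. A generative surrogate
# would then be trained on this boundary-proximal sample distribution.
rng = np.random.default_rng(0)
candidates = rng.normal(size=(20000, 2))
probs = target_prob(candidates)
boundary_samples = candidates[np.abs(probs - 0.5) < 0.02]

print(boundary_samples.shape[0])
```

The rejection step here is only a stand-in for training a generator; it conveys what "samples close to the decision boundary" means concretely.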
no code implementations • 5 Feb 2024 • Raha Moraffah, Paras Sheth, Saketh Vishnubhatla, Huan Liu
This survey focuses on the current study of causal feature selection: what it is and how it can reinforce the four aspects of responsible ML.
no code implementations • 5 Feb 2024 • Raha Moraffah, Huan Liu
Sentence-level attacks craft adversarial sentences that are synonymous with correctly classified sentences but are misclassified by the text classifiers.
no code implementations • 5 Feb 2024 • Raha Moraffah, Shubh Khandelwal, Amrita Bhattacharjee, Huan Liu
Adversarial purification is a defense mechanism that safeguards classifiers against adversarial attacks without knowledge of the attack type or the classifier's training.
no code implementations • 1 Nov 2023 • Suraj Jyothi Unni, Raha Moraffah, Huan Liu
Our results validate that comprehensive multi-modal shifts are critical for robust VQA generalization.
no code implementations • 8 Oct 2023 • Tharindu Kumarage, Paras Sheth, Raha Moraffah, Joshua Garland, Huan Liu
The novel universal evasive prompt is achieved in two steps: First, we create an evasive soft prompt tailored to a specific PLM through prompt tuning; and then, we leverage the transferability of soft prompts to transfer the learned evasive soft prompt from one PLM to another.
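The two-step recipe can be sketched in a toy numpy model (hypothetical linear "detectors" standing in for PLM-based detectors, with hand-derived gradients — none of this is the paper's actual setup): tune a soft prompt against detector A, then reuse it unchanged on a correlated detector B to illustrate transferability.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 8, 4, 16                    # embedding dim, prompt length, text length
text_emb = rng.normal(size=(n, d))    # frozen embeddings of the generated text
prompt = np.zeros((k, d))             # learnable evasive soft prompt

w_a = rng.normal(size=d)                     # detector A's scoring direction
w_b = 0.9 * w_a + 0.1 * rng.normal(size=d)   # a related detector B

def score(wvec, P):
    """Detection score over prompt + text embeddings: higher = 'machine-generated'."""
    return wvec @ np.vstack([P, text_emb]).mean(axis=0)

before_a, before_b = score(w_a, prompt), score(w_b, prompt)

# Step 1: prompt tuning -- gradient descent on the prompt only, pushing
# detector A's score down (d score / d prompt_row = w_a / (k + n)).
for _ in range(200):
    prompt -= 0.5 * w_a / (k + n)

# Step 2: transfer -- the tuned prompt also lowers detector B's score
# because the two detectors' scoring directions are correlated.
after_a, after_b = score(w_a, prompt), score(w_b, prompt)
print(after_a < before_a, after_b < before_b)
```

Only the prompt is updated; the text embeddings and both detectors stay frozen, mirroring the prompt-tuning setup where the PLM's weights are untouched.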
no code implementations • 23 Sep 2023 • Amrita Bhattacharjee, Raha Moraffah, Joshua Garland, Huan Liu
Inspired by recent endeavors to utilize Large Language Models (LLMs) as experts, in this work, we aim to leverage the instruction-following and textual understanding capabilities of recent state-of-the-art LLMs to facilitate causal explainability via counterfactual explanation generation for black-box text classifiers.
1 code implementation • 7 Sep 2023 • Amrita Bhattacharjee, Tharindu Kumarage, Raha Moraffah, Huan Liu
Given the potential malicious nature in which these LLMs can be used to generate disinformation at scale, it is important to build effective detectors for such AI-generated text.
1 code implementation • 3 Aug 2023 • Paras Sheth, Tharindu Kumarage, Raha Moraffah, Aman Chadha, Huan Liu
By disentangling input into platform-dependent features (useful for predicting hate targets) and platform-independent features (used to predict the presence of hate), we learn invariant representations resistant to distribution shifts.
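A minimal sketch of the disentanglement idea, using toy linear heads and invented dimensions rather than the paper's actual architecture: the representation is split into a platform-dependent slice and a platform-independent slice, so a platform shift that perturbs only the dependent slice leaves the hate-presence prediction invariant.

```python
import numpy as np

rng = np.random.default_rng(1)
d_dep, d_ind = 6, 6                       # assumed slice sizes
z = rng.normal(size=d_dep + d_ind)        # representation of one post

w_target = rng.normal(size=d_dep)         # head for the hate target (dependent slice)
w_hate = rng.normal(size=d_ind)           # head for hate presence (independent slice)

def target_score(rep):
    """Hate-target head reads only the platform-dependent slice."""
    return float(w_target @ rep[:d_dep])

def hate_score(rep):
    """Hate-presence head reads only the platform-independent slice."""
    return float(w_hate @ rep[d_dep:])

# A distribution shift from a new platform perturbs only the dependent
# features, so the hate-presence prediction is unchanged (invariant).
shifted = z.copy()
shifted[:d_dep] += rng.normal(size=d_dep)
print(hate_score(z) == hate_score(shifted))
```

The point of the split is visible in the last line: the invariant head cannot react to platform-specific perturbations by construction.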
1 code implementation • 15 Jun 2023 • Paras Sheth, Tharindu Kumarage, Raha Moraffah, Aman Chadha, Huan Liu
Hate speech detection refers to the task of detecting hateful content that aims at denigrating an individual or a group based on their religion, gender, sexual orientation, or other characteristics.
no code implementations • 30 Sep 2022 • Paras Sheth, Raha Moraffah, K. Selçuk Candan, Adrienne Raglin, Huan Liu
As a result, models that rely on this assumption exhibit poor generalization capabilities.
no code implementations • 7 Feb 2022 • Lu Cheng, Ruocheng Guo, Raha Moraffah, Paras Sheth, K. Selcuk Candan, Huan Liu
To bridge from conventional causal inference (i.e., based on statistical methods) to causal learning with big data (i.e., the intersection of causal inference and machine learning), in this survey, we review commonly used datasets, evaluation methods, and measures for causal learning using an evaluation pipeline similar to conventional machine learning.
no code implementations • 11 Feb 2021 • Raha Moraffah, Paras Sheth, Mansooreh Karami, Anchit Bhattacharya, Qianru Wang, Anique Tahir, Adrienne Raglin, Huan Liu
In this paper, we focus on two causal inference tasks, i.e., treatment effect estimation and causal discovery for time series data, and provide a comprehensive review of the approaches in each task.
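For the first task, the textbook baseline is a difference-in-means estimate of the average treatment effect under randomized assignment — shown below on synthetic data, ignoring the time-series aspect entirely (this is a generic illustration, not a method from the survey):

```python
import numpy as np

# Synthetic randomized experiment with a known true treatment effect of 2.0.
rng = np.random.default_rng(2)
n = 10000
t = rng.integers(0, 2, size=n)             # random treatment assignment
y = 1.0 + 2.0 * t + rng.normal(size=n)     # outcome with additive noise

# Difference-in-means ATE estimate: mean outcome of treated minus control.
ate_hat = y[t == 1].mean() - y[t == 0].mean()
print(ate_hat)
```

With randomization this simple contrast is unbiased; the time-series methods the survey covers exist precisely because observational, temporally dependent data break this setup.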
no code implementations • 30 Nov 2020 • Bahman Moraffah, Christ Richmond, Raha Moraffah, Antonia Papandreou-Suppappola
By employing Bayesian nonparametric modeling, we robustly and accurately estimate the trajectory of a moving target in a high-clutter environment with an unknown number of clutter returns.
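A sketch of why Bayesian nonparametrics suit an unknown amount of clutter: a Chinese restaurant process draws a partition of measurements without fixing the number of components in advance (illustrative only — the concentration value is an assumption and this is not the paper's tracker):

```python
import numpy as np

rng = np.random.default_rng(3)
alpha = 1.0                     # concentration parameter (assumed value)
assignments = [0]               # first measurement opens component 0
counts = [1]                    # measurements per component

for _ in range(49):             # assign 49 more measurements
    # Join an existing component with probability proportional to its size,
    # or open a new one with probability proportional to alpha.
    probs = np.array(counts + [alpha], dtype=float)
    probs /= probs.sum()
    choice = rng.choice(len(probs), p=probs)
    if choice == len(counts):   # open a new clutter component
        counts.append(1)
    else:
        counts[choice] += 1
    assignments.append(int(choice))

print(len(counts))              # number of components inferred, not preset
```

The component count grows with the data rather than being chosen up front, which is the property the tracking model exploits.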
no code implementations • 26 Aug 2020 • Raha Moraffah, Bahman Moraffah, Mansooreh Karami, Adrienne Raglin, Huan Liu
The LGN is a GAN-based architecture which learns and samples from the causal model over labels.
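The "causal model over labels" part can be made concrete with toy ancestral sampling over a two-node label graph (made-up variables and probabilities; the LGN learns such a model with a GAN, which is not reproduced here):

```python
import numpy as np

# Hypothetical causal graph over binary labels: Smoking -> Cancer.
rng = np.random.default_rng(4)
n = 100_000
smoking = rng.random(n) < 0.3                  # P(Smoking = 1) = 0.3
p_cancer = np.where(smoking, 0.2, 0.05)        # P(Cancer = 1 | Smoking)
cancer = rng.random(n) < p_cancer              # ancestral sampling: child after parent

print(smoking.mean(), cancer.mean())
```

Sampling parents before children is what "sampling from the causal model over labels" amounts to; the GAN's job in the LGN is to learn these conditionals rather than have them specified by hand.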
no code implementations • 9 Mar 2020 • Raha Moraffah, Mansooreh Karami, Ruocheng Guo, Adrienne Raglin, Huan Liu
In this work, models that aim to answer causal questions are referred to as causal interpretable models.
no code implementations • 28 Oct 2019 • Raha Moraffah, Kai Shu, Adrienne Raglin, Huan Liu
Recent research on deep domain adaptation proposed to mitigate this problem by forcing the deep model to learn more transferable feature representations across domains.
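One common ingredient of such feature-alignment methods is a discrepancy measure between source and target features that the model is trained to minimize; below is a toy linear (mean-matching) version on synthetic features — an illustration of the general idea, not the specific criterion any one paper uses:

```python
import numpy as np

rng = np.random.default_rng(5)
source = rng.normal(loc=0.0, size=(1000, 16))   # source-domain features
target = rng.normal(loc=0.5, size=(1000, 16))   # shifted target-domain features

# Linear mean discrepancy between the two feature distributions.
mmd = np.linalg.norm(source.mean(axis=0) - target.mean(axis=0))

# Trivial "alignment": shift target features onto the source mean. Deep
# methods instead learn a representation that drives this gap to zero.
aligned = target - target.mean(axis=0) + source.mean(axis=0)
mmd_aligned = np.linalg.norm(source.mean(axis=0) - aligned.mean(axis=0))
print(mmd > mmd_aligned)
```

The shift here is closed-form only because the example is linear; in deep domain adaptation the same gap is minimized through the learned feature extractor.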
1 code implementation • 9 Aug 2018 • Vineeth Rakesh, Ruocheng Guo, Raha Moraffah, Nitin Agarwal, Huan Liu
Modeling spillover effects from observational data is an important problem in economics, business, and other fields of research.