no code implementations • 8 May 2024 • Amrita Bhattacharjee, Raha Moraffah, Joshua Garland, Huan Liu
Counterfactual examples are frequently used for model development and evaluation in many natural language processing (NLP) tasks.
no code implementations • 23 Mar 2024 • Amrita Bhattacharjee, Raha Moraffah, Joshua Garland, Huan Liu
As the capabilities of Large Language Models (LLMs) advance, a major step toward the responsible and safe use of such LLMs is the ability to detect text generated by these models.
1 code implementation • 19 Mar 2024 • Ayushi Nirmal, Amrita Bhattacharjee, Paras Sheth, Huan Liu
Although social media platforms are a prominent arena for users to engage in interpersonal discussions and express opinions, the facade and anonymity offered by social media may allow users to spew hate speech and offensive content.
no code implementations • 12 Mar 2024 • Tharindu Kumarage, Amrita Bhattacharjee, Joshua Garland
Large language models (LLMs) excel in many diverse applications beyond language generation, e.g., translation, summarization, and sentiment analysis.
1 code implementation • 21 Feb 2024 • Zhen Tan, Alimohammad Beigi, Song Wang, Ruocheng Guo, Amrita Bhattacharjee, Bohan Jiang, Mansooreh Karami, Jundong Li, Lu Cheng, Huan Liu
Furthermore, the paper includes an in-depth taxonomy of methodologies employing LLMs for data annotation, a comprehensive review of learning strategies for models incorporating LLM-generated annotations, and a detailed discussion on primary challenges and limitations associated with using LLMs for data annotation.
1 code implementation • 9 Feb 2024 • Saurabh Bhausaheb Zinjad, Amrita Bhattacharjee, Amey Bhilegaonkar, Huan Liu
While it is highly recommended that applicants tailor their resume to the specific role they are applying for, manually tailoring resumes to job descriptions and role-specific requirements is often (1) extremely time-consuming, and (2) prone to human errors.
no code implementations • 5 Feb 2024 • Raha Moraffah, Shubh Khandelwal, Amrita Bhattacharjee, Huan Liu
Adversarial purification is a defense mechanism that safeguards classifiers against adversarial attacks without requiring knowledge of the attack type or access to the classifier's training.
no code implementations • 23 Sep 2023 • Amrita Bhattacharjee, Raha Moraffah, Joshua Garland, Huan Liu
Inspired by recent endeavors to utilize Large Language Models (LLMs) as experts, in this work, we aim to leverage the instruction-following and textual understanding capabilities of recent state-of-the-art LLMs to facilitate causal explainability via counterfactual explanation generation for black-box text classifiers.
1 code implementation • 7 Sep 2023 • Amrita Bhattacharjee, Tharindu Kumarage, Raha Moraffah, Huan Liu
Given the potential malicious nature in which these LLMs can be used to generate disinformation at scale, it is important to build effective detectors for such AI-generated text.
2 code implementations • 6 Sep 2023 • Tharindu Kumarage, Amrita Bhattacharjee, Djordje Padejski, Kristy Roschke, Dan Gillmor, Scott Ruston, Huan Liu, Joshua Garland
The rapid proliferation of AI-generated text online is profoundly reshaping the information landscape.
1 code implementation • 2 Aug 2023 • Amrita Bhattacharjee, Huan Liu
Large language models (LLMs) such as ChatGPT are increasingly being used for various use cases, including text content generation at scale.
1 code implementation • 7 Mar 2023 • Tharindu Kumarage, Joshua Garland, Amrita Bhattacharjee, Kirill Trapeznikov, Scott Ruston, Huan Liu
However, tweets are inherently short, thus making it difficult for current state-of-the-art pre-trained language model-based detectors to accurately detect at what point the AI starts to generate tweets in a given Twitter timeline.
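Locating the point at which AI-generated tweets begin in a timeline is a change-point detection problem. As an illustration only (not the paper's detector), a generic least-squares change-point heuristic over a sequence of per-tweet "AI-likeness" scores can be sketched as:

```python
def change_point(scores):
    """Return the split index that minimizes total within-segment
    squared deviation -- a generic change-point heuristic applied to
    a list of per-tweet scores (hypothetical inputs for illustration)."""
    best_i, best_cost = 1, float("inf")
    for i in range(1, len(scores)):
        left, right = scores[:i], scores[i:]
        cost = sum((x - sum(left) / len(left)) ** 2 for x in left) + \
               sum((x - sum(right) / len(right)) ** 2 for x in right)
        if cost < best_cost:
            best_i, best_cost = i, cost
    return best_i
```

The difficulty the abstract points to is that short tweets yield noisy per-tweet scores, so simple heuristics like this degrade and stronger detectors are needed.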
1 code implementation • 31 Jan 2023 • Melanie Subbiah, Amrita Bhattacharjee, Yilun Hua, Tharindu Kumarage, Huan Liu, Kathleen McKeown
Manipulated news online is a growing problem that necessitates the use of automated systems to curtail its spread.
no code implementations • 15 Nov 2022 • Siddhant Bhambri, Amrita Bhattacharjee, Dimitri Bertsekas
In this paper, we address the solution of the popular Wordle puzzle using new reinforcement learning methods, which apply more generally to adaptive control of dynamic systems and to classes of Partially Observable Markov Decision Process (POMDP) problems.
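In the POMDP view of Wordle, the observation a solver receives after each guess is the game's per-letter feedback rule. A minimal sketch of that rule (generic to the game, not the paper's code):

```python
def wordle_feedback(guess: str, answer: str) -> str:
    """Return per-letter feedback: 'g' (green, correct position),
    'y' (yellow, in word but wrong position), '-' (gray, absent).
    Handles repeated letters by consuming unmatched answer letters."""
    feedback = ["-"] * len(guess)
    remaining = []  # answer letters not consumed by a green match
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g == a:
            feedback[i] = "g"
        else:
            remaining.append(a)
    for i, g in enumerate(guess):
        if feedback[i] == "-" and g in remaining:
            feedback[i] = "y"
            remaining.remove(g)
    return "".join(feedback)
```

This feedback string is the only information the agent observes about the hidden answer, which is what makes the puzzle a natural POMDP instance.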
no code implementations • 22 Mar 2022 • Amrita Bhattacharjee, Mansooreh Karami, Huan Liu
Contrastive self-supervised learning has become a prominent technique in representation learning.
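Contrastive self-supervised learning typically optimizes an InfoNCE-style objective that pulls augmented views of the same input together and pushes other samples apart. A minimal NumPy sketch of the standard NT-Xent loss (a common formulation, not this paper's implementation):

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """NT-Xent loss over a batch of N paired embeddings z1, z2 (N x D).
    Row i of z1 and row i of z2 are two views of the same input."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize
    sim = z @ z.T / temperature                         # scaled cosine sims
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                      # exclude self-pairs
    # the positive for row i is row (i + n) mod 2n
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()
```

The loss is lowest when each embedding is closest to its own augmented view, which is the property the representation-learning objective exploits.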
no code implementations • 18 Oct 2020 • Amrita Bhattacharjee, Kai Shu, Min Gao, Huan Liu
After a short overview of the directions explored in detection efforts, we discuss the inherent challenges in disinformation research and elaborate on computational and interdisciplinary approaches to mitigating disinformation.
no code implementations • 14 Jul 2020 • Kai Shu, Amrita Bhattacharjee, Faisal Alatawi, Tahora Nazer, Kaize Ding, Mansooreh Karami, Huan Liu
The creation, dissemination, and consumption of disinformation and fabricated content on social media is a growing concern, especially given the ease of access to such sources and the general lack of awareness that such false information exists.