no code implementations • 1 May 2024 • Prateek Verma, Minh-Hao Van, Xintao Wu
Vision-language models (VLMs) such as LLaVA, ChatGPT-4, and Gemini have recently shown impressive performance on tasks such as natural image captioning, visual question answering (VQA), and spatial reasoning.
no code implementations • 15 Mar 2024 • Minh-Hao Van, Alycia N. Carey, Xintao Wu
In this work, we study a difficult but realistic setting of training a deep learning model on noisy MR images to classify brain tumors.
no code implementations • 21 Feb 2024 • Minh-Hao Van, Prateek Verma, Xintao Wu
Visual language models (VLMs), such as LLaVA, Flamingo, or CLIP, have demonstrated impressive performance on various vision-language tasks.
no code implementations • 19 Feb 2024 • Vinay M. S., Minh-Hao Van, Xintao Wu
Despite its multiple benefits, the generalization performance of in-context learning (ICL) is sensitive to the selected demonstrations.
no code implementations • 12 Nov 2023 • Minh-Hao Van, Xintao Wu
In this work, we study the capability of VLMs on hateful meme detection and correction tasks under zero-shot prompting.
no code implementations • 15 Sep 2023 • Alycia N. Carey, Minh-Hao Van, Xintao Wu
How to properly set the privacy parameter in differential privacy (DP) has been an open question in DP research since it was first proposed in 2006.
1 code implementation • 15 Sep 2023 • Minh-Hao Van, Alycia N. Carey, Xintao Wu
While numerous defense methods have been proposed to prohibit potential poisoning attacks from untrusted data sources, most research works only defend against specific attacks, which leaves many avenues for an adversary to exploit.
no code implementations • 17 Oct 2021 • Minh-Hao Van, Wei Du, Xintao Wu, Aidong Lu
Our framework enables attackers to flexibly adjust the attack's focus on prediction accuracy or fairness and to accurately quantify the impact of each candidate point on both accuracy loss and fairness violation, thus producing effective poisoning samples.