Search Results for author: Minh-Hao Van

Found 8 papers, 1 paper with code

Beyond Human Vision: The Role of Large Vision Language Models in Microscope Image Analysis

no code implementations · 1 May 2024 · Prateek Verma, Minh-Hao Van, Xintao Wu

VLMs such as LLaVA, ChatGPT-4, and Gemini have recently shown impressive performance on tasks such as natural image captioning, visual question answering (VQA), and spatial reasoning.

Tasks: Image Captioning, Question Answering, +2

Robust Influence-based Training Methods for Noisy Brain MRI

no code implementations · 15 Mar 2024 · Minh-Hao Van, Alycia N. Carey, Xintao Wu

In this work, we study a difficult but realistic setting of training a deep learning model on noisy MR images to classify brain tumors.

On Large Visual Language Models for Medical Imaging Analysis: An Empirical Study

no code implementations · 21 Feb 2024 · Minh-Hao Van, Prateek Verma, Xintao Wu

Visual language models (VLMs), such as LLaVA, Flamingo, or CLIP, have demonstrated impressive performance on various visio-linguistic tasks.

In-Context Learning Demonstration Selection via Influence Analysis

no code implementations · 19 Feb 2024 · Vinay M. S., Minh-Hao Van, Xintao Wu

Despite its multiple benefits, ICL generalization performance is sensitive to the selected demonstrations.

Tasks: Few-Shot Learning, In-Context Learning

Detecting and Correcting Hate Speech in Multimodal Memes with Large Visual Language Model

no code implementations · 12 Nov 2023 · Minh-Hao Van, Xintao Wu

In this work, we study the ability of VLMs to perform hateful meme detection and hateful meme correction with zero-shot prompting.

Tasks: Language Modelling

Evaluating the Impact of Local Differential Privacy on Utility Loss via Influence Functions

no code implementations · 15 Sep 2023 · Alycia N. Carey, Minh-Hao Van, Xintao Wu

How to properly set the privacy parameter in differential privacy (DP) has been an open question in DP research since it was first proposed in 2006.
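The role of the privacy parameter ε can be illustrated with the classic randomized-response mechanism for local differential privacy, where ε directly sets the probability of reporting the true value; this is a standard textbook mechanism used here for illustration, not the paper's influence-function estimator:

```python
import math
import random

def randomized_response(bit, epsilon, rng=random.random):
    """Report the true bit with probability e^eps / (e^eps + 1),
    otherwise flip it. This satisfies epsilon-local differential
    privacy; smaller epsilon means stronger privacy but noisier
    (lower-utility) reports."""
    p_true = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if rng() < p_true else 1 - bit
```

At ε = 0 the mechanism reports a fair coin flip (no utility, maximal privacy); as ε grows, the report converges to the true bit, which is the utility/privacy trade-off that makes choosing ε difficult.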

HINT: Healthy Influential-Noise based Training to Defend against Data Poisoning Attacks

1 code implementation · 15 Sep 2023 · Minh-Hao Van, Alycia N. Carey, Xintao Wu

While numerous defense methods have been proposed to prevent poisoning attacks from untrusted data sources, most defend only against specific attacks, leaving many avenues for an adversary to exploit.

Tasks: Data Poisoning

Poisoning Attacks on Fair Machine Learning

no code implementations · 17 Oct 2021 · Minh-Hao Van, Wei Du, Xintao Wu, Aidong Lu

Our framework enables attackers to flexibly adjust the attack's focus between prediction accuracy and fairness, and to accurately quantify the impact of each candidate point on both accuracy loss and fairness violation, thus producing effective poisoning samples.
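The candidate-selection idea described above can be sketched as a weighted combination of the two objectives. This is a simplified illustration, not the paper's exact formulation: `alpha` is a hypothetical trade-off knob, and the per-point impact scores are assumed to be given (in the paper they are quantified by the framework itself):

```python
import numpy as np

def poisoning_scores(acc_loss, fairness_violation, alpha=0.5):
    """Score candidate poisoning points by a weighted combination of
    the accuracy loss and the fairness violation each would induce.
    alpha=1 focuses the attack on accuracy; alpha=0 on fairness.
    (Hypothetical objective for illustration only.)"""
    acc_loss = np.asarray(acc_loss, dtype=float)
    fairness_violation = np.asarray(fairness_violation, dtype=float)
    return alpha * acc_loss + (1.0 - alpha) * fairness_violation

def select_poison(acc_loss, fairness_violation, k, alpha=0.5):
    """Return indices of the k candidates with the highest combined impact."""
    scores = poisoning_scores(acc_loss, fairness_violation, alpha)
    return np.argsort(scores)[::-1][:k]
```

Sliding `alpha` between 0 and 1 reproduces the "adjustable focus" behavior: the same candidate pool yields different poisoning sets depending on whether the attacker targets accuracy degradation or fairness violation.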

Tasks: BIG-bench Machine Learning, Fairness
