Search Results for author: Xintao Wu

Found 56 papers, 12 papers with code

Privacy Preserving Prompt Engineering: A Survey

no code implementations • 9 Apr 2024 • Kennedy Edemacu, Xintao Wu

As a result, the sizes of these models have grown notably in recent years, prompting researchers to adopt the term large language models (LLMs) to characterize the larger-sized PLMs.

In-Context Learning, Privacy Preserving, +1

Robust Influence-based Training Methods for Noisy Brain MRI

no code implementations • 15 Mar 2024 • Minh-Hao Van, Alycia N. Carey, Xintao Wu

In this work, we study a difficult but realistic setting of training a deep learning model on noisy MR images to classify brain tumors.

DP-TabICL: In-Context Learning with Differentially Private Tabular Data

no code implementations • 8 Mar 2024 • Alycia N. Carey, Karuna Bhaila, Kennedy Edemacu, Xintao Wu

In-context learning (ICL) enables large language models (LLMs) to adapt to new tasks by conditioning on demonstrations of question-answer pairs, and it has been shown to perform comparably to costly model retraining and fine-tuning.

In-Context Learning
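
To make the conditioning step concrete, here is a toy sketch of how an ICL prompt is assembled from question-answer demonstrations; the helper name and the demonstrations are hypothetical, not from the paper:

```python
# Minimal sketch of in-context learning: the model adapts to a new task
# purely through question-answer demonstrations prepended to the prompt.
# build_icl_prompt and the demos below are hypothetical illustrations.

def build_icl_prompt(demonstrations, query):
    """Concatenate QA demonstrations and a new question into one prompt."""
    parts = [f"Q: {q}\nA: {a}" for q, a in demonstrations]
    parts.append(f"Q: {query}\nA:")  # the model completes this answer
    return "\n\n".join(parts)

demos = [("Is 7 prime?", "Yes"), ("Is 9 prime?", "No")]
print(build_icl_prompt(demos, "Is 11 prime?"))
```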

On Large Visual Language Models for Medical Imaging Analysis: An Empirical Study

no code implementations • 21 Feb 2024 • Minh-Hao Van, Prateek Verma, Xintao Wu

Visual language models (VLMs), such as LLaVA, Flamingo, or CLIP, have demonstrated impressive performance on various visio-linguistic tasks.

In-Context Learning Demonstration Selection via Influence Analysis

no code implementations • 19 Feb 2024 • Vinay M. S., Minh-Hao Van, Xintao Wu

Despite its many benefits, ICL's generalization performance is sensitive to the selected demonstrations.

Few-Shot Learning, In-Context Learning

Dynamic Environment Responsive Online Meta-Learning with Fairness Awareness

no code implementations • 19 Feb 2024 • Chen Zhao, Feng Mi, Xintao Wu, Kai Jiang, Latifur Khan, Feng Chen

Theoretical analysis yields sub-linear upper bounds for both loss regret and the cumulative violation of fairness constraints.

Fairness, Meta-Learning
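
For readers unfamiliar with this style of result, the two bounded quantities typically take the following form (illustrative notation only; the paper's exact definitions and exponents may differ):

```latex
% Loss regret and cumulative fairness violation over horizon T,
% both bounded sub-linearly, i.e. with exponents strictly below 1.
\mathrm{Regret}(T) = \sum_{t=1}^{T} f_t(\theta_t) - \min_{\theta} \sum_{t=1}^{T} f_t(\theta) = O(T^{a}), \quad a < 1
\qquad
\mathrm{Violation}(T) = \sum_{t=1}^{T} \bigl[ g_t(\theta_t) \bigr]_{+} = O(T^{b}), \quad b < 1
```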

Supervised Algorithmic Fairness in Distribution Shifts: A Survey

no code implementations • 2 Feb 2024 • Yujie Lin, Dong Li, Chen Zhao, Xintao Wu, Qin Tian, Minglai Shao

Supervised fairness-aware machine learning under distribution shifts is an emerging field that addresses the challenge of maintaining equitable and unbiased predictions when faced with changes in data distributions from source to target domains.

Fairness

Robustly Improving Bandit Algorithms with Confounded and Selection Biased Offline Data: A Causal Approach

no code implementations • 20 Dec 2023 • Wen Huang, Xintao Wu

A major obstacle in this setting is the existence of compound biases from the observational data.

Selection bias

Fairness-Aware Domain Generalization under Covariate and Dependence Shifts

no code implementations • 23 Nov 2023 • Chen Zhao, Kai Jiang, Xintao Wu, Haoliang Wang, Latifur Khan, Christan Grant, Feng Chen

Generalizing an invariant classifier from source domains to shifted target domains while simultaneously accounting for model fairness is a substantial and complex challenge in machine learning.

Domain Generalization, Fairness

Detecting and Correcting Hate Speech in Multimodal Memes with Large Visual Language Model

no code implementations • 12 Nov 2023 • Minh-Hao Van, Xintao Wu

In this work, we study the ability of VLMs on hateful meme detection and hateful meme correction tasks with zero-shot prompting.

Language Modelling

HINT: Healthy Influential-Noise based Training to Defend against Data Poisoning Attacks

1 code implementation • 15 Sep 2023 • Minh-Hao Van, Alycia N. Carey, Xintao Wu

While numerous defense methods have been proposed to prevent potential poisoning attacks from untrusted data sources, most works defend only against specific attacks, leaving many avenues for an adversary to exploit.

Data Poisoning

Evaluating the Impact of Local Differential Privacy on Utility Loss via Influence Functions

no code implementations • 15 Sep 2023 • Alycia N. Carey, Minh-Hao Van, Xintao Wu

How to properly set the privacy parameter in differential privacy (DP) has been an open question in DP research since it was first proposed in 2006.
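
For context on why this choice matters: in the standard Laplace mechanism, ε directly sets the noise scale, so a smaller budget means larger expected utility loss. A minimal sketch of that baseline trade-off (the classical mechanism, not the paper's influence-function analysis):

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value with epsilon-DP Laplace noise.

    Smaller epsilon => stronger privacy => larger noise scale =>
    more utility loss, the trade-off the paper quantifies."""
    rng = np.random.default_rng()
    return true_value + rng.laplace(0.0, sensitivity / epsilon)

# A count query (sensitivity 1) under a strict and a loose budget.
for eps in (0.1, 1.0):
    print(eps, laplace_mechanism(100.0, sensitivity=1.0, epsilon=eps))
```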

Local Differential Privacy in Graph Neural Networks: a Reconstruction Approach

1 code implementation • 15 Sep 2023 • Karuna Bhaila, Wen Huang, Yongkai Wu, Xintao Wu

We focus on a decentralized notion of Differential Privacy, namely Local Differential Privacy, and apply randomization mechanisms to perturb both feature and label data at the node level before the data is collected by a central server for model training.
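
A minimal sketch of one such node-level randomization mechanism, binary randomized response, which satisfies ε-local DP for each node's label (illustrative only; the paper's mechanisms for features and labels may differ):

```python
import numpy as np

def randomized_response(label, epsilon):
    """Keep the true binary label with probability e^eps / (e^eps + 1),
    flip it otherwise; this satisfies epsilon-local DP per node."""
    rng = np.random.default_rng()
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return label if rng.random() < p_keep else 1 - label

# Each node perturbs locally; the server only ever sees noisy labels.
private_labels = [randomized_response(y, epsilon=1.0) for y in (0, 1, 1, 0)]
```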

On Prediction Feature Assignment in the Heckman Selection Model

no code implementations • 14 Sep 2023 • Huy Mai, Xintao Wu

This paper focuses on one classic instance of MNAR sample selection bias where a subset of samples have non-randomly missing outcomes.

Selection bias

Robust Fraud Detection via Supervised Contrastive Learning

no code implementations • 19 Aug 2023 • Vinay M. S., Shuhan Yuan, Xintao Wu

In many real-world scenarios, only a few labeled malicious sessions and a large number of normal sessions are available.

Contrastive Learning, Data Augmentation, +1

Learning Causally Disentangled Representations via the Principle of Independent Causal Mechanisms

1 code implementation • 2 Jun 2023 • Aneesh Komanduri, Yongkai Wu, Feng Chen, Xintao Wu

We propose ICM-VAE, a framework for learning causally disentangled representations supervised by causally related observed labels.

counterfactual, Disentanglement

Towards Fair Disentangled Online Learning for Changing Environments

no code implementations • 31 May 2023 • Chen Zhao, Feng Mi, Xintao Wu, Kai Jiang, Latifur Khan, Christan Grant, Feng Chen

To this end, we propose a novel algorithm under the assumption that the data collected at each time step can be disentangled into two representations: an environment-invariant semantic factor and an environment-specific variation factor.

Fairness

A Robust Classifier Under Missing-Not-At-Random Sample Selection Bias

no code implementations • 25 May 2023 • Huy Mai, Wen Huang, Wei Du, Xintao Wu

In this paper, we propose BiasCorr, an algorithm that improves on Greene's method by modifying the original training set in order for a classifier to learn under MNAR sample selection bias.

Robust classification, Selection bias

Heterogeneous Randomized Response for Differential Privacy in Graph Neural Networks

1 code implementation • 10 Nov 2022 • Khang Tran, Phung Lai, NhatHai Phan, Issa Khalil, Yao Ma, Abdallah Khreishah, My Thai, Xintao Wu

Graph neural networks (GNNs) are susceptible to privacy inference attacks (PIAs), given their ability to learn joint representation from features and edges among nodes in graph data.

Fine-grained Anomaly Detection in Sequential Data via Counterfactual Explanations

no code implementations • 9 Oct 2022 • He Cheng, Depeng Xu, Shuhan Yuan, Xintao Wu

Given a sequence that is detected as anomalous, anomalous entry detection can be viewed as an interpretable machine learning task, because identifying the anomalous entries in the sequence provides an interpretation of the detection result.

Anomaly Detection, counterfactual, +1

Adaptive Fairness-Aware Online Meta-Learning for Changing Environments

no code implementations • 20 May 2022 • Chen Zhao, Feng Mi, Xintao Wu, Kai Jiang, Latifur Khan, Feng Chen

Furthermore, to determine good model parameters at each round, we propose a novel adaptive fairness-aware online meta-learning algorithm, FairSAOML, which adapts to changing environments in both bias control and model precision.

Fairness, Meta-Learning

Trustworthy Anomaly Detection: A Survey

no code implementations • 15 Feb 2022 • Shuhan Yuan, Xintao Wu

Anomaly detection has a wide range of real-world applications, such as bank fraud detection and cyber intrusion detection.

Anomaly Detection, Fairness, +2

How to Backdoor HyperNetwork in Personalized Federated Learning?

no code implementations • 18 Jan 2022 • Phung Lai, NhatHai Phan, Issa Khalil, Abdallah Khreishah, Xintao Wu

This paper explores previously unknown backdoor risks in HyperNet-based personalized federated learning (HyperNetFL) through poisoning attacks.

Data Poisoning, Personalized Federated Learning

The Fairness Field Guide: Perspectives from Social and Formal Sciences

no code implementations • 13 Jan 2022 • Alycia N. Carey, Xintao Wu

Over the past several years, a slew of different methods to measure the fairness of a machine learning model have been proposed.

BIG-bench Machine Learning, Fairness, +2

Poisoning Attacks on Fair Machine Learning

no code implementations • 17 Oct 2021 • Minh-Hao Van, Wei Du, Xintao Wu, Aidong Lu

Our framework enables attackers to flexibly adjust the attack's focus on prediction accuracy or fairness and accurately quantify the impact of each candidate point to both accuracy loss and fairness violation, thus producing effective poisoning samples.

BIG-bench Machine Learning, Fairness

Fair Regression under Sample Selection Bias

no code implementations • 8 Oct 2021 • Wei Du, Xintao Wu, Hanghang Tong

However, all previous fair regression research assumed that the training data and testing data are drawn from the same distribution.

Attribute, Fairness, +2

Achieving Counterfactual Fairness for Causal Bandit

no code implementations • 21 Sep 2021 • Wen Huang, Lu Zhang, Xintao Wu

In online recommendation, customers arrive sequentially and stochastically from an underlying distribution, and the online decision model recommends an item to each arriving individual based on some strategy.

Causal Inference, counterfactual, +1

MathBERT: A Pre-trained Language Model for General NLP Tasks in Mathematics Education

1 code implementation • 2 Jun 2021 • Jia Tracy Shen, Michiharu Yamashita, Ethan Prihar, Neil Heffernan, Xintao Wu, Ben Graff, Dongwon Lee

Due to the nature of mathematical texts, which often use domain-specific vocabulary along with equations and math symbols, we posit that the development of a new BERT model for mathematics would be useful for many mathematical downstream tasks.

Knowledge Tracing, Language Modelling, +2

Robust Fairness-aware Learning Under Sample Selection Bias

no code implementations • 24 May 2021 • Wei Du, Xintao Wu

However, this assumption is often violated in the real world due to the sample selection bias between the training and test data.

Fairness, Selection bias

InfoFair: Information-Theoretic Intersectional Fairness

no code implementations • 24 May 2021 • Jian Kang, Tiankai Xie, Xintao Wu, Ross Maciejewski, Hanghang Tong

The vast majority of existing work on group fairness, with a few exceptions, primarily focuses on debiasing with respect to a single sensitive attribute, despite the fact that multiple sensitive attributes (e.g., gender, race, marital status) often co-exist in real-world data.

Attribute, BIG-bench Machine Learning, +1
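
To make the intersectional setting concrete, a common diagnostic is to compare positive-prediction rates across joint subgroups of several sensitive attributes rather than one at a time; a sketch on hypothetical data (not the paper's information-theoretic method):

```python
import pandas as pd

# Hypothetical predictions: parity w.r.t. gender alone can mask gaps
# at the intersection of gender and race, the setting InfoFair targets.
df = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "F", "M"],
    "race":   ["A", "B", "A", "B", "A", "B"],
    "y_hat":  [1, 0, 1, 1, 1, 0],
})

rates = df.groupby(["gender", "race"])["y_hat"].mean()
print(rates)
print("max intersectional gap:", rates.max() - rates.min())
```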

Classifying Math KCs via Task-Adaptive Pre-Trained BERT

no code implementations • 24 May 2021 • Jia Tracy Shen, Michiharu Yamashita, Ethan Prihar, Neil Heffernan, Xintao Wu, Sean McGrew, Dongwon Lee

Educational content labeled with proper knowledge components (KCs) is particularly useful to teachers and content organizers.

Math, Task 2

Achieving User-Side Fairness in Contextual Bandits

no code implementations • 22 Oct 2020 • Wen Huang, Kevin Labille, Xintao Wu, Dongwon Lee, Neil Heffernan

Personalized recommendation based on multi-armed bandit (MAB) algorithms has been shown to lead to high utility and efficiency, as it can dynamically adapt the recommendation strategy based on feedback.

Fairness, Multi-Armed Bandits
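
For orientation, the kind of MAB strategy such recommenders build on can be as simple as UCB1, which picks the arm with the highest optimistic reward estimate; a generic sketch (not the paper's fairness-constrained algorithm):

```python
import math

class UCB1:
    """Generic UCB1 bandit: choose the arm with the highest optimistic
    estimate, then update that arm's running mean with the reward."""

    def __init__(self, n_arms):
        self.counts = [0] * n_arms
        self.means = [0.0] * n_arms
        self.t = 0

    def select(self):
        self.t += 1
        for arm, count in enumerate(self.counts):
            if count == 0:  # play every arm once before using bounds
                return arm
        return max(
            range(len(self.counts)),
            key=lambda a: self.means[a]
            + math.sqrt(2 * math.log(self.t) / self.counts[a]),
        )

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.means[arm] += (reward - self.means[arm]) / self.counts[arm]
```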

Fairness-aware Agnostic Federated Learning

no code implementations • 10 Oct 2020 • Wei Du, Depeng Xu, Xintao Wu, Hanghang Tong

In this paper, we develop a fairness-aware agnostic federated learning framework (AgnosticFair) to deal with the challenge of unknown testing distribution.

Fairness, Federated Learning

Deep Learning for Insider Threat Detection: Review, Challenges and Opportunities

no code implementations • 25 May 2020 • Shuhan Yuan, Xintao Wu

We then discuss such challenges and suggest future research directions that have the potential to address challenges and further boost the performance of deep learning for insider threat detection.

BIG-bench Machine Learning, Feature Engineering

Removing Disparate Impact of Differentially Private Stochastic Gradient Descent on Model Accuracy

no code implementations • 8 Mar 2020 • Depeng Xu, Wei Du, Xintao Wu

In this work, we analyze the inequality in utility loss by differential privacy and propose a modified differentially private stochastic gradient descent (DPSGD), called DPSGD-F, to remove the potential disparate impact of differential privacy on the protected group.
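
For orientation, vanilla DPSGD clips each per-example gradient and adds Gaussian noise to the averaged update; a schematic NumPy sketch of that baseline (DPSGD-F's group-aware modification is not shown):

```python
import numpy as np

def dpsgd_step(theta, per_example_grads, clip_norm, noise_mult, lr):
    """One vanilla DPSGD step: clip each example's gradient to clip_norm,
    average, add Gaussian noise scaled to the clip bound, then descend."""
    rng = np.random.default_rng()
    clipped = [
        g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        for g in per_example_grads
    ]
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(clipped),
                       size=avg.shape)
    return theta - lr * (avg + noise)
```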

Achieving Differential Privacy in Vertically Partitioned Multiparty Learning

no code implementations • 11 Nov 2019 • Depeng Xu, Shuhan Yuan, Xintao Wu

Evaluation on real-world and synthetic datasets for linear and logistic regressions shows the effectiveness of our proposed method.

Privacy Preserving

Fairness through Equality of Effort

no code implementations • 11 Nov 2019 • Wen Huang, Yongkai Wu, Lu Zhang, Xintao Wu

We develop algorithms for determining whether an individual or a group of individuals is discriminated against in terms of equality of effort.

BIG-bench Machine Learning, counterfactual, +1

PC-Fairness: A Unified Framework for Measuring Causality-based Fairness

no code implementations • NeurIPS 2019 • Yongkai Wu, Lu Zhang, Xintao Wu, Hanghang Tong

A recent trend in fair machine learning is to define fairness via causality-based notions, which concern the causal connection between protected attributes and decisions.

counterfactual, Fairness

Heterogeneous Gaussian Mechanism: Preserving Differential Privacy in Deep Learning with Provable Robustness

4 code implementations • 2 Jun 2019 • NhatHai Phan, Minh Vu, Yang Liu, Ruoming Jin, Dejing Dou, Xintao Wu, My T. Thai

In this paper, we propose a novel Heterogeneous Gaussian Mechanism (HGM) to preserve differential privacy in deep neural networks, with provable robustness against adversarial examples.
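
For contrast with the heterogeneous variant proposed here, the classical Gaussian mechanism adds the same noise scale to every coordinate, with σ calibrated by the standard (ε, δ) bound; a minimal sketch of that baseline:

```python
import numpy as np

def gaussian_mechanism(value, l2_sensitivity, epsilon, delta):
    """Classical (homogeneous) Gaussian mechanism. For epsilon < 1, the
    standard bound sigma >= sqrt(2 ln(1.25/delta)) * S / epsilon gives
    (epsilon, delta)-DP; HGM instead varies the noise across coordinates."""
    rng = np.random.default_rng()
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * l2_sensitivity / epsilon
    return np.asarray(value, dtype=float) + rng.normal(0.0, sigma, np.shape(value))
```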

Fairness-aware Classification: Criterion, Convexity, and Bounds

no code implementations • 13 Sep 2018 • Yongkai Wu, Lu Zhang, Xintao Wu

In this paper, we propose a general framework for learning fair classifiers which addresses previous limitations.

Classification, Computational Efficiency, +2

SAFE: A Neural Survival Analysis Model for Fraud Early Detection

3 code implementations • 12 Sep 2018 • Panpan Zheng, Shuhan Yuan, Xintao Wu

However, there is usually a gap between the time that a user commits a fraudulent action and the time that the user is suspended by the platform.

Survival Analysis

FairGAN: Fairness-aware Generative Adversarial Networks

no code implementations • 28 May 2018 • Depeng Xu, Shuhan Yuan, Lu Zhang, Xintao Wu

In this paper, we focus on fair data generation that ensures the generated data is discrimination-free.

Fairness, General Classification

One-Class Adversarial Nets for Fraud Detection

1 code implementation • 5 Mar 2018 • Panpan Zheng, Shuhan Yuan, Xintao Wu, Jun Li, Aidong Lu

Currently, most of the fraud detection approaches require a training dataset that contains records of both benign and malicious users.

Fraud Detection, One-Class Classification

On Discrimination Discovery and Removal in Ranked Data using Causal Graph

no code implementations • 5 Mar 2018 • Yongkai Wu, Lu Zhang, Xintao Wu

Existing methods for fairness-aware ranking are mainly based on statistical parity, which cannot measure the true discriminatory effect, since discrimination is causal.

Fairness

Adaptive Laplace Mechanism: Differential Privacy Preservation in Deep Learning

2 code implementations • 18 Sep 2017 • NhatHai Phan, Xintao Wu, Han Hu, Dejing Dou

In this paper, we focus on developing a novel mechanism to preserve differential privacy in deep neural networks, such that: (1) the privacy budget consumption is totally independent of the number of training steps; (2) noise is adaptively injected into features based on the contribution of each feature to the output; and (3) the mechanism can be applied to a variety of different deep neural networks.
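
One way to picture property (2): split the privacy budget across features in proportion to a relevance score, so that less influential features receive smaller budgets and therefore heavier Laplace noise. This is a schematic sketch of that idea, not the paper's exact allocation rule:

```python
import numpy as np

def adaptive_laplace(features, relevance, total_epsilon, sensitivity=1.0):
    """Allocate the budget across features proportionally to relevance;
    higher-relevance features get more budget and hence less noise."""
    rng = np.random.default_rng()
    budgets = total_epsilon * relevance / relevance.sum()
    return features + rng.laplace(0.0, sensitivity / budgets)

x = np.array([0.7, 0.1, 0.9])
r = np.array([0.5, 0.1, 0.4])  # hypothetical contribution scores
noisy_x = adaptive_laplace(x, r, total_epsilon=2.0)
```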

Preserving Differential Privacy in Convolutional Deep Belief Networks

2 code implementations • 25 Jun 2017 • NhatHai Phan, Xintao Wu, Dejing Dou

However, only a few scientific studies on preserving privacy in deep learning have been conducted.

Wikipedia Vandal Early Detection: from User Behavior to User Embedding

1 code implementation • 3 Jun 2017 • Shuhan Yuan, Panpan Zheng, Xintao Wu, Yang Xiang

In particular, we develop a multi-source long short-term memory network (M-LSTM) to model user behaviors, using a variety of user edit aspects as inputs, including the history of edit reversion information, edit page titles, and categories.

Task-specific Word Identification from Short Texts Using a Convolutional Neural Network

no code implementations • 3 Jun 2017 • Shuhan Yuan, Xintao Wu, Yang Xiang

The other case study on fake review detection shows that our approach can identify the fake-review words/phrases.

Spectrum-based deep neural networks for fraud detection

no code implementations • 3 Jun 2017 • Shuhan Yuan, Xintao Wu, Jun Li, Aidong Lu

Due to the small dimension of spectral coordinates (compared with the dimension of the adjacency matrix derived from a graph), training deep neural networks becomes feasible.

Fraud Detection
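
A minimal sketch of the dimensionality reduction described above: project each node onto the top-k eigenvectors of the adjacency matrix and use those k-dimensional spectral coordinates as network inputs (k and the downstream architecture are the paper's choices, not shown):

```python
import numpy as np

def spectral_coordinates(adjacency, k):
    """Map an n-by-n adjacency matrix to n-by-k node features: each
    node's coordinates in the top-k adjacency eigenspace, so the
    classifier's input dimension is k rather than n."""
    eigvals, eigvecs = np.linalg.eigh(np.asarray(adjacency, dtype=float))
    top = np.argsort(np.abs(eigvals))[::-1][:k]  # k largest by magnitude
    return eigvecs[:, top]

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
coords = spectral_coordinates(A, k=2)  # 4 nodes -> 2-D coordinates
```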

Achieving non-discrimination in prediction

no code implementations • 28 Feb 2017 • Lu Zhang, Yongkai Wu, Xintao Wu

Based on the results, we develop a two-phase framework for constructing a discrimination-free classifier with a theoretical guarantee.

On Spectral Analysis of Directed Signed Graphs

no code implementations • 23 Dec 2016 • Yuemeng Li, Xintao Wu, Aidong Lu

It has been shown that the adjacency eigenspace of a network contains key information of its underlying structure.

Clustering

A causal framework for discovering and removing direct and indirect discrimination

no code implementations • 22 Nov 2016 • Lu Zhang, Yongkai Wu, Xintao Wu

In this paper, we investigate the problem of discovering both direct and indirect discrimination from historical data and removing the discriminatory effects before the data is used for predictive analysis (e.g., building classifiers).

Decision Making

Achieving non-discrimination in data release

no code implementations • 22 Nov 2016 • Lu Zhang, Yongkai Wu, Xintao Wu

Discrimination discovery and prevention/removal are increasingly important tasks in data mining.

Attribute
