no code implementations • 9 Apr 2024 • Kennedy Edemacu, Xintao Wu
As a result, the sizes of these models have grown notably in recent years, prompting researchers to adopt the term large language models (LLMs) to characterize the larger-sized PLMs.
no code implementations • 15 Mar 2024 • Minh-Hao Van, Alycia N. Carey, Xintao Wu
In this work, we study a difficult but realistic setting of training a deep learning model on noisy MR images to classify brain tumors.
no code implementations • 8 Mar 2024 • Alycia N. Carey, Karuna Bhaila, Kennedy Edemacu, Xintao Wu
In-context learning (ICL) enables large language models (LLMs) to adapt to new tasks by conditioning on demonstrations of question-answer pairs, and it has been shown to perform comparably to costly model retraining and fine-tuning.
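The mechanics are simple to illustrate: the demonstrations are concatenated ahead of the test question and the frozen model completes the answer. Below is a minimal, hypothetical sketch of such prompt construction; the demonstrations and prompt format are illustrative, not taken from the paper.

```python
# Minimal sketch of in-context learning: condition the model on a few
# question-answer demonstrations, then append the test question.
# The demonstrations and prompt format here are illustrative only.

demos = [
    ("What is the capital of France?", "Paris"),
    ("What is 2 + 3?", "5"),
]

def build_icl_prompt(demos, test_question):
    """Concatenate demonstrations into a few-shot prompt."""
    lines = [f"Q: {q}\nA: {a}" for q, a in demos]
    lines.append(f"Q: {test_question}\nA:")
    return "\n\n".join(lines)

prompt = build_icl_prompt(demos, "What is the capital of Japan?")
# `prompt` is then sent to a frozen LLM; no gradient updates are needed.
print(prompt)
```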
no code implementations • 21 Feb 2024 • Minh-Hao Van, Prateek Verma, Xintao Wu
Visual language models (VLMs), such as LLaVA, Flamingo, or CLIP, have demonstrated impressive performance on various visio-linguistic tasks.
no code implementations • 19 Feb 2024 • Vinay M. S., Minh-Hao Van, Xintao Wu
Despite its multiple benefits, ICL generalization performance is sensitive to the selected demonstrations.
no code implementations • 19 Feb 2024 • Chen Zhao, Feng Mi, Xintao Wu, Kai Jiang, Latifur Khan, Feng Chen
Theoretical analysis yields sub-linear upper bounds for both loss regret and the cumulative violation of fairness constraints.
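For concreteness, one plausible formalization of the two bounded quantities (the notation below is assumed, not taken from the paper): with per-round losses \(\ell_t\) and a fairness constraint \(g \le 0\), sub-linear means both quantities grow as \(o(T)\).

```latex
% Regret against the best fixed parameter in hindsight, and the
% cumulative violation of the fairness constraint g (assumed notation):
\mathrm{Regret}_T = \sum_{t=1}^{T} \ell_t(\theta_t) - \min_{\theta} \sum_{t=1}^{T} \ell_t(\theta),
\qquad
\mathrm{Violation}_T = \sum_{t=1}^{T} \max\{0,\, g(\theta_t)\}.
```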
no code implementations • 2 Feb 2024 • Yujie Lin, Dong Li, Chen Zhao, Xintao Wu, Qin Tian, Minglai Shao
Supervised fairness-aware machine learning under distribution shifts is an emerging field that addresses the challenge of maintaining equitable and unbiased predictions when faced with changes in data distributions from source to target domains.
no code implementations • 20 Dec 2023 • Wen Huang, Xintao Wu
A major obstacle in this setting is the existence of compound biases from the observational data.
no code implementations • 23 Nov 2023 • Chen Zhao, Kai Jiang, Xintao Wu, Haoliang Wang, Latifur Khan, Christan Grant, Feng Chen
Achieving the generalization of an invariant classifier from source domains to shifted target domains while simultaneously considering model fairness is a substantial and complex challenge in machine learning.
no code implementations • 12 Nov 2023 • Minh-Hao Van, Xintao Wu
In this work, we study the ability of VLMs on hateful meme detection and hateful meme correction tasks with zero-shot prompting.
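As a rough illustration of zero-shot prompting with a VLM, the sketch below scores a meme image against two candidate text prompts using CLIP via Hugging Face transformers; the label prompts and file path are hypothetical, and the paper's actual prompting setup may differ.

```python
# Hypothetical sketch of zero-shot hateful-meme classification with CLIP.
# The label prompts are illustrative; "meme.png" is a placeholder path.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("meme.png")
prompts = ["a hateful meme", "a benign meme"]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)  # scores over the two prompts
print(dict(zip(prompts, probs[0].tolist())))
```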
no code implementations • 17 Oct 2023 • Aneesh Komanduri, Xintao Wu, Yongkai Wu, Feng Chen
Deep generative models have shown tremendous success in data density estimation and data generation from finite samples.
1 code implementation • 15 Sep 2023 • Minh-Hao Van, Alycia N. Carey, Xintao Wu
While numerous defense methods have been proposed to prohibit potential poisoning attacks from untrusted data sources, most research works only defend against specific attacks, which leaves many avenues for an adversary to exploit.
no code implementations • 15 Sep 2023 • Alycia N. Carey, Minh-Hao Van, Xintao Wu
How to properly set the privacy parameter in differential privacy (DP) has been an open question in DP research since it was first proposed in 2006.
1 code implementation • 15 Sep 2023 • Karuna Bhaila, Wen Huang, Yongkai Wu, Xintao Wu
We focus on a decentralized notion of Differential Privacy, namely Local Differential Privacy, and apply randomization mechanisms to perturb both feature and label data at the node level before the data is collected by a central server for model training.
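A minimal sketch of this node-level perturbation, assuming binary features and labels and using classical randomized response (the parameter names are illustrative, not the paper's exact mechanisms):

```python
import numpy as np

# Node-level local differential privacy sketch: each node randomizes its
# own binary data with randomized response before sending it to the server.

def randomized_response(bit: int, epsilon: float, rng) -> int:
    """Keep the true bit with probability e^eps / (e^eps + 1), else flip it."""
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return bit if rng.random() < p_keep else 1 - bit

rng = np.random.default_rng()
epsilon = 1.0
features = np.array([1, 0, 1, 1])  # toy binary node features
label = 1

# The server only ever sees the randomized values.
noisy_features = np.array([randomized_response(int(b), epsilon, rng) for b in features])
noisy_label = randomized_response(label, epsilon, rng)
```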
no code implementations • 14 Sep 2023 • Huy Mai, Xintao Wu
This paper focuses on one classic instance of MNAR sample selection bias where a subset of samples have non-randomly missing outcomes.
no code implementations • 19 Aug 2023 • Vinay M. S., Shuhan Yuan, Xintao Wu
In many real-world scenarios, only a few labeled malicious sessions and a large number of normal sessions are available.
1 code implementation • 2 Jun 2023 • Aneesh Komanduri, Yongkai Wu, Feng Chen, Xintao Wu
We propose ICM-VAE, a framework for learning causally disentangled representations supervised by causally related observed labels.
no code implementations • 31 May 2023 • Chen Zhao, Feng Mi, Xintao Wu, Kai Jiang, Latifur Khan, Christan Grant, Feng Chen
To this end, in this paper, we propose a novel algorithm under the assumption that data collected at each time can be disentangled with two representations, an environment-invariant semantic factor and an environment-specific variation factor.
no code implementations • 25 May 2023 • Huy Mai, Wen Huang, Wei Du, Xintao Wu
In this paper, we propose BiasCorr, an algorithm that improves on Greene's method by modifying the original training set in order for a classifier to learn under MNAR sample selection bias.
1 code implementation • 10 Nov 2022 • Khang Tran, Phung Lai, NhatHai Phan, Issa Khalil, Yao Ma, Abdallah Khreishah, My Thai, Xintao Wu
Graph neural networks (GNNs) are susceptible to privacy inference attacks (PIAs), given their ability to learn joint representation from features and edges among nodes in graph data.
no code implementations • 9 Oct 2022 • He Cheng, Depeng Xu, Shuhan Yuan, Xintao Wu
Given a sequence that is detected as anomalous, anomalous entry detection can be framed as an interpretable machine learning task, since identifying the anomalous entries within the sequence provides an interpretation of the detection result.
no code implementations • 20 May 2022 • Chen Zhao, Feng Mi, Xintao Wu, Kai Jiang, Latifur Khan, Feng Chen
Furthermore, to determine a good model parameter at each round, we propose a novel adaptive fairness-aware online meta-learning algorithm, namely FairSAOML, which is able to adapt to changing environments in both bias control and model precision.
no code implementations • 15 Feb 2022 • Shuhan Yuan, Xintao Wu
Anomaly detection has a wide range of real-world applications, such as bank fraud detection and cyber intrusion detection.
no code implementations • 18 Jan 2022 • Phung Lai, NhatHai Phan, Issa Khalil, Abdallah Khreishah, Xintao Wu
This paper explores previously unknown backdoor risks in HyperNet-based personalized federated learning (HyperNetFL) through poisoning attacks.
no code implementations • 13 Jan 2022 • Alycia N. Carey, Xintao Wu
Over the past several years, a slew of different methods to measure the fairness of a machine learning model have been proposed.
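Two of the most common group-fairness criteria, stated for a binary classifier \(\hat{Y}\) and a binary sensitive attribute \(A\) (illustrative notation, not specific to this paper):

```latex
% Statistical (demographic) parity:
P(\hat{Y}=1 \mid A=0) = P(\hat{Y}=1 \mid A=1)
% Equal opportunity (parity of true positive rates):
P(\hat{Y}=1 \mid A=0, Y=1) = P(\hat{Y}=1 \mid A=1, Y=1)
```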
no code implementations • 17 Oct 2021 • Minh-Hao Van, Wei Du, Xintao Wu, Aidong Lu
Our framework enables attackers to flexibly adjust the attack's focus on prediction accuracy or fairness and accurately quantify the impact of each candidate point to both accuracy loss and fairness violation, thus producing effective poisoning samples.
no code implementations • 8 Oct 2021 • Wei Du, Xintao Wu, Hanghang Tong
However, all previous fair regression research assumed that the training data and testing data are drawn from the same distribution.
no code implementations • 21 Sep 2021 • Wen Huang, Lu Zhang, Xintao Wu
In online recommendation, customers arrive in a sequential and stochastic manner from an underlying distribution and the online decision model recommends a chosen item for each arriving individual based on some strategy.
1 code implementation • 2 Jun 2021 • Jia Tracy Shen, Michiharu Yamashita, Ethan Prihar, Neil Heffernan, Xintao Wu, Ben Graff, Dongwon Lee
Due to the nature of mathematical texts, which often use domain-specific vocabulary along with equations and math symbols, we posit that the development of a new BERT model for mathematics would be useful for many mathematical downstream tasks.
no code implementations • 24 May 2021 • Wei Du, Xintao Wu
However, the assumption is often violated in the real world due to the sample selection bias between the training and test data.
no code implementations • 24 May 2021 • Jian Kang, Tiankai Xie, Xintao Wu, Ross Maciejewski, Hanghang Tong
The vast majority of the existing works on group fairness, with a few exceptions, primarily focus on debiasing with respect to a single sensitive attribute, despite the fact that the co-existence of multiple sensitive attributes (e.g., gender, race, marital status) is common in real-world settings.
no code implementations • 24 May 2021 • Jia Tracy Shen, Michiharu Yamashita, Ethan Prihar, Neil Heffernan, Xintao Wu, Sean McGrew, Dongwon Lee
Educational content labeled with proper knowledge components (KCs) is particularly useful to teachers or content organizers.
1 code implementation • NeurIPS 2020 • Yaowei Hu, Yongkai Wu, Lu Zhang, Xintao Wu
Previous research in fair classification mostly focuses on a single decision model.
no code implementations • 22 Oct 2020 • Wen Huang, Kevin Labille, Xintao Wu, Dongwon Lee, Neil Heffernan
Personalized recommendation based on multi-arm bandit (MAB) algorithms has been shown to lead to high utility and efficiency, as it can dynamically adapt the recommendation strategy based on feedback.
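A minimal UCB1 sketch of that feedback loop, where each arm is a candidate item and a click is the reward; the click-through rates below are simulated for illustration only:

```python
import math, random

# UCB1 bandit loop: recommend the item with the highest upper confidence
# bound, observe a click (reward), update the estimate online.

n_arms = 3
counts = [0] * n_arms          # times each item was recommended
values = [0.0] * n_arms        # running mean reward per item
true_ctr = [0.2, 0.5, 0.8]     # hypothetical click-through rates

for t in range(1, 1001):
    ucb = [
        float("inf") if counts[a] == 0
        else values[a] + math.sqrt(2 * math.log(t) / counts[a])
        for a in range(n_arms)
    ]
    arm = ucb.index(max(ucb))
    reward = 1.0 if random.random() < true_ctr[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

print(counts)  # most pulls should concentrate on the best item
```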
no code implementations • 10 Oct 2020 • Wei Du, Depeng Xu, Xintao Wu, Hanghang Tong
In this paper, we develop a fairness-aware agnostic federated learning framework (AgnosticFair) to deal with the challenge of unknown testing distribution.
no code implementations • 25 May 2020 • Shuhan Yuan, Xintao Wu
We then discuss such challenges and suggest future research directions that have the potential to address these challenges and further boost the performance of deep learning for insider threat detection.
no code implementations • 8 Mar 2020 • Depeng Xu, Wei Du, Xintao Wu
In this work, we analyze the inequality in utility loss caused by differential privacy and propose a modified differentially private stochastic gradient descent (DPSGD) algorithm, called DPSGD-F, to remove the potential disparate impact of differential privacy on the protected group.
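For reference, a sketch of one vanilla DPSGD step (per-sample gradient clipping followed by Gaussian noise) is shown below; DPSGD-F modifies this recipe to reduce the disparate impact across groups, so this code is a baseline illustration, not the paper's exact mechanism.

```python
import numpy as np

# One standard DPSGD step: clip each per-sample gradient to norm C,
# average, then add Gaussian noise calibrated to C.

def dpsgd_step(per_sample_grads, clip_norm=1.0, noise_multiplier=1.1,
               rng=np.random.default_rng()):
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                       size=avg.shape)
    return avg + noise  # noisy gradient used for the parameter update
```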
no code implementations • 12 Nov 2019 • Panpan Zheng, Shuhan Yuan, Xintao Wu, Yubao Wu
The key challenge is that the buyers are anonymized in darknet markets.
no code implementations • 11 Nov 2019 • Depeng Xu, Shuhan Yuan, Xintao Wu
Evaluation on real-world and synthetic datasets for linear and logistic regressions shows the effectiveness of our proposed method.
no code implementations • 11 Nov 2019 • Wen Huang, Yongkai Wu, Lu Zhang, Xintao Wu
We develop algorithms for determining whether an individual or a group of individuals is discriminated against in terms of equality of effort.
no code implementations • NeurIPS 2019 • Yongkai Wu, Lu Zhang, Xintao Wu, Hanghang Tong
A recent trend in fair machine learning is to define fairness through causality-based notions, which concern the causal connection between protected attributes and decisions.
4 code implementations • 2 Jun 2019 • NhatHai Phan, Minh Vu, Yang Liu, Ruoming Jin, Dejing Dou, Xintao Wu, My T. Thai
In this paper, we propose a novel Heterogeneous Gaussian Mechanism (HGM) to preserve differential privacy in deep neural networks, with provable robustness against adversarial examples.
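As a loose illustration of the underlying idea, the snippet below contrasts a single shared Gaussian noise scale with per-coordinate noise scales; the allocation shown is made up, whereas HGM derives a provably private one.

```python
import numpy as np

# Classical Gaussian mechanism: one noise scale for every coordinate.
# Heterogeneous variant: a different (hypothetical) scale per coordinate.

rng = np.random.default_rng()
x = np.array([0.9, 0.1, 0.4])  # toy vector to be released privately

sigma = 1.0
homogeneous = x + rng.normal(0.0, sigma, size=x.shape)

per_coord_sigma = np.array([0.5, 1.5, 1.0])  # illustrative heterogeneous scales
heterogeneous = x + rng.normal(0.0, per_coord_sigma)
```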
no code implementations • 13 Sep 2018 • Yongkai Wu, Lu Zhang, Xintao Wu
In this paper, we propose a general framework for learning fair classifiers which addresses previous limitations.
3 code implementations • 12 Sep 2018 • Panpan Zheng, Shuhan Yuan, Xintao Wu
However, there is usually a gap between the time that a user commits a fraudulent action and the time that the user is suspended by the platform.
no code implementations • 28 May 2018 • Depeng Xu, Shuhan Yuan, Lu Zhang, Xintao Wu
In this paper, we focus on fair data generation that ensures the generated data is discrimination free.
1 code implementation • 5 Mar 2018 • Panpan Zheng, Shuhan Yuan, Xintao Wu, Jun Li, Aidong Lu
Currently, most of the fraud detection approaches require a training dataset that contains records of both benign and malicious users.
no code implementations • 5 Mar 2018 • Yongkai Wu, Lu Zhang, Xintao Wu
Existing methods in fairness-aware ranking are mainly based on statistical parity, which cannot measure the true discriminatory effect since discrimination is causal in nature.
2 code implementations • 18 Sep 2017 • NhatHai Phan, Xintao Wu, Han Hu, Dejing Dou
In this paper, we focus on developing a novel mechanism to preserve differential privacy in deep neural networks, such that: (1) the privacy budget consumption is totally independent of the number of training steps; (2) it has the ability to adaptively inject noise into features based on the contribution of each feature to the output; and (3) it can be applied in a variety of different deep neural networks.
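Point (2) can be illustrated with a toy sketch in which each feature's noise scale is tied to a (hypothetical) relevance score, so that more relevant features receive a larger share of the privacy budget and hence less Laplace noise; this is a simplification, not the paper's exact mechanism.

```python
import numpy as np

# Adaptive noise injection sketch: allocate the privacy budget across
# features in proportion to their (hypothetical) contribution to the output,
# assuming unit sensitivity per feature.

rng = np.random.default_rng()
features = np.array([0.8, 0.2, 0.5, 0.9])
relevance = np.array([0.6, 0.1, 0.2, 0.1])  # illustrative contribution scores
relevance = relevance / relevance.sum()

epsilon = 1.0
per_feature_eps = epsilon * relevance        # relevant features get more budget
noisy = features + rng.laplace(0.0, 1.0 / per_feature_eps)  # scale = 1/eps_i
```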
2 code implementations • 25 Jun 2017 • NhatHai Phan, Xintao Wu, Dejing Dou
However, only a few scientific studies on preserving privacy in deep learning have been conducted.
1 code implementation • 3 Jun 2017 • Shuhan Yuan, Panpan Zheng, Xintao Wu, Yang Xiang
In particular, we develop a multi-source long short-term memory network (M-LSTM) to model user behaviors by using a variety of user edit aspects as inputs, including the history of edit reversion information, edit page titles and categories.
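A rough PyTorch sketch of such a multi-source architecture, where each aspect is encoded separately and the encodings are fused per time step before a shared LSTM; the dimensions and the simple concatenation-based fusion are illustrative simplifications of M-LSTM:

```python
import torch
import torch.nn as nn

class MultiSourceLSTM(nn.Module):
    def __init__(self, aspect_dims, hidden_dim=64, n_classes=2):
        super().__init__()
        # One linear encoder per input source (edit aspect).
        self.encoders = nn.ModuleList(
            [nn.Linear(d, hidden_dim) for d in aspect_dims]
        )
        self.lstm = nn.LSTM(hidden_dim * len(aspect_dims), hidden_dim,
                            batch_first=True)
        self.classifier = nn.Linear(hidden_dim, n_classes)

    def forward(self, aspects):
        # aspects: list of tensors, each of shape (batch, seq_len, aspect_dim)
        encoded = [enc(x) for enc, x in zip(self.encoders, aspects)]
        fused = torch.cat(encoded, dim=-1)   # fuse aspects per time step
        _, (h, _) = self.lstm(fused)
        return self.classifier(h[-1])        # e.g. benign vs. vandal

model = MultiSourceLSTM(aspect_dims=[16, 32, 8])
```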
no code implementations • 3 Jun 2017 • Shuhan Yuan, Xintao Wu, Yang Xiang
The other case study on fake review detection shows that our approach can identify the fake-review words/phrases.
no code implementations • 3 Jun 2017 • Shuhan Yuan, Xintao Wu, Jun Li, Aidong Lu
Due to the small dimension of spectral coordinates (compared with the dimension of the adjacency matrix derived from a graph), training deep neural networks becomes feasible.
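A small sketch of computing such spectral coordinates, embedding each node with the leading k eigenvectors of a symmetric adjacency matrix (function and variable names are hypothetical):

```python
import numpy as np

def spectral_coordinates(adj: np.ndarray, k: int) -> np.ndarray:
    """Embed each node with the k leading adjacency eigenvectors."""
    # eigh applies to symmetric (undirected) adjacency matrices.
    eigvals, eigvecs = np.linalg.eigh(adj)
    top = np.argsort(eigvals)[::-1][:k]   # indices of the k largest eigenvalues
    return eigvecs[:, top]                # row i = spectral coordinate of node i

# Toy undirected graph on 4 nodes.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
coords = spectral_coordinates(A, k=2)  # 4 nodes -> 2-D features per node
```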
no code implementations • 28 Feb 2017 • Lu Zhang, Yongkai Wu, Xintao Wu
Based on the results, we develop a two-phase framework for constructing a discrimination-free classifier with a theoretical guarantee.
no code implementations • 23 Dec 2016 • Yuemeng Li, Xintao Wu, Aidong Lu
It has been shown that the adjacency eigenspace of a network contains key information of its underlying structure.
no code implementations • 22 Nov 2016 • Lu Zhang, Yongkai Wu, Xintao Wu
In this paper, we investigate the problem of discovering both direct and indirect discrimination from the historical data, and removing the discriminatory effects before the data is used for predictive analysis (e.g., building classifiers).
no code implementations • 22 Nov 2016 • Lu Zhang, Yongkai Wu, Xintao Wu
Discrimination discovery and prevention/removal are increasingly important tasks in data mining.