Search Results for author: Mingyan Liu

Found 26 papers, 6 papers with code

Federated Learning with Reduced Information Leakage and Computation

no code implementations · 10 Oct 2023 · Tongxin Yin, Xueru Zhang, Mohammad Mahdi Khalili, Mingyan Liu

Federated learning (FL) is a distributed learning paradigm that allows multiple decentralized clients to collaboratively learn a common model without sharing local data.

Federated Learning · Privacy Preserving
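The collaborative-training setup this abstract describes can be illustrated with a minimal federated averaging loop. This is a generic sketch, not the paper's reduced-leakage method: the synthetic data, the linear model, and all hyperparameters below are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "decentralized" data: each client holds a private shard drawn
# from a common linear model y = X @ w_true (an illustrative assumption).
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    y = X @ w_true + 0.1 * rng.normal(size=50)
    clients.append((X, y))

def local_update(w, X, y, lr=0.05, steps=10):
    """Run a few gradient-descent steps on one client's private data."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Federated averaging: clients train locally, the server averages the
# returned weights; raw data never leaves a client.
w = np.zeros(2)
for round_ in range(20):
    local_ws = [local_update(w, X, y) for X, y in clients]
    w = np.mean(local_ws, axis=0)
```

The averaged model approaches the shared optimum even though the server only ever sees model parameters, which is the leakage surface that privacy-preserving FL methods then try to shrink.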

Fair Classifiers that Abstain without Harm

no code implementations · 9 Oct 2023 · Tongxin Yin, Jean-François Ton, Ruocheng Guo, Yuanshun Yao, Mingyan Liu, Yang Liu

To generalize the abstaining decisions to test samples, we then train a surrogate model to learn the abstaining decisions based on the IP solutions in an end-to-end manner.

Decision Making · Fairness

Professional Basketball Player Behavior Synthesis via Planning with Diffusion

no code implementations · 7 Jun 2023 · Xiusi Chen, Wei-Yao Wang, Ziniu Hu, Curtis Chou, Lam Hoang, Kun Jin, Mingyan Liu, P. Jeffrey Brantingham, Wei Wang

To accomplish reward-guided trajectory generation, conditional sampling is introduced to condition the diffusion model on the value function and conduct classifier-guided sampling.

Decision Making
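The conditional sampling described above can be sketched in a toy form: after each reverse-diffusion step, nudge the sample along the gradient of a value function. The "denoiser" and "value function" below are hypothetical stand-ins chosen only so the loop runs, not the paper's trained diffusion model or basketball value network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy surrogates (illustrative assumptions): the "denoiser" shrinks the
# sample toward the data manifold, and value(x) = mean(x), so its
# gradient rewards trajectories with large entries.
def denoise_step(x, t):
    return 0.9 * x  # crude stand-in for one reverse-diffusion step

def value_grad(x):
    return np.ones_like(x) / x.size  # gradient of value(x) = mean(x)

def guided_sample(steps=50, dim=16, scale=5.0):
    """Value-guided reverse process: after each denoising step, push the
    sample along the value gradient (classifier-guided sampling)."""
    x = rng.normal(size=dim)
    for t in range(steps):
        x = denoise_step(x, t)
        x = x + scale * value_grad(x) + 0.01 * rng.normal(size=dim)
    return x

traj = guided_sample()
```

The guidance term biases the sampler toward high-value regions while the denoiser keeps samples plausible, which is the division of labor behind reward-guided trajectory generation.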

Performative Federated Learning: A Solution to Model-Dependent and Heterogeneous Distribution Shifts

no code implementations · 8 May 2023 · Kun Jin, Tongxin Yin, Zhongzhu Chen, Zeyu Sun, Xueru Zhang, Yang Liu, Mingyan Liu

We consider a federated learning (FL) system consisting of multiple clients and a server, where the clients aim to collaboratively learn a common decision model from their distributed data.

Federated Learning

DensePure: Understanding Diffusion Models towards Adversarial Robustness

no code implementations · 1 Nov 2022 · Chaowei Xiao, Zhongzhu Chen, Kun Jin, Jiongxiao Wang, Weili Nie, Mingyan Liu, Anima Anandkumar, Bo Li, Dawn Song

By using the highest density point in the conditional distribution as the reversed sample, we identify the robust region of a given instance under the diffusion model's reverse process.

Adversarial Robustness · Denoising
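One ingredient of this line of work, denoising several noisy copies of an input and aggregating the predictions, can be sketched as follows. The "denoiser" and "classifier" below are hypothetical toy functions, not diffusion models, and the vote aggregation is a simplification of the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-ins (illustrative assumptions, not trained models).
def denoise(x):
    return 0.8 * x  # shrink noisy input back toward the data (toy)

def classify(x):
    return int(x.sum() > 0)  # trivial linear "classifier"

def vote_predict(x, n_votes=25, sigma=0.5):
    """Denoise several independently perturbed copies of x and take a
    majority vote over the resulting predictions."""
    votes = []
    for _ in range(n_votes):
        noisy = x + sigma * rng.normal(size=x.shape)
        votes.append(classify(denoise(noisy)))
    return int(np.round(np.mean(votes)))

x = np.full(10, 0.3)
pred = vote_predict(x)
```

The vote stabilizes the prediction against perturbations that any single noisy copy might suffer, which is the intuition behind certifying a robust region around the input.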

How Do Fair Decisions Fare in Long-term Qualification?

1 code implementation · NeurIPS 2020 · Xueru Zhang, Ruibo Tu, Yang Liu, Mingyan Liu, Hedvig Kjellström, Kun Zhang, Cheng Zhang

Our results show that static fairness constraints can either promote equality or exacerbate disparity depending on the driving factor of qualification transitions and the effect of sensitive attributes on feature distributions.

Decision Making · Fairness

Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations

4 code implementations · NeurIPS 2020 · Huan Zhang, Hongge Chen, Chaowei Xiao, Bo Li, Mingyan Liu, Duane Boning, Cho-Jui Hsieh

Several works have shown this vulnerability via adversarial attacks, but existing approaches to improving the robustness of DRL in this setting have had limited success and lack theoretical principles.

Reinforcement Learning (RL)

Fairness in Learning-Based Sequential Decision Algorithms: A Survey

no code implementations · 14 Jan 2020 · Xueru Zhang, Mingyan Liu

However, in practice most decision-making processes are of a sequential nature, where decisions made in the past may have an impact on future data.

Decision Making · Fairness

Recycled ADMM: Improving the Privacy and Accuracy of Distributed Algorithms

no code implementations · 8 Oct 2019 · Xueru Zhang, Mohammad Mahdi Khalili, Mingyan Liu

It can be shown that the privacy-accuracy tradeoff can be improved significantly compared with conventional ADMM.

Adversarial Objects Against LiDAR-Based Autonomous Driving Systems

no code implementations · 11 Jul 2019 · Yulong Cao, Chaowei Xiao, Dawei Yang, Jing Fang, Ruigang Yang, Mingyan Liu, Bo Li

Deep neural networks (DNNs) are found to be vulnerable against adversarial examples, which are carefully crafted inputs with a small magnitude of perturbation aiming to induce arbitrarily incorrect predictions.

Autonomous Driving

Group Retention when Using Machine Learning in Sequential Decision Making: the Interplay between User Dynamics and Fairness

no code implementations · NeurIPS 2019 · Xueru Zhang, Mohammad Mahdi Khalili, Cem Tekin, Mingyan Liu

Machine Learning (ML) models trained on data from multiple demographic groups can inherit representation disparity (Hashimoto et al., 2018) that may exist in the data: the model may be less favorable to groups contributing less to the training process; this in turn can degrade population retention in these groups over time, and exacerbate representation disparity in the long run.

Decision Making · Fairness

Distributed Learning of Average Belief Over Networks Using Sequential Observations

no code implementations · 19 Nov 2018 · Kaiqing Zhang, Yang Liu, Ji Liu, Mingyan Liu, Tamer Başar

This paper addresses the problem of distributed learning of average belief with sequential observations, in which a network of $n>1$ agents aim to reach a consensus on the average value of their beliefs, by exchanging information only with their neighbors.
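The neighbor-only averaging at the heart of this consensus setting can be sketched with a fixed mixing matrix. This is the classical noiseless consensus iteration, an illustrative simplification: the ring topology and weights below are assumptions, and the paper additionally handles sequential noisy observations.

```python
import numpy as np

# Ring network of n agents; W is a doubly stochastic mixing matrix
# (each agent averages itself with its two neighbors).
n = 5
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

beliefs = np.array([1.0, 3.0, 5.0, 7.0, 9.0])
target = beliefs.mean()  # the consensus value the iteration preserves

x = beliefs.copy()
for _ in range(200):
    x = W @ x  # each agent exchanges values with its neighbors only
```

Because W is doubly stochastic, the average of the beliefs is invariant under the iteration, so every agent converges to it without any agent ever seeing the whole network.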

MeshAdv: Adversarial Meshes for Visual Recognition

no code implementations · CVPR 2019 · Chaowei Xiao, Dawei Yang, Bo Li, Jia Deng, Mingyan Liu

Highly expressive models such as deep neural networks (DNNs) have been widely applied to various applications.

Recycled ADMM: Improve Privacy and Accuracy with Less Computation in Distributed Algorithms

no code implementations · 7 Oct 2018 · Xueru Zhang, Mohammad Mahdi Khalili, Mingyan Liu

Alternating direction method of multiplier (ADMM) is a powerful method to solve decentralized convex optimization problems.

Improving the Privacy and Accuracy of ADMM-Based Distributed Algorithms

no code implementations · ICML 2018 · Xueru Zhang, Mohammad Mahdi Khalili, Mingyan Liu

Alternating direction method of multiplier (ADMM) is a popular method used to design distributed versions of a machine learning algorithm, whereby local computations are performed on local data with the output exchanged among neighbors in an iterative fashion.
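The local-computation-plus-exchange pattern this abstract describes can be sketched with consensus ADMM on a toy problem. This is textbook ADMM, not the paper's privacy-preserving variant: the objective, penalty parameter, and iteration count below are illustrative assumptions.

```python
import numpy as np

# Toy decentralized problem: minimize sum_i (x - a_i)^2, where node i
# only sees its own a_i. The optimum is the mean of the a_i.
a = np.array([1.0, 2.0, 6.0, 9.0])
n, rho = len(a), 1.0

x = np.zeros(n)   # local primal variables, one per node
u = np.zeros(n)   # scaled dual variables
z = 0.0           # global consensus variable

for _ in range(100):
    # Local step: each node solves its small subproblem in closed form
    x = (2 * a + rho * (z - u)) / (2 + rho)
    # Global step: average the shifted local variables (the exchange)
    z = np.mean(x + u)
    # Dual step: accumulate the remaining consensus violation
    u = u + x - z
```

Only `x + u` is shared with the aggregation step, never the raw `a_i`; the privacy question studied in these papers is how much those exchanged iterates still reveal.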

Spatially Transformed Adversarial Examples

3 code implementations · ICLR 2018 · Chaowei Xiao, Jun-Yan Zhu, Bo Li, Warren He, Mingyan Liu, Dawn Song

Perturbations generated through spatial transformation could result in large $\mathcal{L}_p$ distance measures, but our extensive experiments show that such spatially transformed adversarial examples are perceptually realistic and more difficult to defend against with existing defense systems.
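The spatial transformation underlying these examples is a per-pixel flow field resampled with bilinear interpolation. The sketch below shows only that warping operation on a toy grayscale image; the attack itself, which optimizes the flow against a classifier, is omitted.

```python
import numpy as np

def warp(img, flow):
    """Bilinearly resample a grayscale image under a per-pixel flow
    field (flow[0] = vertical offsets, flow[1] = horizontal offsets)."""
    h, w = img.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Source coordinate for each output pixel, clipped to the image
    sy = np.clip(ys + flow[0], 0, h - 1)
    sx = np.clip(xs + flow[1], 0, w - 1)
    y0, x0 = np.floor(sy).astype(int), np.floor(sx).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = sy - y0, sx - x0
    return ((1 - wy) * (1 - wx) * img[y0, x0]
            + (1 - wy) * wx * img[y0, x1]
            + wy * (1 - wx) * img[y1, x0]
            + wy * wx * img[y1, x1])

img = np.outer(np.linspace(0, 1, 8), np.ones(8))  # vertical gradient
zero_flow = np.zeros((2, 8, 8))
```

Because the perturbation moves pixels rather than changing their values independently, even a flow that shifts content noticeably in $\mathcal{L}_p$ distance can leave the image looking natural.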

An Online Approach to Dynamic Channel Access and Transmission Scheduling

no code implementations · 4 Apr 2015 · Yang Liu, Mingyan Liu

A natural remedy is a learning framework, which has also been extensively studied in the same context; however, a typical learning algorithm in this literature seeks only the best static policy, with performance measured by weak regret, rather than learning a good dynamic channel access policy.

Scheduling

Group Learning and Opinion Diffusion in a Broadcast Network

no code implementations · 14 Sep 2013 · Yang Liu, Mingyan Liu

We analyze the following group learning problem in the context of opinion diffusion: Consider a network with $M$ users, each facing $N$ options.

Online Learning in a Contract Selection Problem

no code implementations · 15 May 2013 · Cem Tekin, Mingyan Liu

In an online contract selection problem there is a seller which offers a set of contracts to sequentially arriving buyers whose types are drawn from an unknown distribution.

Recommendation Systems

Optimal Adaptive Learning in Uncontrolled Restless Bandit Problems

no code implementations · 20 Jul 2011 · Cem Tekin, Mingyan Liu

In an uncontrolled restless bandit problem, there is a finite set of arms, each of which when pulled yields a positive reward.

Online Algorithms for the Multi-Armed Bandit Problem with Markovian Rewards

no code implementations · 14 Jul 2010 · Cem Tekin, Mingyan Liu

The player receives a state-dependent reward each time it plays an arm.
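The index-policy approach used in this bandit line of work can be sketched with the classical UCB1 rule. Note the simplification: the arms below yield i.i.d. Bernoulli rewards, whereas the paper's arms evolve as Markov chains with state-dependent rewards; the means and horizon are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

means = [0.2, 0.5, 0.8]  # hidden Bernoulli means (toy setup)

def pull(arm):
    return float(rng.random() < means[arm])

n_arms, horizon = len(means), 3000
counts = np.zeros(n_arms)  # pulls per arm
totals = np.zeros(n_arms)  # cumulative reward per arm

for arm in range(n_arms):          # initialization: play each arm once
    counts[arm] += 1
    totals[arm] += pull(arm)

for t in range(n_arms, horizon):
    # Optimism in the face of uncertainty: empirical mean + bonus
    ucb = totals / counts + np.sqrt(2 * np.log(t) / counts)
    arm = int(np.argmax(ucb))
    counts[arm] += 1
    totals[arm] += pull(arm)
```

The exploration bonus shrinks as an arm accumulates pulls, so suboptimal arms are sampled only logarithmically often, which is the regret behavior these papers extend to Markovian reward processes.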
