Search Results for author: Haotao Wang

Found 21 papers, 15 papers with code

Safe and Robust Watermark Injection with a Single OoD Image

1 code implementation • 4 Sep 2023 • Shuyang Yu, Junyuan Hong, Haobo Zhang, Haotao Wang, Zhangyang Wang, Jiayu Zhou

Training a high-performance deep neural network requires large amounts of data and computational resources.

Model extraction

Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork

1 code implementation • 12 Oct 2022 • Haotao Wang, Junyuan Hong, Aston Zhang, Jiayu Zhou, Zhangyang Wang

As a result, both the stem and the classification head in the final network are hardly affected by backdoor training samples.

Backdoor Defense · Classification +1

Removing Batch Normalization Boosts Adversarial Training

1 code implementation • 4 Jul 2022 • Haotao Wang, Aston Zhang, Shuai Zheng, Xingjian Shi, Mu Li, Zhangyang Wang

In addition, NoFrost achieves $23.56\%$ adversarial robustness against the PGD attack, improving over the $13.57\%$ robustness of BN-based AT.

Adversarial Robustness

Efficient Split-Mix Federated Learning for On-Demand and In-Situ Customization

1 code implementation • ICLR 2022 • Junyuan Hong, Haotao Wang, Zhangyang Wang, Jiayu Zhou

In this paper, we propose a novel Split-Mix FL strategy for heterogeneous participants that, once training is done, provides in-situ customization of model sizes and robustness.

Personalized Federated Learning

Equalized Robustness: Towards Sustainable Fairness Under Distributional Shifts

no code implementations • 29 Sep 2021 • Haotao Wang, Junyuan Hong, Jiayu Zhou, Zhangyang Wang

In this paper, we first propose a new fairness goal, termed Equalized Robustness (ER), to impose fair model robustness against unseen distribution shifts across majority and minority groups.

Fairness

Federated Robustness Propagation: Sharing Robustness in Heterogeneous Federated Learning

1 code implementation • 18 Jun 2021 • Junyuan Hong, Haotao Wang, Zhangyang Wang, Jiayu Zhou

In this paper, we study a novel FL strategy: propagating adversarial robustness from rich-resource users that can afford AT, to those with poor resources that cannot afford it, during federated learning.

Adversarial Robustness · Federated Learning

Taxonomy of Machine Learning Safety: A Survey and Primer

no code implementations • 9 Jun 2021 • Sina Mohseni, Haotao Wang, Zhiding Yu, Chaowei Xiao, Zhangyang Wang, Jay Yadawa

The open-world deployment of Machine Learning (ML) algorithms in safety-critical applications such as autonomous vehicles needs to address a variety of ML vulnerabilities such as interpretability, verifiability, and performance limitations.

Autonomous Vehicles · BIG-bench Machine Learning +1

Troubleshooting Blind Image Quality Models in the Wild

no code implementations • CVPR 2021 • Zhihua Wang, Haotao Wang, Tianlong Chen, Zhangyang Wang, Kede Ma

Recently, the group maximum differentiation competition (gMAD) has been used to improve blind image quality assessment (BIQA) models, with the help of full-reference metrics.

Blind Image Quality Assessment · Network Pruning

Efficiently Troubleshooting Image Segmentation Models with Human-In-The-Loop

no code implementations • 1 Jan 2021 • Haotao Wang, Tianlong Chen, Zhangyang Wang, Kede Ma

Image segmentation lays the foundation for many high-stakes vision applications such as autonomous driving and medical image analysis.

Autonomous Driving · Image Segmentation +2

GAN Slimming: All-in-One GAN Compression by A Unified Optimization Framework

2 code implementations • ECCV 2020 • Haotao Wang, Shupeng Gui, Haichuan Yang, Ji Liu, Zhangyang Wang

Generative adversarial networks (GANs) have gained increasing popularity in various computer vision applications and have recently started to be deployed on resource-constrained mobile devices.

Image-to-Image Translation · Quantization +1

AutoGAN-Distiller: Searching to Compress Generative Adversarial Networks

3 code implementations • ICML 2020 • Yonggan Fu, Wuyang Chen, Haotao Wang, Haoran Li, Yingyan Lin, Zhangyang Wang

Inspired by the recent success of AutoML in deep compression, we introduce AutoML to GAN compression and develop an AutoGAN-Distiller (AGD) framework.

AutoML · Knowledge Distillation +2

I Am Going MAD: Maximum Discrepancy Competition for Comparing Classifiers Adaptively

1 code implementation • ICLR 2020 • Haotao Wang, Tianlong Chen, Zhangyang Wang, Kede Ma

On the other hand, trained classifiers have traditionally been evaluated on small, fixed sets of test images, which are extremely sparsely distributed in the space of all natural images.

Image Classification

Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference

2 code implementations • ICLR 2020 • Ting-Kuei Hu, Tianlong Chen, Haotao Wang, Zhangyang Wang

Deep networks were recently suggested to face the odds between accuracy (on clean natural images) and robustness (on adversarially perturbed images) (Tsipras et al., 2019).

Privacy-Preserving Deep Action Recognition: An Adversarial Learning Framework and A New Dataset

5 code implementations • 12 Jun 2019 • Zhen-Yu Wu, Haotao Wang, Zhaowen Wang, Hailin Jin, Zhangyang Wang

We first discuss an innovative heuristic of cross-dataset training and evaluation, enabling the use of multiple single-task datasets (one with target task labels and the other with privacy labels) in our problem.

Action Recognition · Privacy Preserving +1

Model Compression with Adversarial Robustness: A Unified Optimization Framework

2 code implementations • NeurIPS 2019 • Shupeng Gui, Haotao Wang, Chen Yu, Haichuan Yang, Zhangyang Wang, Ji Liu

Deep model compression has been extensively studied, and state-of-the-art methods can now achieve high compression ratios with minimal accuracy loss.

Adversarial Robustness · Model Compression +1
