Search Results for author: Nanyang Ye

Found 21 papers, 8 papers with code

Functional Bayesian Neural Networks for Model Uncertainty Quantification

no code implementations ICLR 2019 Nanyang Ye, Zhanxing Zhu

In this paper, we extend Bayesian neural networks to functional Bayesian neural networks, using functional Monte Carlo methods that draw samples of functionals rather than of the networks' parameters for inference, thereby overcoming the curse of dimensionality in uncertainty quantification.

Uncertainty Quantification

Rethinking ASTE: A Minimalist Tagging Scheme Alongside Contrastive Learning

no code implementations 12 Mar 2024 Qiao Sun, Liujia Yang, Minghao Ma, Nanyang Ye, Qinying Gu

Aspect Sentiment Triplet Extraction (ASTE) is a burgeoning subtask of fine-grained sentiment analysis, aiming to extract structured sentiment triplets from unstructured textual data.

Aspect Sentiment Triplet Extraction Contrastive Learning +1

G-NAS: Generalizable Neural Architecture Search for Single Domain Generalization Object Detection

1 code implementation 7 Feb 2024 Fan Wu, Jinling Gao, Lanqing Hong, Xinbing Wang, Chenghu Zhou, Nanyang Ye

To address this issue, we propose the Generalizable loss (G-loss), an OoD-aware objective that prevents NAS from over-fitting by using gradient descent to optimize parameters not only on a subset of easy-to-learn features but also on the remaining predictive features needed for generalization; the overall framework is named G-NAS.

Domain Generalization Neural Architecture Search +2

Domain Invariant Learning for Gaussian Processes and Bayesian Exploration

1 code implementation 18 Dec 2023 Xilong Zhao, Siyuan Bian, Yaoyun Zhang, Yuliang Zhang, Qinying Gu, Xinbing Wang, Chenghu Zhou, Nanyang Ye

We further demonstrate the effectiveness of the DIL-GP Bayesian optimization method on a PID parameters tuning experiment for a quadrotor.

Bayesian Optimization Gaussian Processes
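The DIL-GP code is linked from the paper and is not reproduced here; as a rough illustration of the underlying idea of Gaussian-process Bayesian optimization for parameter tuning, below is a minimal sketch with a plain (non-domain-invariant) GP and a lower-confidence-bound acquisition, applied to a toy one-dimensional stand-in for a gain-tuning cost. All function names, the kernel choice, and the toy cost are illustrative assumptions, not the paper's code.

```python
import numpy as np

def rbf_kernel(A, B, length=0.5):
    # Squared-exponential kernel between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length ** 2)

def gp_posterior(X, y, Xq, noise=1e-4):
    # Standard GP regression posterior mean and variance at query points Xq.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(Xq, X)
    K_inv = np.linalg.inv(K)
    mu = Ks @ K_inv @ y
    var = 1.0 - np.einsum("ij,jk,ik->i", Ks, K_inv, Ks)
    return mu, np.maximum(var, 1e-12)

def bayes_opt(objective, lo, hi, n_iter=15, seed=0):
    # Basic Bayesian optimization loop: fit a GP, then pick the point
    # minimizing a lower-confidence-bound (LCB) acquisition on a grid.
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, size=(3, 1))
    y = np.array([objective(x) for x in X])
    grid = np.linspace(lo, hi, 200)[:, None]
    for _ in range(n_iter):
        mu, var = gp_posterior(X, y, grid)
        lcb = mu - 2.0 * np.sqrt(var)       # optimism under uncertainty
        x_next = grid[np.argmin(lcb)]
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next))
    best = np.argmin(y)
    return X[best, 0], y[best]

# Toy stand-in for a PID-gain cost curve with its optimum at k = 0.3.
cost = lambda k: float((k[0] - 0.3) ** 2)
best_k, best_cost = bayes_opt(cost, 0.0, 1.0)
```

In the paper's setting the objective would instead be a closed-loop tracking cost of the quadrotor under the candidate PID gains, and the plain GP would be replaced by the domain-invariant one.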

BayesFT: Bayesian Optimization for Fault Tolerant Neural Network Architecture

no code implementations 30 Sep 2022 Nanyang Ye, Jingbiao Mei, Zhicheng Fang, Yuwen Zhang, Ziqing Zhang, Huaying Wu, Xiaoyao Liang

For search space design, instead of conducting neural architecture search over the whole feasible space, we first systematically explore the weight-drifting tolerance of different neural network components, such as dropout, normalization, the number of layers, and activation functions; among these, dropout is found to improve the network's robustness to weight drifting.

Bayesian Optimization Image Classification +3
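The paper's drift model and search space are not given in this snippet; as a hedged sketch of the "weight drifting tolerance" probe described above, the toy below perturbs the weights of a small fixed network with multiplicative log-normal noise (one common memristor-drift assumption, not necessarily the paper's model) and measures how many predictions survive at two drift scales.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny fixed two-layer network acting on random inputs.
W1 = rng.normal(0.0, 0.5, (16, 8))
W2 = rng.normal(0.0, 0.5, (8, 3))
X = rng.normal(0.0, 1.0, (200, 16))

def forward(x, w1, w2):
    h = np.maximum(x @ w1, 0.0)          # ReLU hidden layer
    return (h @ w2).argmax(axis=1)       # predicted class per input

baseline = forward(X, W1, W2)

def drift_agreement(sigma, trials=20):
    # Fraction of predictions unchanged under multiplicative
    # log-normal weight drift of scale sigma, averaged over trials.
    agree = []
    for _ in range(trials):
        d1 = W1 * rng.lognormal(0.0, sigma, W1.shape)
        d2 = W2 * rng.lognormal(0.0, sigma, W2.shape)
        agree.append((forward(X, d1, d2) == baseline).mean())
    return float(np.mean(agree))

small, large = drift_agreement(0.05), drift_agreement(0.8)
```

A component-level study in the spirit of the paper would repeat such a measurement while toggling dropout, normalization, depth, and activation choices, then restrict the search space to the tolerant configurations.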

Regularization Penalty Optimization for Addressing Data Quality Variance in OoD Algorithms

no code implementations 12 Jun 2022 Runpeng Yu, Hong Zhu, Kaican Li, Lanqing Hong, Rui Zhang, Nanyang Ye, Shao-Lun Huang, Xiuqiang He

Due to the poor generalization performance of traditional empirical risk minimization (ERM) under distributional shift, Out-of-Distribution (OoD) generalization algorithms are receiving increasing attention.

regression

NAS-OoD: Neural Architecture Search for Out-of-Distribution Generalization

1 code implementation ICCV 2021 Haoyue Bai, Fengwei Zhou, Lanqing Hong, Nanyang Ye, S. -H. Gary Chan, Zhenguo Li

In this work, we propose robust Neural Architecture Search for OoD generalization (NAS-OoD), which optimizes the architecture with respect to its performance on generated OoD data by gradient descent.

Domain Generalization Neural Architecture Search +1

DeepIC: Coding for Interference Channels via Deep Learning

no code implementations 13 Aug 2021 Karl Chahine, Nanyang Ye, Hyeji Kim

Interestingly, it is shown that there exists an asymptotic scheme, called the Han-Kobayashi scheme, that performs better than time division (TD) and treating interference as noise (TIN).

Adversarial Invariant Learning

1 code implementation CVPR 2021 Nanyang Ye, Jingxuan Tang, Huayu Deng, Xiao-Yun Zhou, Qianxiao Li, Zhenguo Li, Guang-Zhong Yang, Zhanxing Zhu

To the best of our knowledge, this is among the first works to adopt a differentiable environment-splitting method that enables stable predictions across environments without environment index information, achieving state-of-the-art performance on datasets with strong spurious correlations, such as Colored MNIST.

Domain Generalization Out-of-Distribution Generalization

VeniBot: Towards Autonomous Venipuncture with Automatic Puncture Area and Angle Regression from NIR Images

no code implementations 27 May 2021 Xu Cao, Zijie Chen, Bolin Lai, Yuxuan Wang, Yu Chen, Zhengqing Cao, Zhilin Yang, Nanyang Ye, Junbo Zhao, Xiao-Yun Zhou, Peng Qi

For the automation, we focus on the positioning part and propose a Dual-In-Dual-Out network based on two-step and two-task learning, which achieves fully automatic regression of the suitable puncture area and angle from near-infrared (NIR) images.

Navigate regression

Amata: An Annealing Mechanism for Adversarial Training Acceleration

no code implementations 15 Dec 2020 Nanyang Ye, Qianxiao Li, Xiao-Yun Zhou, Zhanxing Zhu

However, conducting adversarial training brings much computational overhead compared with standard training.
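Amata's concrete annealing schedule is not given in this snippet; the sketch below only illustrates the general idea of cutting that overhead by annealing the number of PGD inner steps from cheap to expensive over training, on a toy logistic model. The schedule, step sizes, and data are illustrative assumptions, not the paper's mechanism.

```python
import numpy as np

rng = np.random.default_rng(1)
# Linearly separable toy data for a logistic model.
X = rng.normal(0.0, 1.0, (256, 5))
w_true = rng.normal(0.0, 1.0, 5)
y = (X @ w_true > 0).astype(float)

def grad_loss(w, x, t):
    # Gradient of the mean logistic loss with respect to the weights.
    p = 1.0 / (1.0 + np.exp(-(x @ w)))
    return x.T @ (p - t) / len(t)

def pgd_attack(w, x, t, eps=0.1, steps=3):
    # L-inf PGD on the inputs: ascend the loss, clip to the eps-ball.
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w)))
        g = np.outer(p - t, w)               # dLoss/dx per sample
        x_adv = np.clip(x_adv + 0.05 * np.sign(g), x - eps, x + eps)
    return x_adv

w = np.zeros(5)
for epoch in range(60):
    # Annealing: few (cheap) PGD steps early, more steps later.
    k = 1 + epoch // 20                      # 1 -> 2 -> 3 inner steps
    x_adv = pgd_attack(w, X, y, steps=k)
    w -= 0.5 * grad_loss(w, x_adv, y)

acc = float((((X @ w) > 0).astype(float) == y).mean())
```

Using one inner step per epoch early on costs a third of the attack computation of a fixed three-step schedule during that phase, which is where the acceleration comes from.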

Batch Group Normalization

no code implementations 4 Dec 2020 Xiao-Yun Zhou, Jiacheng Sun, Nanyang Ye, Xu Lan, Qijun Luo, Bo-Lin Lai, Pedro Esperanca, Guang-Zhong Yang, Zhenguo Li

Among previous normalization methods, Batch Normalization (BN) performs well at medium and large batch sizes and generalizes well to multiple vision tasks, but its performance degrades significantly at small batch sizes.

Few-Shot Learning Image Classification +2
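Batch Group Normalization itself is not reproduced here; the sketch below only illustrates the small-batch failure mode of BN that motivates it: channel statistics estimated from two samples are far noisier than batch-independent group statistics. The `group_norm` shown is plain Group Normalization on 2-D activations, a simplifying assumption for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def group_norm(x, groups, eps=1e-5):
    # Normalize over channel groups within each sample (batch-independent).
    n, c = x.shape
    g = x.reshape(n, groups, c // groups)
    mu = g.mean(axis=2, keepdims=True)
    var = g.var(axis=2, keepdims=True)
    return ((g - mu) / np.sqrt(var + eps)).reshape(n, c)

# A "population" of activations with per-channel mean 2 and std 3.
full = rng.normal(2.0, 3.0, (512, 32))

def bn_mean_error(batch_size):
    # How far BN's per-channel batch mean is from the population mean.
    batch = full[:batch_size]
    return float(np.abs(batch.mean(axis=0) - full.mean(axis=0)).mean())

err_small, err_large = bn_mean_error(2), bn_mean_error(256)
gn_out = group_norm(full, groups=4)
```

The noisy `err_small` is the regime where BN degrades; group statistics (and, in the paper, the combined batch-group statistics) stay well-defined regardless of batch size.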

Achieving Adversarial Robustness via Sparsity

no code implementations 11 Sep 2020 Shufan Wang, Ningyi Liao, Liyao Xiang, Nanyang Ye, Quanshi Zhang

Through experiments on a variety of adversarial pruning methods, we find that weight sparsity does not hurt but instead improves robustness, where both weight inheritance from the lottery ticket and adversarial training improve model robustness in network pruning.

Adversarial Robustness Network Pruning
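The paper's adversarial pruning pipelines are not shown in this snippet; as a small illustration of the magnitude-based pruning such sparsity studies build on, the helper below (an assumed name, not the paper's API) zeroes the smallest-magnitude fraction of weights to reach a target sparsity.

```python
import numpy as np

def magnitude_prune(w, sparsity):
    # Zero out the fraction `sparsity` of weights with smallest magnitude.
    k = int(round(sparsity * w.size))
    if k == 0:
        return w.copy()
    # The k-th smallest absolute value becomes the pruning threshold.
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return w * (np.abs(w) > thresh)

rng = np.random.default_rng(0)
w = rng.normal(0.0, 1.0, (64, 64))
w_pruned = magnitude_prune(w, 0.9)
achieved = 1.0 - np.count_nonzero(w_pruned) / w.size
```

In a lottery-ticket-style pipeline, the surviving mask would then be applied to the inherited initial weights before adversarial retraining.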

Predicting Visible Image Differences Under Varying Display Brightness and Viewing Distance

2 code implementations CVPR 2019 Nanyang Ye, Krzysztof Wolski, Rafal K. Mantiuk

Then, we develop a hybrid model that combines white-box processing stages for modeling the effects of luminance masking and contrast sensitivity, with a black-box deep neural network.

Bayesian Adversarial Learning

no code implementations NeurIPS 2018 Nanyang Ye, Zhanxing Zhu

In this work, a novel robust training framework, Bayesian Robust Learning, is proposed to alleviate this issue; it places a distribution over the adversarial data-generating distribution to account for the uncertainty of the adversarial data-generating process.

Langevin Dynamics with Continuous Tempering for Training Deep Neural Networks

no code implementations NeurIPS 2017 Nanyang Ye, Zhanxing Zhu, Rafal K. Mantiuk

Minimizing non-convex and high-dimensional objective functions is challenging, especially when training modern deep neural networks.

Stochastic Optimization
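Continuous tempering couples the sampler to an auxiliary temperature variable, which is beyond this snippet; as a stripped-down baseline, here is the unadjusted Langevin dynamics the method builds on, sampling a one-dimensional standard normal. The step size and chain lengths are illustrative choices, not the paper's settings.

```python
import numpy as np

def langevin_sample(grad_log_p, x0, step=0.05, n=20000, seed=0):
    # Unadjusted Langevin dynamics: a gradient step on log p plus
    # Gaussian noise scaled by sqrt(2 * step).
    rng = np.random.default_rng(seed)
    x = x0
    out = np.empty(n)
    for i in range(n):
        x = x + step * grad_log_p(x) + np.sqrt(2.0 * step) * rng.normal()
        out[i] = x
    return out

# Target: standard normal, whose score is grad log p(x) = -x.
samples = langevin_sample(lambda x: -x, x0=3.0)
kept = samples[5000:]                     # discard burn-in
mean_est, var_est = float(kept.mean()), float(kept.var())
```

In the training setting, `grad_log_p` becomes a stochastic gradient of the (negative) loss, and the continuous tempering variable modulates the noise temperature to help the chain escape poor local minima.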
