Search Results for author: Gavin Weiguang Ding

Found 8 papers, 4 papers with code

CDT: Cascading Decision Trees for Explainable Reinforcement Learning

1 code implementation • 15 Nov 2020 • Zihan Ding, Pablo Hernandez-Leal, Gavin Weiguang Ding, Changjian Li, Ruitong Huang

As a second contribution, our study reveals the limitations of explaining black-box policies via imitation learning with tree-based explainable models, due to the inherent instability of this approach.

Explainable Models, Imitation Learning, +3
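
The excerpt above refers to distilling a black-box policy into a decision tree via imitation learning, and to the instability of that process. Below is a minimal, hypothetical sketch of that setup: the stand-in policy, state dimensionality, and data sizes are invented for illustration, and this is not the CDT method itself.

```python
# Hypothetical sketch: imitating a black-box policy with a decision tree,
# and showing that small changes in the demonstration data can yield
# structurally different trees (the instability mentioned in the excerpt).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def black_box_policy(states):
    # Stand-in for an opaque neural-network policy: maps states to actions.
    return (states[:, 0] + 0.5 * states[:, 1] > 0).astype(int)

states = rng.normal(size=(5000, 4))
actions = black_box_policy(states)

# Fit two trees on different bootstrap resamples of the same demonstrations.
trees = []
for seed in (1, 2):
    idx = rng.choice(len(states), size=len(states), replace=True)
    tree = DecisionTreeClassifier(max_depth=4, random_state=seed)
    tree.fit(states[idx], actions[idx])
    trees.append(tree)

# Both trees imitate the policy well, yet their split features and thresholds
# can differ substantially across resamples.
for i, tree in enumerate(trees, 1):
    print(f"tree {i}: accuracy = {tree.score(states, actions):.3f}, "
          f"root split feature = {tree.tree_.feature[0]}")
```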

Cascaded Deep Neural Networks for Retinal Layer Segmentation of Optical Coherence Tomography with Fluid Presence

no code implementations • 7 Dec 2019 • Donghuan Lu, Morgan Heisler, Da Ma, Setareh Dabiri, Sieun Lee, Gavin Weiguang Ding, Marinko V. Sarunic, Mirza Faisal Beg

Optical coherence tomography (OCT) is a non-invasive imaging technology which can provide micrometer-resolution cross-sectional images of the inner structures of the eye.

On the Effectiveness of Low Frequency Perturbations

no code implementations • 28 Feb 2019 • Yash Sharma, Gavin Weiguang Ding, Marcus Brubaker

Carefully crafted, often imperceptible, adversarial perturbations have been shown to cause state-of-the-art models to yield extremely inaccurate outputs, rendering them unsuitable for safety-critical application domains.

Adversarial Attack, Adversarial Robustness
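
As a rough illustration of the kind of perturbation discussed in the excerpt above, the sketch below applies a small gradient-sign perturbation to a toy linear classifier and then restricts it to low spatial frequencies with a DCT low-pass mask. The model, image size, epsilon, and frequency cutoff are all assumptions made for illustration; this is not the paper's exact procedure.

```python
# Hypothetical sketch: a sign-based perturbation on a toy linear classifier,
# optionally filtered to keep only its low-frequency content.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
H = W = 16
w = rng.normal(size=(H, W))          # toy linear "classifier" weights
x = rng.uniform(size=(H, W))         # toy input "image" in [0, 1]

# For a linear score w.x, the input gradient is just w, so the sign-based
# (FGSM-style) perturbation is eps * sign(w).
eps = 0.03
delta = eps * np.sign(w)

# Optionally keep only the low-frequency content of the perturbation.
coeffs = dctn(delta, norm="ortho")
mask = np.zeros_like(coeffs)
mask[:4, :4] = 1.0                   # retain only the lowest 4x4 DCT block
delta_low = idctn(coeffs * mask, norm="ortho")

# For a linear model, the change in score caused by a perturbation d is w.d.
print("score change, full-band perturbation:", float(np.sum(w * delta)))
print("score change, low-frequency perturbation:", float(np.sum(w * delta_low)))
```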

On the Sensitivity of Adversarial Robustness to Input Data Distributions

no code implementations • ICLR 2019 • Gavin Weiguang Ding, Kry Yik Chau Lui, Xiaomeng Jin, Luyu Wang, Ruitong Huang

Even a semantics-preserving transformation of the input data distribution can lead to significantly different robustness for an adversarially trained model that is both trained and evaluated on the new distribution.

Adversarial Robustness
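
To make the phrase "semantics-preserving transformation of the input data distribution" concrete, here is a minimal sketch using a monotone pixel remapping (gamma correction) that leaves labels unchanged but shifts the distribution a model would be adversarially trained and evaluated on. The dataset and gamma value are placeholders chosen for illustration only.

```python
# Hypothetical sketch: a label-preserving remap of pixel intensities that
# nonetheless changes the marginal input distribution.
import numpy as np

rng = np.random.default_rng(0)
images = rng.uniform(size=(1000, 28, 28))   # stand-in dataset in [0, 1]

def semantics_preserving_transform(x, gamma=0.5):
    # Invertible, monotone intensity remap: semantics/labels are unchanged.
    return np.clip(x, 0.0, 1.0) ** gamma

transformed = semantics_preserving_transform(images)

# The input distribution changes even though the semantics do not; per the
# excerpt, adversarial training on `transformed` versus `images` can yield
# markedly different robustness.
print("mean pixel value before:", images.mean().round(3))
print("mean pixel value after: ", transformed.mean().round(3))
```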

MMA Training: Direct Input Space Margin Maximization through Adversarial Training

1 code implementation • ICLR 2020 • Gavin Weiguang Ding, Yash Sharma, Kry Yik Chau Lui, Ruitong Huang

We study adversarial robustness of neural networks from a margin maximization perspective, where margins are defined as the distances from inputs to a classifier's decision boundary.

Adversarial Defense, Adversarial Robustness
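
The excerpt above defines the margin of an input as its distance to the classifier's decision boundary. As a small illustrative sketch (not the MMA training algorithm itself), the code below computes that distance in closed form for a toy linear binary classifier, where it equals |w.x + b| / ||w||; for deep networks the margin must instead be estimated, e.g., via adversarial perturbation search.

```python
# Hypothetical sketch: input-space margin of a point under a linear classifier.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=10)          # toy linear classifier weights
b = 0.1
x = rng.normal(size=10)          # a single input point

def input_space_margin(x, w, b):
    # Distance from x to the decision boundary {z : w.z + b = 0}.
    return abs(np.dot(w, x) + b) / np.linalg.norm(w)

print("margin of x:", round(float(input_space_margin(x, w, b)), 4))
```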
