no code implementations • 28 Apr 2023 • Zhiyuan Cheng, Hongjun Choi, James Liang, Shiwei Feng, Guanhong Tao, Dongfang Liu, Michael Zuzak, Xiangyu Zhang
We argue that the security of fusion models is limited by their most vulnerable modality, and propose an attack framework that targets advanced camera-LiDAR fusion-based 3D object detection models through camera-only adversarial attacks.
no code implementations • 27 Feb 2023 • Eun Som Jeon, Hongjun Choi, Ankita Shukla, Pavan Turaga
AMD loss uses the angular distance between positive and negative features by projecting them onto a hypersphere, motivated by the near-angular feature distributions observed in many feature extractors.
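The idea of an angular, margin-based loss can be illustrated with a minimal sketch. This is a hypothetical hinge formulation over angles on the unit hypersphere, not the paper's exact AMD loss; the function names and the `margin` parameter are assumptions for illustration.

```python
import numpy as np

def angular_distance(u, v):
    """Angle (radians) between two feature vectors after projecting
    them onto the unit hypersphere."""
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))

def angular_margin_loss(anchor, positive, negative, margin=0.5):
    """Hinge-style loss: the angle to the positive feature should be
    smaller than the angle to the negative feature by at least `margin`."""
    d_pos = angular_distance(anchor, positive)
    d_neg = angular_distance(anchor, negative)
    return max(0.0, d_pos - d_neg + margin)
```

Because the comparison happens in angle space, the loss is invariant to the magnitude of the feature vectors, which is what makes the hypersphere projection useful when feature norms vary across extractors.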
1 code implementation • 8 Nov 2022 • Hongjun Choi, Eun Som Jeon, Ankita Shukla, Pavan Turaga
Mixup is a popular data augmentation technique that creates new samples by linearly interpolating between pairs of training samples, improving both the generalization and robustness of the trained model.
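The standard mixup operation can be sketched in a few lines. This assumes one-hot labels and draws the interpolation weight from a Beta distribution, as in the common formulation; the `alpha` default is an illustrative choice, not taken from this paper.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Create one mixup sample from two (input, one-hot label) pairs.

    lam ~ Beta(alpha, alpha) weights the linear interpolation; both the
    inputs and the labels are mixed with the same weight."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2   # interpolate inputs
    y = lam * y1 + (1.0 - lam) * y2   # interpolate labels (soft label)
    return x, y
```

Because the labels are mixed with the same weight as the inputs, the resulting soft label still sums to one, so the sample can be fed directly to a cross-entropy loss.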
1 code implementation • 11 Jul 2022 • Zhiyuan Cheng, James Liang, Hongjun Choi, Guanhong Tao, Zhiwen Cao, Dongfang Liu, Xiangyu Zhang
Experimental results show that our method can generate stealthy, effective, and robust adversarial patches for different target objects and models, achieving a mean depth estimation error of more than 6 meters and a 93% attack success rate (ASR) in object detection with a patch covering 1/9 of the vehicle's rear area.
no code implementations • 2 Feb 2021 • Ella Y. Wang, Anirudh Som, Ankita Shukla, Hongjun Choi, Pavan Turaga
In addition to these findings, our work also presents a new application of the OS regularizer in healthcare, increasing the post-hoc interpretability and performance of deep learning models for COVID-19 classification to facilitate adoption of these methods in clinical settings.
no code implementations • 22 Sep 2020 • Hongjun Choi, Anirudh Som, Pavan Turaga
Standard deep learning models that employ the categorical cross-entropy loss are known to perform well at image classification tasks.
no code implementations • 28 Apr 2020 • Yuanzhong Xu, HyoukJoong Lee, Dehao Chen, Hongjun Choi, Blake Hechtman, Shibo Wang
In data-parallel synchronous training of deep neural networks, different devices (replicas) run the same program on different partitions of the training batch, but the weight update computation is repeated on every replica because the weights have no batch dimension to partition.
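The redundancy described above, and the sharding idea that removes it, can be sketched with plain NumPy. This is a hypothetical single-process simulation, not the paper's XLA implementation: each replica's gradient is summed (standing in for an all-reduce), each replica then applies the optimizer step to only its shard of the weights, and the updated shards are concatenated (standing in for an all-gather).

```python
import numpy as np

def sharded_weight_update(weights, per_replica_grads, lr=0.01):
    """Simulate cross-replica sharding of the weight update.

    Instead of every replica applying the full SGD step, the summed
    gradient is split into one shard per replica; replica i updates
    only shard i, and the shards are gathered back into full weights.
    (In a real system the all-reduce becomes a reduce-scatter, so each
    replica only ever materializes its own gradient shard.)"""
    n = len(per_replica_grads)
    grad = np.sum(per_replica_grads, axis=0)   # stand-in for all-reduce
    w_shards = np.array_split(weights, n)
    g_shards = np.array_split(grad, n)
    updated = [w - lr * g for w, g in zip(w_shards, g_shards)]  # per-replica step
    return np.concatenate(updated)             # stand-in for all-gather
```

The key property is that the sharded update is numerically identical to the replicated one; only the per-replica compute and memory for the optimizer step shrink by a factor of the replica count.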
1 code implementation • 21 Apr 2020 • Hongjun Choi, Anirudh Som, Pavan Turaga
We find that although the proposed geometrically constrained loss function improves quantitative results only modestly, it has a surprisingly beneficial qualitative effect on the interpretability of deep-net decisions, as seen in the visual explanations generated by techniques such as Grad-CAM.
1 code implementation • 5 Jun 2019 • Anirudh Som, Hongjun Choi, Karthikeyan Natesan Ramamurthy, Matthew Buman, Pavan Turaga
To the best of our knowledge, we are the first to propose the use of deep learning for computing topological features directly from data.