no code implementations • 16 Apr 2024 • Matthew Inkawhich, Nathan Inkawhich, Hao Yang, Jingyang Zhang, Randolph Linderman, Yiran Chen
Our method also excels in low-data settings, outperforming supervised baselines using a fraction of the training data.
no code implementations • 1 Apr 2024 • Amol Khanna, Edward Raff, Nathan Inkawhich
Linear models are ubiquitous in data science, but are particularly prone to overfitting and data memorization in high dimensions.
no code implementations • 24 Mar 2024 • Chenhui Xu, Fuxun Yu, Zirui Xu, Nathan Inkawhich, Xiang Chen
Our experimental results demonstrate the superior performance of the MC Ensemble strategy in OOD detection compared to both the naive Deep Ensemble method and a standalone model of comparable size.
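The snippet above compares ensemble strategies for OOD detection without spelling out the scoring rule. Below is a minimal sketch of the standard ensemble-based OOD score (average the members' softmax outputs, then threshold the maximum probability); this is a generic baseline, not necessarily the paper's MC Ensemble strategy, and `models` and the threshold are placeholders.

```python
import torch
import torch.nn.functional as F

def ensemble_ood_score(models, x):
    """Average softmax over ensemble members; a low max-probability suggests OOD.

    `models` is any iterable of classifiers mapping inputs to logits
    (placeholders here, not the paper's MC Ensemble members).
    """
    with torch.no_grad():
        probs = torch.stack([F.softmax(m(x), dim=1) for m in models])
    mean_probs = probs.mean(dim=0)          # (batch, num_classes)
    return mean_probs.max(dim=1).values     # high = likely in-distribution

# Usage: flag inputs whose score falls below a validation-tuned threshold.
# is_ood = ensemble_ood_score(models, x_batch) < 0.5
```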
no code implementations • 18 Jan 2024 • Anish Lakkapragada, Amol Khanna, Edward Raff, Nathan Inkawhich
As machine learning becomes increasingly prevalent in impactful decisions, recognizing when inference data is outside the model's expected input distribution is paramount for giving context to predictions.
Dimensionality Reduction • Out of Distribution (OOD) Detection
no code implementations • 28 Aug 2023 • Nathan Inkawhich, Gwendolyn McDonald, Ryan Luley
We show our attacks to be potent in whitebox and blackbox settings, as well as when transferred across foundational model types (e.g., attack DINOv2 with CLIP)!
no code implementations • 30 Mar 2023 • Noah Fleischmann, Walter Bennette, Nathan Inkawhich
Machine learning models deployed in the open world may encounter observations that they were not trained to recognize, and they risk misclassifying such observations with high confidence.
1 code implementation • 25 Mar 2023 • Jingyang Zhang, Nathan Inkawhich, Randolph Linderman, Ryan Luley, Yiran Chen, Hai Li
Building reliable Out-of-Distribution (OOD) detectors is challenging, often requiring the use of OOD data during training.
no code implementations • 20 Mar 2023 • Nathan Inkawhich
In the first stage, a global representation model is trained via self-supervised learning on a large pool of diverse and unlabeled SAR data.
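The snippet does not name the self-supervised objective, so the sketch below uses a SimCLR-style contrastive (NT-Xent) loss as one plausible instantiation of such representation pretraining on unlabeled SAR chips; the encoder and augmentation pipeline are assumed, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR's NT-Xent loss over two augmented views of the same batch.

    z1, z2: (batch, dim) embeddings of two random augmentations of the
    same unlabeled images (e.g., SAR chips). Positives are matched rows.
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, dim)
    sim = z @ z.t() / temperature                        # pairwise cosine similarities
    sim.fill_diagonal_(float('-inf'))                    # exclude self-pairs
    batch = z1.size(0)
    # Row i's positive is its other view: i+B for i < B, i-B otherwise.
    targets = torch.cat([torch.arange(batch) + batch, torch.arange(batch)])
    return F.cross_entropy(sim, targets.to(z.device))
```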
no code implementations • 9 Sep 2022 • Randolph Linderman, Jingyang Zhang, Nathan Inkawhich, Hai Li, Yiran Chen
Furthermore, we diagnose the classifier's performance at each level of the hierarchy, improving the explainability and interpretability of the model's predictions.
no code implementations • 23 Aug 2022 • Matthew Inkawhich, Nathan Inkawhich, Hai Li, Yiran Chen
Current state-of-the-art object proposal networks are trained with a closed-world assumption, meaning they learn to only detect objects of the training classes.
no code implementations • 2 Jul 2021 • Jerrick Liu, Nathan Inkawhich, Oliver Nina, Radu Timofte, Sahil Jain, Bob Lee, Yuru Duan, Wei Wei, Lei Zhang, Songzheng Xu, Yuxuan Sun, Jiaqi Tang, Mengru Ma, Gongzhe Li, Xueli Geng, Huanqia Cai, Chengxue Cai, Sol Cummings, Casian Miron, Alexandru Pasarica, Cheng-Yen Yang, Hung-Min Hsu, Jiarui Cai, Jie Mei, Chia-Ying Yeh, Jenq-Neng Hwang, Michael Xin, Zhongkai Shangguan, Zihe Zheng, Xu Yifei, Lehan Yang, Kele Xu, Min Feng
In this paper, we introduce the first Challenge on Multi-modal Aerial View Object Classification (MAVOC) in conjunction with the NTIRE 2021 workshop at CVPR.
1 code implementation • 7 Jun 2021 • Jingyang Zhang, Nathan Inkawhich, Randolph Linderman, Yiran Chen, Hai Li
We then propose Mixture Outlier Exposure (MixOE), which mixes ID data and training outliers to expand the coverage of different OOD granularities, and trains the model such that the prediction confidence linearly decays as the input transitions from ID to OOD.
Medical Image Classification • Out-of-Distribution Detection
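The MixOE idea described above lends itself to a short sketch: mix an ID image with a training outlier, and supervise the result with a label that interpolates between the true one-hot label and the uniform distribution, so that prediction confidence decays linearly with the mixing weight. A minimal version, assuming image tensors of matching shape; the Beta prior on the mixing weight follows common mixup practice rather than the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def mixoe_loss(model, x_id, y_id, x_oe, num_classes, alpha=1.0):
    """Soft-label mixup between ID data and training outliers.

    The confidence target decays linearly from one-hot (pure ID)
    toward uniform (pure outlier) as the mixing weight lam shrinks.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x_id + (1 - lam) * x_oe
    one_hot = F.one_hot(y_id, num_classes).float()
    uniform = torch.full_like(one_hot, 1.0 / num_classes)
    soft_target = lam * one_hot + (1 - lam) * uniform
    log_probs = F.log_softmax(model(x_mix), dim=1)
    return -(soft_target * log_probs).sum(dim=1).mean()
```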
no code implementations • 17 Mar 2021 • Matthew Inkawhich, Nathan Inkawhich, Eric Davis, Hai Li, Yiran Chen
Over recent years, a myriad of novel convolutional network architectures have been developed to advance state-of-the-art performance on challenging recognition tasks.
no code implementations • 17 Mar 2021 • Nathan Inkawhich, Kevin J Liang, Jingyang Zhang, Huanrui Yang, Hai Li, Yiran Chen
During the online phase of the attack, we then leverage representations of highly related proxy classes from the whitebox distribution to fool the blackbox model into predicting the desired target class.
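The online phase described above can be pictured as a feature-matching attack: perturb the input so its whitebox-layer representation moves toward that of a proxy class, under an L-infinity budget. A rough sketch, assuming a whitebox feature extractor `feat` and a precomputed proxy-class feature centroid; how proxy classes are selected is the attack's offline phase and is not reproduced here.

```python
import torch

def proxy_feature_attack(feat, x, proxy_centroid, eps=8/255, steps=20, lr=2/255):
    """PGD-style attack driving whitebox features toward a proxy-class centroid.

    feat: whitebox feature extractor (image -> feature vector).
    proxy_centroid: mean feature of the chosen proxy class (precomputed).
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = (feat(x_adv) - proxy_centroid).pow(2).sum()
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - lr * grad.sign()           # step toward the centroid
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # stay in the L-inf ball
            x_adv = x_adv.clamp(0, 1)                  # stay a valid image
    return x_adv.detach()
```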
3 code implementations • NeurIPS 2020 • Huanrui Yang, Jingyang Zhang, Hongliang Dong, Nathan Inkawhich, Andrew Gardner, Andrew Touchet, Wesley Wilkes, Heath Berry, Hai Li
The process is hard, often requires models with large capacity, and suffers from significant loss on clean data accuracy.
no code implementations • NeurIPS 2020 • Nathan Inkawhich, Kevin J Liang, Binghui Wang, Matthew Inkawhich, Lawrence Carin, Yiran Chen
We consider the blackbox transfer-based targeted adversarial attack threat model in the realm of deep neural network (DNN) image classifiers.
no code implementations • ICLR 2020 • Nathan Inkawhich, Kevin J Liang, Lawrence Carin, Yiran Chen
Almost all current adversarial attacks on CNN classifiers rely on information derived from the output layer of the network.
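A feature-layer attack of the kind this line alludes to instead optimizes an objective defined on an intermediate representation. One hedged sketch: attach a small probe that models class probability from layer-l features and ascend its target-class log-probability; the probe, the layer choice, and how the probe is trained are assumptions here, not the paper's exact construction.

```python
import torch
import torch.nn.functional as F

def feature_layer_attack(feat, probe, x, target, eps=8/255, steps=20, lr=2/255):
    """Targeted attack using an intermediate layer instead of the output layer.

    feat:  maps images to layer-l features of the attacked CNN.
    probe: auxiliary head modeling p(class | layer-l features),
           trained separately (an assumption, not the paper's exact model).
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Maximize the target class's log-probability under the feature probe.
        log_p = F.log_softmax(probe(feat(x_adv)), dim=1)
        loss = -log_p[torch.arange(x.size(0)), target].sum()
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - lr * grad.sign()
            x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()
```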
1 code implementation • CVPR 2019 • Nathan Inkawhich, Wei Wen, Hai (Helen) Li, Yiran Chen
Many recent works have shown that deep learning models are vulnerable to quasi-imperceptible input perturbations, yet practitioners cannot fully explain this behavior.
no code implementations • ICLR 2019 • Nathan Inkawhich, Matthew Inkawhich, Yiran Chen, Hai Li
The success of deep learning research has catapulted deep models into production systems that our society is becoming increasingly dependent on, especially in the image and video domains.