1 code implementation • 3 Feb 2024 • Yatong Bai, Mo Zhou, Vishal M. Patel, Somayeh Sojoudi
Adversarial robustness often comes at the cost of degraded accuracy, impeding the real-life application of robust classification models.
no code implementations • 9 Jan 2024 • Yatong Bai, Utsav Garg, Apaar Shanker, Haoming Zhang, Samyak Parajuli, Erhan Bas, Isidora Filipovic, Amelia N. Chu, Eugenia D. Fomitcheva, Elliot Branson, Aerin Kim, Somayeh Sojoudi, Kyunghyun Cho
Vision and vision-language applications of neural networks, such as image classification and captioning, rely on large-scale annotated datasets that require non-trivial data-collecting processes.
no code implementations • 26 Nov 2023 • Yatong Bai, Brendon G. Anderson, Somayeh Sojoudi
However, standard learning models often suffer from an accuracy-robustness trade-off, a limitation that must be overcome when controlling safety-critical systems that require both high performance and rigorous robustness guarantees.
1 code implementation • 19 Sep 2023 • Yatong Bai, Trung Dang, Dung Tran, Kazuhito Koishida, Somayeh Sojoudi
Diffusion models power the vast majority of text-to-audio (TTA) generation methods.
Ranked #10 on Audio Generation on AudioCaps
no code implementations • 29 Jul 2023 • Samuel Pfrommer, Yatong Bai, Hyunin Lee, Somayeh Sojoudi
Imitation learning suffers from causal confusion.
1 code implementation • 29 Jan 2023 • Yatong Bai, Brendon G. Anderson, Aerin Kim, Somayeh Sojoudi
While prior research has proposed a plethora of methods for building neural classifiers robust against adversarial attacks, practitioners remain reluctant to adopt them due to their unacceptably severe clean-accuracy penalties.
Ranked #1 on Adversarial Robustness on CIFAR-100 (using extra training data)
no code implementations • 6 Jan 2022 • Yatong Bai, Tanmay Gautam, Somayeh Sojoudi
We apply robust convex optimization theory to convex training and develop convex formulations that train ANNs to be robust against adversarial inputs.
no code implementations • 25 May 2021 • Yatong Bai, Tanmay Gautam, Yu Gai, Somayeh Sojoudi
Recent work has shown that the training of a one-hidden-layer, scalar-output fully-connected ReLU neural network can be reformulated as a finite-dimensional convex program.
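The convex reformulation mentioned above can be sketched as follows. This is an illustrative form only, stated under the assumption that the reformulation enumerates the ReLU activation patterns of the hidden layer as fixed diagonal masks; the specific symbols ($X$, $D_i$, $v_i$, $w_i$, $\beta$) are notation chosen here, not taken from the listed paper:

```latex
% Convex program for training a one-hidden-layer, scalar-output ReLU network
% (illustrative sketch). X \in R^{n \times d} is the data matrix, y the labels,
% \ell a convex loss, and \beta > 0 a regularization weight. Each diagonal
% 0/1 matrix D_i encodes one ReLU activation pattern of the hidden layer,
% with i ranging over the P patterns realizable on the data X.
\begin{equation*}
\min_{\{v_i, w_i\}_{i=1}^{P}} \;
\ell\!\left( \sum_{i=1}^{P} D_i X (v_i - w_i),\; y \right)
\;+\; \beta \sum_{i=1}^{P} \left( \lVert v_i \rVert_2 + \lVert w_i \rVert_2 \right)
\end{equation*}
\begin{equation*}
\text{subject to} \quad
(2 D_i - I)\, X v_i \ge 0, \quad
(2 D_i - I)\, X w_i \ge 0, \quad i = 1, \dots, P.
\end{equation*}
```

Because the $D_i$ are fixed and the constraints are linear, the problem is a finite-dimensional convex program: the nonconvexity of jointly optimizing hidden-layer weights and activations is absorbed into the enumeration of activation patterns.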