no code implementations • 27 Nov 2023 • Xinyu Tian, Shu Zou, Zhaoyuan Yang, Jing Zhang
Although soft prompt tuning is effective in efficiently adapting Vision-Language (V&L) models for downstream tasks, it shows limitations in dealing with distribution shifts.
1 code implementation • 12 Nov 2023 • Zhaoyuan Yang, Zhengyang Yu, Zhiwei Xu, Jaskirat Singh, Jing Zhang, Dylan Campbell, Peter Tu, Richard Hartley
We present a diffusion-based image morphing approach with perceptually-uniform sampling (IMPUS) that produces smooth, direct and realistic interpolations given an image pair.
no code implementations • 12 Sep 2023 • James Robert Kubricht, Zhaoyuan Yang, Jianwei Qiu, Peter Henry Tu
Deep learning approaches to natural language processing have made great strides in recent years.
1 code implementation • 31 Jul 2023 • Mengqi He, Jing Zhang, Zhaoyuan Yang, Mingyi He, Nick Barnes, Yuchao Dai
We analyze the performance of semantic segmentation models w.r.t.
no code implementations • 6 Jul 2023 • Peter Tu, Zhaoyuan Yang, Richard Hartley, Zhiwei Xu, Jing Zhang, Yiwei Fu, Dylan Campbell, Jaskirat Singh, Tianyu Wang
This paper begins by describing methods for estimating image probability density functions, reflecting the observation that such data usually lie in restricted regions of the high-dimensional image space: not every pattern of pixels is an image.
no code implementations • 22 Mar 2023 • Yun-Yun Tsai, Ju-Chin Chao, Albert Wen, Zhaoyuan Yang, Chengzhi Mao, Tapan Shah, Junfeng Yang
Test-time defenses address these issues, but most existing approaches require adapting the model weights, so they do not work on frozen models and complicate model memory management.
no code implementations • 26 Oct 2022 • Zhaoyuan Yang, Zhiwei Xu, Jing Zhang, Richard Hartley, Peter Tu
In this work, we formulate a novel framework for adversarial robustness using the manifold hypothesis.
no code implementations • 22 Sep 2022 • Zhaoyuan Yang, Yewteck Tan, Shiraj Sen, Johan Reimann, John Karigiannis, Mohammed Yousefhussien, Nurali Virani
We test the hypothesis that a model trained on a single dataset may not generalize to other off-road navigation datasets and new locations due to input distribution drift.
no code implementations • 27 Apr 2022 • Zhaoyuan Yang, Arpit Jain
Dropout has been used extensively as a regularizer to prevent overfitting when training neural networks.
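For reference, standard (inverted) dropout zeroes each activation with probability p during training and rescales the survivors so the expected activation is unchanged; a minimal NumPy sketch (function name and seed are illustrative, not from the paper):

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    """Inverted dropout: zero each unit with probability p and rescale
    the survivors by 1/(1-p) so the expected value of x is preserved."""
    if not training or p == 0.0:
        return x
    rng = rng or np.random.default_rng(0)
    mask = rng.random(x.shape) >= p  # True = keep the unit
    return x * mask / (1.0 - p)

x = np.ones((4, 3))
y = dropout(x, p=0.5)
# each entry of y is either 0.0 (dropped) or 2.0 (kept and rescaled)
```

At inference time (`training=False`) the input passes through unchanged, which is why the 1/(1-p) rescaling is applied during training rather than at test time.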
no code implementations • 14 Oct 2021 • Weizhong Yan, Zhaoyuan Yang, Jianwei Qiu
With the proliferation of deep learning (DL) applications across diverse domains, the vulnerability of DL models to adversarial attacks has become an increasingly active research topic in Computer Vision (CV) and Natural Language Processing (NLP).
no code implementations • 19 Feb 2020 • Chitresh Bhushan, Zhaoyuan Yang, Nurali Virani, Naresh Iyer
Machine learning models provide statistically impressive results that may nonetheless be individually unreliable.
no code implementations • 18 Nov 2019 • Nurali Virani, Naresh Iyer, Zhaoyuan Yang
To address this need, we link the question of reliability of a model's individual prediction to the epistemic uncertainty of the model's prediction.
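One common way to estimate epistemic uncertainty (not necessarily the authors' method) is to measure disagreement across an ensemble: predictions where the members diverge are flagged as individually unreliable. A toy sketch with hypothetical numbers:

```python
import numpy as np

def ensemble_uncertainty(preds):
    """Mean prediction and per-sample variance across ensemble members.
    High variance = members disagree = high epistemic uncertainty."""
    preds = np.asarray(preds)       # shape: (n_members, n_samples)
    mean = preds.mean(axis=0)
    epistemic = preds.var(axis=0)   # spread across members, per sample
    return mean, epistemic

# Hypothetical scores from 3 ensemble members on 2 inputs.
preds = [[0.90, 0.5],
         [0.92, 0.1],
         [0.88, 0.9]]
mean, unc = ensemble_uncertainty(preds)
# sample 0: members agree -> low uncertainty
# sample 1: members disagree -> high uncertainty
```

A prediction could then be accepted only when its uncertainty falls below a calibrated threshold, linking per-prediction reliability to the model's epistemic uncertainty as the abstract describes.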
no code implementations • 26 Feb 2019 • Zhaoyuan Yang, Naresh Iyer, Johan Reimann, Nurali Virani
Recent work has demonstrated robust mechanisms by which attacks can be orchestrated on machine learning models.
no code implementations • 15 Sep 2018 • Abhishek Gupta, Zhaoyuan Yang
Complex autonomous control systems are subject to sensor failures, cyber-attacks, sensor noise, communication channel failures, etc.