no code implementations • 16 Oct 2023 • Jirong Yi, Jingchao Gao, Tianming Wang, Xiaodong Wu, Weiyu Xu
We propose an outlier detection approach for reconstructing ground-truth signals modeled by generative models in the presence of sparse outliers.
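A minimal sketch of the idea of decoding against sparse outliers with a robust $\ell_1$ data-fidelity loss. The linear "generator" `G`, the sizes, and the subgradient-descent decoder below are illustrative assumptions, not the paper's model or algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained generator: a fixed linear map G(z) = W z.
# (Illustrative assumption only -- the paper considers deep generative models.)
n, k, m = 50, 5, 30
W = rng.standard_normal((n, k))

def G(z):
    return W @ z

# Ground-truth signal in the range of G, measured with a few gross outliers.
z_true = rng.standard_normal(k)
x_true = G(z_true)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true
y[rng.choice(m, size=3, replace=False)] += 10.0  # sparse outliers

# Decode by subgradient descent on the robust l1 loss ||A G(z) - y||_1,
# whose subgradient is insensitive to the magnitude of the corruptions.
z = np.zeros(k)
for t in range(2000):
    r = A @ G(z) - y
    z -= (0.05 / np.sqrt(t + 1)) * (W.T @ (A.T @ np.sign(r)))
x_hat = G(z)
print(np.abs(A @ x_hat - y).sum())  # l1 residual after decoding
```

With a squared-error loss, the three corrupted measurements would dominate the fit; the $\ell_1$ loss caps their influence, which is the basic reason sparse outliers can be tolerated.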
no code implementations • 23 Nov 2022 • Jirong Yi, Qiaosheng Zhang, Zhen Chen, Qiao Liu, Wei Shao, Yusen He, Yaohua Wang
We first argue that the MSE minimization approach is equivalent to a conditional entropy learning problem, and then propose a mutual information learning formulation for solving regression problems by using a reparameterization technique.
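One way to see the claimed equivalence, sketched here under a Gaussian predictive-model assumption (the notation $f_\theta$, $\sigma$ is illustrative, not taken from the paper):

```latex
% Model the conditional density as q(y \mid x) = \mathcal{N}(f_\theta(x), \sigma^2).
% The expected negative log-likelihood is then
\mathbb{E}_{p(x,y)}\bigl[-\log q(y \mid x)\bigr]
  = \frac{1}{2\sigma^2}\,\mathbb{E}\bigl[(y - f_\theta(x))^2\bigr]
  + \tfrac{1}{2}\log\bigl(2\pi\sigma^2\bigr),
% so minimizing the MSE minimizes the conditional cross-entropy, whose
% infimum over models is the conditional entropy H(Y \mid X). Since
I(X;Y) = H(Y) - H(Y \mid X),
% and H(Y) is fixed by the data distribution, driving H(Y \mid X) down is
% the same as driving the mutual information up, which motivates a mutual
% information learning objective.
```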
no code implementations • 3 Oct 2022 • Jirong Yi, Qiaosheng Zhang, Zhen Chen, Qiao Liu, Wei Shao
Deep learning systems have been reported to achieve state-of-the-art performance in many applications. One key to this success is the existence of well-trained classifiers on benchmark datasets, which can be used as backbone feature extractors in downstream tasks.
no code implementations • 21 Sep 2022 • Jirong Yi, Qiaosheng Zhang, Zhen Chen, Qiao Liu, Wei Shao
Deep learning systems have been reported to achieve state-of-the-art performance in many applications, and a key to this success is the existence of well-trained classifiers on benchmark datasets.
no code implementations • 2 Apr 2021 • Jirong Yi
Inspired by the alternating direction method of multipliers and the idea of operator splitting, we propose an efficient algorithm for solving large-scale quadratically constrained basis pursuit problems.
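A minimal operator-splitting sketch of the problem class, not the paper's algorithm: Douglas-Rachford iterations for minimizing $\|x\|_1$ subject to $\|Ax - b\|_2 \le \epsilon$. To keep the constraint-set projection in closed form, the example assumes $A$ has orthonormal rows; all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Quadratically constrained basis pursuit:
#   minimize ||x||_1  subject to  ||A x - b||_2 <= eps.
m, n, s = 40, 100, 5
A = np.linalg.qr(rng.standard_normal((n, m)))[0].T  # orthonormal rows: A A^T = I
x_true = np.zeros(n)
x_true[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
b = A @ x_true
eps = 1e-3

def soft(v, t):
    """Prox of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proj(x):
    """Projection onto {x : ||A x - b|| <= eps}, exact when A A^T = I."""
    r = A @ x - b
    nr = np.linalg.norm(r)
    return x if nr <= eps else x + A.T @ (r * (eps / nr - 1.0))

# Douglas-Rachford splitting: alternate the two proximal operators.
gamma, y = 0.1, np.zeros(n)
for _ in range(1000):
    p = proj(y)
    y += soft(2.0 * p - y, gamma) - p
x_hat = proj(y)
print(np.linalg.norm(A @ x_hat - b))  # feasible by construction
```

The split assigns the $\ell_1$ objective to a cheap soft-threshold and the quadratic constraint to a cheap projection, which is the general pattern that makes such methods scale.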
no code implementations • 5 Aug 2020 • Jirong Yi, Myung Cho, Xiaodong Wu, Raghu Mudumbai, Weiyu Xu
In this paper, we consider the problem of designing an optimal pooling matrix for group testing (for example, for COVID-19 virus testing) under the constraint that no more than $r>0$ samples can be pooled together, which we call the "dilution constraint".
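To make the constraint concrete, here is a simple round-robin pooling matrix that respects a dilution constraint. This construction and all sizes are illustrative only, not the paper's optimized design:

```python
import numpy as np

n, t, r = 60, 15, 8  # n samples, t tests (pools), at most r samples per pool

# Pool i mixes samples i*r, ..., i*r + r - 1 (mod n): every row has weight
# exactly r (satisfying the dilution constraint), and the column weights
# are balanced, so every sample appears in the same number of pools.
M = np.zeros((t, n), dtype=int)
for i in range(t):
    M[i, [(i * r + k) % n for k in range(r)]] = 1

print(M.sum(axis=1))  # each pool contains exactly r = 8 samples
print(M.sum(axis=0))  # each sample appears in t*r/n = 2 pools
```

Designing a *good* matrix under this constraint is the hard part; the sketch only shows what a feasible matrix looks like.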
no code implementations • 28 Jul 2020 • Jirong Yi, Raghu Mudumbai, Weiyu Xu
We consider the theoretical problem of designing an optimal adversarial attack on a decision system that maximally degrades the achievable performance of the system as measured by the mutual information between the degraded signal and the label of interest.
no code implementations • 26 Mar 2020 • Zain Khan, Jirong Yi, Raghu Mudumbai, Xiaodong Wu, Weiyu Xu
Recent works have demonstrated the existence of {\it adversarial examples} targeting a single machine learning system.
no code implementations • 25 May 2019 • Jirong Yi, Hui Xie, Leixin Zhou, Xiaodong Wu, Weiyu Xu, Raghuraman Mudumbai
In this paper, we present a simple hypothesis about a feature compression property of artificial intelligence (AI) classifiers and present theoretical arguments to show that this hypothesis successfully accounts for the observed fragility of AI classifiers to small adversarial perturbations.
no code implementations • 27 Jan 2019 • Hui Xie, Jirong Yi, Weiyu Xu, Raghu Mudumbai
We present a simple hypothesis about a compression property of artificial intelligence (AI) classifiers and present theoretical arguments to show that this hypothesis successfully accounts for the observed fragility of AI classifiers to small adversarial perturbations.
no code implementations • 26 Oct 2018 • Jirong Yi, Anh Duc Le, Tianming Wang, Xiaodong Wu, Weiyu Xu
In this paper, we propose a generative model neural network approach for reconstructing ground-truth signals in the presence of sparse outliers.
no code implementations • 14 Feb 2018 • Jirong Yi, Weiyu Xu
In [12, 14, 15], the authors established the necessary and sufficient null space conditions for nuclear norm minimization to recover every possible low-rank matrix with rank at most r (the strong null space condition).
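For reference, the strong null space condition mentioned here can be stated as follows (a standard formulation from the low-rank recovery literature, paraphrased rather than quoted from the paper):

```latex
% Nuclear norm minimization recovers every m x n matrix of rank at most r
% from measurements y = \mathcal{A}(X) if and only if, for every nonzero
% W in the null space \mathcal{N}(\mathcal{A}),
\sum_{i=1}^{r} \sigma_i(W) \;<\; \sum_{i=r+1}^{\min(m,n)} \sigma_i(W),
% where \sigma_1(W) \ge \sigma_2(W) \ge \cdots are the singular values of W.
```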
no code implementations • 4 Nov 2017 • Weiyu Xu, Jirong Yi, Soura Dasgupta, Jian-Feng Cai, Mathews Jacob, Myung Cho
However, it is known that in order for TV minimization and atomic norm minimization to recover the missing data or the frequencies, the underlying $R$ frequencies are required to be well-separated, even when the measurements are noiseless.
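A representative form of this separation requirement, stated only for illustration (the exact constant varies across results in the super-resolution literature and is not claimed by this paper):

```latex
% With n uniform measurements and frequencies f_1, ..., f_R in [0,1),
% typical exact-recovery guarantees for TV / atomic norm minimization
% require a minimum wrap-around separation
\min_{j \neq k} \, |f_j - f_k| \;\ge\; \frac{c}{n}
% for a small absolute constant c (e.g. c = 4 in early results).
```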