no code implementations • 19 May 2024 • Chenyi Zhang, Tongyang Li
In addition, we give an algorithm for escaping saddle points and reaching an $\epsilon$-second-order stationary point of a nonconvex $f$, using $\tilde{O}(n^{1.5}/\epsilon^{2.5})$ comparison queries.
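The snippet above works in the comparison-query model, where the algorithm never sees function values, only which of two points has the smaller value. The paper's actual method is far more sophisticated; the following is only a minimal sketch of that oracle model, using a toy coordinate direct search on a convex quadratic (the function, step sizes, and search scheme here are illustrative assumptions, not the paper's algorithm):

```python
# Sketch of the comparison-query model: the optimizer can only ask
# "is f(x) smaller than f(y)?", never observe f itself.

def make_comparison_oracle(f):
    """Wrap f so that only comparisons between two points are exposed."""
    def oracle(x, y):
        return f(x) < f(y)
    return oracle

def comparison_descent(oracle, x, step=0.5, shrink=0.5, iters=200):
    """Toy coordinate direct search driven purely by comparison queries."""
    x = list(x)
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                cand = list(x)
                cand[i] += delta
                if oracle(cand, x):  # candidate strictly better than x
                    x = cand
                    improved = True
        if not improved:
            step *= shrink  # no move helped: refine the stencil
    return x

# Minimize a smooth convex quadratic with minimum at (1, -2),
# using only comparison queries.
f = lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2
oracle = make_comparison_oracle(f)
xmin = comparison_descent(oracle, [5.0, 5.0])
```

Such direct-search schemes illustrate why comparison queries suffice for optimization at all, though achieving the quoted $\tilde{O}(n^{1.5}/\epsilon^{2.5})$ rate requires the paper's dedicated construction.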
1 code implementation • 30 Mar 2024 • Chenyi Zhang, Yihan Hu, Henghui Ding, Humphrey Shi, Yao Zhao, Yunchao Wei
Despite significant advancements in image matting, existing models heavily depend on manually drawn trimaps for accurate results in natural image scenarios.
no code implementations • 4 Mar 2024 • Shuai Ma, Chenyi Zhang, Xinru Wang, Xiaojuan Ma, Ming Yin
Artificial Intelligence (AI) is increasingly employed in various decision-making tasks, typically as a Recommender, providing recommendations that the AI deems correct.
no code implementations • 21 Apr 2022 • Weizhen Xu, Chenyi Zhang, Fangzhen Zhao, Liangda Fang
Adversarial attacks hamper the functionality and accuracy of Deep Neural Networks (DNNs) by adding subtle perturbations to their inputs. In this work, we propose a new Mask-based Adversarial Defense scheme (MAD) for DNNs to mitigate the negative effects of adversarial attacks.
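The snippet does not spell out MAD's exact masking strategy, so the following is only a generic sketch of the idea behind mask-based defenses: zero out a random subset of input features before each forward pass and aggregate predictions over several masked copies. The `model` here is a hypothetical stand-in classifier, not the paper's network:

```python
import random

def random_mask(x, keep_prob, rng):
    """Zero out each feature independently with probability 1 - keep_prob."""
    return [xi if rng.random() < keep_prob else 0.0 for xi in x]

def masked_predict(model, x, n_copies=32, keep_prob=0.8, seed=0):
    """Majority vote over predictions on randomly masked copies of x."""
    rng = random.Random(seed)
    votes = {}
    for _ in range(n_copies):
        label = model(random_mask(x, keep_prob, rng))
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Stand-in "model": classifies by the sign of the feature sum.
model = lambda x: int(sum(x) >= 0)
clean = [1.0] * 10
pred = masked_predict(model, clean)
```

The intuition is that a sparse adversarial perturbation is unlikely to survive in every masked copy, so the vote tends back toward the clean prediction; the actual MAD scheme presumably chooses its masks far less naively than this uniform-random sketch.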
no code implementations • NeurIPS 2021 • Chenyi Zhang, Tongyang Li
Compared to the previous state-of-the-art algorithms by Jin et al. with $\tilde{O}((\log n)^{4}/\epsilon^{2})$ or $\tilde{O}((\log n)^{6}/\epsilon^{1.75})$ iterations, our algorithm is polynomially better in terms of $\log n$ and matches their complexities in terms of $1/\epsilon$.
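The Jin et al. baselines cited above are built on perturbed gradient descent: run plain gradient descent, and whenever the gradient is small (a candidate saddle point), inject a small random perturbation so the iterate drifts into a descent direction. The following is a minimal sketch of that idea on the classic strict saddle $f(x, y) = x^2 - y^2$; the step size, threshold, and perturbation radius are illustrative, not the papers' tuned values:

```python
import math
import random

def perturbed_gd(grad, x, eta=0.05, eps=1e-3, radius=1e-2,
                 iters=200, seed=1):
    """Gradient descent with random perturbations near stationary points."""
    rng = random.Random(seed)
    x = list(x)
    for _ in range(iters):
        g = grad(x)
        if math.sqrt(sum(gi * gi for gi in g)) < eps:
            # Near a stationary point: perturb to escape a potential saddle.
            x = [xi + rng.uniform(-radius, radius) for xi in x]
        else:
            x = [xi - eta * gi for xi, gi in zip(x, g)]
    return x

# f(x, y) = x^2 - y^2 has a strict saddle at the origin; plain gradient
# descent started exactly at (0, 0) would stay there forever, while the
# perturbed iterate slides off along the negative-curvature y-direction.
grad = lambda v: [2 * v[0], -2 * v[1]]
x = perturbed_gd(grad, [0.0, 0.0])
```

The per-iteration noise is what buys the escape guarantee; the polylogarithmic dimension dependence in the quoted rates comes from a careful analysis of how long such a perturbed trajectory can linger near a saddle.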
1 code implementation • 6 Oct 2021 • Fangzhen Zhao, Chenyi Zhang, Naipeng Dong, Zefeng You, Zhenxin Wu
Deep neural networks (DNNs) can achieve high performance when applied to In-Distribution (ID) data, which come from the same distribution as the training set.
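The snippet only sets up the ID/OOD distinction. A common generic baseline for flagging Out-of-Distribution inputs, not necessarily this paper's method, is to threshold the model's maximum softmax probability: confident (peaked) predictions are treated as ID, flat ones as OOD. A minimal sketch, with illustrative logits and threshold:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def is_ood(logits, threshold=0.7):
    """Flag an input as OOD when the max softmax probability is low."""
    return max(softmax(logits)) < threshold

confident = [8.0, 0.5, 0.2]   # peaked logits: looks in-distribution
uncertain = [1.0, 0.9, 1.1]   # nearly flat logits: looks out-of-distribution
```

This baseline is known to be overconfident on some OOD inputs, which is exactly the gap that dedicated OOD-detection methods aim to close.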
no code implementations • 20 Jul 2020 • Chenyi Zhang, Jiaqi Leng, Tongyang Li
Compared to the classical state-of-the-art algorithm by Jin et al. with $\tilde{O}(\log^{6} (n)/\epsilon^{1.75})$ queries to the gradient oracle (i.e., the first-order oracle), our quantum algorithm is polynomially better in terms of $\log n$ and matches its complexity in terms of $1/\epsilon$.