no code implementations • 19 Feb 2024 • Haiming Zhu, Yangyang Xu, Shengfeng He
In this paper, we present QueryWarp, a novel framework for temporally coherent human motion video translation.
no code implementations • 22 Nov 2023 • Yangyang Xu, Shengfeng He, Wenqi Shao, Kwan-Yee K. Wong, Yu Qiao, Ping Luo
In this paper, we introduce DiffusionMat, a novel image matting framework that employs a diffusion model for the transition from coarse to refined alpha mattes.
no code implementations • 15 Nov 2023 • Hari Dahal, Wei Liu, Yangyang Xu
For the former case, DPALM achieves the complexity of $\widetilde{\mathcal{O}}\left(\varepsilon^{-2.5} \right)$ to produce an $\varepsilon$-KKT point by applying an accelerated proximal gradient (APG) method to each DPALM subproblem.
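The inner solver mentioned here, an accelerated proximal gradient (APG) method, can be sketched on a toy composite problem. The problem, step size, and function names below are illustrative assumptions, not taken from the paper:

```python
def soft_threshold(v, t):
    """Prox of t*||.||_1: elementwise soft-thresholding."""
    return [max(abs(vi) - t, 0.0) * (1.0 if vi > 0 else -1.0) for vi in v]

def apg(grad, prox, x0, step, iters=500):
    """FISTA-style APG: gradient step at an extrapolated point, then a prox."""
    x_prev, x, t_prev = x0[:], x0[:], 1.0
    for _ in range(iters):
        t = (1.0 + (1.0 + 4.0 * t_prev ** 2) ** 0.5) / 2.0
        beta = (t_prev - 1.0) / t
        # extrapolate, take a gradient step, then apply the prox operator
        y = [xi + beta * (xi - xpi) for xi, xpi in zip(x, x_prev)]
        g = grad(y)
        x_prev, x = x, prox([yi - step * gi for yi, gi in zip(y, g)], step)
        t_prev = t
    return x
```

For the toy objective $\tfrac12\|x-a\|^2 + \lambda\|x\|_1$, the prox is soft-thresholding and the minimizer is `soft_threshold(a, lam)`, which the iteration reaches quickly.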
1 code implementation • 3 Sep 2023 • Jiajin Zhang, Hanqing Chao, Amit Dhurandhar, Pin-Yu Chen, Ali Tajer, Yangyang Xu, Pingkun Yan
To accomplish this challenging task, first, a spectral sensitivity map is introduced to characterize the generalization weaknesses of models in the frequency domain.
no code implementations • ICCV 2023 • Yangyang Xu, Shengfeng He, Kwan-Yee K. Wong, Ping Luo
In this paper, we propose a unified recurrent framework, named \textbf{R}ecurrent v\textbf{I}deo \textbf{G}AN \textbf{I}nversion and e\textbf{D}iting (RIGID), to explicitly and simultaneously enforce temporally coherent GAN inversion and facial editing of real videos.
1 code implementation • 10 Aug 2023 • Yangyang Xu, Yibo Yang, Bernard Ghanem, Lefei Zhang, Du Bo, DaCheng Tao
In this work, we present a novel MTL model that combines the merits of both deformable CNN and query-based Transformer with shared gating for multi-task learning of dense prediction.
1 code implementation • 14 Jul 2023 • Gabriel Mancino-Ball, Yangyang Xu
Coupling this with variance-reduction (VR) techniques, our proposed method, VRLM, requires only a single neighbor communication per iteration and achieves an $\mathcal{O}(\kappa^3\varepsilon^{-3})$ sample complexity under the general stochastic setting, with either a big-batch or small-batch VR option, where $\kappa$ is the condition number of the problem and $\varepsilon$ is the desired solution accuracy.
no code implementations • 14 Jul 2023 • Wei Liu, Qihang Lin, Yangyang Xu
In this paper, we make the first attempt to establish lower complexity bounds of FOMs for solving a class of composite non-convex non-smooth optimization with linear constraints.
no code implementations • 5 Apr 2023 • Yangyang Xu
With this relation, we show that when the dual regularizer is smooth, our algorithm can have lower complexity results (with reduced dependence on a condition number) than existing ones to produce a near-stationary point of the original formulation.
2 code implementations • 9 Jan 2023 • Yangyang Xu, Yibo Yang, Lefei Zhang
In this work, we present a novel MTL model by combining both merits of deformable CNN and query-based Transformer for multi-task learning of dense prediction.
no code implementations • ICCV 2023 • Yangyang Xu, Yibo Yang, Lefei Zhang
With the less sensitive divergence, our knowledge distillation with an alternative match is applied for capturing inter-task and intra-task information between the teacher model and the student model of each task, thereby learning more "dark knowledge" for effective distillation.
no code implementations • 19 Dec 2022 • Zichong Li, Pin-Yu Chen, Sijia Liu, Songtao Lu, Yangyang Xu
In this paper, we design and analyze stochastic inexact augmented Lagrangian methods (Stoc-iALM) to solve problems involving a nonconvex composite (i.e., smooth+nonsmooth) objective and nonconvex smooth functional constraints.
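As a point of reference for the augmented Lagrangian machinery, here is a minimal classical (deterministic, exactly solved) AL loop on a toy equality-constrained problem. It is not the paper's stochastic inexact method; all names and constants are illustrative:

```python
def augmented_lagrangian(rho=10.0, iters=50):
    """Classical AL loop for: minimize x^2 subject to x - 1 = 0.
    The AL function is L(x, lam) = x^2 + lam*(x - 1) + (rho/2)*(x - 1)^2;
    each subproblem is minimized exactly in closed form."""
    lam, x = 0.0, 0.0
    for _ in range(iters):
        # argmin_x L: set 2x + lam + rho*(x - 1) = 0
        x = (rho - lam) / (2.0 + rho)
        lam += rho * (x - 1.0)  # multiplier (dual) update
    return x, lam
```

The iterates converge to the KKT pair $x^\star = 1$, $\lambda^\star = -2$; a stochastic inexact variant replaces the exact inner solve with a stochastic subroutine and controls the resulting inexactness.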
1 code implementation • 1 Dec 2022 • Jiajin Zhang, Hanqing Chao, Amit Dhurandhar, Pin-Yu Chen, Ali Tajer, Yangyang Xu, Pingkun Yan
Domain generalization (DG) aims to train a model to perform well in unseen domains under different distributions.
1 code implementation • 5 Aug 2022 • Jia Li, Ziyang Zhang, Junjie Lang, Yueqi Jiang, Liuwei An, Peng Zou, Yangyang Xu, Sheng Gao, Jie Lin, Chunxiao Fan, Xiao Sun, Meng Wang
In this paper, we present our solutions for the Multimodal Sentiment Analysis Challenge (MuSe) 2022, which includes MuSe-Humor, MuSe-Reaction and MuSe-Stress Sub-challenges.
1 code implementation • 28 May 2022 • Yangyang Xu, Xiangtai Li, Haobo Yuan, Yibo Yang, Lefei Zhang
We first model each task with a task-relevant query.
1 code implementation • CVPR 2022 • Yangyang Xu, Bailin Deng, Junle Wang, Yanqing Jing, Jia Pan, Shengfeng He
Although previous research can leverage generative priors to produce high-resolution results, their quality can suffer from the entangled semantics of the latent space.
2 code implementations • ICCV 2021 • Yangyang Xu, Yong Du, Wenpeng Xiao, Xuemiao Xu, Shengfeng He
This inborn property is used for two unique purposes: 1) regularizing the joint inversion process, such that each inverted code is semantically accessible from the others and anchored in an editable domain; 2) enforcing inter-image coherence, such that the fidelity of each inverted code can be maximized with the complement of the other images.
no code implementations • 5 Feb 2021 • Jingjing Ren, Xiaowei Hu, Lei Zhu, Xuemiao Xu, Yangyang Xu, Weiming Wang, Zijun Deng, Pheng-Ann Heng
Camouflaged object detection is a challenging task that aims to identify objects whose texture is similar to their surroundings.
no code implementations • 31 May 2020 • Yangyang Xu, Yibo Xu
In this paper, we propose a new SGM, named PStorm, for solving nonconvex nonsmooth stochastic problems.
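A minimal single-variable sketch of a momentum-based variance-reduced (STORM-style) gradient estimator combined with a proximal step, in the spirit of a proximal stochastic method like PStorm. The data, step size, and momentum weight below are illustrative assumptions, not the paper's:

```python
import random

def prox_l1(v, t):
    """Prox of t*|.|: scalar soft-thresholding."""
    return max(abs(v) - t, 0.0) * (1.0 if v > 0 else -1.0)

def pstorm_sketch(data, lam, eta=0.05, beta=0.05, iters=5000, seed=0):
    """Minimize E_xi[0.5*(x - xi)^2] + lam*|x| over samples xi drawn from data."""
    rng = random.Random(seed)
    x_prev = x = 0.0
    d = 0.0
    tail = []
    for k in range(iters):
        xi = rng.choice(data)
        g_new = x - xi       # stochastic gradient at the current point
        g_old = x_prev - xi  # same sample, evaluated at the previous point
        # momentum-based variance-reduced estimator
        d = g_new if k == 0 else g_new + (1.0 - beta) * (d - g_old)
        x_prev, x = x, prox_l1(x - eta * d, eta * lam)
        if k >= iters - 1000:
            tail.append(x)          # average the tail iterates
    return sum(tail) / len(tail)
```

For data with mean 2 and $\lambda = 0.5$, the minimizer is $x^\star = 1.5$, and the averaged tail iterates land near it.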
1 code implementation • 21 Feb 2020 • Yangyang Xu, Colin Sutcher-Shepard, Yibo Xu, Jie Chen
The proposed method is tested on both convex and non-convex machine learning problems, and the numerical results demonstrate its clear advantages over the sync counterpart and the async-parallel nonadaptive SGM.
Optimization and Control; Distributed, Parallel, and Cluster Computing; Numerical Analysis; 90C15, 65Y05, 68W15, 65K05
no code implementations • 24 Oct 2019 • Yibo Xu, Yangyang Xu
However, the additional compositional structure prohibits easy access to unbiased stochastic approximation of the gradient, so directly applying the SGM to a finite-sum compositional optimization problem (COP) is often inefficient.
no code implementations • 22 Nov 2018 • Tao Sun, Yuejiao Sun, Yangyang Xu, Wotao Yin
Random and cyclic selections are either infeasible or very expensive.
no code implementations • NeurIPS 2018 • Bo Liu, Tengyang Xie, Yangyang Xu, Mohammad Ghavamzadeh, Yin-Lam Chow, Daoming Lyu, Daesub Yoon
Risk management in dynamic decision problems is a primary concern in many fields, including financial investment, autonomous driving, and healthcare.
no code implementations • 8 Jan 2018 • Yangyang Xu, Lei Wang
Our first subnet is a two-stream network that exploits both temporal and spatial information.
no code implementations • 18 May 2017 • Yangyang Xu
Recent years have witnessed a surge of asynchronous (async-) parallel computing methods, driven by the extremely large datasets in many modern applications and by the advancement of multi-core machines and computer clusters.
no code implementations • 17 Feb 2017 • Yangyang Xu, Shuzhong Zhang
We show that the rate can be accelerated to $O(1/t^2)$ if the objective is strongly convex.
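To illustrate acceleration in its simplest form, here is a generic Nesterov/FISTA-style momentum sketch on an unconstrained quadratic, compared against plain gradient descent. This is not the paper's primal-dual method; the problem and constants are illustrative:

```python
def quad_val(x, diag):
    """Objective 0.5 * x^T diag(a) x for a diagonal quadratic."""
    return 0.5 * sum(a * xi * xi for a, xi in zip(diag, x))

def quad_grad(x, diag):
    return [a * xi for a, xi in zip(diag, x)]

def gd(x0, diag, step, iters):
    """Plain gradient descent: O(1/t) rate on merely convex problems."""
    x = x0[:]
    for _ in range(iters):
        x = [xi - step * gi for xi, gi in zip(x, quad_grad(x, diag))]
    return x

def agd(x0, diag, step, iters):
    """Accelerated gradient with the FISTA momentum schedule: O(1/t^2) rate."""
    x_prev, x, t_prev = x0[:], x0[:], 1.0
    for _ in range(iters):
        t = (1.0 + (1.0 + 4.0 * t_prev * t_prev) ** 0.5) / 2.0
        y = [xi + ((t_prev - 1.0) / t) * (xi - xpi)
             for xi, xpi in zip(x, x_prev)]
        x_prev, x = x, [yi - step * gi
                        for yi, gi in zip(y, quad_grad(y, diag))]
        t_prev = t
    return x
```

On an ill-conditioned quadratic, the accelerated iterates reduce the objective faster than plain gradient descent at the same step size, consistent with the $O(1/t^2)$ versus $O(1/t)$ rates.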
no code implementations • 13 Dec 2016 • Zhimin Peng, Yangyang Xu, Ming Yan, Wotao Yin
Recent years have witnessed a surge of asynchronous parallel (async-parallel) iterative algorithms, driven by problems involving very large-scale data and large numbers of decision variables.
no code implementations • 30 Sep 2016 • Hao-Jun Michael Shi, Shenyinying Tu, Yangyang Xu, Wotao Yin
This monograph presents a class of algorithms called coordinate descent algorithms for mathematicians, statisticians, and engineers outside the field of optimization.
no code implementations • 13 Aug 2016 • Yangyang Xu
In optimization, BCU first appears as the coordinate descent method that works well for smooth problems or those with separable nonsmooth terms and/or separable constraints.
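The coordinate descent method referred to here can be sketched in its simplest form: cyclic exact coordinate minimization for a least-squares problem. The data below are an illustrative toy example:

```python
def coord_descent_ls(A, b, sweeps=50):
    """Cyclic exact coordinate minimization of 0.5 * ||Ax - b||^2."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    r = [-bi for bi in b]  # residual Ax - b, starting from x = 0
    for _ in range(sweeps):
        for j in range(n):
            col = [A[i][j] for i in range(m)]
            # exact 1-D minimization along coordinate j:
            # step = (col . r) / (col . col)
            step = (sum(c * ri for c, ri in zip(col, r))
                    / sum(c * c for c in col))
            x[j] -= step
            r = [ri - step * c for ri, c in zip(r, col)]  # keep r in sync
    return x
```

Each inner update costs one column pass, which is the low-per-iteration-cost property that makes coordinate methods attractive for separable or block-structured problems.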
no code implementations • 29 Jun 2016 • Yangyang Xu
al, SIIMS'14], which requires strong convexity with respect to both block variables and no linearization of the objective or the augmented term.
no code implementations • 19 May 2016 • Xiang Gao, Yangyang Xu, Shuzhong Zhang
Assuming mere convexity, we establish its $O(1/t)$ convergence rate in terms of the objective value and feasibility measure.
no code implementations • 5 Jan 2016 • Zhimin Peng, Tianyu Wu, Yangyang Xu, Ming Yan, Wotao Yin
To derive simple subproblems for several new classes of applications, this paper systematically studies coordinate-friendly operators that perform low-cost coordinate updates.
no code implementations • 30 Nov 2015 • Yangyang Xu, Ioannis Akrotirianakis, Amit Chakraborty
The Support Vector Machine (SVM) has been used in a wide variety of classification problems.
no code implementations • 30 Nov 2015 • Yangyang Xu, Ioannis Akrotirianakis, Amit Chakraborty
Much effort has been devoted to generalizing the binary SVM to the multiclass SVM (MSVM), which is a more complex problem.
1 code implementation • 8 Jun 2015 • Zhimin Peng, Yangyang Xu, Ming Yan, Wotao Yin
The agents share $x$ through either global memory or communication.
no code implementations • 2 Jun 2015 • Nan Zhou, Yangyang Xu, Hong Cheng, Jun Fang, Witold Pedrycz
In this paper, we propose a global and local structure preserving sparse subspace learning (GLoSS) model for unsupervised feature selection.
no code implementations • 16 Aug 2014 • Yangyang Xu, Wotao Yin
With very few exceptions, this issue has limited the applications of image-patch methods to local tasks such as denoising, inpainting, cartoon-texture decomposition, super-resolution, and image deblurring, for which one can process a few patches at a time.
no code implementations • 12 Aug 2014 • Yangyang Xu, Wotao Yin
Its convergence is established in different senses for both the convex and nonconvex cases.
no code implementations • 15 Apr 2014 • Jianing V. Shi, Yangyang Xu, Richard G. Baraniuk
In this paper, we introduce the concept of sparse bilinear logistic regression for decision problems involving explanatory variables that are two-dimensional matrices.
1 code implementation • 4 Dec 2013 • Yangyang Xu, Ruru Hao, Wotao Yin, Zhixun Su
Phase transition plots reveal that our algorithm can recover a variety of synthetic low-rank tensors from significantly fewer samples than the compared methods, which include a matrix completion method applied to tensor recovery and two state-of-the-art tensor completion methods.
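As a stripped-down analogue of the low-rank completion idea (a rank-1 matrix rather than a low-rank tensor, recovered via alternating least squares rather than the paper's algorithm), the following toy sketch fills in a missing entry from the observed ones. All names and data are illustrative:

```python
def rank1_complete(M, mask, iters=200):
    """Fit the observed entries of M (mask[i][j] == 1) with a rank-1 model
    u * v^T by alternating exact least-squares updates of u and v."""
    m, n = len(M), len(M[0])
    u, v = [1.0] * m, [1.0] * n
    for _ in range(iters):
        for i in range(m):  # least-squares update of u with v fixed
            num = sum(M[i][j] * v[j] for j in range(n) if mask[i][j])
            den = sum(v[j] ** 2 for j in range(n) if mask[i][j])
            u[i] = num / den
        for j in range(n):  # least-squares update of v with u fixed
            num = sum(M[i][j] * u[i] for i in range(m) if mask[i][j])
            den = sum(u[i] ** 2 for i in range(m) if mask[i][j])
            v[j] = num / den
    return [[u[i] * v[j] for j in range(n)] for i in range(m)]
```

When the observed entries are consistent with a rank-1 matrix, the hidden entry is recovered exactly up to the scale ambiguity between the factors, which cancels in the product $u v^\top$.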
Numerical Analysis; Computation
no code implementations • 6 Mar 2011 • Yangyang Xu, Wotao Yin, Zaiwen Wen, Yin Zhang
By taking advantage of both nonnegativity and low-rankness, one can generally obtain superior results to those obtained using only one of the two properties.
Information Theory; Numerical Analysis