no code implementations • 17 Apr 2024 • Feng Yu, Teng Zhang, Gilad Lerman
We present the subspace-constrained Tyler's estimator (STE) designed for recovering a low-dimensional subspace within a dataset that may be highly corrupted with outliers.
no code implementations • 27 Mar 2024 • Gilad Lerman, Feng Yu, Teng Zhang
It further shows that under the generalized haystack model, STE initialized by Tyler's M-estimator (TME) can recover the subspace when the fraction of inliers is too small for TME to handle.
no code implementations • 5 Nov 2023 • Qian Chen, Yiqiang Chen, Xinlong Jiang, Teng Zhang, Weiwei Dai, Wuliang Huang, Zhen Yan, Bo Ye
Model fusion is becoming a crucial component in the context of model-as-a-service scenarios, enabling the delivery of high-quality model services to local users.
no code implementations • 4 Nov 2023 • Casey Garner, Gilad Lerman, Teng Zhang
This paper studies the commonly utilized windowed Anderson acceleration (AA) algorithm for fixed-point methods, $x^{(k+1)}=q(x^{(k)})$.
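A minimal sketch of windowed Anderson acceleration for a generic fixed-point map $x^{(k+1)}=q(x^{(k)})$; the function name, window depth, and stopping rule below are illustrative choices, not taken from the paper:

```python
import numpy as np

def anderson_accel(q, x0, m=5, iters=50, tol=1e-10):
    """Windowed (depth-m) Anderson acceleration for the fixed point x = q(x)."""
    x = np.asarray(x0, dtype=float)
    X = [x]                 # recent iterates
    F = [q(x) - x]          # their residuals f_k = q(x_k) - x_k
    for _ in range(iters):
        mk = len(F)
        if mk == 1:
            x_new = X[0] + F[0]   # plain fixed-point step
        else:
            # weights a with sum(a) = 1 minimizing ||sum_j a_j F_j||
            dF = np.column_stack([F[j + 1] - F[0] for j in range(mk - 1)])
            gamma, *_ = np.linalg.lstsq(dF, -F[0], rcond=None)
            a = np.concatenate([[1.0 - gamma.sum()], gamma])
            x_new = sum(a[j] * (X[j] + F[j]) for j in range(mk))
        f_new = q(x_new) - x_new
        X.append(x_new); F.append(f_new)
        if len(X) > m:            # keep only the last m pairs
            X.pop(0); F.pop(0)
        if np.linalg.norm(f_new) < tol:
            break
    return X[-1]

# Example: the fixed point of cos, where plain iteration converges slowly.
root = anderson_accel(np.cos, np.array([1.0]))
```

The least-squares weights form an affine combination of the stored iterates, which is the standard "type-II" windowed AA update.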
no code implementations • 5 Sep 2023 • Shaohua Liu, Yu Qi, Gen Li, Mingjian Chen, Teng Zhang, Jia Cheng, Jun Lei
Specifically, we construct subgraphs of spatial, temporal, spatial-temporal, and global views respectively to precisely characterize the user's interests in various contexts.
no code implementations • 30 Jun 2023 • Yiqiang Chen, Teng Zhang, Xinlong Jiang, Qian Chen, Chenlong Gao, Wuliang Huang
The conflicting gradient projection technique is used to enhance the generalization of the large-scale general model across different tasks.
2 code implementations • 8 May 2023 • Yilin Wang, Nan Cao, Teng Zhang, Xuanhua Shi, Hai Jin
Optimal margin Distribution Machine (ODM) is a newly proposed statistical learning framework rooted in the novel margin theory, which demonstrates better generalization performance than the traditional large-margin counterparts.
no code implementations • 13 Apr 2023 • Teng Zhang, Kang Li
Adversarial training and data augmentation with noise are widely adopted techniques to enhance the performance of neural networks.
no code implementations • 29 Dec 2022 • Teng Zhang, Haoyi Yang, Lingzhou Xue
Sparse principal component analysis (SPCA) is widely used for dimensionality reduction and feature extraction in high-dimensional data analysis.
no code implementations • 14 May 2022 • Hsin-Hsiung Huang, Feng Yu, Xing Fan, Teng Zhang
While matrix variate regression models have been studied in many existing works, classical statistical and computational methods for regression coefficient estimation are highly affected by high-dimensional and noisy matrix-valued predictors.
no code implementations • 29 Apr 2022 • Meng-Zhang Qian, Zheng Ai, Teng Zhang, Wei Gao
Margin has played an important role in the design and analysis of learning algorithms over the past years, mostly through maximizing the minimum margin.
no code implementations • 14 Mar 2022 • Jin Xie, Teng Zhang, Jose Blanchet, Peter Glynn, Matthew Randolph, David Scheinker
In order for an algorithm to see sustained use, it must be compatible with changes to hospital capacity, patient volumes, and scheduling practices.
no code implementations • 29 Sep 2021 • Yilin Wang, Nan Cao, Teng Zhang, Hai Jin
Optimal margin Distribution Machine (ODM), a newly proposed statistical learning framework rooted in the novel margin theory, demonstrates better generalization performance than the traditional large-margin counterparts.
no code implementations • 22 Sep 2021 • Honggang Yu, Shihfeng Zeng, Teng Zhang, Ing-Chao Lin, Yier Jin
As a result, our theoretical proofs provide support to more efficient active learning methods with the help of adversarial examples, contrary to previous works where adversarial examples are often used as destructive solutions.
no code implementations • 8 Sep 2021 • Tianren Wang, Can Peng, Teng Zhang, Brian Lovell
With the excellent disentanglement properties of state-of-the-art generative models, image editing has been the dominant approach to control the attributes of synthesised face images.
no code implementations • 23 Feb 2021 • Teng Zhang
We employ large-scale finite element simulations of a bilayer neo-Hookean solid (e.g., a film bonded on a substrate) to explore mechanical principles that govern the formation of hexagonal wrinkling patterns and strategies for making nearly perfect hexagonal patterns.
Soft Condensed Matter
no code implementations • 20 Feb 2021 • Xing Fan, Marianna Pensky, Feng Yu, Teng Zhang
The paper considers a Mixture Multilayer Stochastic Block Model (MMLSBM), where layers can be partitioned into groups of similar networks, and networks in each group are equipped with a distinct Stochastic Block Model.
no code implementations • 22 Aug 2020 • Gang Zhao, Teng Zhang, Chenxiao Wang, Ping Lv, Ji Wu
We convert the Chinese medical text attributes extraction task into a sequence tagging or machine reading comprehension task.
no code implementations • 13 Jun 2020 • Tianren Wang, Teng Zhang, Brian Lovell
Text-to-Face (TTF) synthesis is a challenging task with great potential for diverse computer vision applications.
no code implementations • 7 Feb 2020 • Ziyi Yang, Teng Zhang, Iman Soltani Bozchalooi, Eric Darve
Decoded memory units in MEMGAN are more interpretable and disentangled than previous methods, which further demonstrates the effectiveness of the memory mechanism.
no code implementations • 22 Sep 2019 • Can Peng, Kun Zhao, Arnold Wiliem, Teng Zhang, Peter Hobson, Anthony Jennings, Brian C. Lovell
Critical findings are observed: (1) The best balance between detection accuracy, detection speed and file size is achieved at 8 times downsampling captured with a $40\times$ objective; (2) compression, which reduces the file size dramatically, does not necessarily have an adverse effect on overall accuracy; (3) reducing the amount of training data to some extent causes a drop in precision but has a negligible impact on the recall; (4) in most cases, Faster R-CNN achieves the best accuracy in the glomerulus detection task.
1 code implementation • 6 Aug 2019 • Feng Yu, Yi Yang, Teng Zhang
In comparison, this work proposes to decompose the objective function into two components, where one component is the loss function plus part of the total variation penalty, and the other component is the remaining total variation penalty.
Optimization and Control • Computation
no code implementations • 16 Jul 2019 • Liangchen Liu, Teng Zhang, Kun Zhao, Arnold Wiliem, Kieren Astin-Walmsley, Brian Lovell
We propose a novel two-stage zoom-in detection method to gradually focus on the object of interest.
no code implementations • 24 Jun 2019 • Sam Maksoud, Arnold Wiliem, Kun Zhao, Teng Zhang, Lin Wu, Brian C. Lovell
This is because the system can ignore the attention mechanism by assigning equal weights for all members.
1 code implementation • 24 Jun 2019 • Meng Li, Lin Wu, Arnold Wiliem, Kun Zhao, Teng Zhang, Brian C. Lovell
Histopathology image analysis can be considered as a Multiple instance learning (MIL) problem, where the whole slide histopathology image (WSI) is regarded as a bag of instances (i.e., patches) and the task is to predict a single class label for the WSI.
no code implementations • 20 Mar 2018 • Teng Zhang, Johanna Carvajal, Daniel F. Smith, Kun Zhao, Arnold Wiliem, Peter Hobson, Anthony Jennings, Brian C. Lovell
In order to address the quality assessment problem, we propose a deep neural network based framework to automatically assess the slide quality in a semantic way.
1 code implementation • ICML 2018 • Kevin Tian, Teng Zhang, James Zou
However, in addition to the text data itself, we often have additional covariates associated with individual corpus documents---e.g., the demographic of the author, time and venue of publication---and we would like the embedding to naturally capture this information.
no code implementations • ICLR 2018 • Kevin Tian, Teng Zhang, James Zou
In addition to the text data itself, we often have additional covariates associated with individual documents in the corpus---e.g., the demographic of the author, time and venue of publication, etc.---and we would like the embedding to naturally capture the information of the covariates.
2 code implementations • 7 Dec 2017 • Teng Zhang, Arnold Wiliem, Siqi Yang, Brian C. Lovell
While it can greatly increase the scope and benefits of the current security surveillance systems, performing such a task using thermal images is a challenging problem compared to face recognition task in the Visible Light Domain (VLD).
no code implementations • 27 Sep 2017 • Gilad Lerman, Yunpeng Shi, Teng Zhang
We establish exact recovery for the Least Unsquared Deviations (LUD) algorithm of Ozyesil and Singer.
no code implementations • 20 Aug 2017 • Tianyi Lin, Linbo Qiao, Teng Zhang, Jiashi Feng, Bofeng Zhang
This optimization model abstracts a number of important applications in artificial intelligence and machine learning, such as fused Lasso, fused logistic regression, and a class of graph-guided regularized minimization.
no code implementations • ICML 2017 • Teng Zhang, Zhi-Hua Zhou
It still remains open for multi-class classification, and due to the complexity of margin for multi-class classification, optimizing its distribution by mean and variance can also be difficult.
no code implementations • 1 Aug 2017 • Teng Zhang, Yi Yang
Robust PCA is a widely used statistical procedure to recover an underlying low-rank matrix with grossly corrupted observations.
1 code implementation • 16 Jun 2017 • He-Da Wang, Teng Zhang, Ji Wu
This article describes the final solution of team monkeytyping, who finished in second place in the YouTube-8M video understanding challenge.
no code implementations • 13 Jun 2017 • Tyler Maunu, Teng Zhang, Gilad Lerman
The practicality of the deterministic condition is demonstrated on some statistical models of data, and the method achieves almost state-of-the-art recovery guarantees on the Haystack Model for different regimes of sample size and ambient dimension.
no code implementations • 2 May 2017 • Marianna Pensky, Teng Zhang
We estimate the edge probability tensor by a kernel-type procedure and extract the group memberships of the nodes by spectral clustering.
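The spectral-clustering step can be sketched for the two-group case via the Fiedler vector of the graph Laplacian; the graph, function name, and sign rule below are illustrative, and the paper's kernel-type tensor estimator is omitted:

```python
import numpy as np

def two_group_memberships(A):
    """Split the nodes of adjacency matrix A into two groups using the
    sign pattern of the Fiedler vector, i.e. the eigenvector for the
    second-smallest eigenvalue of the unnormalized Laplacian L = D - A."""
    L = np.diag(A.sum(axis=1)) - A
    _, vecs = np.linalg.eigh(L)   # eigh returns ascending eigenvalues
    fiedler = vecs[:, 1]
    return fiedler > 0            # boolean group labels

# Two 4-node cliques joined by a single bridge edge (node 0 -- node 4).
A = np.zeros((8, 8))
A[:4, :4] = 1 - np.eye(4)
A[4:, 4:] = 1 - np.eye(4)
A[0, 4] = A[4, 0] = 1
labels = two_group_memberships(A)
```

For this graph the Fiedler vector is positive on one clique and negative on the other, so the boolean labels recover the planted partition (up to a global sign flip).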
no code implementations • 26 Apr 2017 • Tejal Bhamre, Teng Zhang, Amit Singer
The missing phase problem in X-ray crystallography is commonly solved using the technique of molecular replacement, which borrows phases from a previously solved homologous structure, and appends them to the measured Fourier magnitudes of the diffraction patterns of the unknown structure.
no code implementations • 12 Apr 2016 • Teng Zhang, Zhi-Hua Zhou
Support vector machine (SVM) has been one of the most popular learning algorithms, with the central idea of maximizing the minimum margin, i.e., the smallest distance from the instances to the classification boundary.
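For a linear classifier the minimum margin in this sense is directly computable; a small sketch with illustrative names and data:

```python
import numpy as np

def minimum_margin(w, b, X, y):
    """Smallest signed distance from the instances (rows of X, with
    labels y in {-1, +1}) to the hyperplane w.x + b = 0."""
    return (y * (X @ w + b) / np.linalg.norm(w)).min()

# Four linearly separable points around the boundary x1 = 0.
X = np.array([[2.0, 1.0], [3.0, -1.0], [-1.0, 0.5], [-2.0, 2.0]])
y = np.array([1, 1, -1, -1])
print(minimum_margin(np.array([1.0, 0.0]), 0.0, X, y))  # → 1.0
```

A hard-margin SVM would choose w and b to make this quantity as large as possible over the training set.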
no code implementations • 22 Feb 2016 • Tejal Bhamre, Teng Zhang, Amit Singer
In CWF, the covariance matrix of the projection images is used within the classical Wiener filtering framework for solving the image restoration deconvolution problem.
no code implementations • 1 Dec 2014 • Tejal Bhamre, Teng Zhang, Amit Singer
In single particle reconstruction (SPR) from cryo-electron microscopy (cryo-EM), the 3D structure of a molecule needs to be determined from its 2D projection images taken at unknown viewing directions.
no code implementations • 5 Nov 2013 • Teng Zhang, Zhi-Hua Zhou
In this paper, we propose the Large margin Distribution Machine (LDM), which tries to achieve a better generalization performance by optimizing the margin distribution.
no code implementations • 7 Jun 2012 • Teng Zhang
This paper considers the problem of robust subspace recovery: given a set of $N$ points in $\mathbb{R}^D$, if many lie in a $d$-dimensional subspace, then can we recover the underlying subspace?
no code implementations • 18 Feb 2012 • Gilad Lerman, Michael McCoy, Joel A. Tropp, Teng Zhang
Consider a dataset of vector-valued observations that consists of noisy inliers, which are explained well by a low-dimensional subspace, along with some number of outliers.
no code implementations • 20 Dec 2011 • Teng Zhang, Gilad Lerman
That is, we assume a data set in which some points are sampled around a fixed subspace while the rest are spread across the whole ambient space, and we aim to recover this underlying subspace.
no code implementations • 18 Dec 2010 • Gilad Lerman, Teng Zhang
We say that one of the underlying subspaces of the model is most significant if its mixture weight is higher than the sum of the mixture weights of all other subspaces.