1 code implementation • 11 Mar 2024 • Aozhong Zhang, Zi Yang, Naigang Wang, Yingyong Qi, Jack Xin, Xin Li, Penghang Yin
Within a fixed layer, COMQ treats the scaling factor(s) and all the bit-codes as variables of the layer-wise reconstruction error.
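A minimal numpy sketch of the coordinate-descent idea behind COMQ (our own variable names and simplifications, not the paper's exact updates): for one weight row, hold all but one bit-code fixed and pick the value that minimizes the layer-output reconstruction error on calibration activations X.

```python
import numpy as np

def comq_like_row(w, X, bits=4, sweeps=3):
    """Sketch of COMQ-style coordinate descent for one weight row w (n,)
    and calibration activations X (n, m): quantize w as s * q with integer
    bit-codes q, minimizing ||w @ X - s * q @ X||^2. Illustrative only."""
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    s = np.abs(w).max() / hi + 1e-12            # shared scale for the row
    q = np.clip(np.round(w / s), lo, hi)
    for _ in range(sweeps):
        for i in range(len(w)):
            q[i] = 0.0
            r = (w - s * q) @ X                 # residual with coordinate i removed
            xi = X[i]
            opt = (r @ xi) / (s * (xi @ xi) + 1e-12)  # unconstrained minimizer of q_i
            q[i] = np.clip(np.round(opt), lo, hi)     # snap to the bit-code grid
    return s, q
```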
no code implementations • 10 Mar 2024 • Nhat Thanh Tran, Jack Xin, Guofa Zhou
Dengue fever is one of the deadliest mosquito-borne tropical infectious diseases.
no code implementations • 2 Jul 2023 • Kevin Bui, Fanghui Xue, Fredrick Park, Yingyong Qi, Jack Xin
This time-consuming, three-step process is a result of using subgradient descent to train CNNs.
1 code implementation • 2 Jul 2023 • Nhat Thanh Tran, Jack Xin
We study a fast local-global window-based attention method to accelerate Informer for long sequence time-series forecasting.
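A toy sketch of a local-global attention pattern of the kind described (mask construction only; `window` and `global_every` are our illustrative parameters, not the paper's):

```python
import numpy as np

def local_global_mask(n, window=8, global_every=16):
    """Toy local-global attention mask: each position attends to a local
    window plus a sparse set of global positions, reducing the cost of
    full attention on long sequences."""
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        mask[i, lo:hi] = True                 # local window
    mask[:, ::global_every] = True            # global tokens visible to all
    mask[::global_every, :] = True            # global tokens attend everywhere
    return mask
```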
1 code implementation • 1 Jul 2023 • Kevin Bui, Yifei Lou, Fredrick Park, Jack Xin
Poisson noise commonly occurs in images captured by photon-limited imaging systems such as in astronomy and medicine.
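For intuition, Poisson noise can be simulated by drawing photon counts whose mean is the clean intensity; a minimal sketch (`peak` is our illustrative parameter):

```python
import numpy as np

def poisson_corrupt(img, peak=30.0, seed=0):
    """Simulate photon-limited acquisition: scale a clean image in [0, 1]
    to an expected 'peak' photon count, draw Poisson counts, and rescale.
    Smaller peak values give noisier images."""
    rng = np.random.default_rng(seed)
    return rng.poisson(img * peak) / peak
```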
no code implementations • 10 Feb 2023 • Zhijian Li, Biao Yang, Penghang Yin, Yingyong Qi, Jack Xin
In this paper, we propose a feature affinity (FA) assisted knowledge distillation (KD) method to improve quantization-aware training of deep neural networks (DNN).
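A sketch of what a feature-affinity distillation loss can look like (our formulation, matching pairwise cosine-affinity matrices of pooled features; the paper's definition may differ in detail):

```python
import torch
import torch.nn.functional as F

def feature_affinity_loss(f_student, f_teacher):
    """Match the pairwise cosine-similarity (affinity) matrices of student
    and teacher features. f_*: (batch, channels) pooled feature maps."""
    s = F.normalize(f_student, dim=1)
    t = F.normalize(f_teacher, dim=1)
    return (s @ s.T - t @ t.T).pow(2).mean()
```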
1 code implementation • 6 Jan 2023 • Kevin Bui, Yifei Lou, Fredrick Park, Jack Xin
In this paper, we aim to segment an image degraded by blur and Poisson noise.
no code implementations • 31 Aug 2022 • Zhongjian Wang, Jack Xin, Zhiwen Zhang
We study a regularized interacting particle method for computing aggregation patterns and near-singular solutions of a Keller-Segel (KS) chemotaxis system in two and three space dimensions, and further develop the DeepParticle (DP) method to learn and generate solutions under variations of physical parameters.
no code implementations • 19 Apr 2022 • Justin Baker, Hedi Xia, Yiwei Wang, Elena Cherkaev, Akil Narayan, Long Chen, Jack Xin, Andrea L. Bertozzi, Stanley J. Osher, Bao Wang
Learning neural ODEs often requires solving very stiff ODE systems, primarily using explicit adaptive step size ODE solvers.
no code implementations • 16 Apr 2022 • Fanghui Xue, Biao Yang, Yingyong Qi, Jack Xin
It has been shown by many researchers that transformers perform as well as convolutional neural networks in many computer vision tasks.
no code implementations • 9 Apr 2022 • Zhijian Li, Jack Xin
We propose an adaptive projection-gradient descent-shrinkage-splitting method (APGDSSM) to integrate penalty based channel pruning into quantization-aware training (QAT).
no code implementations • 30 Mar 2022 • Ziang Long, Yunling Zheng, Meng Yu, Jack Xin
Variational auto-encoder (VAE) is an effective neural network architecture to disentangle a speech utterance into speaker identity and linguistic content latent embeddings, then generate an utterance for a target speaker from that of a source speaker.
1 code implementation • 21 Feb 2022 • Kevin Bui, Yifei Lou, Fredrick Park, Jack Xin
In this paper, we design an efficient, multi-stage image segmentation framework that incorporates a weighted difference of anisotropic and isotropic total variation (AITV).
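The AITV regularizer is the anisotropic TV minus a weighted isotropic TV; a minimal finite-difference sketch (forward differences with a replicated boundary, our discretization choices):

```python
import numpy as np

def aitv(u, alpha=0.5):
    """Weighted anisotropic-isotropic total variation
    AITV(u) = ||grad u||_1 - alpha * ||grad u||_{2,1},
    with alpha in [0, 1] weighing the isotropic part."""
    ux = np.diff(u, axis=1, append=u[:, -1:])   # horizontal forward difference
    uy = np.diff(u, axis=0, append=u[-1:, :])   # vertical forward difference
    aniso = np.abs(ux) + np.abs(uy)             # anisotropic TV integrand
    iso = np.sqrt(ux ** 2 + uy ** 2)            # isotropic TV integrand
    return (aniso - alpha * iso).sum()
```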
no code implementations • 23 Jan 2022 • Zhijian Li, Jack Xin, Guofa Zhou
We developed an integrated recurrent neural network and nonlinear regression spatio-temporal model for vector-borne disease evolution.
no code implementations • 22 Jan 2022 • Yunling Zheng, Carson Hu, Guang Lin, Meng Yue, Bao Wang, Jack Xin
Due to the sparsified queries, GLassoformer is more computationally efficient than the standard transformers.
no code implementations • 2 Nov 2021 • Zhongjian Wang, Jack Xin, Zhiwen Zhang
We introduce the so-called DeepParticle method to learn and generate invariant measures of stochastic dynamical systems with physical parameters, based on data computed from an interacting particle method (IPM).
no code implementations • 10 Dec 2020 • Ziang Long, Penghang Yin, Jack Xin
Deep neural networks (DNNs) are quantized for efficient inference on resource-constrained platforms.
no code implementations • 23 Nov 2020 • Ziang Long, Penghang Yin, Jack Xin
In this paper, we propose a class of STEs with certain monotonicity, and consider their applications to the training of a two-linear-layer network with quantized activation functions for non-linear multi-category classification.
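A minimal PyTorch sketch of one monotone STE: the forward pass rounds the activation to a uniform grid on [0, 1], while the backward pass substitutes the derivative of the clipped identity (one monotone proxy among those the paper considers):

```python
import torch

class QuantActSTE(torch.autograd.Function):
    """Quantized activation with a straight-through estimator."""
    @staticmethod
    def forward(ctx, x, levels=4):
        ctx.save_for_backward(x)
        # round to a uniform grid of 'levels' points on [0, 1]
        return torch.round(torch.clamp(x, 0, 1) * (levels - 1)) / (levels - 1)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # clipped-identity proxy gradient; None for the 'levels' argument
        return grad_out * ((x >= 0) & (x <= 1)).float(), None

# usage: y = QuantActSTE.apply(x)
```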
no code implementations • 18 Oct 2020 • Yunling Zheng, Zhijian Li, Jack Xin, Guofa Zhou
For edge features, we design an RNN model to capture the neighboring effect and regularize the landscape of the loss function so that local minima are effective and robust for prediction.
1 code implementation • 3 Oct 2020 • Kevin Bui, Fredrick Park, Shuai Zhang, Yingyong Qi, Jack Xin
Network slimming with T$\ell_1$ regularization also outperforms the latest Bayesian modification of network slimming in compressing a CNN architecture in terms of memory storage while preserving its model accuracy after channel pruning.
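The transformed $\ell_1$ penalty referenced here is $T_a(x) = (a+1)|x|/(a+|x|)$, which interpolates between $\ell_0$ (as $a \to 0$) and $\ell_1$ (as $a \to \infty$); a one-function sketch applied to channel scaling factors:

```python
import numpy as np

def transformed_l1(scaling_factors, a=1.0):
    """Transformed-L1 penalty T_a(x) = (a + 1)|x| / (a + |x|), summed
    elementwise over the channel scaling factors being sparsified."""
    ax = np.abs(scaling_factors)
    return ((a + 1.0) * ax / (a + ax)).sum()
```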
no code implementations • 4 Sep 2020 • Yuanchang Sun, Jack Xin
In this paper, we introduce a preprocessing technique for blind source separation (BSS) of nonnegative and overlapped data.
1 code implementation • 31 Aug 2020 • Zhijian Li, Bao Wang, Jack Xin
To address the problems that adversarial training jeopardizes DNNs' accuracy on clean images and their sparsity structure, we design a trade-off loss function that helps DNNs preserve natural accuracy and improve channel sparsity.
no code implementations • 10 Aug 2020 • Fanghui Xue, Yingyong Qi, Jack Xin
Differentiable architecture search (DARTS) is an effective method for data-driven neural network design based on solving a bilevel optimization problem.
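At the heart of DARTS is a softmax relaxation of the discrete operation choice; a minimal sketch (candidate ops supplied by the caller; the bilevel alternation between weight and architecture updates is omitted):

```python
import torch
import torch.nn as nn

class MixedOp(nn.Module):
    """DARTS-style mixed operation: a softmax over architecture parameters
    alpha relaxes the discrete choice among candidate ops into a weighted
    sum, so alpha can be learned by gradient descent on validation loss."""
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)
        self.alpha = nn.Parameter(torch.zeros(len(ops)))

    def forward(self, x):
        w = torch.softmax(self.alpha, dim=0)
        return sum(wi * op(x) for wi, op in zip(w, self.ops))
```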
no code implementations • 14 Jul 2020 • Zhijian Li, Yunling Zheng, Jack Xin, Guofa Zhou
Modeling the trend of infection and real-time forecasting of cases can help decision making and control of the disease spread.
1 code implementation • 9 May 2020 • Kevin Bui, Fredrick Park, Yifei Lou, Jack Xin
In a class of piecewise-constant image segmentation models, we propose to incorporate a weighted difference of anisotropic and isotropic total variation (AITV) to regularize the partition boundaries in an image.
no code implementations • 28 Feb 2020 • Ziang Long, Penghang Yin, Jack Xin
In this paper, we study the dynamics of gradient descent in learning neural networks for classification problems.
no code implementations • 17 Dec 2019 • Kevin Bui, Fredrick Park, Shuai Zhang, Yingyong Qi, Jack Xin
Deepening and widening convolutional neural networks (CNNs) significantly increases the number of trainable weight parameters by adding more convolutional layers and feature maps per layer, respectively.
no code implementations • ICLR 2019 • Penghang Yin, Jiancheng Lyu, Shuai Zhang, Stanley Osher, Yingyong Qi, Jack Xin
We prove that if the STE is properly chosen, the expected coarse gradient correlates positively with the population gradient (not available for the training), and its negation is a descent direction for minimizing the population loss.
no code implementations • 20 Feb 2019 • Fanghui Xue, Jack Xin
We study sparsification of convolutional neural networks (CNN) by a relaxed variable splitting method of $\ell_0$ and transformed-$\ell_1$ (T$\ell_1$) penalties, with application to complex curves such as texts written in different fonts, and words written with trembling hands simulating those of Parkinson's disease patients.
2 code implementations • 13 Feb 2019 • Zhijian Li, Xiyang Luo, Bao Wang, Andrea L. Bertozzi, Jack Xin
We study epidemic forecasting on real-world health data by a graph-structured recurrent neural network (GSRNN).
2 code implementations • 25 Jan 2019 • Thu Dinh, Jack Xin
In this paper, we study the problem of coarse gradient descent (CGD) learning of a one hidden layer convolutional neural network (CNN) with binarized activation function and sparse weights.
Optimization and Control 90C26, 97R40, 68T05
no code implementations • 24 Jan 2019 • Jiancheng Lyu, Shuai Zhang, Yingyong Qi, Jack Xin
In addition, we found experimentally that the standard convex relaxation of permutation matrices into stochastic matrices leads to poor performance.
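For reference, the standard relaxation replaces permutation matrices with doubly stochastic ones (their convex hull), which can be approached by Sinkhorn row/column normalization; a sketch of that relaxation, i.e., the baseline the finding cautions against, not the paper's remedy:

```python
import numpy as np

def sinkhorn(M, iters=50):
    """Project a score matrix toward the doubly stochastic polytope by
    alternating row and column normalization of its exponential."""
    P = np.exp(M - M.max())
    for _ in range(iters):
        P /= P.sum(axis=1, keepdims=True)   # rows sum to 1
        P /= P.sum(axis=0, keepdims=True)   # columns sum to 1
    return P
```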
1 code implementation • 24 Dec 2018 • Haotian Gu, Jack Xin, Zhiwen Zhang
To facilitate the algorithm design and convergence analysis, we decompose the solution of the viscous G-equation into a mean-free part and a mean part, where their evolution equations can be derived accordingly.
Numerical Analysis 65M12, 70H20, 76F25, 78M34, 80A25
no code implementations • 15 Aug 2018 • Penghang Yin, Shuai Zhang, Jiancheng Lyu, Stanley Osher, Yingyong Qi, Jack Xin
We introduce the notion of coarse gradient and propose the blended coarse gradient descent (BCGD) algorithm, for training fully quantized neural networks.
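A heavily hedged sketch of one BCGD step as we read the update: the float weights are blended with their quantization and then moved along the coarse gradient computed at the quantized weights; `rho` is a small blending parameter, and the exact rule and its analysis are in the paper.

```python
import torch

def bcgd_step(w, coarse_grad, quantize, lr=0.1, rho=1e-5):
    """One blended coarse gradient descent step (sketch): blend float
    weights with their quantization, then step along the coarse gradient
    obtained via an STE at the quantized weights."""
    with torch.no_grad():
        return (1 - rho) * w + rho * quantize(w) - lr * coarse_grad
```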
2 code implementations • 19 Jan 2018 • Penghang Yin, Shuai Zhang, Jiancheng Lyu, Stanley Osher, Yingyong Qi, Jack Xin
We propose BinaryRelax, a simple two-phase algorithm, for training deep neural networks with quantized weights.
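A sketch of the relaxation we understand phase one to use: the forward-pass weights are a Moreau-envelope-style combination of the float weights and their quantization, with the relaxation parameter `lam` increasing over training until phase two switches to exact quantization (variable names are ours):

```python
import torch

def binary_relax_weights(w, quantize, lam, phase_two=False):
    """Relaxed (pseudo) projection used in the forward pass: a combination
    (w + lam * Q(w)) / (1 + lam) of float weights and their quantization
    during phase one; hard quantization Q(w) in phase two."""
    if phase_two:
        return quantize(w)
    return (w + lam * quantize(w)) / (1 + lam)
```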
no code implementations • 23 Nov 2017 • Bao Wang, Penghang Yin, Andrea L. Bertozzi, P. Jeffrey Brantingham, Stanley J. Osher, Jack Xin
In this work, we first present a proper representation of crime data.
no code implementations • 19 Dec 2016 • Penghang Yin, Shuai Zhang, Yingyong Qi, Jack Xin
We present LBW-Net, an efficient optimization based method for quantization and training of the low bit-width convolutional neural networks (CNNs).