Search Results for author: Jack Xin

Found 37 papers, 12 papers with code

COMQ: A Backpropagation-Free Algorithm for Post-Training Quantization

1 code implementation · 11 Mar 2024 · Aozhong Zhang, Zi Yang, Naigang Wang, Yingyong Qi, Jack Xin, Xin Li, Penghang Yin

Within a fixed layer, COMQ treats all the scaling factors and bit-codes as the variables of the reconstruction error.

Quantization
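As a rough illustration of the coordinate-style idea, the sketch below quantizes one weight row by alternately updating a scaling factor and integer bit-codes to shrink the squared reconstruction error, with no backpropagation involved. This is a simplification of COMQ (which minimizes layer-wise reconstruction error, per the abstract); the function name and 4-bit setting are illustrative assumptions.

```python
import numpy as np

def quantize_row(w, bits=4, iters=10):
    """Alternately update the scale s and integer codes q of one weight
    row to reduce the squared reconstruction error ||w - s*q||^2."""
    qmax = 2 ** (bits - 1) - 1                 # symmetric grid [-qmax, qmax]
    s = max(np.abs(w).max() / qmax, 1e-12)     # initial scale
    q = np.zeros_like(w)
    for _ in range(iters):
        # update codes with the scale fixed (round-to-nearest on the grid)
        q = np.clip(np.round(w / s), -qmax, qmax)
        # update scale with the codes fixed (closed-form least squares)
        s = float(w @ q) / max(float(q @ q), 1e-12)
    return s, q
```

Each alternating update is a closed-form least-squares step, which is what keeps the procedure gradient-free.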

FWin transformer for dengue prediction under climate and ocean influence

no code implementations · 10 Mar 2024 · Nhat Thanh Tran, Jack Xin, Guofa Zhou

Dengue fever is one of the most deadly mosquito-borne tropical infectious diseases.

A Proximal Algorithm for Network Slimming

no code implementations · 2 Jul 2023 · Kevin Bui, Fanghui Xue, Fredrick Park, Yingyong Qi, Jack Xin

This time-consuming, three-step process is a result of using subgradient descent to train CNNs.

Fourier-Mixed Window Attention: Accelerating Informer for Long Sequence Time-Series Forecasting

1 code implementation · 2 Jul 2023 · Nhat Thanh Tran, Jack Xin

We study a fast local-global window-based attention method to accelerate Informer for long sequence time-series forecasting.

Time Series · Time Series Forecasting
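A toy sketch of the local-global idea: softmax attention restricted to non-overlapping windows, followed by a global Fourier mixing step. The identity query/key/value projections and the FNet-style FFT mixing are illustrative assumptions standing in for the paper's actual Fourier-mixed window layer.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fwin_attention(x, window=4):
    """Toy window attention: O(n*w) local attention inside each window,
    then a parameter-free FFT pass for global token mixing."""
    n, d = x.shape
    assert n % window == 0, "sequence length must be divisible by window"
    out = np.empty_like(x)
    for i in range(0, n, window):
        blk = x[i:i + window]                       # one local window
        att = softmax(blk @ blk.T / np.sqrt(d))     # scaled dot-product scores
        out[i:i + window] = att @ blk
    # FNet-style global mixing: keep the real part of a 2-D FFT
    return np.real(np.fft.fft2(out))
```

The local windows keep the quadratic cost bounded by the window size, while the FFT supplies the long-range interaction that full attention would otherwise provide.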

Weighted Anisotropic-Isotropic Total Variation for Poisson Denoising

1 code implementation · 1 Jul 2023 · Kevin Bui, Yifei Lou, Fredrick Park, Jack Xin

Poisson noise commonly occurs in images captured by photon-limited imaging systems such as in astronomy and medicine.

Astronomy · Computational Efficiency +1

Feature Affinity Assisted Knowledge Distillation and Quantization of Deep Neural Networks on Label-Free Data

no code implementations · 10 Feb 2023 · Zhijian Li, Biao Yang, Penghang Yin, Yingyong Qi, Jack Xin

In this paper, we propose a feature affinity (FA) assisted knowledge distillation (KD) method to improve quantization-aware training of deep neural networks (DNNs).

Knowledge Distillation · Quantization
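The label-free ingredient can be illustrated with a minimal sketch: match pairwise feature-affinity matrices between teacher and student, which requires no ground-truth labels. The cosine affinity and Frobenius-type loss below are assumptions for illustration, not necessarily the paper's exact formulation.

```python
import numpy as np

def affinity(F):
    """Cosine-similarity affinity matrix of a batch of feature vectors."""
    Fn = F / np.linalg.norm(F, axis=1, keepdims=True)
    return Fn @ Fn.T

def fa_loss(F_student, F_teacher):
    """Label-free distillation signal: mean squared distance between
    the student's and teacher's batch affinity matrices."""
    A_s, A_t = affinity(F_student), affinity(F_teacher)
    return float(np.mean((A_s - A_t) ** 2))
```

Because only pairwise similarities are compared, the student and teacher feature dimensions need not match, which is what makes the signal usable on unlabeled data.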

A DeepParticle method for learning and generating aggregation patterns in multi-dimensional Keller-Segel chemotaxis systems

no code implementations · 31 Aug 2022 · Zhongjian Wang, Jack Xin, Zhiwen Zhang

We study a regularized interacting particle method for computing aggregation patterns and near-singular solutions of a Keller-Segel (KS) chemotaxis system in two and three space dimensions, then further develop the DeepParticle (DP) method to learn and generate solutions under variations of physical parameters.

Proximal Implicit ODE Solvers for Accelerating Learning Neural ODEs

no code implementations · 19 Apr 2022 · Justin Baker, Hedi Xia, Yiwei Wang, Elena Cherkaev, Akil Narayan, Long Chen, Jack Xin, Andrea L. Bertozzi, Stanley J. Osher, Bao Wang

Learning neural ODEs often requires solving very stiff ODE systems, primarily using explicit adaptive step size ODE solvers.

Computational Efficiency

Searching Intrinsic Dimensions of Vision Transformers

no code implementations · 16 Apr 2022 · Fanghui Xue, Biao Yang, Yingyong Qi, Jack Xin

It has been shown by many researchers that transformers perform as well as convolutional neural networks in many computer vision tasks.

Image Classification · object-detection +1

Channel Pruning In Quantization-aware Training: An Adaptive Projection-gradient Descent-shrinkage-splitting Method

no code implementations · 9 Apr 2022 · Zhijian Li, Jack Xin

We propose an adaptive projection-gradient descent-shrinkage-splitting method (APGDSSM) to integrate penalty based channel pruning into quantization-aware training (QAT).

Quantization

Enhancing Zero-Shot Many to Many Voice Conversion with Self-Attention VAE

no code implementations · 30 Mar 2022 · Ziang Long, Yunling Zheng, Meng Yu, Jack Xin

Variational auto-encoder (VAE) is an effective neural network architecture to disentangle a speech utterance into speaker identity and linguistic content latent embeddings, then generate an utterance for a target speaker from that of a source speaker.

Sentence · Voice Conversion

An Efficient Smoothing and Thresholding Image Segmentation Framework with Weighted Anisotropic-Isotropic Total Variation

1 code implementation · 21 Feb 2022 · Kevin Bui, Yifei Lou, Fredrick Park, Jack Xin

In this paper, we design an efficient, multi-stage image segmentation framework that incorporates a weighted difference of anisotropic and isotropic total variation (AITV).

Image Segmentation · Segmentation +1

An integrated recurrent neural network and regression model with spatial and climatic couplings for vector-borne disease dynamics

no code implementations · 23 Jan 2022 · Zhijian Li, Jack Xin, Guofa Zhou

We developed an integrated recurrent neural network and nonlinear regression spatio-temporal model for vector-borne disease evolution.

Recommendation Systems · regression

GLassoformer: A Query-Sparse Transformer for Post-Fault Power Grid Voltage Prediction

no code implementations · 22 Jan 2022 · Yunling Zheng, Carson Hu, Guang Lin, Meng Yue, Bao Wang, Jack Xin

Due to the sparsified queries, GLassoformer is more computationally efficient than the standard transformers.

DeepParticle: learning invariant measure by a deep neural network minimizing Wasserstein distance on data generated from an interacting particle method

no code implementations · 2 Nov 2021 · Zhongjian Wang, Jack Xin, Zhiwen Zhang

We introduce the so-called DeepParticle method to learn and generate invariant measures of stochastic dynamical systems with physical parameters based on data computed from an interacting particle method (IPM).

Recurrence of Optimum for Training Weight and Activation Quantized Networks

no code implementations · 10 Dec 2020 · Ziang Long, Penghang Yin, Jack Xin

Deep neural networks (DNNs) are quantized for efficient inference on resource-constrained platforms.

Negation · Quantization

Learning Quantized Neural Nets by Coarse Gradient Method for Non-linear Classification

no code implementations · 23 Nov 2020 · Ziang Long, Penghang Yin, Jack Xin

In this paper, we propose a class of STEs with certain monotonicity, and consider their applications to the training of a two-linear-layer network with quantized activation functions for non-linear multi-category classification.

General Classification

A Spatial-Temporal Graph Based Hybrid Infectious Disease Model with Application to COVID-19

no code implementations · 18 Oct 2020 · Yunling Zheng, Zhijian Li, Jack Xin, Guofa Zhou

For the edge features, we design an RNN model to capture the neighboring effect and regularize the landscape of the loss function so that local minima are effective and robust for prediction.

Time Series · Time Series Analysis

Improving Network Slimming with Nonconvex Regularization

1 code implementation · 3 Oct 2020 · Kevin Bui, Fredrick Park, Shuai Zhang, Yingyong Qi, Jack Xin

Network slimming with T$\ell_1$ regularization also outperforms the latest Bayesian modification of network slimming in compressing a CNN architecture in terms of memory storage while preserving its model accuracy after channel pruning.

Image Classification · object-detection +3
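For reference, the transformed-$\ell_1$ penalty has the closed form $T\ell_1^a(x) = \sum_i (a+1)|x_i| / (a+|x_i|)$, interpolating between $\ell_0$ (as $a \to 0$) and $\ell_1$ (as $a \to \infty$); in network slimming it is applied to the batch-normalization scaling factors. A minimal sketch, with the default value of $a$ as an assumption:

```python
import numpy as np

def t_ell1(x, a=1.0):
    """Transformed-L1 penalty: sum of (a+1)|x| / (a+|x|) elementwise.
    Small a approaches the L0 count of nonzeros; large a approaches
    the L1 norm."""
    ax = np.abs(x)
    return float(np.sum((a + 1.0) * ax / (a + ax)))
```

In training, this term would be added to the loss on the scaling factors so that whole channels are driven toward zero and can be pruned.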

Lorentzian Peak Sharpening and Sparse Blind Source Separation for NMR Spectroscopy

no code implementations · 4 Sep 2020 · Yuanchang Sun, Jack Xin

In this paper, we introduce a preprocessing technique for blind source separation (BSS) of nonnegative and overlapped data.

blind source separation

An Integrated Approach to Produce Robust Models with High Efficiency

1 code implementation · 31 Aug 2020 · Zhijian Li, Bao Wang, Jack Xin

To address the problem that adversarial training degrades DNNs' accuracy on clean images and destroys the structure of sparsity, we design a trade-off loss function that helps DNNs preserve their natural accuracy and improve channel sparsity.

Quantization · Vocal Bursts Intensity Prediction

RARTS: An Efficient First-Order Relaxed Architecture Search Method

no code implementations · 10 Aug 2020 · Fanghui Xue, Yingyong Qi, Jack Xin

Differentiable architecture search (DARTS) is an effective method for data-driven neural network design based on solving a bilevel optimization problem.

Bilevel Optimization · Network Pruning

A Recurrent Neural Network and Differential Equation Based Spatiotemporal Infectious Disease Model with Application to COVID-19

no code implementations · 14 Jul 2020 · Zhijian Li, Yunling Zheng, Jack Xin, Guofa Zhou

Modeling the trend of infection and real-time forecasting of cases can help decision making and control of the disease spread.

Decision Making

A Weighted Difference of Anisotropic and Isotropic Total Variation for Relaxed Mumford-Shah Color and Multiphase Image Segmentation

1 code implementation · 9 May 2020 · Kevin Bui, Fredrick Park, Yifei Lou, Jack Xin

In a class of piecewise-constant image segmentation models, we propose to incorporate a weighted difference of anisotropic and isotropic total variation (AITV) to regularize the partition boundaries in an image.

Denoising · Image Segmentation +2
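The AITV regularizer admits a compact sketch: the anisotropic TV $\|\nabla u\|_1$ minus $\alpha$ times the isotropic TV $\|\nabla u\|_{2,1}$. The forward differences with replicated boundary values below are an assumed discretization for illustration.

```python
import numpy as np

def aitv(u, alpha=0.5):
    """Weighted anisotropic-isotropic TV of a 2-D image u:
    sum over pixels of (|ux| + |uy|) - alpha * sqrt(ux^2 + uy^2),
    using forward differences with the last row/column replicated."""
    ux = np.diff(u, axis=1, append=u[:, -1:])   # horizontal differences
    uy = np.diff(u, axis=0, append=u[-1:, :])   # vertical differences
    aniso = np.abs(ux) + np.abs(uy)             # L1 of the gradient
    iso = np.sqrt(ux**2 + uy**2)                # L2 of the gradient
    return float(np.sum(aniso - alpha * iso))
```

Since $|a| + |b| \ge \sqrt{a^2 + b^2}$, the value is nonnegative for $\alpha \le 1$, and the difference sharpens corners and edges relative to either TV alone.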

$\ell_0$ Regularized Structured Sparsity Convolutional Neural Networks

no code implementations · 17 Dec 2019 · Kevin Bui, Fredrick Park, Shuai Zhang, Yingyong Qi, Jack Xin

Deepening and widening convolutional neural networks (CNNs) significantly increases the number of trainable weight parameters by adding more convolutional layers and feature maps per layer, respectively.

Understanding Straight-Through Estimator in Training Activation Quantized Neural Nets

no code implementations · ICLR 2019 · Penghang Yin, Jiancheng Lyu, Shuai Zhang, Stanley Osher, Yingyong Qi, Jack Xin

We prove that if the STE is properly chosen, the expected coarse gradient correlates positively with the population gradient (not available for the training), and its negation is a descent direction for minimizing the population loss.

Negation
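A minimal sketch of the straight-through estimator being analyzed: the forward pass uses the sign activation, whose true derivative is zero almost everywhere, and the backward pass substitutes a surrogate derivative. The hard-tanh surrogate below is one common STE choice, used here as an illustrative assumption.

```python
import numpy as np

def binarize_forward(z):
    """Forward pass of a binary (sign) activation."""
    return np.sign(z)

def ste_backward(z, grad_out):
    """Coarse gradient via a straight-through estimator: since sign()
    has zero derivative a.e., backprop uses the derivative of hard-tanh
    instead (identity on |z| <= 1, zero outside)."""
    return grad_out * (np.abs(z) <= 1.0)
```

The resulting "coarse gradient" is not the true gradient of the loss, which is exactly why its positive correlation with the population gradient needs to be proved.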

Learning Sparse Neural Networks via $\ell_0$ and T$\ell_1$ by a Relaxed Variable Splitting Method with Application to Multi-scale Curve Classification

no code implementations · 20 Feb 2019 · Fanghui Xue, Jack Xin

We study sparsification of convolutional neural networks (CNN) by a relaxed variable splitting method of $\ell_0$ and transformed-$\ell_1$ (T$\ell_1$) penalties, with application to complex curves such as texts written in different fonts, and words written with trembling hands simulating those of Parkinson's disease patients.

General Classification

A Study on Graph-Structured Recurrent Neural Networks and Sparsification with Application to Epidemic Forecasting

2 code implementations · 13 Feb 2019 · Zhijian Li, Xiyang Luo, Bao Wang, Andrea L. Bertozzi, Jack Xin

We study epidemic forecasting on real-world health data by a graph-structured recurrent neural network (GSRNN).

Convergence of a Relaxed Variable Splitting Coarse Gradient Descent Method for Learning Sparse Weight Binarized Activation Neural Networks

2 code implementations · 25 Jan 2019 · Thu Dinh, Jack Xin

In this paper, we study the problem of coarse gradient descent (CGD) learning of a one hidden layer convolutional neural network (CNN) with binarized activation function and sparse weights.

Optimization and Control 90C26, 97R40, 68T05

AutoShuffleNet: Learning Permutation Matrices via an Exact Lipschitz Continuous Penalty in Deep Convolutional Neural Networks

no code implementations · 24 Jan 2019 · Jiancheng Lyu, Shuai Zhang, Yingyong Qi, Jack Xin

In addition, we found experimentally that the standard convex relaxation of permutation matrices into stochastic matrices leads to poor performance.

Graph Matching

An efficient model reduction method for solving viscous G-equations in incompressible cellular flows

1 code implementation · 24 Dec 2018 · Haotian Gu, Jack Xin, Zhiwen Zhang

To facilitate the algorithm design and convergence analysis, we decompose the solution of the viscous G-equation into a mean-free part and a mean part, where their evolution equations can be derived accordingly.

Numerical Analysis 65M12, 70H20, 76F25, 78M34, 80A25

Blended Coarse Gradient Descent for Full Quantization of Deep Neural Networks

no code implementations · 15 Aug 2018 · Penghang Yin, Shuai Zhang, Jiancheng Lyu, Stanley Osher, Yingyong Qi, Jack Xin

We introduce the notion of coarse gradient and propose the blended coarse gradient descent (BCGD) algorithm, for training fully quantized neural networks.

Binarization · Quantization

BinaryRelax: A Relaxation Approach For Training Deep Neural Networks With Quantized Weights

2 code implementations · 19 Jan 2018 · Penghang Yin, Shuai Zhang, Jiancheng Lyu, Stanley Osher, Yingyong Qi, Jack Xin

We propose BinaryRelax, a simple two-phase algorithm, for training deep neural networks with quantized weights.

Quantization
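A minimal sketch of the two-phase idea, under assumed details: in the relaxed phase the float weights are pulled toward their binary quantization by a convex combination whose parameter lam grows during training, and the exact phase then uses the quantization directly.

```python
import numpy as np

def binary_quantize(w):
    """Optimal 1-bit quantization of w: s * sign(w), where the
    least-squares scale is s = mean(|w|)."""
    s = np.mean(np.abs(w))
    return s * np.sign(w)

def relaxed_projection(w, lam):
    """Relaxed phase of a BinaryRelax-style scheme: a convex combination
    of the float weights and their quantization. As lam grows, the
    weights are pulled gradually onto the binary grid."""
    return (w + lam * binary_quantize(w)) / (1.0 + lam)
```

With lam = 0 the step leaves the float weights untouched, and as lam grows large it coincides with hard quantization, so the schedule interpolates smoothly between the two phases.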

Quantization and Training of Low Bit-Width Convolutional Neural Networks for Object Detection

no code implementations · 19 Dec 2016 · Penghang Yin, Shuai Zhang, Yingyong Qi, Jack Xin

We present LBW-Net, an efficient optimization based method for quantization and training of the low bit-width convolutional neural networks (CNNs).

object-detection · Object Detection +1
