Search Results for author: Rujie Liu

Found 13 papers, 3 papers with code

Generative Modelling with High-Order Langevin Dynamics

no code implementations19 Apr 2024 Ziqiang Shi, Rujie Liu

In this paper, we propose a novel method for fast, high-quality generative modelling based on high-order Langevin dynamics (HOLD) with score matching.

Image Generation · Unconditional Image Generation
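
For intuition, here is a minimal first-order (overdamped) Langevin sampler driven by a learned score. This is only a sketch: the paper's HOLD replaces this first-order chain with higher-order dynamics carrying auxiliary momentum-like variables, which are not implemented here.

```python
import torch

def langevin_sample(score_fn, x, step=1e-3, n_steps=100):
    """Plain first-order Langevin sampling: x <- x + step*score + sqrt(2*step)*noise.
    score_fn approximates grad_x log p(x), e.g. a network trained with
    score matching (hypothetical stand-in; not the paper's HOLD sampler)."""
    for _ in range(n_steps):
        noise = torch.randn_like(x)
        x = x + step * score_fn(x) + (2 * step) ** 0.5 * noise
    return x
```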

Speech Separation Based on Multi-Stage Elaborated Dual-Path Deep BiLSTM with Auxiliary Identity Loss

1 code implementation6 Aug 2020 Ziqiang Shi, Rujie Liu, Jiqing Han

We have open-sourced our re-implementation of DPRNN-TasNet at https://github.com/ShiZiqiang/dual-path-RNNs-DPRNNs-based-speech-separation, and our TasTas builds on this implementation, so the results in this paper should be reproducible with ease.

Speaker Separation · Speech Separation

La Furca: Iterative Context-Aware End-to-End Monaural Speech Separation Based on Dual-Path Deep Parallel Inter-Intra Bi-LSTM with Attention

1 code implementation23 Jan 2020 Ziqiang Shi, Rujie Liu, Jiqing Han

We have open-sourced our re-implementation of DPRNN-TasNet at https://github.com/ShiZiqiang/dual-path-RNNs-DPRNNs-based-speech-separation, and our `La Furca' builds on this implementation, so the results in this paper should be smoothly reproducible.

Sound · Audio and Speech Processing

Learning to Find Correlated Features by Maximizing Information Flow in Convolutional Neural Networks

no code implementations30 Jun 2019 Wei Shen, Fei Li, Rujie Liu

We argue that correlated discriminative information is discarded partly because minimizing the classification loss does not ensure that the network learns the overall discriminative information, only the most discriminative information.

Classification · General Classification +1

FurcaNeXt: End-to-end monaural speech separation with dynamic gated dilated temporal convolutional networks

no code implementations12 Feb 2019 Ziqiang Shi, Huibin Lin, Liu Liu, Rujie Liu, Jiqing Han, Anyan Shi

Deep dilated temporal convolutional networks (TCNs) have proven very effective in sequence modeling.

Sound · Audio and Speech Processing
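
As a hedged illustration of the dilated-TCN building block the abstract refers to (channel sizes are hypothetical, and FurcaNeXt's dynamic gating is omitted), a minimal causal residual block with exponentially growing dilations:

```python
import torch
import torch.nn as nn

class DilatedTCNBlock(nn.Module):
    """Minimal causal dilated temporal-convolution block with a
    residual connection (a sketch, not FurcaNeXt's exact architecture)."""
    def __init__(self, channels=64, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation  # pad > 0 for kernel_size > 1
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              dilation=dilation, padding=self.pad)

    def forward(self, x):                      # x: (batch, channels, time)
        y = self.conv(x)[..., :-self.pad]      # trim the right side: causal
        return torch.relu(y) + x               # residual connection

# Doubling the dilation per block grows the receptive field exponentially.
tcn = nn.Sequential(*[DilatedTCNBlock(dilation=2 ** i) for i in range(6)])
out = tcn(torch.randn(2, 64, 16000))
```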

Learning to generate filters for convolutional neural networks

no code implementations ICLR 2018 Wei Shen, Rujie Liu

In this paper, we propose to generate sample-specific filters for convolutional layers in the forward pass.
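
One common way to apply per-sample generated filters is the grouped-convolution trick, sketched below; `filter_gen` is a hypothetical filter-generating network, not necessarily the paper's design.

```python
import torch.nn.functional as F

def sample_specific_conv(x, filter_gen):
    """Apply convolution filters predicted per sample in the forward pass.
    filter_gen maps each input to its own kernel bank of shape
    (c_out, c_in, k, k); folding the batch into the channel dimension
    lets one grouped conv call handle all samples at once."""
    b, c_in, h, w = x.shape
    w_dyn = filter_gen(x)                        # (b, c_out, c_in, k, k)
    _, c_out, _, k, _ = w_dyn.shape
    y = F.conv2d(x.reshape(1, b * c_in, h, w),   # one "image", b groups
                 w_dyn.reshape(b * c_out, c_in, k, k),
                 padding=k // 2, groups=b)
    return y.reshape(b, c_out, h, w)
```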

Tackling Early Sparse Gradients in Softmax Activation Using Leaky Squared Euclidean Distance

no code implementations27 Nov 2018 Wei Shen, Rujie Liu

However, we find that choosing squared Euclidean distance may cause a distance explosion, leading to extremely sparse gradients in the early stage of back-propagation.

One-Shot Learning
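
A small numeric illustration of the explosion-and-saturation problem the abstract describes (the paper's leaky squared Euclidean distance itself is not reproduced here; the distances are made up):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Logits taken as negative squared Euclidean distances to class centres.
# As the embedding scale grows, the distances explode, the softmax
# saturates to a one-hot vector, and the gradient softmax(z) - onehot
# vanishes on almost every coordinate, i.e. becomes extremely sparse.
d2 = np.array([1.0, 2.0, 3.0])        # hypothetical squared distances
for scale in (1.0, 10.0, 100.0):
    print(scale, softmax(-scale * d2).round(6))  # scale 100 -> ~one-hot
```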

Generating Attention from Classifier Activations for Fine-grained Recognition

no code implementations27 Nov 2018 Wei Shen, Rujie Liu

Recent advances in fine-grained recognition utilize attention maps to localize objects of interest.

Semantic Segmentation
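
One standard way to generate attention from classifier activations is class activation mapping (CAM); whether this matches the paper's exact scheme is an assumption, but it conveys the idea:

```python
import torch

def class_activation_map(features, fc_weight, class_idx):
    """CAM-style attention: weight the final conv feature maps by the
    classifier weights of one class (a sketch under the CAM assumption).
    features: (b, c, h, w) final conv maps; fc_weight: (n_classes, c)."""
    w = fc_weight[class_idx]                        # (c,)
    cam = torch.einsum('c,bchw->bhw', w, features)  # weighted sum of maps
    cam = torch.relu(cam)
    cam = cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)
    return cam                                      # (b, h, w) in [0, 1]
```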

Multi-view (Joint) Probability Linear Discrimination Analysis for Multi-view Feature Verification

no code implementations20 Apr 2017 Ziqiang Shi, Liu Liu, Mengjiao Wang, Rujie Liu

However, in practical use, when a multi-task-trained network is used as a feature extractor, the extracted features are always associated with several labels.

Decision Making

Learning Residual Images for Face Attribute Manipulation

1 code implementation CVPR 2017 Wei Shen, Rujie Liu

The transformation networks are responsible for the attribute manipulation and its dual operation, while the discriminative network distinguishes the generated images from real images.

Attribute · Generative Adversarial Network
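
A minimal sketch of the residual-image idea: the transformation network predicts only the sparse change needed for the attribute edit, and the edited face is the input plus that residual (layer sizes here are hypothetical, and the discriminator is omitted).

```python
import torch
import torch.nn as nn

class ResidualManipulator(nn.Module):
    """Transformation network that outputs a residual image rather than
    a whole face (a sketch of the idea, not the paper's architecture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())

    def forward(self, x):            # x: (b, 3, h, w), values in [-1, 1]
        residual = self.net(x)       # ideally near zero outside the edit
        return (x + residual).clamp(-1, 1), residual
```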

Empirical study of PROXTONE and PROXTONE$^+$ for Fast Learning of Large Scale Sparse Models

no code implementations18 Apr 2016 Ziqiang Shi, Rujie Liu

Thus, to train sparse models quickly in some applications, we propose to combine the merits of both methods: PROXTONE is used in the first several epochs to reach the neighborhood of an optimal solution, after which a first-order method explores sparsity in the remaining training (see the sketch below).
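
A hedged sketch of that two-phase schedule. PyTorch's LBFGS stands in for PROXTONE here (it is not the paper's algorithm), and an L1 soft-threshold plays the sparsity-inducing proximal step; the model and data are made up.

```python
import torch

model = torch.nn.Linear(100, 1)
loss_fn = torch.nn.MSELoss()
X, y = torch.randn(512, 100), torch.randn(512, 1)
second_order = torch.optim.LBFGS(model.parameters())   # stand-in, NOT PROXTONE
first_order = torch.optim.SGD(model.parameters(), lr=1e-2)

def closure():
    second_order.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    return loss

for epoch in range(20):
    if epoch < 5:                       # phase 1: reach a good neighborhood fast
        second_order.step(closure)
    else:                               # phase 2: first-order steps, followed by
        first_order.zero_grad()         # an L1 proximal shrink to expose sparsity
        loss_fn(model(X), y).backward()
        first_order.step()
        with torch.no_grad():           # soft-threshold = prox of the L1 penalty
            w = model.weight
            w.copy_(w.sign() * (w.abs() - 1e-3).clamp(min=0))
```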

Online and stochastic Douglas-Rachford splitting method for large scale machine learning

no code implementations22 Aug 2013 Ziqiang Shi, Rujie Liu

Then we prove that the online Douglas-Rachford (DR) splitting method enjoys an $O(1)$ regret bound and that stochastic DR splitting has a convergence rate of $O(1/\sqrt{T})$.

BIG-bench Machine Learning
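
For reference, the basic batch Douglas-Rachford iteration that the online and stochastic variants build on, applied to an illustrative lasso problem (the problem instance is assumed, not taken from the paper):

```python
import numpy as np

def dr_splitting(prox_f, prox_g, z, n_iter=200):
    """Douglas-Rachford splitting for min_x f(x) + g(x), given the two
    proximal operators; the paper studies online/stochastic variants."""
    for _ in range(n_iter):
        x = prox_f(z)
        y = prox_g(2 * x - z)
        z = z + y - x
    return prox_f(z)

# Illustrative lasso: f(x) = 0.5*||Ax - b||^2, g(x) = lam*||x||_1.
rng = np.random.default_rng(0)
A, b, lam, gamma = rng.normal(size=(40, 20)), rng.normal(size=40), 0.1, 1.0
M = np.linalg.inv(np.eye(20) + gamma * A.T @ A)     # prox of f = linear solve
prox_f = lambda z: M @ (z + gamma * A.T @ b)
prox_g = lambda z: np.sign(z) * np.maximum(np.abs(z) - gamma * lam, 0)
x_hat = dr_splitting(prox_f, prox_g, np.zeros(20))
```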
