Search Results for author: Wenlin Wang

Found 30 papers, 7 papers with code

JIZHI: A Fast and Cost-Effective Model-As-A-Service System for Web-Scale Online Inference at Baidu

1 code implementation 3 Jun 2021 Hao Liu, Qian Gao, Jiang Li, Xiaochao Liao, Hao Xiong, Guangxing Chen, Wenlin Wang, Guobao Yang, Zhiwei Zha, Daxiang Dong, Dejing Dou, Haoyi Xiong

In this work, we present JIZHI - a Model-as-a-Service system - that handles hundreds of millions of online inference requests per second to huge deep models with trillions of sparse parameters, for over twenty real-time recommendation services at Baidu, Inc.

Recommendation Systems

Graph-Driven Generative Models for Heterogeneous Multi-Task Learning

no code implementations 20 Nov 2019 Wenlin Wang, Hongteng Xu, Zhe Gan, Bai Li, Guoyin Wang, Liqun Chen, Qian Yang, Wenqi Wang, Lawrence Carin

We propose a novel graph-driven generative model that unifies multiple heterogeneous learning tasks in the same framework.

Multi-Task Learning • Type Prediction

Learning to Recommend from Sparse Data via Generative User Feedback

no code implementations ICLR 2020 Wenlin Wang, Hongteng Xu, Ruiyi Zhang, Wenqi Wang, Piyush Rai, Lawrence Carin

To address this, we propose a learning framework that improves collaborative filtering with a synthetic feedback loop (CF-SFL) to simulate the user feedback.
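
For intuition, here is a minimal numpy sketch of the synthetic-feedback idea: a "virtual user" produces feedback for unobserved user-item pairs, which augments the sparse real ratings during training. All names, sizes, and update rules below are illustrative assumptions, not the paper's CF-SFL implementation.

```python
# Sketch: collaborative filtering with a synthetic feedback loop (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 100, 200, 16
# Sparse ratings: ~2% of entries observed.
R = (rng.random((n_users, n_items)) < 0.02) * rng.random((n_users, n_items))
observed = R > 0

U = rng.normal(scale=0.1, size=(n_users, dim))   # user factors (recommender)
V = rng.normal(scale=0.1, size=(n_items, dim))   # item factors (recommender)
W = rng.normal(scale=0.1, size=(dim, dim))       # "virtual user" feedback generator

lr = 0.05
for step in range(200):
    scores = U @ V.T                             # recommender predictions
    synthetic = (U @ W) @ V.T                    # virtual user's synthetic feedback
    # Train on real feedback where available, synthetic feedback elsewhere (down-weighted).
    target = np.where(observed, R, synthetic)
    weight = np.where(observed, 1.0, 0.1)
    err = weight * (scores - target)
    U -= lr * err @ V / n_items
    V -= lr * err.T @ U / n_users
    # Fit the feedback generator to the real feedback on observed pairs.
    err_fb = observed * (synthetic - R)
    W -= lr * U.T @ err_fb @ V / observed.sum()
```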

Collaborative Filtering • Recommendation Systems

Zero-Shot Recognition via Optimal Transport

no code implementations 20 Oct 2019 Wenlin Wang, Hongteng Xu, Guoyin Wang, Wenqi Wang, Lawrence Carin

Specifically, we build a conditional generative model to generate features from seen-class attributes, and establish an optimal transport between the distribution of the generated features and that of the real features.
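
As a rough illustration of the second step, the sketch below computes an entropic optimal transport (Sinkhorn) plan between a batch of generated features and real features. The linear "generator" and all dimensions are toy assumptions, not the paper's model.

```python
# Sketch: Sinkhorn OT between generated and real feature sets (illustrative only).
import numpy as np

def sinkhorn(cost, eps=0.05, iters=200):
    """Entropy-regularized OT plan between two uniform empirical measures."""
    n, m = cost.shape
    K = np.exp(-cost / eps)
    a, b = np.ones(n) / n, np.ones(m) / m
    u, v = np.ones(n), np.ones(m)
    for _ in range(iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]           # transport plan

rng = np.random.default_rng(0)
attrs = rng.random((5, 8))                       # seen-class attribute vectors
G = rng.normal(size=(8, 32))                     # toy "conditional generator"
fake = attrs @ G                                 # one generated feature per class
real = rng.normal(size=(50, 32))                 # real features from seen classes

cost = ((fake[:, None, :] - real[None, :, :]) ** 2).sum(-1)
cost = cost / cost.max()                         # normalize for numerical stability
plan = sinkhorn(cost)
ot_distance = (plan * cost).sum()                # OT objective to minimize w.r.t. G
print(f"entropic OT distance: {ot_distance:.4f}")
```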

Attribute • Generalized Zero-Shot Learning

On Norm-Agnostic Robustness of Adversarial Training

no code implementations 15 May 2019 Bai Li, Changyou Chen, Wenlin Wang, Lawrence Carin

Adversarial examples are carefully perturbed inputs designed to fool machine learning models.
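
For readers unfamiliar with the term, here is a generic construction of an adversarial example (an FGSM-style sign perturbation); it illustrates the definition only and is not this paper's method.

```python
# Generic illustration: FGSM-style adversarial perturbation of an input.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=20)                          # toy linear classifier, score = w.x
x, y, eps = rng.normal(size=20), 1.0, 0.1

# Gradient of a hinge-style loss max(0, 1 - y * w.x) w.r.t. x is -y * w (when active).
grad_x = -y * w
x_adv = x + eps * np.sign(grad_x)                # L_inf-bounded perturbation
print(w @ x, w @ x_adv)                          # the score drops after the attack
```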

BIG-bench Machine Learning

Second-Order Adversarial Attack and Certifiable Robustness

no code implementations ICLR 2019 Bai Li, Changyou Chen, Wenlin Wang, Lawrence Carin

In this paper, we propose a powerful second-order attack method that reduces the accuracy of the defense model by Madry et al. (2017).

Adversarial Attack

Sequence Generation with Guider Network

no code implementations 2 Nov 2018 Ruiyi Zhang, Changyou Chen, Zhe Gan, Wenlin Wang, Liqun Chen, Dinghan Shen, Guoyin Wang, Lawrence Carin

Sequence generation with reinforcement learning (RL) has received significant attention recently.

Reinforcement Learning (RL)

Distilled Wasserstein Learning for Word Embedding and Topic Modeling

no code implementations NeurIPS 2018 Hongteng Xu, Wenlin Wang, Wei Liu, Lawrence Carin

When learning the topic model, we leverage a distilled underlying distance matrix to update the topic distributions and smoothly calculate the corresponding optimal transports.
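
The sketch below illustrates the ingredients named in the excerpt: a word-distance matrix derived from embeddings, a softened copy of it standing in for the distilled matrix (a simple power transform, purely an assumption), and an entropic optimal transport between two topic distributions under that ground metric.

```python
# Sketch: OT between topic distributions under a softened word-distance matrix.
import numpy as np

rng = np.random.default_rng(0)
vocab, dim = 30, 16
E = rng.normal(size=(vocab, dim))                        # word embeddings
D = np.linalg.norm(E[:, None] - E[None, :], axis=-1)     # word-distance matrix
D_distilled = (D / D.max()) ** 0.5                       # softened distances (assumption)

topic_a = rng.dirichlet(np.ones(vocab))                  # two topic distributions
topic_b = rng.dirichlet(np.ones(vocab))

# Entropic OT (Sinkhorn) between the topics under the distilled ground metric.
eps = 0.05
K = np.exp(-D_distilled / eps)
u, v = np.ones(vocab), np.ones(vocab)
for _ in range(200):
    u = topic_a / (K @ v)
    v = topic_b / (K.T @ u)
plan = u[:, None] * K * v[None, :]
print("OT cost between topics:", (plan * D_distilled).sum())
```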

Mortality Prediction • Word Embeddings

Certified Adversarial Robustness with Additive Noise

3 code implementations NeurIPS 2019 Bai Li, Changyou Chen, Wenlin Wang, Lawrence Carin

The existence of adversarial data examples has drawn significant attention in the deep-learning community; such data are seemingly minimally perturbed relative to the original data, but lead to very different outputs from a deep-learning algorithm.
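
As a sketch of the additive-noise idea behind this line of work: classify many Gaussian-noised copies of an input and take a majority vote; the margin between the top vote counts is what drives certified radii. The base classifier, sigma, and sample count below are all illustrative stand-ins.

```python
# Sketch: prediction under additive Gaussian noise (smoothing-style, illustrative only).
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 784))                   # toy linear "base classifier"

def base_predict(x):
    return int(np.argmax(W @ x))

def smoothed_predict(x, sigma=0.25, n_samples=1000):
    # Classify noisy copies of x and return the majority class;
    # the top-2 vote margin is what a certification bound would use.
    votes = np.zeros(10, dtype=int)
    for _ in range(n_samples):
        votes[base_predict(x + rng.normal(scale=sigma, size=x.shape))] += 1
    return int(np.argmax(votes)), votes

x = rng.random(784)
label, votes = smoothed_predict(x)
print(label, votes.max(), "/", votes.sum())
```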

Adversarial Attack • Adversarial Robustness

A Unified Particle-Optimization Framework for Scalable Bayesian Sampling

no code implementations 29 May 2018 Changyou Chen, Ruiyi Zhang, Wenlin Wang, Bai Li, Liqun Chen

There has been recent interest in developing scalable Bayesian sampling methods such as stochastic gradient MCMC (SG-MCMC) and Stein variational gradient descent (SVGD) for big-data analysis.

Joint Embedding of Words and Labels for Text Classification

2 code implementations ACL 2018 Guoyin Wang, Chunyuan Li, Wenlin Wang, Yizhe Zhang, Dinghan Shen, Xinyuan Zhang, Ricardo Henao, Lawrence Carin

Word embeddings are effective intermediate representations for capturing semantic regularities between words, when learning the representations of text sequences.
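
A minimal sketch of the joint word/label embedding idea: label embeddings attend over the word embeddings of a sequence to pool a representation, which is then scored against each label. The attention form and all dimensions are illustrative, not the paper's exact architecture.

```python
# Sketch: label-attentive pooling of word embeddings for classification.
import numpy as np

rng = np.random.default_rng(0)
seq_len, dim, n_labels = 12, 32, 4
words = rng.normal(size=(seq_len, dim))          # word embeddings of one sequence
labels = rng.normal(size=(n_labels, dim))        # one embedding per class label

# Cosine similarity between each word and each label.
sim = (words / np.linalg.norm(words, axis=1, keepdims=True)) @ \
      (labels / np.linalg.norm(labels, axis=1, keepdims=True)).T  # (seq_len, n_labels)

# Per-word attention: how strongly the most-relevant label attends to the word.
beta = np.exp(sim.max(axis=1))
beta /= beta.sum()

z = beta @ words                                 # attention-pooled sequence vector
logits = labels @ z                              # score each label against the pooling
print(int(np.argmax(logits)))
```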

General Classification • Sentiment Analysis +2

Wide Compression: Tensor Ring Nets

no code implementations CVPR 2018 Wenqi Wang, Yifan Sun, Brian Eriksson, Wenlin Wang, Vaneet Aggarwal

Deep neural networks have demonstrated state-of-the-art performance in a variety of real-world applications.

Image Classification

On the Use of Word Embeddings Alone to Represent Natural Language Sequences

no code implementations ICLR 2018 Dinghan Shen, Guoyin Wang, Wenlin Wang, Martin Renqiang Min, Qinliang Su, Yizhe Zhang, Ricardo Henao, Lawrence Carin

In this paper, we conduct an extensive comparative study of Simple Word Embeddings-based Models (SWEMs), which have no compositional parameters, against models that employ word embeddings within RNN/CNN-based architectures.
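
Concretely, a SWEM-style representation has no compositional parameters at all: pooling the word embeddings is the entire encoder. A minimal sketch with a toy embedding table follows; the variant names follow common usage for this family of models.

```python
# Sketch: SWEM-style pooled representations of a token sequence.
import numpy as np

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(5000, 300))        # toy pretrained embedding table
sentence = [12, 7, 431, 9]                       # token ids of a sequence

E = embeddings[sentence]                         # (seq_len, 300)
swem_avg = E.mean(axis=0)                        # average pooling (SWEM-aver)
swem_max = E.max(axis=0)                         # max pooling (SWEM-max)
swem_concat = np.concatenate([swem_avg, swem_max])  # concatenated variant
print(swem_concat.shape)                         # (600,)
```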

Sentence • Word Embeddings

Topic Compositional Neural Language Model

no code implementations 28 Dec 2017 Wenlin Wang, Zhe Gan, Wenqi Wang, Dinghan Shen, Jiaji Huang, Wei Ping, Sanjeev Satheesh, Lawrence Carin

The TCNLM learns the global semantic coherence of a document via a neural topic model; the probability of each learned latent topic is then used to build a Mixture-of-Experts (MoE) language model, in which each expert (corresponding to one topic) is a recurrent neural network (RNN) that captures the local structure of the word sequence.
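
A minimal numpy sketch of the Mixture-of-Experts structure described above: topic proportions gate a set of per-topic recurrent cells, whose mixed state produces next-word probabilities. All sizes and the exact mixing point are illustrative assumptions, not the paper's parameterization.

```python
# Sketch: topic proportions gating per-topic recurrent experts.
import numpy as np

rng = np.random.default_rng(0)
n_topics, hidden, vocab = 3, 16, 50
theta = rng.dirichlet(np.ones(n_topics))         # topic proportions from the topic model
Wx = rng.normal(scale=0.1, size=(n_topics, hidden, vocab))   # per-expert input weights
Wh = rng.normal(scale=0.1, size=(n_topics, hidden, hidden))  # per-expert recurrent weights
Wo = rng.normal(scale=0.1, size=(vocab, hidden))             # shared output projection

def moe_step(h, x_onehot):
    # One vanilla-RNN update per topic expert.
    h_next = np.tanh(np.einsum('khv,v->kh', Wx, x_onehot) +
                     np.einsum('khg,kg->kh', Wh, h))
    # Topic proportions mix the experts; softmax gives next-word probabilities.
    logits = Wo @ (theta @ h_next)
    probs = np.exp(logits - logits.max())
    return h_next, probs / probs.sum()

h = np.zeros((n_topics, hidden))
x = np.zeros(vocab); x[4] = 1.0                  # one-hot current word
h, p_next = moe_step(h, x)
print(p_next.argmax(), p_next.max())
```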

Language Modelling

Continuous-Time Flows for Efficient Inference and Density Estimation

no code implementations ICML 2018 Changyou Chen, Chunyuan Li, Liqun Chen, Wenlin Wang, Yunchen Pu, Lawrence Carin

Distinct from normalizing flows and GANs, CTFs can be adopted to achieve the above two goals in one framework, with theoretical guarantees.

Density Estimation

A Convergence Analysis for A Class of Practical Variance-Reduction Stochastic Gradient MCMC

no code implementations 4 Sep 2017 Changyou Chen, Wenlin Wang, Yizhe Zhang, Qinliang Su, Lawrence Carin

However, there has been little theoretical analysis of the impact of minibatch size on the algorithm's convergence rate.

Stochastic Optimization

Earliness-Aware Deep Convolutional Networks for Early Time Series Classification

no code implementations 14 Nov 2016 Wenlin Wang, Changyou Chen, Wenqi Wang, Piyush Rai, Lawrence Carin

Unlike most existing methods for early classification of time series, which assume the availability of a good set of pre-defined (often hand-crafted) features, our framework can jointly perform feature learning (by learning a deep hierarchy of shapelets capturing the salient characteristics in each time series) along with a dynamic truncation model that helps the deep feature-learning architecture focus on the early parts of each time series.
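
A minimal sketch of the shapelet-feature idea: slide a shapelet over a (possibly truncated) series and take the minimum window distance as a feature; truncation stands in for the framework's focus on early time steps. The shapelet here is random, purely for illustration, where in the paper it would be learned.

```python
# Sketch: a shapelet distance feature on a truncated ("early") time series.
import numpy as np

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 6, 100)) + 0.1 * rng.normal(size=100)
shapelet = rng.normal(size=10)                   # stand-in for a learned shapelet

def shapelet_feature(x, s, truncate=None):
    x = x[:truncate] if truncate else x
    # Mean squared distance of the shapelet to every window; keep the minimum.
    windows = np.lib.stride_tricks.sliding_window_view(x, len(s))
    return ((windows - s) ** 2).mean(axis=1).min()

# Early classification: compute the feature on only the first 30 time steps.
print(shapelet_feature(series, shapelet, truncate=30))
```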

Classification • Early Classification +4
