Search Results for author: Xun Zheng

Found 12 papers, 3 with code

Learning Sparse Nonparametric DAGs

2 code implementations • 29 Sep 2019 • Xun Zheng, Chen Dan, Bryon Aragam, Pradeep Ravikumar, Eric P. Xing

We develop a framework for learning sparse nonparametric directed acyclic graphs (DAGs) from data.

Causal Discovery
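
For context, this framework belongs to the NOTEARS line of continuous DAG learning, where acyclicity of a weighted adjacency matrix W is captured by the smooth condition tr(e^{W∘W}) − d = 0. Below is a minimal NumPy sketch of that acyclicity measure only, offered as an assumption-based illustration rather than the paper's released code (the paper itself derives W nonparametrically from neural-network parameters):

```python
import numpy as np
from scipy.linalg import expm

def notears_acyclicity(W: np.ndarray) -> float:
    """Smooth acyclicity measure h(W) = tr(exp(W ∘ W)) − d.

    h(W) is zero exactly when the weighted adjacency matrix W encodes a
    directed acyclic graph, and it has a closed-form gradient, which is
    what lets DAG structure learning be posed as continuous optimization.
    """
    d = W.shape[0]
    return float(np.trace(expm(W * W)) - d)   # W * W is the elementwise square

# A 2-cycle violates acyclicity; a simple chain satisfies it.
W_cyclic = np.array([[0.0, 1.0],
                     [1.0, 0.0]])
W_chain = np.array([[0.0, 1.0],
                    [0.0, 0.0]])
print(notears_acyclicity(W_cyclic))  # > 0: the graph contains a cycle
print(notears_acyclicity(W_chain))   # 0.0: the graph is a DAG
```

Because h and its gradient are smooth, the zero-acyclicity constraint can be folded into a standard continuous solver (e.g., an augmented Lagrangian) together with a sparsity penalty, which is the general recipe this family of methods follows.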

LightLDA: Big Topic Models on Modest Compute Clusters

1 code implementation • 4 Dec 2014 • Jinhui Yuan, Fei Gao, Qirong Ho, Wei Dai, Jinliang Wei, Xun Zheng, Eric P. Xing, Tie-Yan Liu, Wei-Ying Ma

When building large-scale machine learning (ML) programs, such as big topic models or deep neural nets, one usually assumes such tasks can only be attempted on industrial-sized clusters with thousands of nodes, which are out of reach for most practitioners and academic researchers.

Topic Models

On Model Parallelization and Scheduling Strategies for Distributed Machine Learning

no code implementations • NeurIPS 2014 • Seunghak Lee, Jin Kyu Kim, Xun Zheng, Qirong Ho, Garth A. Gibson, Eric P. Xing

Distributed machine learning has typically been approached from a data-parallel perspective, where big data are partitioned across multiple workers and an algorithm is executed concurrently over different data subsets under various synchronization schemes to ensure speed-up and/or correctness.

BIG-bench Machine Learning • Scheduling
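
To make the data-parallel pattern described in the abstract concrete, here is a toy synchronous sketch: the data are partitioned across workers, each worker computes a gradient on its own shard, and a synchronization step averages the updates. This is a hypothetical single-process illustration of the baseline the paper contrasts with, not the model-parallel scheduling strategy it proposes:

```python
import numpy as np

def worker_gradient(w, X_shard, y_shard):
    """Least-squares gradient computed locally on one worker's data shard."""
    return X_shard.T @ (X_shard @ w - y_shard) / len(y_shard)

rng = np.random.default_rng(0)
X, y = rng.normal(size=(1000, 5)), rng.normal(size=1000)
shards = np.array_split(np.arange(1000), 4)   # partition the data across 4 "workers"

w, lr = np.zeros(5), 0.1
for step in range(200):
    grads = [worker_gradient(w, X[idx], y[idx]) for idx in shards]  # concurrent in a real system
    w -= lr * np.mean(grads, axis=0)          # synchronization barrier: average, then update
```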

Model-Parallel Inference for Big Topic Models

no code implementations • 10 Nov 2014 • Xun Zheng, Jin Kyu Kim, Qirong Ho, Eric P. Xing

In real-world industrial applications of topic modeling, the ability to capture a gigantic conceptual space by learning an ultra-high-dimensional topical representation, i.e., the so-called "big model", is becoming the next desideratum after the enthusiasm for "big data", especially for fine-grained downstream tasks such as online advertising, where good performance is usually achieved by regression-based predictors built on millions, if not billions, of input features.

Topic Models

Primitives for Dynamic Big Model Parallelism

no code implementations • 18 Jun 2014 • Seunghak Lee, Jin Kyu Kim, Xun Zheng, Qirong Ho, Garth A. Gibson, Eric P. Xing

When training large machine learning models with many variables or parameters, a single machine is often inadequate since the model may be too large to fit in memory, while training can take a long time even with stochastic updates.

Scheduling

Petuum: A New Platform for Distributed Machine Learning on Big Data

no code implementations • 30 Dec 2013 • Eric P. Xing, Qirong Ho, Wei Dai, Jin Kyu Kim, Jinliang Wei, Seunghak Lee, Xun Zheng, Pengtao Xie, Abhimanu Kumar, Yao-Liang Yu

What is a systematic way to efficiently apply a wide spectrum of advanced ML programs to industrial-scale problems, using Big Models (up to 100s of billions of parameters) on Big Data (up to terabytes or petabytes)?

BIG-bench Machine Learning • Scheduling

Consistent Bounded-Asynchronous Parameter Servers for Distributed ML

no code implementations • 30 Dec 2013 • Jinliang Wei, Wei Dai, Abhimanu Kumar, Xun Zheng, Qirong Ho, Eric P. Xing

Many ML algorithms fall into the category of "iterative convergent algorithms", which start from a randomly chosen initial point and converge to an optimum by iteratively repeating a set of procedures.
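
The "iterative convergent" template referred to here can be sketched generically: start from a random point and repeat an update rule until successive iterates stop changing. The example below uses gradient descent on a quadratic purely as a stand-in update rule; the paper's actual contribution, a bounded-asynchronous parameter server for running such loops across many machines, is not reproduced here:

```python
import numpy as np

def iterate_to_convergence(update, x0, tol=1e-8, max_iter=10_000):
    """Run a generic iterative-convergent algorithm from a given start point."""
    x = x0
    for _ in range(max_iter):
        x_new = update(x)
        if np.linalg.norm(x_new - x) < tol:   # iterates have stopped changing
            return x_new
        x = x_new
    return x

# Stand-in update rule: a gradient step on f(x) = 0.5 x'Ax - b'x.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 1.0])
step = lambda x: x - 0.1 * (A @ x - b)

x_star = iterate_to_convergence(step, np.random.randn(2))
print(x_star, np.linalg.solve(A, b))          # the two should roughly agree
```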

Improved Bayesian Logistic Supervised Topic Models with Data Augmentation

no code implementations • ACL 2013 • Jun Zhu, Xun Zheng, Bo Zhang

Supervised topic models with a logistic likelihood have two issues that potentially limit their practical use: 1) response variables are usually over-weighted by document word counts; and 2) existing variational inference methods make strict mean-field assumptions.

Bayesian Inference • Data Augmentation • +2
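
To see why issue (1) arises, consider the per-document log-likelihood of a generic sLDA-style model with a logistic response; the notation below is an illustrative assumption, not necessarily the paper's exact formulation:

```latex
\log p(\mathbf{w}_d, y_d \mid \boldsymbol{\theta}_d)
  = \underbrace{\sum_{n=1}^{N_d} \log p(w_{dn} \mid z_{dn}, \boldsymbol{\beta})}_{N_d \text{ word terms}}
  + \underbrace{\log p\left(y_d \mid \bar{\mathbf{z}}_d, \boldsymbol{\eta}\right)}_{\text{single response term}}
```

The word likelihood contributes N_d terms while the logistic response contributes only one, so for long documents the response has little influence on the inferred topics; that imbalance is the over-weighting the abstract points to.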
