no code implementations • 6 May 2024 • Yizhuo Lu, Changde Du, Chong Wang, Xuanliu Zhu, Liuyun Jiang, Huiguang He
Reconstructing human dynamic vision from brain activity is a challenging task with great scientific significance.
no code implementations • 15 Mar 2024 • Chong Wang, Yi Yu, Lanqing Guo, Bihan Wen
This is primarily due to the unique characteristic of spatially varying illumination within shadow images.
1 code implementation • 15 Mar 2024 • Chong Wang, Lanqing Guo, YuFei Wang, Hao Cheng, Yi Yu, Bihan Wen
Starting from decomposing the original maximum-a-posteriori problem of accelerated MRI, we present a rigorous derivation of the proposed PDAC framework, which could be further unfolded into an end-to-end trainable network.
no code implementations • 14 Mar 2024 • Aonan Zhang, Chong Wang, Yi Wang, Xuanyu Zhang, Yunfei Cheng
In this paper, we introduce an improved approach of speculative decoding aimed at enhancing the efficiency of serving large language models.
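The core loop of speculative decoding can be sketched with toy stand-ins (the `target_next` / `draft_next` functions below are illustrative placeholders, not the paper's models): a cheap draft model proposes a few tokens, the expensive target model verifies them, and the longest agreeing prefix is accepted in one step. This greedy variant reproduces the target model's output exactly.

```python
# Toy sketch of greedy speculative decoding (illustrative only):
# a cheap draft model proposes k tokens, the target model verifies them,
# and the longest agreeing prefix is accepted in a single step.

def target_next(ctx):
    # Stand-in for an expensive LLM: deterministic toy rule.
    return (sum(ctx) + 1) % 5

def draft_next(ctx):
    # Stand-in for a cheap draft model; agrees with the target most of the time.
    return (sum(ctx) + 1) % 5 if len(ctx) % 3 else 0

def speculative_step(ctx, k=4):
    # 1) Draft k tokens autoregressively with the cheap model.
    draft, tmp = [], list(ctx)
    for _ in range(k):
        t = draft_next(tmp)
        draft.append(t)
        tmp.append(t)
    # 2) Verify with the target model; accept the longest matching prefix.
    accepted, tmp = [], list(ctx)
    for t in draft:
        if target_next(tmp) == t:
            accepted.append(t)
            tmp.append(t)
        else:
            break
    # 3) Always emit one token from the target so progress is guaranteed.
    accepted.append(target_next(tmp))
    return accepted

def generate(ctx, n_tokens):
    out = list(ctx)
    while len(out) < len(ctx) + n_tokens:
        out.extend(speculative_step(out))
    return out[:len(ctx) + n_tokens]
```

Because every accepted token is verified against the target model, the generated sequence is identical to plain greedy decoding with the target model alone; the speedup comes from verifying several drafted tokens per expensive call.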
no code implementations • 14 Mar 2024 • Brandon McKinzie, Zhe Gan, Jean-Philippe Fauconnier, Sam Dodge, BoWen Zhang, Philipp Dufter, Dhruti Shah, Xianzhi Du, Futang Peng, Floris Weers, Anton Belyi, Haotian Zhang, Karanjeet Singh, Doug Kang, Ankur Jain, Hongyu Hè, Max Schwarzer, Tom Gunter, Xiang Kong, Aonan Zhang, Jianyu Wang, Chong Wang, Nan Du, Tao Lei, Sam Wiseman, Guoli Yin, Mark Lee, ZiRui Wang, Ruoming Pang, Peter Grasch, Alexander Toshev, Yinfei Yang
Through careful and comprehensive ablations of the image encoder, the vision language connector, and various pre-training data choices, we identified several crucial design lessons.
Ranked #20 on Visual Question Answering on MM-Vet
no code implementations • 2 Mar 2024 • Chenchen Tao, Chong Wang, Yuexian Zou, Xiaohao Peng, Jiafei Wu, Jiangbo Qian
Most models for weakly supervised video anomaly detection (WS-VAD) rely on multiple instance learning, aiming to distinguish normal and abnormal snippets without specifying the type of anomaly.
no code implementations • 29 Jan 2024 • Jiaxin Yu, Peng Liang, Yujia Fu, Amjed Tahir, Mojtaba Shahin, Chong Wang, Yangxiao Cai
To explore the challenges of applying LLMs in practical code review for security defect detection, this study compared the detection performance of three state-of-the-art LLMs (Gemini Pro, GPT-4, and GPT-3.5) under five prompts on 549 code files that contain security defects from real-world code reviews.
1 code implementation • 26 Dec 2023 • Weisong Sun, Chunrong Fang, Yudu You, Yuchen Chen, Yi Liu, Chong Wang, Jian Zhang, Quanjun Zhang, Hanwei Qian, Wei Zhao, Yang Liu, Zhenyu Chen
PromptCS trains a prompt agent that generates continuous prompts to unleash the potential of LLMs in code summarization.
no code implementations • 30 Nov 2023 • Chong Wang, Yuanhong Chen, Fengbei Liu, Davis James McCarthy, Helen Frazer, Gustavo Carneiro
Such an approach enables the learning of more powerful prototype representations, since each learned prototype owns a measure of variability that naturally reduces sparsity given the spread of the distribution around each prototype; we also integrate a prototype diversity objective into the GMM optimisation to reduce redundancy.
1 code implementation • 16 Nov 2023 • Chong Wang, Cheng Xu, Adeel Akram, Zhilin Shan, Qixing Zhang
By using two different negative instance sampling strategies on positive images and negative images respectively, we alleviate the supervision-signal confusion caused by label diversity during network training.
1 code implementation • 14 Sep 2023 • Jiabao Li, Yuqi Li, Ciliang Sun, Chong Wang, Jinhui Xiang
We propose Multi-spectral Neural Radiance Fields (Spec-NeRF) for jointly reconstructing a multispectral radiance field and the spectral sensitivity functions (SSFs) of the camera from a set of color images filtered by different filters.
1 code implementation • 2 Aug 2023 • Fengbei Liu, Chong Wang, Yuanhong Chen, Yuyuan Liu, Gustavo Carneiro
Second, we introduce a new Partial Label Supervision (PLS) for noisy label learning that accounts for both clean label coverage and uncertainty.
Ranked #5 on Learning with noisy labels on CIFAR-10N-Random3
1 code implementation • 6 Apr 2023 • Yuanhong Chen, Yuyuan Liu, Hu Wang, Fengbei Liu, Chong Wang, Helen Frazer, Gustavo Carneiro
We show empirical results that demonstrate the effectiveness of our benchmark.
no code implementations • 13 Feb 2023 • Mimee Xu, Jiankai Sun, Xin Yang, Kevin Yao, Chong Wang
Without incurring the cost of re-training, and without degrading the model unnecessarily, we develop Unlearn-ALS by making a few key modifications to the fine-tuning procedure under Alternating Least Squares optimisation; it is thus applicable to any bi-linear model regardless of the training procedure.
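For context, here is a minimal rank-1 alternating least squares sketch of the bi-linear base model that Unlearn-ALS modifies (the unlearning modifications themselves are not shown, and the variable names are ours): each half-step solves a least-squares problem in closed form with the other factor held fixed.

```python
# Rank-1 alternating least squares on a tiny dense matrix -- the bi-linear
# base model underlying Unlearn-ALS. Illustrative sketch only; the paper's
# unlearning modifications are not shown.

def als_rank1(M, iters=50):
    n_rows, n_cols = len(M), len(M[0])
    u = [1.0] * n_rows
    v = [1.0] * n_cols
    for _ in range(iters):
        # Fix u, solve for each v_j minimizing sum_i (M[i][j] - u_i * v_j)^2.
        uu = sum(x * x for x in u)
        v = [sum(u[i] * M[i][j] for i in range(n_rows)) / uu
             for j in range(n_cols)]
        # Fix v, solve for each u_i symmetrically.
        vv = sum(x * x for x in v)
        u = [sum(v[j] * M[i][j] for j in range(n_cols)) / vv
             for i in range(n_rows)]
    return u, v
```

On an exactly rank-1 matrix the alternation converges to an exact factorization within a few iterations.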
1 code implementation • 9 Feb 2023 • Lin Zheng, Jianbo Yuan, Chong Wang, Lingpeng Kong
Built upon previous progress of RFA, we characterize this gap through the lens of control variates and show that RFA can be decomposed into a sum of multiple control variate estimators for each element in the sequence.
no code implementations • 31 Jan 2023 • Yuanhong Chen, Yuyuan Liu, Chong Wang, Michael Elliott, Chun Fung Kwok, Carlos Pena-Solorzano, Yu Tian, Fengbei Liu, Helen Frazer, Davis J. McCarthy, Gustavo Carneiro
Given the large size of such datasets, researchers usually face a dilemma with the weakly annotated subset: to not use it or to fully annotate it.
1 code implementation • ICCV 2023 • Chong Wang, Yuyuan Liu, Yuanhong Chen, Fengbei Liu, Yu Tian, Davis J. McCarthy, Helen Frazer, Gustavo Carneiro
Prototypical part network (ProtoPNet) methods have been designed to achieve interpretable classification by associating predictions with a set of training prototypes, which we refer to as trivial prototypes because they are trained to lie far from the classification boundary in the feature space.
Explainable Artificial Intelligence (XAI), Image Classification, +1
no code implementations • 1 Jan 2023 • Fengbei Liu, Yuanhong Chen, Chong Wang, Yu Tian, Gustavo Carneiro
Also, the new sample selection is based on multi-view consensus: it uses the label views from training labels and model predictions to divide the training set into clean and noisy subsets for training the multi-class model, and to re-label training samples with multiple top-ranked labels for training the multi-label model.
no code implementations • ICCV 2023 • Lanqing Guo, Chong Wang, Wenhan Yang, YuFei Wang, Bihan Wen
Recent deep learning methods have achieved superior results in shadow removal.
1 code implementation • CVPR 2023 • Lanqing Guo, Chong Wang, Wenhan Yang, Siyu Huang, YuFei Wang, Hanspeter Pfister, Bihan Wen
Recent deep learning methods have achieved promising results in image shadow removal.
no code implementations • 23 Nov 2022 • Xiang Gao, Weihao Gao, Wenzhi Xiao, Zhirui Wang, Chong Wang, Liang Xiang
Experiments show that, compared to training from scratch, fine-tuning the pretrained model can significantly improve the performance for seven molecular property prediction tasks and two force field tasks.
no code implementations • 23 Nov 2022 • Xiang Gao, Weihao Gao, Wenzhi Xiao, Zhirui Wang, Chong Wang, Liang Xiang
To model the complex nonlinearity in predicting molecular properties in a more end-to-end manner, we propose to encode the positional quantities with a learnable embedding that is continuous and differentiable.
no code implementations • 17 Nov 2022 • Yuanshun Yao, Chong Wang, Hang Li
The key idea is to train a surrogate model to learn the effect of removing a subset of user history on the recommendation.
no code implementations • 26 Sep 2022 • Chong Wang, Yuanhong Chen, Yuyuan Liu, Yu Tian, Fengbei Liu, Davis J. McCarthy, Michael Elliott, Helen Frazer, Gustavo Carneiro
On the other hand, prototype-based models improve interpretability by associating predictions with training image prototypes, but they are less accurate than global models and their prototypes tend to have poor diversity.
1 code implementation • 21 Sep 2022 • Yuanhong Chen, Hu Wang, Chong Wang, Yu Tian, Fengbei Liu, Michael Elliott, Davis J. McCarthy, Helen Frazer, Gustavo Carneiro
When analysing screening mammograms, radiologists can naturally process information across two ipsilateral views of each breast, namely the cranio-caudal (CC) and mediolateral-oblique (MLO) views.
no code implementations • 20 Sep 2022 • Ke Bai, Aonan Zhang, Zhizhong Li, Ricardo Henao, Chong Wang, Lawrence Carin
In recommendation systems, items are likely to be exposed to various users and we would like to learn about the familiarity of a new user with an existing item.
1 code implementation • 25 Aug 2022 • Jiankai Sun, Xin Yang, Yuanshun Yao, Junyuan Xie, Di Wu, Chong Wang
Federated learning (FL) has gained significant attention recently as a privacy-enhancing tool to jointly train a machine learning model by multiple participants.
no code implementations • 25 Jul 2022 • Chong Wang, Rongkai Zhang, Saiprasad Ravishankar, Bihan Wen
To this end, we propose a novel deep reinforcement learning (DRL) based PnP framework, dubbed RePNP, by leveraging a light-weight DRL-based denoiser for robust image restoration tasks.
no code implementations • 16 Jun 2022 • Ruihan Wu, Xin Yang, Yuanshun Yao, Jiankai Sun, Tianyi Liu, Kilian Q. Weinberger, Chong Wang
Differentially Private (DP) data release is a promising technique to disseminate data without compromising the privacy of data subjects.
no code implementations • 24 May 2022 • Jiankai Sun, Xin Yang, Yuanshun Yao, Junyuan Xie, Di Wu, Chong Wang
In this work, we propose two evaluation algorithms that can more accurately compute the widely used AUC (area under curve) metric when using label DP in vFL.
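For reference, the quantity being computed is the standard (non-private) AUC, equivalently the Mann-Whitney statistic; the snippet below is this textbook definition, not the paper's label-DP-aware correction algorithms.

```python
# Plain (non-private) AUC via the Mann-Whitney statistic -- the baseline
# quantity that label-DP-aware estimators aim to recover. Textbook
# definition, not the paper's algorithm.

def auc(labels, scores):
    """AUC = P(score of a random positive > score of a random negative),
    counting ties as 1/2."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

For example, `auc([1, 1, 0, 0], [0.9, 0.2, 0.8, 0.1])` counts three of four positive-negative pairs correctly ordered, giving 0.75.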
1 code implementation • 10 Apr 2022 • Lin Zheng, Chong Wang, Lingpeng Kong
By combining the expressiveness in RA and the efficiency in RFA, we develop a novel linear complexity self-attention mechanism called linear randomized attention (LARA).
1 code implementation • 28 Mar 2022 • Yuyuan Liu, Yu Tian, Chong Wang, Yuanhong Chen, Fengbei Liu, Vasileios Belagiannis, Gustavo Carneiro
The most successful SSL approaches are based on consistency learning that minimises the distance between model responses obtained from perturbed views of the unlabelled data.
1 code implementation • 23 Mar 2022 • Yu Tian, Guansong Pang, Fengbei Liu, Yuyuan Liu, Chong Wang, Yuanhong Chen, Johan W Verjans, Gustavo Carneiro
Current polyp detection methods from colonoscopy videos use exclusively normal (i.e., healthy) training images, which i) ignore the importance of temporal information in consecutive video frames, and ii) lack knowledge about the polyps.
1 code implementation • 22 Mar 2022 • Yu Tian, Guansong Pang, Yuyuan Liu, Chong Wang, Yuanhong Chen, Fengbei Liu, Rajvinder Singh, Johan W Verjans, Mengyu Wang, Gustavo Carneiro
Our UAD approach, the memory-augmented multi-level cross-attentional masked autoencoder (MemMC-MAE), is a transformer-based approach, consisting of a novel memory-augmented self-attention operator for the encoder and a new multi-level cross-attention operator for the decoder.
no code implementations • 4 Mar 2022 • Xin Yang, Jiankai Sun, Yuanshun Yao, Junyuan Xie, Chong Wang
Split learning is a distributed training framework that allows multiple parties to jointly train a machine learning model over vertically partitioned data (partitioned by attributes).
2 code implementations • ICCV 2023 • Yuanhong Chen, Fengbei Liu, Hu Wang, Chong Wang, Yu Tian, Yuyuan Liu, Gustavo Carneiro
Deep learning methods have shown outstanding classification accuracy in medical imaging problems, which is largely attributed to the availability of large-scale datasets manually annotated with clean labels.
no code implementations • 2 Mar 2022 • Jiankai Sun, Xin Yang, Yuanshun Yao, Chong Wang
As the raw labels often contain highly sensitive information, recent work has proposed methods to effectively prevent label leakage from the backpropagated gradients in vFL.
no code implementations • 2 Mar 2022 • Yuanshun Yao, Chong Wang, Hang Li
Modern recommender systems face an increasing need to explain their recommendations.
no code implementations • NeurIPS Workshop AI4Scien 2021 • Ce Yang, Weihao Gao, Di wu, Chong Wang
Simulation of the dynamics of physical systems is essential to the development of both science and engineering.
no code implementations • NeurIPS Workshop AI4Scien 2021 • Tianze Zheng, Weihao Gao, Chong Wang
Molecular dynamics (MD) simulation predicts the trajectory of atoms by solving Newton's equation of motion with a numeric integrator.
no code implementations • NeurIPS 2021 • Haiying Wang, Aonan Zhang, Chong Wang
We first prove that, with imbalanced data, the available information about unknown parameters is only tied to the relatively small number of positive instances, which justifies the usage of negative sampling.
no code implementations • 2 Sep 2021 • Zun Wang, Chong Wang, Sibo Zhao, Yong Xu, Shaogang Hao, Chang Yu Hsieh, Bing-Lin Gu, Wenhui Duan
Machine learning methods have profoundly shifted the paradigms of the computational sciences underpinning physics, materials science, chemistry, and biology, with many frameworks based on message-passing neural networks proposed to predict molecular and bulk properties.
1 code implementation • 4 Aug 2021 • Weijie Liu, Chong Wang, Haohe Li, Shenghao Yu, Jiafei Wu
By adjusting the prediction distribution of the base detector using the output of this GCN, the proposed model serves as a hard auxiliary classification task, which guides the detector to improve the class representation implicitly.
no code implementations • 21 Jul 2021 • Jiankai Sun, Yuanshun Yao, Weihao Gao, Junyuan Xie, Chong Wang
Recently, researchers have studied input leakage problems in Federated Learning (FL), where a malicious party can reconstruct sensitive training inputs provided by users from shared gradients.
no code implementations • 12 Jun 2021 • Xiangyu Zhao, Haochen Liu, Wenqi Fan, Hui Liu, Jiliang Tang, Chong Wang
Unlike existing algorithms, the proposed controller can adaptively generate the loss probabilities for different data examples according to their varied convergence behaviors.
no code implementations • 10 Jun 2021 • Jiankai Sun, Xin Yang, Yuanshun Yao, Aonan Zhang, Weihao Gao, Junyuan Xie, Chong Wang
In this paper, we propose a vFL framework based on Private Set Union (PSU) that allows each party to keep sensitive membership information to itself.
no code implementations • 6 Apr 2021 • Matthew F. Singh, Chong Wang, Michael W. Cole, ShiNung Ching
Intuitively, our approach consists of solving for the parameters that generate the most accurate state estimator (Extended Kalman Filter).
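The state estimator being tuned is a Kalman-type filter; the scalar linear case below shows the predict/update cycle (the paper uses the Extended variant for nonlinear neural dynamics, and the toy parameters here are ours).

```python
# Scalar Kalman filter sketch -- the predict/update cycle of the state
# estimator class referenced above. Linear 1-D case for illustration;
# the Extended variant linearizes a nonlinear model at each step.

def kalman_1d(zs, a, q, r, x0=0.0, p0=1.0):
    """Filter measurements zs of a state evolving as x' = a*x + noise(q),
    observed with noise(r). Returns the filtered state estimates."""
    x, p = x0, p0
    out = []
    for z in zs:
        # Predict step: propagate mean and variance through the dynamics.
        x = a * x
        p = a * a * p + q
        # Update step: blend in the measurement via the Kalman gain.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        out.append(x)
    return out
```

With a constant true state and no process noise, the estimate converges to the measurement value as evidence accumulates.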
1 code implementation • 6 Mar 2021 • Fengbei Liu, Yuanhong Chen, Yu Tian, Yuyuan Liu, Chong Wang, Vasileios Belagiannis, Gustavo Carneiro
In this paper, we propose a new training module called Non-Volatile Unbiased Memory (NVUM), which non-volatilely stores a running average of model logits for a new regularization loss on the noisy multi-label problem.
Image Classification with Label Noise, Learning with noisy labels, +1
2 code implementations • ICLR 2022 • Oscar Li, Jiankai Sun, Xin Yang, Weihao Gao, Hongyi Zhang, Junyuan Xie, Virginia Smith, Chong Wang
Two-party split learning is a popular technique for learning a model across feature-partitioned data.
no code implementations • 8 Jan 2021 • Zun Wang, Chong Wang, Sibo Zhao, Shiqiao Du, Yong Xu, Bing-Lin Gu, Wenhui Duan
Molecular dynamics is a powerful simulation tool to explore material properties.
no code implementations • 7 Jan 2021 • Joshua Mutch, Xuetao Ma, Chong Wang, Paul Malinowski, Joss Ayres-Sims, Qianni Jiang, Zhaoyu Liu, Di Xiao, Matthew Yankowitz, Jiun-Haw Chu
The angular dependence of the Hall resistivity approaches a signum function, persisting down to an extremely low field of 0.03 T. By varying the carrier density of ZrTe5 over three orders of magnitude, we show that this singular behavior is due to the anomalous Hall effect generated by the ultra-dilute massive Dirac carriers in the quantum limit of Pauli paramagnetism when the Zeeman energy exceeds the Fermi energy.
Mesoscale and Nanoscale Physics
1 code implementation • 1 Jan 2021 • Weihao Gao, Xiangjun Fan, Jiankai Sun, Kai Jia, Wenzhi Xiao, Chong Wang, Xiaobing Liu
With the model learnt, a beam search over the latent codes is performed to retrieve the top candidates.
no code implementations • 17 Dec 2020 • Vladimir Calvera, Chong Wang
We argue that in the most natural scenario, a spin-$S$ system realizes a $U(2S)$ DSL, described at low energy by gapless Dirac fermions coupled with an emergent $U(2S)$ gauge field (also known as $U(2S)$ QCD$_3$).
Strongly Correlated Electrons
no code implementations • 23 Aug 2020 • Zhiqiang Ma, Grace Bang, Chong Wang, Xiaomo Liu
Earnings calls are hosted by management of public companies to discuss the company's financial performance with analysts and investors.
1 code implementation • 12 Jul 2020 • Weihao Gao, Xiangjun Fan, Chong Wang, Jiankai Sun, Kai Jia, Wenzhi Xiao, Ruofan Ding, Xingyan Bin, Hui Yang, Xiaobing Liu
With the model learnt, a beam search over the structure is performed to retrieve the top candidates for reranking.
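The retrieval step described above is an ordinary beam search over discrete codes; a generic sketch follows (the learned model's scoring function is replaced by a toy `toy_score`, which is our stand-in).

```python
# Generic beam search over discrete codes: keep the beam_width
# highest-scoring prefixes at each depth. The scoring function here is a
# toy stand-in for the learned model's structure scores.

import heapq

def beam_search(score, vocab, depth, beam_width):
    """score(prefix) returns the (partial) score of a code prefix."""
    beams = [((), 0.0)]
    for _ in range(depth):
        candidates = []
        for prefix, _ in beams:
            for tok in vocab:
                new = prefix + (tok,)
                candidates.append((new, score(new)))
        beams = heapq.nlargest(beam_width, candidates, key=lambda c: c[1])
    return beams

# Toy score: prefer codes whose digits sum close to a target of 7.
def toy_score(prefix):
    return -abs(sum(prefix) - 7)
```

The per-step pruning keeps the search linear in depth at the cost of possibly missing the global optimum when the beam is too narrow.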
no code implementations • 26 Feb 2020 • Xiangyu Zhao, Chong Wang, Ming Chen, Xudong Zheng, Xiaobing Liu, Jiliang Tang
Deep learning based recommender systems (DLRSs) often have embedding layers, which are utilized to lessen the dimensionality of categorical variables (e.g., user/item identifiers) and meaningfully transform them in the low-dimensional space.
no code implementations • 30 Jan 2020 • Renqin Cai, Qinglei Wang, Chong Wang, Xiaobing Liu
To better model the long-term dependence structure, we propose a GatedLongRec solution in this work.
no code implementations • 31 Dec 2019 • Yi Zhang, Chong Wang, Ye Zheng, Jieyu Zhao, Yuqi Li, Xijiong Xie
Subsequently, in temporal analysis, we use TCNs to extract temporal features and employ improved Squeeze-and-Excitation Networks (SENets) to strengthen the representational power of the temporal features from each TCN layer.
1 code implementation • ACL 2020 • Haoming Jiang, Chen Liang, Chong Wang, Tuo Zhao
To overcome this limitation, we propose a novel multi-domain NMT model using individual modules for each domain, on which we apply word-level, adaptive and layer-wise domain mixing.
no code implementations • ICLR 2020 • Alejandro Newell, Lu Jiang, Chong Wang, Li-Jia Li, Jia Deng
Multi-task learning holds the promise of less data, fewer parameters, and less time than training separate models.
2 code implementations • ICLR 2019 • Honghua Dong, Jiayuan Mao, Tian Lin, Chong Wang, Lihong Li, Denny Zhou
We propose the Neural Logic Machine (NLM), a neural-symbolic architecture for both inductive learning and logic reasoning.
no code implementations • ICCV 2019 • Yuyin Zhou, Zhe Li, Song Bai, Chong Wang, Xinlei Chen, Mei Han, Elliot Fishman, Alan Yuille
Accurate multi-organ abdominal CT segmentation is essential to many clinical applications such as computer-aided intervention.
no code implementations • ICLR 2019 • Shun Liao, Ting Chen, Tian Lin, Denny Zhou, Chong Wang
In this paper, we present a novel softmax inference speedup method, Doubly Sparse Softmax (DS-Softmax), that leverages sparse mixture of sparse experts to efficiently retrieve top-k classes.
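The two-level sparse lookup can be sketched as follows (a simplified reading of the idea, with toy scores standing in for the learned group and class experts): score groups first, keep only the best groups, then softmax over the classes inside them.

```python
# Sketch of two-level sparse top-k retrieval in the spirit of a sparse
# mixture of sparse experts: classes are partitioned into groups; groups
# are scored first, and the softmax is computed only over classes in the
# surviving groups. Scores here are toy stand-ins for learned experts.

import math

def sparse_topk(group_scores, class_scores, groups, k_groups, k_classes):
    """groups: list of class-id lists, one per group."""
    # Level 1: keep only the highest-scoring groups.
    top_groups = sorted(range(len(groups)),
                        key=lambda g: group_scores[g], reverse=True)[:k_groups]
    # Level 2: softmax over classes in the surviving groups only.
    candidates = [c for g in top_groups for c in groups[g]]
    m = max(class_scores[c] for c in candidates)
    exps = {c: math.exp(class_scores[c] - m) for c in candidates}
    z = sum(exps.values())
    probs = {c: e / z for c, e in exps.items()}
    return sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k_classes]
```

Only the candidate classes are ever exponentiated, so the cost of retrieving the top-k classes scales with the surviving groups rather than the full vocabulary.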
no code implementations • 6 Nov 2018 • Jiangtao Feng, Lingpeng Kong, Po-Sen Huang, Chong Wang, Da Huang, Jiayuan Mao, Kan Qiao, Dengyong Zhou
We also design an efficient dynamic programming algorithm to decode segments that allows the model to be trained faster than the existing neural phrase-based machine translation method by Huang et al. (2018).
1 code implementation • 10 Oct 2018 • Aonan Zhang, Quan Wang, Zhenyao Zhu, John Paisley, Chong Wang
In this paper, we propose a fully supervised speaker diarization approach, named unbounded interleaved-state recurrent neural networks (UIS-RNN).
Ranked #1 on Speaker Diarization on Hub5'00 CallHome
no code implementations • 9 Oct 2018 • Weihao Gao, Yu-Han Liu, Chong Wang, Sewoong Oh
Theoretically, we prove that the proposed scheme is optimal for compressing one-hidden-layer ReLU neural networks.
no code implementations • NIPS Workshop CDNNRIA 2018 • Ting Chen, Ji Lin, Tian Lin, Song Han, Chong Wang, Denny Zhou
Modern deep neural networks have a large amount of weights, which make them difficult to deploy on computation constrained devices such as mobile phones.
no code implementations • EMNLP 2018 • Da Tang, Xiujun Li, Jianfeng Gao, Chong Wang, Lihong Li, Tony Jebara
Experiments with simulated and real users show that our approach performs competitively against a state-of-the-art method that requires human-defined subgoals.
1 code implementation • ICLR 2018 • Kiran K. Thekumparampil, Chong Wang, Sewoong Oh, Li-Jia Li
Recently popularized graph neural networks achieve the state-of-the-art accuracy on a number of standard benchmark datasets for graph-based semi-supervised learning, improving significantly over existing approaches.
Ranked #10 on Graph Regression on Lipophilicity
no code implementations • NeurIPS 2017 • Jianshu Chen, Chong Wang, Lin Xiao, Ji He, Lihong Li, Li Deng
In sequential decision making, it is often important and useful for end users to understand the underlying patterns or causes that lead to the corresponding decisions.
1 code implementation • CVPR 2018 • Zhe Li, Chong Wang, Mei Han, Yuan Xue, Wei Wei, Li-Jia Li, Li Fei-Fei
Accurate identification and localization of abnormalities from radiology images play an integral part in clinical diagnosis and treatment planning.
no code implementations • 9 Sep 2017 • Chong Wang, Xue Zhang, Xipeng Lan
However, as the number of identities becomes extremely large, the training will suffer from bad local minima because effective hard triplets are difficult to find.
no code implementations • ICLR 2018 • Chong Wang, Xipeng Lan, Yangang Zhang
The idea is to make a small student network imitate the target of a large teacher network, so that the student network becomes competitive with the teacher.
4 code implementations • ICLR 2018 • Po-Sen Huang, Chong Wang, Sitao Huang, Dengyong Zhou, Li Deng
In this paper, we present Neural Phrase-based Machine Translation (NPMT).
Ranked #7 on Machine Translation on IWSLT2015 English-German
no code implementations • 28 Feb 2017 • Asli Celikyilmaz, Li Deng, Lihong Li, Chong Wang
We introduce a new paradigm of learning for reasoning, understanding, and prediction, as well as the scaffolding network to implement this paradigm.
2 code implementations • ICML 2017 • Chong Wang, Yining Wang, Po-Sen Huang, Abdel-rahman Mohamed, Dengyong Zhou, Li Deng
The probability of a segmented sequence is calculated as the product of the probabilities of all its segments, where each segment is modeled using existing tools such as recurrent neural networks.
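Summing the product of segment probabilities over all segmentations is a standard dynamic program; a minimal sketch (with a toy length-based segment model standing in for the paper's recurrent segment models):

```python
# Marginal probability of a sequence under a segment-level model, by
# dynamic programming: alpha[j] sums, over all segmentations of the first
# j tokens, the product of segment probabilities. `toy_seg_prob` is a toy
# stand-in for a learned (e.g. RNN-based) segment model.

def sequence_prob(x, seg_prob, max_seg_len):
    n = len(x)
    alpha = [0.0] * (n + 1)
    alpha[0] = 1.0  # empty prefix has probability 1
    for j in range(1, n + 1):
        for i in range(max(0, j - max_seg_len), j):
            alpha[j] += alpha[i] * seg_prob(x[i:j])
    return alpha[n]

# Toy segment model: a segment's probability depends only on its length.
def toy_seg_prob(seg):
    return {1: 0.5, 2: 0.25}.get(len(seg), 0.0)
```

For a length-3 sequence with the toy model, the three segmentations (1,1,1), (1,2), and (2,1) each contribute 0.125, so the marginal is 0.375; the DP computes this in O(n · max_seg_len) segment evaluations instead of enumerating segmentations.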
1 code implementation • 5 Nov 2016 • Adji B. Dieng, Chong Wang, Jianfeng Gao, John Paisley
The proposed TopicRNN model integrates the merits of RNNs and latent topic models: it captures local (syntactic) dependencies using an RNN and global (semantic) dependencies using latent topics.
no code implementations • 10 Dec 2015 • Abhimanu Kumar, Shriphani Palakodety, Chong Wang, Carolyn P. Rose, Eric P. Xing, Miaomiao Wen
Online discussion forums are complex webs of overlapping subcommunities (macrolevel structure, across threads) in which users enact different roles depending on which subcommunity they are participating in within a particular time point (microlevel structure, within threads).
35 code implementations • 8 Dec 2015 • Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, Erich Elsen, Jesse Engel, Linxi Fan, Christopher Fougner, Tony Han, Awni Hannun, Billy Jun, Patrick LeGresley, Libby Lin, Sharan Narang, Andrew Ng, Sherjil Ozair, Ryan Prenger, Jonathan Raiman, Sanjeev Satheesh, David Seetapun, Shubho Sengupta, Yi Wang, Zhiqian Wang, Chong Wang, Bo Xiao, Dani Yogatama, Jun Zhan, Zhenyao Zhu
We show that an end-to-end deep learning approach can be used to recognize either English or Mandarin Chinese speech--two vastly different languages.
no code implementations • 17 Oct 2015 • Chong Wang, David M. Blei
Robust Bayesian models are appealing alternatives to standard models, providing protection from data that contains outliers or other departures from the model assumptions.
no code implementations • 14 Oct 2015 • Willie Neiswanger, Chong Wang, Eric Xing
We develop a parallel variational inference (VI) procedure for use in data-distributed settings, where each machine only has access to a subset of data and runs VI independently, without communicating with other machines.
no code implementations • TACL 2014 • Dani Yogatama, Chong Wang, Bryan R. Routledge, Noah A. Smith, Eric P. Xing
We present a probabilistic language model that captures temporal dynamics and conditions on arbitrary non-linguistic context features.
no code implementations • NeurIPS 2013 • Prem K. Gopalan, Chong Wang, David Blei
We evaluate the link prediction accuracy of our algorithm on eight real-world networks with up to 60,000 nodes, and 24 benchmark networks.
no code implementations • NeurIPS 2013 • Chong Wang, Xi Chen, Alexander J. Smola, Eric P. Xing
We demonstrate how to construct the control variate for two practical problems using stochastic gradient optimization.
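The underlying trick is the classical control-variate construction, shown here in its simplest Monte Carlo form (not the paper's gradient-specific constructions): subtract a correlated quantity with known expectation, keeping the estimator unbiased while shrinking its variance.

```python
# Minimal control-variate sketch: to estimate E[f(X)], subtract
# c * (h(X) - E[h(X)]) for a correlated h with known mean. The estimator
# stays unbiased for any c, and a well-chosen c reduces variance.

import random

def cv_estimate(samples, f, h, h_mean, c=1.0):
    vals = [f(x) - c * (h(x) - h_mean) for x in samples]
    return sum(vals) / len(vals)

random.seed(0)
xs = [random.uniform(0, 1) for _ in range(2000)]

f = lambda x: x * x   # want E[X^2] = 1/3 for X ~ Uniform(0, 1)
h = lambda x: x       # control variate with known mean E[X] = 1/2

plain = sum(f(x) for x in xs) / len(xs)
controlled = cv_estimate(xs, f, h, 0.5)
```

For this pair (f = x², h = x on Uniform(0,1)), c = 1 happens to be the variance-optimal coefficient cov(f, h)/var(h), so the controlled estimate is markedly tighter than the plain sample mean.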
no code implementations • 19 Nov 2013 • Willie Neiswanger, Chong Wang, Eric Xing
This embarrassingly parallel algorithm allows each machine to act independently on a subset of the data (without communication) until the final combination stage.
no code implementations • NeurIPS 2012 • Chong Wang, David M. Blei
We present a truncation-free online variational inference algorithm for Bayesian nonparametric models.
no code implementations • 25 Oct 2012 • John Paisley, Chong Wang, David M. Blei, Michael I. Jordan
We develop a nested hierarchical Dirichlet process (nHDP) for hierarchical topic modeling.
2 code implementations • 29 Jun 2012 • Matt Hoffman, David M. Blei, Chong Wang, John Paisley
We develop stochastic variational inference, a scalable algorithm for approximating posterior distributions.
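Stripped to its skeleton, the SVI update samples a data point, forms the noisy "intermediate" estimate of the global variational parameter from that point alone, and blends it in with a Robbins-Monro step size ρ_t = (t + τ)^(−κ). The Gaussian-mean toy below is our stand-in for the paper's conjugate exponential-family models.

```python
# Skeleton of the stochastic variational inference update: sample a data
# point, compute the one-sample intermediate estimate of the global
# parameter, and take a weighted step toward it with a decaying
# Robbins-Monro step size. Toy Gaussian-mean setting for illustration.

import random

def svi_mean(data, steps, tau=1.0, kappa=0.7, seed=0):
    rng = random.Random(seed)
    lam = 0.0  # global variational parameter (here: a posterior mean)
    for t in range(steps):
        x = rng.choice(data)          # one-sample minibatch
        lam_hat = x                   # intermediate estimate from this sample
        rho = (t + tau) ** (-kappa)   # step size; kappa in (0.5, 1]
        lam = (1 - rho) * lam + rho * lam_hat
    return lam
```

With κ in (0.5, 1] the step sizes satisfy the Robbins-Monro conditions (Σρ_t = ∞, Σρ_t² < ∞), so the iterate converges to the data mean here, and to a local optimum of the variational objective in the general setting.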
no code implementations • 13 Jun 2012 • Chong Wang, David Blei, David Heckerman
In contrast to the cDTM, the original discrete-time dynamic topic model (dDTM) requires that time be discretized.
no code implementations • NeurIPS 2010 • Bo Thiesson, Chong Wang
Remarkably easy implementation and guaranteed convergence have made the EM algorithm one of the most used algorithms for mixture modeling.
no code implementations • NeurIPS 2009 • Chong Wang, David M. Blei
We present a nonparametric hierarchical Bayesian model of document collections that decouples sparsity and smoothness in the component distributions (i.e., the "topics").
no code implementations • NeurIPS 2009 • Chong Wang, David M. Blei
The nested Chinese restaurant process (nCRP) is a powerful nonparametric Bayesian model for learning tree-based hierarchies from data.