1 code implementation • 14 Oct 2022 • Yong Guo, Yaofo Chen, Yin Zheng, Qi Chen, Peilin Zhao, Jian Chen, Junzhou Huang, Mingkui Tan
More critically, these independent search processes cannot share their learned knowledge (i.e., the distribution of good architectures) with one another, and thus often yield suboptimal results.
no code implementations • 27 Feb 2021 • Yong Guo, Yaofo Chen, Yin Zheng, Qi Chen, Peilin Zhao, Jian Chen, Junzhou Huang, Mingkui Tan
To this end, we propose a Pareto-Frontier-aware Neural Architecture Generator (NAG) that takes an arbitrary budget as input and produces a Pareto-optimal architecture for that budget.
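The generator described above is conditioned on a resource budget. A minimal PyTorch sketch of the idea, assuming a toy cell encoding (one operation choice per edge) and illustrative layer sizes, not the paper's actual model:

```python
import torch
import torch.nn as nn

class BudgetConditionedGenerator(nn.Module):
    """Toy generator: maps a scalar budget (e.g., normalized FLOPs)
    to per-edge operation choices for a cell-based architecture."""
    def __init__(self, num_edges=14, num_ops=8, hidden=64):
        super().__init__()
        self.num_edges, self.num_ops = num_edges, num_ops
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, num_edges * num_ops),
        )

    def forward(self, budget):
        # budget: (batch, 1) in [0, 1]; returns one op index per edge
        logits = self.net(budget).view(-1, self.num_edges, self.num_ops)
        return logits.argmax(dim=-1)

gen = BudgetConditionedGenerator()
archs = gen(torch.tensor([[0.3], [0.9]]))  # two budgets -> two architectures
print(archs.shape)  # torch.Size([2, 14])
```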
2 code implementations • 20 Feb 2021 • Yong Guo, Yin Zheng, Mingkui Tan, Qi Chen, Zhipeng Li, Jian Chen, Peilin Zhao, Junzhou Huang
To address this issue, we propose a Neural Architecture Transformer++ (NAT++) method which further enlarges the set of candidate transitions to improve the performance of architecture optimization.
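As a rough illustration (operation names are placeholders, not the paper's exact search space), "enlarging the set of candidate transitions" can be pictured as widening the per-operation replacement table, which the original NAT restricted to skip connections and null operations:

```python
# Hypothetical transition tables for architecture optimization.
# NAT-style: an operation may only be kept, replaced by skip, or removed.
nat_transitions = {
    "sep_conv_3x3": ["sep_conv_3x3", "skip_connect", "none"],
}

# NAT++-style: a larger candidate set also allows op-to-op replacements.
natpp_transitions = {
    "sep_conv_3x3": ["sep_conv_3x3", "sep_conv_5x5", "dil_conv_3x3",
                     "avg_pool_3x3", "skip_connect", "none"],
}
```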
no code implementations • 1 Jan 2021 • Yong Guo, Yaofo Chen, Yin Zheng, Peilin Zhao, Jian Chen, Junzhou Huang, Mingkui Tan
To find promising architectures under different budgets, existing methods may have to perform an independent search for each budget, which is very inefficient and unnecessary.
1 code implementation • 22 Dec 2020 • Xuefei Ning, Junbo Zhao, Wenshuo Li, Tianchen Zhao, Yin Zheng, Huazhong Yang, Yu Wang
In this paper, considering scenarios with a capacity budget, we aim to discover adversarially robust architectures at targeted capacities.
no code implementations • 28 Sep 2020 • Xuefei Ning, Wenshuo Li, Zixuan Zhou, Tianchen Zhao, Shuang Liang, Yin Zheng, Huazhong Yang, Yu Wang
A major challenge in NAS is to conduct a fast and accurate evaluation of neural architectures.
1 code implementation • ICML 2020 • Yong Guo, Yaofo Chen, Yin Zheng, Peilin Zhao, Jian Chen, Junzhou Huang, Mingkui Tan
With the proposed search strategy, our Curriculum Neural Architecture Search (CNAS) method significantly improves the search efficiency and finds better architectures than existing NAS methods.
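A minimal sketch of a curriculum over the operation space, assuming illustrative operation names and a trivial random sampler in place of the actual search controller:

```python
import random

ALL_OPS = ["skip_connect", "sep_conv_3x3", "sep_conv_5x5",
           "dil_conv_3x3", "avg_pool_3x3", "max_pool_3x3"]

def curriculum_stages(ops, start=2):
    """Yield progressively larger subsets of the operation space."""
    for k in range(start, len(ops) + 1):
        yield ops[:k]

for stage, ops in enumerate(curriculum_stages(ALL_OPS)):
    # In the real method, the controller trained at this stage would
    # carry its knowledge over to the next, larger search space.
    arch = [random.choice(ops) for _ in range(4)]  # toy cell sample
    print(f"stage {stage}: {len(ops)} ops -> {arch}")
```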
1 code implementation • ICLR 2020 • Dongze Lian, Yin Zheng, Yintao Xu, Yanxiong Lu, Leyu Lin, Peilin Zhao, Junzhou Huang, Shenghua Gao
Recently, Neural Architecture Search (NAS) has been successfully applied to multiple artificial intelligence areas, and the resulting networks often outperform hand-designed ones.
1 code implementation • ECCV 2020 • Xuefei Ning, Yin Zheng, Tianchen Zhao, Yu Wang, Huazhong Yang
Experimental results on various search spaces confirm GATES's effectiveness in improving the performance predictor.
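GATES encodes architectures as graphs; the sketch below is not GATES itself, but a minimal graph-style predictor in PyTorch, with toy shapes, that conveys the encode-then-regress structure such performance predictors share:

```python
import torch
import torch.nn as nn

class SimpleArchPredictor(nn.Module):
    """Not GATES: a minimal graph encoder that embeds each node's
    operation, does one step of message passing along the DAG, and
    regresses a predicted score. Sizes are illustrative."""
    def __init__(self, num_ops=8, dim=32):
        super().__init__()
        self.op_emb = nn.Embedding(num_ops, dim)
        self.head = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                  nn.Linear(dim, 1))

    def forward(self, op_ids, adj):
        # op_ids: (batch, nodes); adj: (batch, nodes, nodes) adjacency
        h = self.op_emb(op_ids)          # node features from operations
        h = torch.bmm(adj, h)            # propagate along graph edges
        return self.head(h.mean(dim=1))  # pool nodes -> predicted score

pred = SimpleArchPredictor()
ops = torch.randint(0, 8, (4, 7))
adj = torch.rand(4, 7, 7)
print(pred(ops, adj).shape)  # torch.Size([4, 1])
```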
no code implementations • 20 Mar 2020 • Xuefei Ning, Guangjun Ge, Wenshuo Li, Zhenhua Zhu, Yin Zheng, Xiaoming Chen, Zhen Gao, Yu Wang, Huazhong Yang
By inspecting the discovered architectures, we find that the operation primitives, the weight quantization range, the model capacity, and the connection pattern all influence the fault resilience of NN models.
1 code implementation • NeurIPS 2019 • Yong Guo, Yin Zheng, Mingkui Tan, Qi Chen, Jian Chen, Peilin Zhao, Junzhou Huang
To verify the effectiveness of the proposed strategies, we apply NAT to both hand-crafted and NAS-based architectures.
no code implementations • 25 May 2019 • Jiaxing Wang, Yin Zheng, Xiaoshuang Chen, Junzhou Huang, Jian Cheng
Semi-supervised learning (SSL) provides a powerful framework for leveraging unlabeled data when labels are limited or expensive to obtain.
1 code implementation • 18 May 2019 • Xiaoshuang Chen, Yin Zheng, Jiaxing Wang, Wenye Ma, Junzhou Huang
Factorization machines (FMs) are a popular model class that learns pairwise feature interactions via a low-rank approximation.
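For reference, the standard second-order FM prediction rule (Rendle's formulation, not this paper's extension) and its O(kn) evaluation trick look like this in NumPy:

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order FM: y = w0 + <w, x> + sum_{i<j} <V_i, V_j> x_i x_j,
    evaluated in O(kn) via the identity
    sum_{i<j} <V_i,V_j> x_i x_j
        = 0.5 * sum_f [(sum_i V_if x_i)^2 - sum_i V_if^2 x_i^2]."""
    linear = w0 + x @ w
    s = V.T @ x                                   # (k,)
    pairwise = 0.5 * np.sum(s**2 - (V**2).T @ (x**2))
    return linear + pairwise

rng = np.random.default_rng(0)
n, k = 10, 4                                      # features, latent rank
x = rng.random(n)
w0, w, V = 0.1, rng.normal(size=n), rng.normal(size=(n, k))
print(fm_predict(x, w0, w, V))
```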
1 code implementation • CVPR 2019 • Yang Yang, Wenye Ma, Yin Zheng, Jian-Feng Cai, Weiyu Xu
Removing undesired reflections from images taken through glass is of great importance in computer vision.
no code implementations • 18 Jun 2018 • Xuefei Ning, Yin Zheng, Zhuxi Jiang, Yu Wang, Huazhong Yang, Junzhou Huang
Moreover, we also propose HiTM-VAE, where the document-specific topic distributions are generated in a hierarchical manner.
no code implementations • ICLR 2018 • Xuefei Ning, Yin Zheng, Zhuxi Jiang, Yu Wang, Huazhong Yang, Junzhou Huang
Moreover, unlike other BNP topic models, inference in iTM-VAE is performed by neural networks, which have rich representational capacity and can be computed in a simple feed-forward manner.
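The stick-breaking construction that lets such a feed-forward inference network output proportions over an unbounded set of topics can be sketched as follows (shapes are illustrative):

```python
import torch

def stick_breaking(v):
    """Stick-breaking construction used by BNP topic models such as
    iTM-VAE: fractions v in (0, 1) are mapped to topic proportions pi
    with pi_k = v_k * prod_{j<k} (1 - v_j)."""
    one_minus = torch.cumprod(1 - v, dim=-1)
    shifted = torch.cat([torch.ones_like(v[..., :1]),
                         one_minus[..., :-1]], dim=-1)
    return v * shifted

v = torch.sigmoid(torch.randn(2, 5))  # e.g., output of an inference net
pi = stick_breaking(v)
print(pi.sum(-1))  # < 1; the remaining mass is the unbroken stick
```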
1 code implementation • 21 Dec 2016 • Chao Du, Chongxuan Li, Yin Zheng, Jun Zhu, Bo Zhang
Deep neural networks have shown promise in collaborative filtering (CF).
9 code implementations • 16 Nov 2016 • Zhuxi Jiang, Yin Zheng, Huachun Tan, Bangsheng Tang, Hanning Zhou
In this paper, we propose Variational Deep Embedding (VaDE), a novel unsupervised generative clustering approach within the framework of Variational Auto-Encoder (VAE).
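VaDE places a Gaussian-mixture prior on the VAE latent space and assigns clusters via the mixture posterior p(c|z). A minimal sketch of that assignment step, with illustrative shapes:

```python
import math
import torch
import torch.nn.functional as F

def cluster_posterior(z, pi, mu, logvar):
    """VaDE-style cluster assignment: q(c|x) ~ p(c|z), proportional to
    pi_c * N(z; mu_c, diag(sigma_c^2)) under the GMM prior.
    Shapes: z (batch, d); pi (K,); mu, logvar (K, d)."""
    z = z.unsqueeze(1)                                 # (batch, 1, d)
    log_pdf = -0.5 * ((z - mu) ** 2 / logvar.exp()
                      + logvar + math.log(2 * math.pi)).sum(-1)
    return F.softmax(torch.log(pi) + log_pdf, dim=-1)  # (batch, K)

z = torch.randn(5, 10)                                 # latent codes
pi = torch.full((3,), 1 / 3)                           # mixture weights
mu, logvar = torch.randn(3, 10), torch.zeros(3, 10)
print(cluster_posterior(z, pi, mu, logvar).sum(-1))    # rows sum to 1
```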
3 code implementations • 31 May 2016 • Yin Zheng, Bangsheng Tang, Wenkui Ding, Hanning Zhou
This paper proposes CF-NADE, a neural autoregressive architecture for collaborative filtering (CF) tasks, which is inspired by the Restricted Boltzmann Machine (RBM) based CF model and the Neural Autoregressive Distribution Estimator (NADE).
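The autoregressive factorization at the heart of CF-NADE decomposes the likelihood of a user's ratings along a random ordering; a toy sketch with a uniform placeholder standing in for the neural conditional:

```python
import numpy as np

rng = np.random.default_rng(1)

def cfnade_loglik(ratings, cond_prob):
    """Chain-rule factorization used by CF-NADE:
    log p(r) = sum_i log p(r_{o_i} | r_{o_<i}) for a random ordering o.
    `cond_prob(history, item)` stands in for the neural conditional."""
    order = rng.permutation(len(ratings))
    history, ll = [], 0.0
    for idx in order:
        probs = cond_prob(history, idx)   # distribution over levels 1..5
        ll += np.log(probs[ratings[idx] - 1])
        history.append((idx, ratings[idx]))
    return ll

uniform = lambda history, item: np.full(5, 0.2)
print(cfnade_loglik([5, 3, 4], uniform))  # 3 * log(0.2)
```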
Ranked #3 on MovieLens 1M (Recommendation Systems)
no code implementations • 18 Mar 2016 • Stanislas Lauly, Yin Zheng, Alexandre Allauzen, Hugo Larochelle
We present an approach based on feed-forward neural networks for learning the distribution of textual documents.
2 code implementations • 24 Nov 2015 • Amjad Almahairi, Nicolas Ballas, Tim Cooijmans, Yin Zheng, Hugo Larochelle, Aaron Courville
The low-capacity sub-networks are applied across most of the input, and also guide the selection of the few input regions to which the high-capacity sub-networks are applied.
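A minimal PyTorch sketch of this idea, assuming illustrative module sizes and a simple activation-magnitude saliency proxy in place of the paper's actual selection mechanism:

```python
import torch
import torch.nn as nn

coarse = nn.Conv2d(3, 8, 3, padding=1)            # low-capacity sub-network
fine = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())

x = torch.randn(1, 3, 32, 32)
saliency = coarse(x).abs().mean(1)                # (1, 32, 32) saliency proxy
# score non-overlapping 8x8 patches and keep the top 2
scores = saliency.unfold(1, 8, 8).unfold(2, 8, 8).mean((-1, -2))  # (1, 4, 4)
topk = scores.flatten(1).topk(2, dim=1).indices[0]
for idx in topk.tolist():
    r, c = idx // 4 * 8, idx % 4 * 8
    refined = fine(x[:, :, r:r+8, c:c+8])         # high-capacity pass on patch
    print(f"refined patch at ({r},{c}):", tuple(refined.shape))
```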
no code implementations • 13 Sep 2014 • Yin Zheng, Yu-Jin Zhang, Hugo Larochelle
Second, we propose a deep extension of our model and provide an efficient way of training the deep model.
no code implementations • CVPR 2014 • Yin Zheng, Yu-Jin Zhang, Hugo Larochelle
Topic modeling based on latent Dirichlet allocation (LDA) has been a framework of choice to deal with multimodal data, such as in image annotation tasks.
no code implementations • 23 May 2013 • Yin Zheng, Yu-Jin Zhang, Hugo Larochelle
Topic modeling based on latent Dirichlet allocation (LDA) has been a framework of choice to perform scene recognition and annotation.