no code implementations • ICML 2020 • Wenxian Shi, Hao Zhou, Ning Miao, Lei Li
Interpretability is important in text generation: interpretable attributes make it possible to guide what the model generates.
1 code implementation • 6 Oct 2023 • Zhenqiao Song, Yunlong Zhao, Wenxian Shi, Yang Yang, Lei Li
In this paper, we propose NAEPro, a model that jointly designs protein sequence and structure based on automatically detected functional sites.
no code implementations • 4 Oct 2023 • Zhenqiao Song, Yunlong Zhao, Yufei Song, Wenxian Shi, Yang Yang, Lei Li
Designing novel proteins with desired functions is crucial in biology and chemistry.
no code implementations • 20 Jul 2021 • Wenxian Shi, Yuxuan Song, Hao Zhou, Bohan Li, Lei Li
However, it has been observed that a fully converged, heavyweight teacher model imposes a strong constraint on learning a compact student network and can trap the optimization in poor local optima.
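For background, the standard distillation objective this sentence refers to (not the paper's proposed method) trains the compact student to match the converged teacher's softened output distribution via a KL divergence; the sketch below uses illustrative names and a common temperature-scaling convention.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-softened softmax; subtract the max for numerical stability
    e = np.exp((logits - logits.max()) / temperature)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    rescaled by temperature**2 as is conventional in distillation."""
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    return float(np.sum(p_t * (np.log(p_t) - np.log(p_s))) * temperature ** 2)

# A student that already matches the teacher incurs (near-)zero loss
same = distillation_loss(np.array([1.0, 2.0]), np.array([1.0, 2.0]))
diff = distillation_loss(np.array([2.0, 1.0]), np.array([1.0, 2.0]))
```

Because the teacher's distribution is fixed once it has converged, every student update is pulled toward that single target, which is the constraint the abstract highlights.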
1 code implementation • ICLR 2020 • Rong Ye, Wenxian Shi, Hao Zhou, Zhongyu Wei, Lei Li
We propose the variational template machine (VTM), a novel method to generate text descriptions from data tables.
1 code implementation • NeurIPS 2019 • Ning Miao, Hao Zhou, Chengqi Zhao, Wenxian Shi, Lei Li
Neural models for text generation require a softmax layer with proper token embeddings during the decoding phase.
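A minimal sketch of the decoding-phase softmax this sentence describes (names are illustrative, not from the paper): the decoder's hidden state is scored against every token embedding, and the scores are normalized into a distribution over the vocabulary.

```python
import numpy as np

def decode_step(hidden, token_embeddings):
    """hidden: (dim,); token_embeddings: (vocab, dim) -> token probabilities."""
    logits = token_embeddings @ hidden     # one similarity score per token
    logits = logits - logits.max()         # subtract max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

emb = np.eye(4)                                      # toy 4-token "vocabulary"
probs = decode_step(np.array([2.0, 0.0, 0.0, 0.0]), emb)
next_token = int(probs.argmax())                     # greedy decoding step
```

The quality of this step hinges on the token embeddings: poorly placed embeddings distort the scores and thus the whole output distribution.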
1 code implementation • 16 Jun 2019 • Wenxian Shi, Hao Zhou, Ning Miao, Lei Li
To enhance the controllability and interpretability, one can replace the Gaussian prior with a mixture of Gaussian distributions (GM-VAE), whose mixture components could be related to hidden semantic aspects of data.
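The prior replacement described above can be sketched as follows; this is an illustrative log-density computation under a mixture-of-Gaussians prior, not the paper's implementation, and all names are hypothetical.

```python
import numpy as np

def gmm_prior_log_prob(z, means, log_vars, weights):
    """Log p(z) under a K-component diagonal-Gaussian mixture prior.
    z: (batch, dim); means, log_vars: (K, dim); weights: (K,) summing to 1."""
    z = z[:, None, :]                                   # (batch, 1, dim)
    var = np.exp(log_vars)                              # (K, dim)
    # Diagonal-Gaussian log-density of each component, summed over dimensions
    log_comp = -0.5 * (((z - means) ** 2) / var
                       + log_vars + np.log(2 * np.pi)).sum(axis=-1)  # (batch, K)
    # log p(z) = logsumexp_k [ log w_k + log N(z | mu_k, diag(var_k)) ]
    a = log_comp + np.log(weights)
    m = a.max(axis=-1, keepdims=True)
    return (m + np.log(np.exp(a - m).sum(axis=-1, keepdims=True)))[:, 0]

# Sanity check: one standard-normal component reduces to the usual N(0, I) prior
z = np.zeros((2, 4))
lp = gmm_prior_log_prob(z, np.zeros((1, 4)), np.zeros((1, 4)), np.array([1.0]))
```

With K > 1 components, each mean/variance pair can specialize to one semantic aspect of the data, which is what makes the mixture components useful for controllability.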