no code implementations • 21 Nov 2022 • Zihan Ye, Hikaru Shindo, Devendra Singh Dhami, Kristian Kersting
To make deep learning do more from less, we propose the first neural meta-symbolic system (NEMESYS) for reasoning and learning: meta programming using differentiable forward-chaining reasoning in first-order logic.
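Forward-chaining reasoning repeatedly applies logic rules to derive new facts from known ones; the differentiable variant replaces Boolean truth values with soft valuations. Below is a minimal illustrative sketch of one soft forward-chaining step over ground atoms, using a product t-norm for rule bodies and max for aggregation. The atoms, rules, and function names are assumptions for illustration, not the NEMESYS implementation.

```python
import numpy as np

# Illustrative soft forward chaining (not the NEMESYS implementation):
# each ground atom holds a valuation in [0, 1]; a rule fires with the
# product of its body valuations, and the head keeps the max of its
# old value and every firing of the rule.

atoms = ["edge(a,b)", "edge(b,c)", "path(a,b)", "path(b,c)", "path(a,c)"]
idx = {a: i for i, a in enumerate(atoms)}

# Ground rules as (head, [body atoms]); here path is the transitive
# closure of edge.
rules = [
    ("path(a,b)", ["edge(a,b)"]),
    ("path(b,c)", ["edge(b,c)"]),
    ("path(a,c)", ["path(a,b)", "path(b,c)"]),
]

def forward_step(v):
    """One soft forward-chaining step over all ground rules."""
    new_v = v.copy()
    for head, body in rules:
        fire = np.prod([v[idx[b]] for b in body])       # product t-norm (AND)
        new_v[idx[head]] = max(new_v[idx[head]], fire)  # soft disjunction (OR)
    return new_v

v = np.zeros(len(atoms))
v[idx["edge(a,b)"]] = 0.9
v[idx["edge(b,c)"]] = 0.8

for _ in range(2):  # iterate toward a fixed point
    v = forward_step(v)

print(round(float(v[idx["path(a,c)"]]), 2))  # 0.72 = 0.9 * 0.8
```

Because every operation (product, max) is differentiable almost everywhere, gradients can flow from a loss on the derived valuations back to the inputs, which is what makes such reasoning trainable end to end.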
1 code implementation • 27 Oct 2022 • Zhaorui Tan, Xi Yang, Zihan Ye, Qiufeng Wang, Yuyao Yan, Anh Nguyen, Kaizhu Huang
Generating consistent and high-quality images from given texts is essential for visual-language understanding.
1 code implementation • 13 Oct 2022 • Zihan Ye, Guanyu Yang, Xiaobo Jin, Youfa Liu, Kaizhu Huang
Broadly speaking, current ZSL methods typically adopt class-level semantic labels and compare them with instance-level semantic predictions to infer unseen classes.
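The class-level vs. instance-level comparison described above can be sketched as follows: a model predicts a semantic (attribute) embedding for an instance, and the unseen class whose semantic label is most similar is returned. The class names, attribute vectors, and similarity choice here are hypothetical illustrations, not the paper's method.

```python
import numpy as np

# Hypothetical class-level semantic labels (attribute vectors) for
# unseen classes; in real ZSL these come from annotated attributes
# or word embeddings.
class_semantics = {
    "zebra": np.array([1.0, 0.0, 1.0, 0.0]),
    "horse": np.array([0.0, 1.0, 1.0, 0.0]),
    "tiger": np.array([1.0, 0.0, 0.0, 1.0]),
}

def infer_unseen_class(instance_semantics):
    """Compare an instance-level semantic prediction against all
    class-level semantic labels via cosine similarity."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(class_semantics,
               key=lambda c: cos(class_semantics[c], instance_semantics))

# A made-up semantic prediction from some image encoder.
pred = np.array([0.9, 0.1, 0.8, 0.1])
print(infer_unseen_class(pred))  # zebra
```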
1 code implementation • 23 Sep 2022 • Oleg Arenz, Philipp Dahlinger, Zihan Ye, Michael Volpp, Gerhard Neumann
The two currently most effective methods for GMM-based variational inference, VIPS and iBayes-GMM, both employ independent natural gradient updates for the individual components and their weights.
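A natural gradient preconditions the ordinary gradient with the inverse Fisher information matrix of the distribution's parameterization. As a minimal sketch of the weight update mentioned above, here is one natural gradient step on a mixture's categorical weights under a softmax parameterization, whose Fisher matrix is F = diag(w) - w wᵀ (singular, hence the pseudo-inverse). This is an illustrative toy, not the VIPS or iBayes-GMM implementation; `grad_w` and the learning rate are assumptions.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def natural_gradient_step(logits, grad_w, lr=0.1):
    """One natural gradient update on mixture weights.

    logits: unconstrained parameters of the categorical weights
    grad_w: gradient of the objective w.r.t. the weights w
    """
    w = softmax(logits)
    F = np.diag(w) - np.outer(w, w)   # Fisher matrix in logit coordinates
    grad_logits = F @ grad_w          # chain rule through the softmax
    nat_grad = np.linalg.pinv(F) @ grad_logits
    return logits + lr * nat_grad

# Push probability mass toward component 0.
logits = natural_gradient_step(np.zeros(3), np.array([1.0, 0.0, 0.0]))
print(softmax(logits))  # component 0 now has weight > 1/3
```

Because F's null space is the all-ones direction, the natural gradient simply removes the mean of `grad_w`, which keeps the update invariant to the softmax's shift redundancy.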
no code implementations • 29 Sep 2021 • Oleg Arenz, Zihan Ye, Philipp Dahlinger, Gerhard Neumann
Effective approaches for Gaussian variational inference are MORE, VOGN, and VON, which are zero-order, first-order, and second-order, respectively.
1 code implementation • 16 Jun 2021 • Zihan Ye, Fuyuan Hu, Fan Lyu, Linyan Li, Kaizhu Huang
However, traditional TL cannot find reliable disentangled representations for unseen classes, since unseen classes are unavailable during ZSL training.
no code implementations • 14 Dec 2020 • Fan Lyu, Shuai Wang, Wei Feng, Zihan Ye, Fuyuan Hu, Song Wang
Rehearsal, which reminds the model of old knowledge by storing samples from previous tasks in lifelong learning, is one of the most effective ways to mitigate catastrophic forgetting, i.e., the biased forgetting of previous knowledge when moving to new tasks.
1 code implementation • 19 May 2020 • Zihan Ye, Fuyuan Hu, Yin Liu, Zhenping Xia, Fan Lyu, Pengqing Liu
First, CNL computes correlations between features of a query layer and all response layers.
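The correlation computation described above can be sketched in a non-local style: every spatial position of the query layer is correlated with every position of a response layer via normalized dot products, and this is repeated for each response layer. The shapes, names, and normalization below are assumptions for illustration, not the paper's exact CNL module.

```python
import numpy as np

def cross_layer_correlation(query_feat, response_feat):
    """Correlate every spatial position of a query layer with every
    position of a response layer (a non-local-style affinity).

    query_feat:    (C, Hq, Wq) feature map of the query layer
    response_feat: (C, Hr, Wr) feature map of one response layer
    Returns an (Hq*Wq, Hr*Wr) correlation matrix.
    """
    C = query_feat.shape[0]
    q = query_feat.reshape(C, -1)        # (C, Nq)
    r = response_feat.reshape(C, -1)     # (C, Nr)
    # Normalize channels so dot products become cosine correlations.
    q = q / (np.linalg.norm(q, axis=0, keepdims=True) + 1e-8)
    r = r / (np.linalg.norm(r, axis=0, keepdims=True) + 1e-8)
    return q.T @ r

rng = np.random.default_rng(0)
query = rng.standard_normal((64, 8, 8))
responses = [rng.standard_normal((64, 8, 8)) for _ in range(3)]

# Correlations between the query layer and all response layers.
corrs = [cross_layer_correlation(query, r) for r in responses]
print(corrs[0].shape)  # (64, 64): 8*8 query positions x 8*8 response positions
```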
no code implementations • 15 Apr 2019 • Zihan Ye, Fan Lyu, Linyan Li, Qiming Fu, Jinchang Ren, Fuyuan Hu
First, we pre-train a Semantic Rectifying Network (SRN) to rectify semantic space with a semantic loss and a rectifying loss.