Search Results for author: Mengyun Chen

Found 6 papers, 3 papers with code

Unsupervisedly Prompting AlphaFold2 for Few-Shot Learning of Accurate Folding Landscape and Protein Structure Prediction

2 code implementations · 20 Aug 2022 · Jun Zhang, Sirui Liu, Mengyun Chen, Haotian Chu, Min Wang, Zidong Wang, Jialiang Yu, Ningxi Ni, Fan Yu, Diqing Chen, Yi Isaac Yang, Boxin Xue, Lijiang Yang, YuAn Liu, Yi Qin Gao

Data-driven predictive methods which can efficiently and accurately transform protein sequences into biologically active structures are highly valuable for scientific research and medical development.

Denoising · Few-Shot Learning · +2

THOR, Trace-based Hardware-adaptive layer-ORiented Natural Gradient Descent Computation

no code implementations · AAAI Technical Track on Machine Learning 2021 · Mengyun Chen, Kaixin Gao, Xiaolei Liu, Zidong Wang, Ningxi Ni, Qian Zhang, Lei Chen, Chao Ding, ZhengHai Huang, Min Wang, Shuangling Wang, Fan Yu, Xinyuan Zhao, Dachuan Xu

It is well known that second-order optimizers can accelerate the training of deep neural networks; however, the huge computational cost of second-order optimization makes it impractical to apply in practice.
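As context for the snippet above, here is a minimal NumPy sketch of why exact second-order updates are expensive: a damped Newton step on a toy quadratic requires forming a d×d Hessian and a O(d³) linear solve, which is exactly the cost that layer-oriented approximations like THOR aim to avoid. The objective and all names below are illustrative, not the THOR algorithm itself.

```python
import numpy as np

# Toy quadratic f(x) = 0.5 * x^T A x - b^T x with a known SPD Hessian.
# Purely illustrative; this is NOT the THOR update rule.
rng = np.random.default_rng(0)
d = 500
M = rng.standard_normal((d, d))
A = M @ M.T / d + np.eye(d)              # SPD Hessian
b = rng.standard_normal(d)

x = np.zeros(d)
damping = 1e-3

for step in range(10):
    grad = A @ x - b                     # gradient: O(d^2)
    # Damped Newton direction (H + lambda*I)^{-1} g: an O(d^3) solve on a
    # d x d matrix -- prohibitive when d is the number of network parameters.
    direction = np.linalg.solve(A + damping * np.eye(d), grad)
    x -= direction

print("residual:", np.linalg.norm(A @ x - b))
```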

Improving the Efficiency of Grammatical Error Correction with Erroneous Span Detection and Correction

no code implementations · EMNLP 2020 · Mengyun Chen, Tao Ge, Xingxing Zhang, Furu Wei, Ming Zhou

We propose a novel language-independent approach to improve the efficiency for Grammatical Error Correction (GEC) by dividing the task into two subtasks: Erroneous Span Detection (ESD) and Erroneous Span Correction (ESC).

Grammatical Error Correction · Sentence
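The abstract above describes a two-stage pipeline: a detector first tags erroneous spans, and a corrector then rewrites only those spans rather than regenerating the whole sentence. Below is a hedged Python sketch of that control flow; `detect_spans` and `correct_span` are hypothetical stand-ins (a real ESD model would be a sequence labeler, a real ESC model a seq2seq corrector), not the authors' released code.

```python
# Sketch of the ESD -> ESC pipeline: detect spans, correct only those spans.
# The efficiency gain comes from leaving already-correct text untouched.

def detect_spans(sentence: str) -> list[tuple[int, int]]:
    """ESD stand-in: return (start, end) character offsets of suspected errors."""
    spans = []
    for typo in ("recieve", "seperate"):   # toy detector for demonstration
        i = sentence.find(typo)
        if i != -1:
            spans.append((i, i + len(typo)))
    return spans

def correct_span(span_text: str) -> str:
    """ESC stand-in: rewrite only the flagged span."""
    fixes = {"recieve": "receive", "seperate": "separate"}
    return fixes.get(span_text, span_text)

def correct(sentence: str) -> str:
    out, last = [], 0
    for start, end in sorted(detect_spans(sentence)):
        out.append(sentence[last:start])
        out.append(correct_span(sentence[start:end]))
        last = end
    out.append(sentence[last:])
    return "".join(out)

print(correct("Please recieve the seperate files."))
# -> "Please receive the separate files."
```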

Enhance Curvature Information by Structured Stochastic Quasi-Newton Methods

no code implementations · CVPR 2021 · Ming-Han Yang, Dong Xu, Hongyu Chen, Zaiwen Wen, Mengyun Chen

In this paper, we consider stochastic second-order methods for minimizing a finite summation of nonconvex functions.

Second-order methods
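For reference, the finite-sum setting the abstract refers to is the standard objective below, together with the generic curvature pair that stochastic quasi-Newton methods build from mini-batch gradient differences. This is standard notation, not copied from the paper; the paper's structured variant refines this basic construction.

```latex
% Finite-sum objective (standard notation):
\min_{x \in \mathbb{R}^d} \; f(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x),
\qquad f_i \ \text{smooth, possibly nonconvex.}

% Generic stochastic quasi-Newton curvature pair over a mini-batch S_k:
s_k = x_{k+1} - x_k, \qquad
y_k = \nabla f_{S_k}(x_{k+1}) - \nabla f_{S_k}(x_k).
```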

Sketchy Empirical Natural Gradient Methods for Deep Learning

1 code implementation · 10 Jun 2020 · Ming-Han Yang, Dong Xu, Zaiwen Wen, Mengyun Chen, Pengxiang Xu

Experiments on distributed large-batch training show that the scaling efficiency is quite reasonable.
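To give a feel for the general idea named in the title, here is a minimal NumPy sketch of sketching an empirical Fisher matrix and applying a damped natural-gradient step via the Woodbury identity. All sizes and names are hypothetical and this is not the paper's SENG algorithm, only the generic randomized-sketching pattern it builds on.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, k = 2000, 256, 32             # params, samples, sketch size (k << n << d)

G = rng.standard_normal((d, n))     # stand-in per-sample gradients, one column each
g = G.mean(axis=1)                  # mini-batch gradient

# Empirical Fisher F = (1/n) G G^T is d x d -- too large to form for real nets.
# Sketch it with a Gaussian test matrix: E[U U^T] = F for U below.
Omega = rng.standard_normal((n, k))
U = G @ Omega / np.sqrt(n * k)      # d x k low-rank factor

lam = 1e-2                          # damping
# Solve (U U^T + lam*I) p = g via Woodbury: only a k x k system is inverted.
small = lam * np.eye(k) + U.T @ U
p = (g - U @ np.linalg.solve(small, U.T @ g)) / lam

print("update norm:", np.linalg.norm(p))
```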
