Search Results for author: Rundong Li

Found 6 papers, 2 papers with code

ScaleFold: Reducing AlphaFold Initial Training Time to 10 Hours

no code implementations 17 Apr 2024 Feiwen Zhu, Arkadiusz Nowaczynski, Rundong Li, Jie Xin, Yifei Song, Michal Marcinkiewicz, Sukru Burc Eryilmaz, Jun Yang, Michael Andersch

In this work, we conducted a comprehensive analysis of the AlphaFold training procedure based on OpenFold and identified inefficient communications and overhead-dominated computations as the key factors preventing AlphaFold training from scaling effectively.

Protein Folding
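The scaling claim can be made concrete with a simple efficiency metric; a minimal sketch, where the function name and the example timings are illustrative assumptions, not figures from the paper:

```python
def scaling_efficiency(time_1gpu: float, time_ngpu: float, n_gpus: int) -> float:
    """Fraction of ideal linear speedup achieved on n_gpus.

    1.0 means perfect scaling; communication and per-step overheads
    (the bottlenecks ScaleFold targets) push the value below 1.0.
    """
    return time_1gpu / (time_ngpu * n_gpus)

# Hypothetical numbers: a step taking 1.0 s on one GPU and 0.02 s per step
# on 128 GPUs yields 1.0 / (0.02 * 128) ~= 0.39, i.e. 39% scaling efficiency.
print(scaling_efficiency(1.0, 0.02, 128))
```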

Zoom Out and Observe: News Environment Perception for Fake News Detection

1 code implementation ACL 2022 Qiang Sheng, Juan Cao, Xueyao Zhang, Rundong Li, Danding Wang, Yongchun Zhu

To differentiate fake news from real ones, existing methods observe the language patterns of the news post and "zoom in" to verify its content with knowledge sources or check its readers' replies.

Fake News Detection, Misinformation
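The complementary "zoom out" signal can be illustrated by comparing a post against its recent news environment in embedding space; a minimal sketch assuming pre-computed sentence embeddings (the function name and random vectors are placeholders, not the paper's actual model):

```python
import numpy as np

def environment_similarity(post_vec: np.ndarray, env_vecs: np.ndarray) -> np.ndarray:
    """Cosine similarity between one post embedding and each item in its
    recent news environment -- a 'zoomed out' signal of how typical the
    post is of current news. env_vecs: (n_items, dim); post_vec: (dim,)."""
    post = post_vec / np.linalg.norm(post_vec)
    env = env_vecs / np.linalg.norm(env_vecs, axis=1, keepdims=True)
    return env @ post

# Toy usage with random vectors standing in for real sentence embeddings.
rng = np.random.default_rng(0)
print(environment_similarity(rng.normal(size=64), rng.normal(size=(10, 64))))
```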

Confidence Propagation Cluster: Unleash Full Potential of Object Detectors

1 code implementation CVPR 2022 Yichun Shen, Wanli Jiang, Zhen Xu, Rundong Li, Junghyun Kwon, Siyi Li

For a long time, most object detection methods have obtained objects by applying non-maximum suppression (NMS) and its improved variants, such as Soft-NMS, to remove redundant bounding boxes.

Object Detection +1
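For context, this is the NMS baseline the paper sets out to replace; a minimal NumPy sketch of classic hard NMS (the standard algorithm, not the paper's Confidence Propagation Cluster method):

```python
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.5) -> list:
    """Greedy hard NMS: keep the highest-scoring box, drop any remaining
    box whose IoU with it exceeds iou_thresh, and repeat.
    boxes: (n, 4) as [x1, y1, x2, y2]; scores: (n,). Returns kept indices."""
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the top-scoring box with all remaining boxes.
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
print(nms(boxes, np.array([0.9, 0.8, 0.7])))  # -> [0, 2]
```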

The Bright Side and the Dark Side of Hybrid Organic Inorganic Perovskites

no code implementations 23 Oct 2020 Wladek Walukiewicz, Shu Wang, Xinchun Wu, Rundong Li, Matthew P. Sherburne, Bo Wu, Tze Chien Sun, Joel W. Ager, Mark D. Asta

The previously developed bistable amphoteric native defect (BAND) model is used to comprehensively explain the unique photophysical properties of hybrid perovskites and to account for their remarkable performance as photovoltaic materials.

Applied Physics, Materials Science

GQ-Net: Training Quantization-Friendly Deep Networks

no code implementations 25 Sep 2019 Rundong Li, Rui Fan

Network quantization is a model compression and acceleration technique that has become essential to neural network deployment.

Model Compression, Quantization
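The core operation behind quantization-friendly training is quantize-dequantize ("fake quantization"), whose rounding error the training must absorb; a generic sketch of uniform symmetric quantization, not GQ-Net's exact formulation:

```python
import numpy as np

def fake_quantize(w: np.ndarray, num_bits: int = 8) -> np.ndarray:
    """Uniform symmetric quantize-dequantize. The gap between w and its
    fake-quantized version is the error that quantization-friendly
    training aims to make the network robust against."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale

w = np.random.default_rng(0).normal(size=1000).astype(np.float32)
print(np.abs(w - fake_quantize(w)).mean())  # mean rounding error at 8 bits
```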

Fully Quantized Network for Object Detection

no code implementations CVPR 2019 Rundong Li, Yan Wang, Feng Liang, Hongwei Qin, Junjie Yan, Rui Fan

Efficient neural network inference is important in a number of practical domains, such as deployment in mobile settings.

Efficient Neural Network, Object +3
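"Fully quantized" inference runs the heavy arithmetic in integers and rescales to float only at the output; a generic int8 sketch under that assumption, not necessarily the paper's exact scheme:

```python
import numpy as np

def quantize(t: np.ndarray, num_bits: int = 8):
    """Symmetric per-tensor quantization; returns (int8 tensor, scale)."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.abs(t).max() / qmax
    return np.round(t / scale).astype(np.int8), scale

def int8_linear(x_q, w_q, x_scale, w_scale):
    """Integer-only matmul with int32 accumulation; the float rescale
    happens once at the output, as in fully quantized inference."""
    acc = x_q.astype(np.int32) @ w_q.astype(np.int32).T
    return acc.astype(np.float32) * (x_scale * w_scale)

rng = np.random.default_rng(0)
x, w = rng.normal(size=(2, 16)), rng.normal(size=(4, 16))
(x_q, sx), (w_q, sw) = quantize(x), quantize(w)
print(np.abs(x @ w.T - int8_linear(x_q, w_q, sx, sw)).max())  # small error
```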
