no code implementations • CSRR (ACL) 2022 • Yue Wan, Yueen Ma, Haoxuan You, Zhecan Wang, Shih-Fu Chang
Large-scale visual-linguistic pre-training aims to capture generic representations from multimodal features, which are essential for downstream vision-language tasks.
1 code implementation • 29 Feb 2024 • Rafael Josip Penić, Tin Vlašić, Roland G. Huber, Yue Wan, Mile Šikić
RiNALMo is the largest RNA language model to date, with 650 million parameters pre-trained on 36 million non-coding RNA sequences from several publicly available databases.
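The paper's code is linked above; as a rough illustration of the BERT-style masked-token pretraining objective commonly used for such nucleotide language models (the function and vocabulary names below are hypothetical, not RiNALMo's actual API), the data preparation step could be sketched as:

```python
import random

# Hypothetical toy vocabulary: the four RNA nucleotides plus a mask token.
NUCLEOTIDES = ["A", "C", "G", "U"]
VOCAB = {tok: i for i, tok in enumerate(NUCLEOTIDES + ["<mask>"])}
MASK_ID = VOCAB["<mask>"]

def mask_sequence(seq, mask_prob=0.15, rng=None):
    """Return (input_ids, labels) for masked-token pretraining.

    Masked positions keep their true token id in `labels`; unmasked
    positions get -100, the conventional ignore-index for a
    cross-entropy loss, so only masked positions are predicted.
    """
    rng = rng or random.Random(0)
    input_ids, labels = [], []
    for base in seq:
        tok = VOCAB[base]
        if rng.random() < mask_prob:
            input_ids.append(MASK_ID)
            labels.append(tok)
        else:
            input_ids.append(tok)
            labels.append(-100)
    return input_ids, labels
```

A model would then be trained to recover the original nucleotide at each masked position from the surrounding sequence context.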
no code implementations • 23 Feb 2024 • Shihong Ling, Yue Wan, Xiaowei Jia, Na Du
The rapid evolution of automated vehicles (AVs) has the potential to provide safer, more efficient, and comfortable travel options.
no code implementations • 5 Nov 2023 • Yue Wan, Jialu Wu, Tingjun Hou, Chang-Yu Hsieh, Xiaowei Jia
Self-supervised learning (SSL) has emerged as a popular solution, utilizing large-scale, unannotated molecular data to learn a foundational representation of chemical space that might be advantageous for downstream tasks.
1 code implementation • 29 Jan 2022 • Yue Wan, Benben Liao, Chang-Yu Hsieh, Shengyu Zhang
In this paper, we propose Retroformer, a novel Transformer-based architecture for retrosynthesis prediction without relying on any cheminformatics tools for molecule editing.
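Retroformer's own implementation is linked above; purely as an illustration of how SMILES-based sequence models typically prepare their inputs without cheminformatics toolkits (this is a common regex tokenization scheme, not code from the paper), a minimal sketch:

```python
import re

# A common regex-based SMILES tokenizer: multi-character atoms and
# ring-bond labels are matched before single characters. This atom
# list is illustrative and not exhaustive.
SMILES_PATTERN = re.compile(
    r"(\[[^\]]+\]|Br|Cl|Si|@@|%\d{2}|[BCNOPSFIbcnops]|[0-9]|[()=#+\-/\\.$:@])"
)

def tokenize_smiles(smiles):
    """Split a SMILES string into model-ready tokens."""
    tokens = SMILES_PATTERN.findall(smiles)
    # Sanity check: tokenization must cover the entire input string.
    assert "".join(tokens) == smiles, "unrecognized SMILES characters"
    return tokens
```

A Transformer encoder-decoder can then map a product token sequence directly to reactant token sequences, treating retrosynthesis as sequence-to-sequence translation over these tokens.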