1 code implementation • 22 Feb 2024 • Rui Yang, Boming Yang, Sixun Ouyang, Tianwei She, Aosong Feng, Yuang Jiang, Freddy Lecue, Jinghui Lu, Irene Li
We assess LLMs' zero-shot performance in creating domain-specific concept graphs and introduce TutorQA, a new expert-verified NLP-focused benchmark for scientific graph reasoning and QA.
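The concept-graph idea above can be sketched as follows: prompt an LLM for prerequisite relations between domain concepts and assemble the parsed edges into a directed graph. This is a minimal illustration, not the paper's pipeline; the response text and parsing format are hypothetical.

```python
# Minimal sketch of zero-shot concept-graph construction from LLM output.
# The "A -> B" response format and the example relations are hypothetical;
# the paper's actual prompts, models, and graph schema are not shown here.

def parse_edges(llm_output: str) -> list[tuple[str, str]]:
    """Parse 'A -> B' prerequisite lines from a model response."""
    edges = []
    for line in llm_output.splitlines():
        if "->" in line:
            src, dst = (part.strip() for part in line.split("->", 1))
            if src and dst:
                edges.append((src, dst))
    return edges

def build_concept_graph(edges):
    """Adjacency-list digraph: concept -> set of concepts it is a prerequisite for."""
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, set()).add(dst)
        graph.setdefault(dst, set())
    return graph

# Example (invented) model response to a prompt like
# "List prerequisite relations among NLP concepts as 'A -> B'."
response = """\
tokenization -> language modeling
language modeling -> machine translation
tokenization -> named entity recognition
"""
graph = build_concept_graph(parse_edges(response))
```

A real evaluation would then compare such graphs against expert-verified edges, which is what a benchmark like TutorQA enables.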
1 code implementation • 21 Aug 2023 • Fan Gao, Hang Jiang, Rui Yang, Qingcheng Zeng, Jinghui Lu, Moritz Blum, Dairui Liu, Tianwei She, Yuang Jiang, Irene Li
Educational materials such as survey articles in specialized fields like computer science traditionally require substantial expert input and are therefore expensive to create and keep up to date.
no code implementations • 20 Dec 2022 • Yuang Jiang, Konstantinos Poularakis, Diego Kiedanski, Sastry Kompella, Leandros Tassiulas
In this work, we propose a novel meta-learning-based viewport prediction paradigm that alleviates worst-case prediction performance and ensures the robustness of viewport prediction.
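The meta-learning structure can be sketched with a first-order (Reptile-style) loop: adapt a shared initialization to each user's recent viewport trajectory, then nudge the initialization toward the adapted weights. This stand-in uses a scalar linear predictor and synthetic data; it is not the paper's exact algorithm or model.

```python
# First-order meta-learning sketch (Reptile-style stand-in, NOT the paper's
# exact method) for per-user viewport prediction with a 1-D linear model
# y = w * x, where x is the current viewport angle and y the next one.

def sgd_steps(w, data, lr=0.1, steps=5):
    """Adapt weight w on one user's (x, y) viewport pairs via SGD."""
    for _ in range(steps):
        for x, y in data:
            grad = 2 * (w * x - y) * x   # d/dw of squared error (w*x - y)^2
            w -= lr * grad
    return w

def meta_train(users, w=0.0, meta_lr=0.5, epochs=20):
    """Move the shared initialization toward each user's adapted weights."""
    for _ in range(epochs):
        for data in users:
            w_adapted = sgd_steps(w, data)
            w += meta_lr * (w_adapted - w)  # Reptile-style outer update
    return w

# Two synthetic users whose next viewport is ~1.1x and ~0.9x the current one.
users = [[(1.0, 1.1), (2.0, 2.2)], [(1.0, 0.9), (2.0, 1.8)]]
w0 = meta_train(users)
```

The learned `w0` lands near 1.0, a shared starting point from which a few gradient steps specialize the predictor to either user, which is how meta-learning hedges against the worst-performing users.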
1 code implementation • 21 Oct 2022 • Aosong Feng, Irene Li, Yuang Jiang, Rex Ying
Efficient Transformers, which offer subquadratic memory and time complexity, have been developed for long-sequence modeling.
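One common subquadratic pattern is sliding-window (local) attention, where each position attends only to a fixed-size neighborhood, giving O(n·w) cost instead of O(n²). The sketch below is illustrative only (scalar "embeddings", no learned projections) and is not this paper's specific mechanism.

```python
# Sliding-window attention sketch: each position attends only to a local
# window of size 2*window+1, so cost is O(n * window) rather than O(n^2).
# Scalar queries/keys/values for brevity; real models use vectors.

import math

def local_attention(q, k, v, window=2):
    """q, k, v: equal-length lists of floats; returns attended outputs."""
    out = []
    for i in range(len(q)):
        lo, hi = max(0, i - window), min(len(q), i + window + 1)
        scores = [math.exp(q[i] * k[j]) for j in range(lo, hi)]   # softmax logits
        z = sum(scores)
        out.append(sum(s * v[j] for s, j in zip(scores, range(lo, hi))) / z)
    return out

x = [0.1, 0.4, -0.2, 0.3, 0.0]
y = local_attention(x, x, x)
```

Because each output is a softmax-weighted average over its window, every value stays inside the range of the inputs it attends to.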
2 code implementations • 26 Sep 2019 • Yuang Jiang, Shiqiang Wang, Victor Valls, Bong Jun Ko, Wei-Han Lee, Kin K. Leung, Leandros Tassiulas
To overcome this challenge, we propose PruneFL, a novel federated learning (FL) approach with adaptive and distributed parameter pruning. PruneFL adapts the model size during FL to reduce both communication and computation overhead and to minimize overall training time, while maintaining accuracy similar to that of the original model.
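The pruning-in-FL idea can be sketched as a federated-averaging round followed by magnitude pruning of the global model, so later rounds exchange a smaller model. This uses plain magnitude pruning as a stand-in; PruneFL's actual importance measure and adaptation schedule are more involved.

```python
# Magnitude-pruning sketch inside a federated-averaging round (illustrative
# only; PruneFL's importance measure and adaptive schedule differ).

def fedavg(client_weights):
    """Average client weight vectors elementwise (FedAvg aggregation)."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

def prune_smallest(weights, keep_ratio=0.5):
    """Zero out the smallest-magnitude weights, keeping keep_ratio of them."""
    k = max(1, int(len(weights) * keep_ratio))
    threshold = sorted(map(abs, weights), reverse=True)[k - 1]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

# One round with two (synthetic) clients: the server averages their updates,
# then prunes the averaged model before broadcasting it for the next round.
clients = [[0.9, -0.05, 0.4, 0.01], [1.1, 0.05, 0.6, -0.01]]
global_w = prune_smallest(fedavg(clients), keep_ratio=0.5)
```

The zeroed entries need not be transmitted or updated in subsequent rounds, which is the source of the communication and computation savings.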