Search Results for author: Yuang Jiang

Found 5 papers, 4 papers with code

Leveraging Large Language Models for Concept Graph Recovery and Question Answering in NLP Education

1 code implementation • 22 Feb 2024 • Rui Yang, Boming Yang, Sixun Ouyang, Tianwei She, Aosong Feng, Yuang Jiang, Freddy Lecue, Jinghui Lu, Irene Li

We assess LLMs' zero-shot performance in creating domain-specific concept graphs and introduce TutorQA, a new expert-verified NLP-focused benchmark for scientific graph reasoning and QA.

Question Answering • Text Generation

Large Language Models on Wikipedia-Style Survey Generation: An Evaluation in NLP Concepts

1 code implementation • 21 Aug 2023 • Fan Gao, Hang Jiang, Rui Yang, Qingcheng Zeng, Jinghui Lu, Moritz Blum, Dairui Liu, Tianwei She, Yuang Jiang, Irene Li

Educational materials such as survey articles in specialized fields like computer science traditionally require tremendous expert input and are therefore expensive to create and update.

Hallucination • Machine Translation • +1

Robust and Resource-efficient Machine Learning Aided Viewport Prediction in Virtual Reality

no code implementations • 20 Dec 2022 • Yuang Jiang, Konstantinos Poularakis, Diego Kiedanski, Sastry Kompella, Leandros Tassiulas

In this work, we propose a novel meta-learning-based viewport prediction paradigm that mitigates worst-case prediction performance and ensures the robustness of viewport prediction.

Meta-Learning

Diffuser: Efficient Transformers with Multi-hop Attention Diffusion for Long Sequences

1 code implementation • 21 Oct 2022 • Aosong Feng, Irene Li, Yuang Jiang, Rex Ying

Efficient Transformers have been developed for long-sequence modeling, owing to their subquadratic memory and time complexity.

Language Modelling • text-classification • +1

Model Pruning Enables Efficient Federated Learning on Edge Devices

2 code implementations • 26 Sep 2019 • Yuang Jiang, Shiqiang Wang, Victor Valls, Bong Jun Ko, Wei-Han Lee, Kin K. Leung, Leandros Tassiulas

To overcome this challenge, we propose PruneFL -- a novel FL approach with adaptive and distributed parameter pruning, which adapts the model size during FL to reduce both communication and computation overhead and minimize the overall training time, while maintaining accuracy similar to that of the original model.
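The core idea above -- pruning model parameters so that clients communicate and compute over a smaller model -- can be illustrated with a minimal magnitude-based pruning sketch. This is not the authors' PruneFL implementation (which adapts the kept-parameter set during training); the function name and the fixed `keep_ratio` below are illustrative assumptions.

```python
import numpy as np

def prune_by_magnitude(weights, keep_ratio):
    """Zero out all but the largest-magnitude fraction of weights.

    Returns the pruned weights and the boolean mask of kept entries;
    in an FL setting a client would transmit only the kept entries.
    """
    flat = np.abs(weights).ravel()
    k = max(1, int(keep_ratio * flat.size))
    # k-th largest magnitude serves as the pruning threshold
    threshold = np.partition(flat, -k)[-k]
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

# Simulated client update: prune before sending to the server.
rng = np.random.default_rng(0)
update = rng.normal(size=(4, 4))          # 16 parameters
pruned, mask = prune_by_magnitude(update, keep_ratio=0.25)
```

With `keep_ratio=0.25`, only 4 of the 16 entries survive, so the client's upload shrinks to a quarter of the dense update (plus the mask), which is the communication saving pruning buys in FL.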

Federated Learning
