no code implementations • EMNLP (SDP) 2020 • Jiaxin Ju, Ming Liu, Longxiang Gao, Shirui Pan
The Scholarly Document Processing (SDP) workshop aims to encourage more effort on natural language understanding of scientific text.
1 code implementation • ALTA 2021 • Xinzhe Li, Ming Liu, Xingjun Ma, Longxiang Gao
Universal adversarial texts (UATs) are short text units that can substantially alter the predictions of NLP models across many inputs.
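A minimal sketch of the idea behind a universal trigger search, assuming a toy bag-of-words classifier and a greedy single-token search; the attack and victim models studied in the paper may differ:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab_size, emb_dim = 100, 16

class BowClassifier(nn.Module):
    """Toy bag-of-words classifier (stand-in for a victim NLP model)."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.fc = nn.Linear(emb_dim, 2)
    def forward(self, ids):                  # ids: (batch, seq_len)
        return self.fc(self.emb(ids).mean(dim=1))

model = BowClassifier().eval()
inputs = torch.randint(0, vocab_size, (8, 10))  # a batch of token-id sequences
labels = torch.zeros(8, dtype=torch.long)       # their true class

@torch.no_grad()
def batch_loss_with_trigger(tok: int) -> float:
    trigger = torch.full((inputs.size(0), 1), tok, dtype=torch.long)
    logits = model(torch.cat([trigger, inputs], dim=1))
    return nn.functional.cross_entropy(logits, labels).item()

# Greedy search: the token that raises the loss most across the whole
# batch degrades predictions for *any* input, hence "universal".
best_tok = max(range(vocab_size), key=batch_loss_with_trigger)
print("universal trigger token id:", best_tok)
```

The same trigger is prepended to every input, which is what makes it universal rather than input-specific.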
no code implementations • 14 Dec 2023 • Yichen Wan, Youyang Qu, Wei Ni, Yong Xiang, Longxiang Gao, Ekram Hossain
Wireless federated learning (WFL) is a distributed method of training a global deep learning model in which a large number of participants each train a local model on their own training data and then upload the local model updates to a central server.
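For illustration, a minimal synchronous federated-averaging sketch of the local-train-then-upload loop described above; the wireless channel effects the paper analyzes are not modeled here:

```python
import copy
import torch
import torch.nn as nn

def local_train(global_model, data, target, epochs=1, lr=0.1):
    """Each participant trains a copy of the global model on its own data."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.mse_loss(model(data), target).backward()
        opt.step()
    return model.state_dict()

def fed_avg(states):
    """Server step: element-wise average of the uploaded local weights."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in states]).mean(dim=0)
    return avg

global_model = nn.Linear(4, 1)
clients = [(torch.randn(16, 4), torch.randn(16, 1)) for _ in range(5)]
for _ in range(3):  # communication rounds
    states = [local_train(global_model, x, y) for x, y in clients]
    global_model.load_state_dict(fed_avg(states))
```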
no code implementations • 7 Nov 2023 • Stella Ho, Ming Liu, Shang Gao, Longxiang Gao
Recent advances in continual learning are mostly confined to a supervised learning setting, especially in the NLP domain.
no code implementations • 30 May 2023 • Jiwei Guan, Lei Pan, Chen Wang, Shui Yu, Longxiang Gao, Xi Zheng
As deep learning has been applied to increasingly sensitive tasks, uncertainty measurement is crucial in helping improve model robustness, especially in mission-critical scenarios.
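One common way to quantify such predictive uncertainty is Monte Carlo dropout, sketched below as an illustrative choice (the paper does not necessarily use this method):

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(8, 32), nn.ReLU(),
    nn.Dropout(p=0.5),            # kept active at inference time
    nn.Linear(32, 2),
)
x = torch.randn(1, 8)

model.train()  # train mode keeps dropout stochastic
with torch.no_grad():
    probs = torch.stack([model(x).softmax(dim=-1) for _ in range(50)])

mean_pred = probs.mean(dim=0)  # averaged prediction
spread = probs.std(dim=0)      # disagreement across passes = uncertainty
print(mean_pred, spread)
```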
no code implementations • 22 Mar 2023 • Borui Cai, Yong Xiang, Longxiang Gao, Di Wu, He Zhang, Jiong Jin, Tom Luan
Seeking a simple strategy to improve the parameter efficiency of conventional knowledge graph embedding (KGE) models, we take inspiration from the observation that, for compositional structures, deeper neural networks need exponentially fewer parameters to match the expressiveness of wider networks.
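A toy illustration of that parameter-count intuition only (not the paper's KGE model): stacking narrow layers can use far fewer parameters than widening a single hidden layer:

```python
import torch.nn as nn

def n_params(module):
    return sum(p.numel() for p in module.parameters())

# Four narrow 64-unit layers vs. one wide 2048-unit hidden layer.
deep = nn.Sequential(
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 64),
)
wide = nn.Sequential(nn.Linear(64, 2048), nn.ReLU(), nn.Linear(2048, 64))

print(n_params(deep), n_params(wide))  # ~16.6K vs. ~266K parameters
```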
no code implementations • 13 Mar 2023 • Borui Cai, Shuiqiao Yang, Longxiang Gao, Yong Xiang
Variational autoencoders (VAEs) are powerful generative models that learn the latent representations of input data as random variables.
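A minimal VAE sketch showing the latent representation treated as a random variable via the reparameterization trick (the dimensions are illustrative):

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * z_dim)  # predicts mean and log-variance
        self.dec = nn.Linear(z_dim, x_dim)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1).mean()
        return self.dec(z), kl

vae = VAE()
x = torch.rand(32, 784)
recon, kl = vae(x)
loss = nn.functional.mse_loss(recon, x) + kl  # reconstruction + KL terms of the ELBO
```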
1 code implementation • 15 Aug 2022 • Chenhao Xu, Youyang Qu, Tom H. Luan, Peter W. Eklund, Yong Xiang, Longxiang Gao
Asynchronous Federated Learning (AFL) is a scheme that reduces the latency of aggregation to improve efficiency, but the learning performance is unstable due to unreasonably weighted local models.
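A minimal sketch of staleness-aware asynchronous aggregation; the inverse-staleness weighting below is an assumption for illustration, not the weighting scheme the paper proposes:

```python
import copy
import torch

def async_update(global_state, local_state, staleness, alpha=0.5):
    """Mix one arriving local model into the global model; staler
    updates (trained on an older global model) get smaller weights."""
    w = alpha / (1 + staleness)
    mixed = copy.deepcopy(global_state)
    for key in mixed:
        mixed[key] = (1 - w) * global_state[key] + w * local_state[key]
    return mixed

g = {"w": torch.ones(3)}
fresh = async_update(g, {"w": torch.zeros(3)}, staleness=0)  # mixing weight 0.5
stale = async_update(g, {"w": torch.zeros(3)}, staleness=4)  # mixing weight 0.1
print(fresh["w"], stale["w"])
```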
no code implementations • 5 Apr 2022 • Qi Zhong, Leo Yu Zhang, Shengshan Hu, Longxiang Gao, Jun Zhang, Yong Xiang
Fine-tuning attacks are effective in removing the embedded watermarks in deep learning models.
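A sketch of the generic attack: simply continuing to train a watermarked model on clean data, which gradually overwrites the embedded trigger behaviour (the watermarking scheme itself is abstracted away here):

```python
import torch
import torch.nn as nn

watermarked = nn.Linear(10, 2)  # stand-in for a watermarked model
opt = torch.optim.SGD(watermarked.parameters(), lr=0.01)
clean_x = torch.randn(64, 10)
clean_y = torch.randint(0, 2, (64,))

# The attacker fine-tunes on clean data only; no trigger samples appear,
# so the responses the watermark relies on gradually drift away.
for _ in range(100):
    opt.zero_grad()
    nn.functional.cross_entropy(watermarked(clean_x), clean_y).backward()
    opt.step()
```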
no code implementations • 16 Jan 2022 • Borui Cai, Yong Xiang, Longxiang Gao, He Zhang, Yunfeng Li, JianXin Li
Knowledge graph completion (KGC) methods assume that a knowledge graph is static, which may lead to inaccurate predictions because many facts in knowledge graphs change over time.
no code implementations • 28 Dec 2021 • Jinkai Zheng, Tom H. Luan, Longxiang Gao, Yao Zhang, Yuan Wu
Specifically, to reserve the computing resources at each level for the most suitable computing tasks, we integrate into the DT a learning scheme based on the prediction of future computing tasks.
no code implementations • 28 Nov 2021 • Chaochen Shi, Yong Xiang, Jiangshan Yu, Longxiang Gao
To make the model more focused on the key contextual information, we use a multi-head attention network to generate embeddings for code features.
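A minimal sketch of attention-based pooling over code-feature embeddings using PyTorch's built-in multi-head attention; the dimensions and mean-pooling step are illustrative assumptions:

```python
import torch
import torch.nn as nn

d_model, n_heads = 128, 8
attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

code_feats = torch.randn(4, 50, d_model)  # (batch, code tokens, feature dim)
ctx, _ = attn(code_feats, code_feats, code_feats)  # self-attention over tokens
embedding = ctx.mean(dim=1)  # one vector per code snippet
print(embedding.shape)       # torch.Size([4, 128])
```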
no code implementations • 28 Aug 2021 • Stella Ho, Ming Liu, Lan Du, Longxiang Gao, Yong Xiang
Continual learning (CL) refers to a machine learning paradigm that learns continuously without forgetting previously acquired knowledge.
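For a concrete picture, a minimal continual-learning loop with experience replay, one common strategy against forgetting (not necessarily the method this paper proposes):

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.05)
memory = []  # small buffer of examples from past tasks

for task in range(3):  # tasks arrive one after another
    x, y = torch.randn(64, 8), torch.randint(0, 2, (64,))
    for _ in range(20):
        bx, by = x, y
        if memory:  # rehearse a stored past task alongside the current one
            ox, oy = memory[torch.randint(len(memory), (1,)).item()]
            bx, by = torch.cat([x, ox]), torch.cat([y, oy])
        opt.zero_grad()
        nn.functional.cross_entropy(model(bx), by).backward()
        opt.step()
    memory.append((x[:8], y[:8]))  # keep a few samples from this task
```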
no code implementations • 27 Jul 2021 • Ming Liu, Stella Ho, Mengqi Wang, Longxiang Gao, Yuan Jin, He Zhang
Recent Natural Language Processing techniques rely on deep learning and large pre-trained language models.
no code implementations • 31 May 2021 • Chaochen Shi, Yong Xiang, Robin Ram Mohan Doss, Jiangshan Yu, Keshav Sood, Longxiang Gao
Our experimental studies on over 3,300 real-world Ethereum smart contracts show that our model can classify smart contracts without source code and has better performance than baseline models.
no code implementations • 12 Mar 2021 • Chenhao Xu, Jiaqi Ge, Yong Li, Yao Deng, Longxiang Gao, Mengshi Zhang, Yong Xiang, Xi Zheng
Federated learning (FL) enables collaborative training of a shared model on edge devices while maintaining data privacy.
no code implementations • 18 Jan 2021 • Uno Fang, JianXin Li, Xuequan Lu, Mumtaz Ali, Longxiang Gao, Yong Xiang
Current annotation for plant disease images depends on manual sorting and handcrafted features by agricultural experts, which is time-consuming and labour-intensive.
1 code implementation • 17 Jul 2020 • Jinming Zhao, Ming Liu, Longxiang Gao, Yuan Jin, Lan Du, He Zhao, He Zhang, Gholamreza Haffari
Obtaining training data for multi-document summarization (MDS) is time-consuming and resource-intensive, so recent neural models can only be trained for limited domains.
no code implementations • 21 Feb 2020 • Yuan Jin, He Zhao, Ming Liu, Ye Zhu, Lan Du, Longxiang Gao, He Zhang, Yunfeng Li
Based on the evidence lower bounds (ELBOs), we propose a VAE-based Bayesian matrix factorization (MF) framework.
no code implementations • 12 Oct 2019 • Yuan Jin, Ming Liu, Yunfeng Li, Ruohua Xu, Lan Du, Longxiang Gao, Yong Xiang
In evaluations on synthetic data, VAE-BPTF tended to recover the correct number of latent factors and the posterior parameter values.