Search Results for author: Chang Zong

Found 5 papers, 0 papers with code

ProSwitch: Knowledge-Guided Instruction Tuning to Generate Professional and Non-Professional Styled Text

no code implementations • 14 Mar 2024 • Chang Zong, Yuyan Chen, Weiming Lu, Jian Shao, Yueting Zhuang

Large Language Models (LLMs) have demonstrated efficacy in various linguistic applications, including text summarization and controlled text generation.

Language Modelling • Text Generation • +1

Triad: A Framework Leveraging a Multi-Role LLM-based Agent to Solve Knowledge Base Question Answering

no code implementations • 22 Feb 2024 • Chang Zong, Yuchen Yan, Weiming Lu, Jian Shao, Eliot Huang, Heng Chang, Yueting Zhuang

We evaluated the performance of our framework using three benchmark datasets, and the results show that our framework outperforms state-of-the-art systems on the LC-QuAD and YAGO-QA benchmarks, yielding F1 scores of 11.8% and 20.7%, respectively.

Knowledge Base Question Answering

Citation Trajectory Prediction via Publication Influence Representation Using Temporal Knowledge Graph

no code implementations • 2 Oct 2022 • Chang Zong, Yueting Zhuang, Weiming Lu, Jian Shao, Siliang Tang

In this paper, we propose CTPIR, a new citation trajectory prediction framework that is able to represent the influence (the momentum of citation) of either new or existing publications using the history information of all their attributes.

Attribute • Graph Embedding • +1

Robust Meta-learning with Noise via Eigen-Reptile

no code implementations • 1 Jan 2021 • Dong Chen, Lingfei Wu, Siliang Tang, Fangli Xu, Juncheng Li, Chang Zong, Chilie Tan, Yueting Zhuang

In particular, we first cast the meta-overfitting problem (overfitting on sampling and label noise) as a gradient noise problem, since the few available samples cause the meta-learner to overfit on the existing examples (clean or corrupted) of an individual task at every gradient step.
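The snippet above frames meta-overfitting as noise in the per-step gradients. A toy sketch of the filtering idea, following the principal direction of a noisy trajectory of parameter updates so that noise spread across the remaining directions is suppressed, is shown below. The function name, the power-iteration shortcut, and the synthetic data are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def principal_update_direction(step_deltas, n_iter=100):
    """Toy sketch: given noisy per-step parameter updates (one row each),
    return the leading eigenvector of their second-moment matrix via
    power iteration. Moving along this dominant direction keeps the
    shared signal while directions carrying mostly noise are ignored."""
    X = np.asarray(step_deltas, dtype=float)
    C = X.T @ X / len(X)                      # second-moment matrix of updates
    v = np.ones(C.shape[1]) / np.sqrt(C.shape[1])
    for _ in range(n_iter):                   # power iteration
        v = C @ v
        v /= np.linalg.norm(v)
    return v

# Synthetic updates that mostly move along direction [1, 0] ...
rng = np.random.default_rng(0)
deltas = np.outer(rng.normal(1.0, 0.1, 200), [1.0, 0.0])
deltas += rng.normal(0.0, 0.05, deltas.shape)  # ... plus sampling/label noise
v = principal_update_direction(deltas)
print(abs(v[0]))  # close to 1: the noise directions are filtered out
```

The recovered direction stays aligned with the true update axis even though every individual step is corrupted, which is the intuition behind treating meta-overfitting as a gradient noise problem.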

Few-Shot Learning

Differentiable Graph Optimization for Neural Architecture Search

no code implementations • 1 Jan 2021 • Chengyue Huang, Lingfei Wu, Yadong Ding, Siliang Tang, Fangli Xu, Chang Zong, Chilie Tan, Yueting Zhuang

To this end, we learn a differentiable graph neural network as a surrogate model to rank candidate architectures, which enables us to obtain gradients w.r.t. the input architectures.
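The mechanism described above, scoring a candidate architecture with a differentiable surrogate so that the score's gradient with respect to the input encoding can steer the search, can be sketched in miniature. The hand-rolled one-hidden-layer net below is an illustrative stand-in for the paper's graph neural network; all names, shapes, and the fixed random weights are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 3))   # hidden-layer weights of the toy surrogate
b = rng.normal(size=4)
u = rng.normal(size=4)        # output weights

def surrogate_score(x):
    """Differentiable surrogate: scores a relaxed architecture encoding x."""
    return u @ np.tanh(W @ x + b)

def score_grad(x):
    """Analytic gradient of the score w.r.t. the input encoding; this is
    what a gradient-based architecture search ascends along."""
    h = np.tanh(W @ x + b)
    return W.T @ (u * (1.0 - h ** 2))

x = rng.normal(size=3)        # relaxed encoding of a candidate architecture
g = score_grad(x)
x_better = x + 0.01 * g       # one gradient-ascent step on the surrogate score
```

Because the surrogate is differentiable end to end, a small step along `score_grad(x)` raises the predicted score, which is the property that makes the search gradient-guided rather than purely sample-based.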

Bayesian Optimization • Neural Architecture Search
