no code implementations • 3 Oct 2023 • Saurabh Srivastava, Chengyue Huang, Weiguo Fan, Ziyu Yao
Large language models (LLMs) have revolutionized zero-shot task performance, reducing the need for task-specific annotations while enhancing task generalizability.
1 code implementation • 24 Sep 2022 • Ningyuan Huang, Soledad Villar, Carey E. Priebe, Da Zheng, Chengyue Huang, Lin Yang, Vladimir Braverman
Graph Neural Networks (GNNs) are powerful deep learning methods for non-Euclidean data.
no code implementations • CVPR 2022 • Yadong Ding, Yu Wu, Chengyue Huang, Siliang Tang, Yi Yang, Longhui Wei, Yueting Zhuang, Qi Tian
Existing NAS-based meta-learning methods apply a two-stage strategy, i.e., first searching architectures and then re-training meta-weights on the searched architecture.
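For intuition, here is a minimal, hypothetical sketch of such a two-stage strategy (not the authors' implementation): the search space, the `build_model` / `proxy_score` helpers, and the toy data are illustrative assumptions.

```python
import torch
import torch.nn as nn

def build_model(hidden_dim: int) -> nn.Module:
    # Toy "architecture": the search space is just the hidden width.
    return nn.Sequential(nn.Linear(4, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, 1))

def proxy_score(model: nn.Module, x: torch.Tensor, y: torch.Tensor) -> float:
    # Cheap proxy used during the search stage: loss after a single gradient step.
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
    return nn.functional.mse_loss(model(x), y).item()

x, y = torch.randn(32, 4), torch.randn(32, 1)

# Stage 1: search over candidate architectures.
candidates = [8, 16, 32, 64]
best_dim = min(candidates, key=lambda d: proxy_score(build_model(d), x, y))

# Stage 2: re-train the (meta-)weights from scratch on the searched architecture.
model = build_model(best_dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
```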
no code implementations • 1 Jan 2021 • Yadong Ding, Yu Wu, Chengyue Huang, Siliang Tang, Yi Yang, Yueting Zhuang
In this paper, we aim to obtain better meta-learners by co-optimizing the architecture and meta-weights simultaneously.
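A hedged, DARTS-style sketch of what co-optimizing architecture parameters and weights in a single loop can look like; the candidate operations, dimensions, and alternating train/validation updates are illustrative assumptions rather than the paper's actual algorithm.

```python
import torch
import torch.nn as nn

class MixedOp(nn.Module):
    """Softmax-weighted mixture over candidate ops (the relaxed 'architecture')."""
    def __init__(self, dim: int):
        super().__init__()
        self.ops = nn.ModuleList([nn.Linear(dim, dim),
                                  nn.Sequential(nn.Linear(dim, dim), nn.ReLU())])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))  # architecture parameters

    def forward(self, x):
        w = torch.softmax(self.alpha, dim=0)
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

model = nn.Sequential(MixedOp(4), nn.Linear(4, 1))
arch_params = [p for n, p in model.named_parameters() if n.endswith("alpha")]
weight_params = [p for n, p in model.named_parameters() if not n.endswith("alpha")]
arch_opt = torch.optim.Adam(arch_params, lr=3e-3)
weight_opt = torch.optim.SGD(weight_params, lr=1e-2)

x_train, y_train = torch.randn(64, 4), torch.randn(64, 1)
x_val, y_val = torch.randn(64, 4), torch.randn(64, 1)

for _ in range(50):
    # Update weights on the training split ...
    loss_w = nn.functional.mse_loss(model(x_train), y_train)
    weight_opt.zero_grad(); loss_w.backward(); weight_opt.step()
    # ... and architecture parameters on the validation split, within the same loop.
    loss_a = nn.functional.mse_loss(model(x_val), y_val)
    arch_opt.zero_grad(); loss_a.backward(); arch_opt.step()
```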
no code implementations • 1 Jan 2021 • Chengyue Huang, Lingfei Wu, Yadong Ding, Siliang Tang, Fangli Xu, Chang Zong, Chilie Tan, Yueting Zhuang
To this end, we learn a differentiable graph neural network as a surrogate model to rank candidate architectures, which enables us to obtain gradients w.r.t. the input architectures.
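A minimal sketch of this idea, with assumptions throughout (the surrogate layout, the toy DAG encoding, and all names are hypothetical): each candidate architecture is a small graph of operation nodes, a differentiable GNN surrogate predicts its performance, and gradients of the predicted score can then be taken w.r.t. the (relaxed) architecture encoding.

```python
import torch
import torch.nn as nn

class SurrogateGNN(nn.Module):
    def __init__(self, n_ops: int, hidden: int = 16):
        super().__init__()
        self.w1 = nn.Linear(n_ops, hidden)
        self.w2 = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, 1)

    def forward(self, adj: torch.Tensor, ops: torch.Tensor) -> torch.Tensor:
        # Two rounds of neighborhood aggregation, then mean-pool to a scalar score.
        h = torch.relu(adj @ self.w1(ops))
        h = torch.relu(adj @ self.w2(h))
        return self.out(h.mean(dim=0)).squeeze()

n_nodes, n_ops = 5, 4
surrogate = SurrogateGNN(n_ops)
# (Training the surrogate on (architecture, accuracy) pairs is omitted here.)

adj = torch.triu(torch.ones(n_nodes, n_nodes), diagonal=1)  # toy DAG connectivity
ops = torch.randn(n_nodes, n_ops, requires_grad=True)       # relaxed per-node op encoding

score = surrogate(adj, torch.softmax(ops, dim=-1))  # predicted performance of this candidate
grad, = torch.autograd.grad(score, ops)              # gradient w.r.t. the input architecture
```

The gradient can then drive a search procedure (e.g., ascent on the relaxed encoding) instead of relying on purely discrete sampling.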