no code implementations • 26 Mar 2024 • Yongyi Yang, Jiaming Yang, Wei Hu, Michał Dereziński
In this paper, we propose HERTA: a High-Efficiency and Rigorous Training Algorithm for Unfolded GNNs that accelerates the whole training process, achieving a nearly-linear time worst-case training guarantee.
no code implementations • 29 Jun 2023 • Yongyi Yang, Jacob Steinhardt, Wei Hu
This appears to suggest that the last-layer representations are completely determined by the labels and do not depend on the intrinsic structure of the input distribution.
1 code implementation • 22 Jun 2022 • Hongjoon Ahn, Yongyi Yang, Quan Gan, Taesup Moon, David Wipf
Moreover, the complexity of this trade-off is compounded in the heterogeneous graph case due to the disparate heterophily relationships between nodes of different types.
1 code implementation • 27 May 2022 • Yongyi Yang, Zengfeng Huang, David Wipf
Deep learning models such as the Transformer are often constructed by heuristics and experience.
no code implementations • 12 Nov 2021 • Yongyi Yang, Tang Liu, Yangkun Wang, Zengfeng Huang, David Wipf
It has been observed that graph neural networks (GNN) sometimes struggle to maintain a healthy balance between efficiently modeling long-range dependencies across nodes and avoiding unintended consequences such as oversmoothed node representations or sensitivity to spurious edges.
no code implementations • ICLR 2022 • Yangkun Wang, Jiarui Jin, Weinan Zhang, Yongyi Yang, Jiuhai Chen, Quan Gan, Yong Yu, Zheng Zhang, Zengfeng Huang, David Wipf
In this regard, it has recently been proposed to use a randomly-selected portion of the training labels as GNN inputs, concatenated with the original node features for making predictions on the remaining labels.
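The label-input scheme described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy sizes, the 50/50 split of the training labels, and all variable names are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy graph: 6 nodes, 4-dim features, 3 classes.
num_nodes, feat_dim, num_classes = 6, 4, 3
X = rng.normal(size=(num_nodes, feat_dim))
y = np.array([0, 1, 2, 0, 1, 2])
train_idx = np.array([0, 1, 2, 3])  # nodes with known labels

# Randomly split the training labels: one portion is fed to the model
# as input, the remainder serves as prediction targets.
perm = rng.permutation(train_idx)
input_idx, target_idx = perm[:2], perm[2:]

# One-hot encode only the "input" labels; every other row stays zero,
# so the model cannot see the labels it must predict.
label_feat = np.zeros((num_nodes, num_classes))
label_feat[input_idx, y[input_idx]] = 1.0

# Concatenate with the original node features to form the GNN input.
X_aug = np.concatenate([X, label_feat], axis=1)  # shape (6, 7)
```

At training time this split would typically be re-sampled each epoch, so every training label alternates between serving as an input and as a target.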
1 code implementation • 10 Mar 2021 • Yongyi Yang, Tang Liu, Yangkun Wang, Jinjing Zhou, Quan Gan, Zhewei Wei, Zheng Zhang, Zengfeng Huang, David Wipf
Despite the recent success of graph neural networks (GNN), common architectures often exhibit significant limitations, including oversmoothing, difficulty capturing long-range dependencies, and sensitivity to spurious edges, e.g., as can arise from graph heterophily or adversarial attacks.
1 code implementation • 5 Jun 2020 • Zhijing Jin, Yongyi Yang, Xipeng Qiu, Zheng Zhang
In natural language, multiple entities often appear in the same text.