no code implementations • NAACL 2022 • Yang Yan, Junda Ye, Zhongbao Zhang, LiWen Wang
As an essential component of task-oriented dialogue systems, slot filling requires large amounts of labeled training data for a given domain.
no code implementations • 23 Jan 2024 • Li Sun, Zhenhao Huang, Hua Wu, Junda Ye, Hao Peng, Zhengtao Yu, Philip S. Yu
Graph Neural Networks (GNNs) have shown great power for learning and mining on graphs, and Graph Structure Learning (GSL) plays an important role in boosting GNNs by refining the graph they operate on.
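A common GSL pattern is to score candidate edges with a learned similarity over node features and then sparsify the scores into a refined adjacency matrix. The sketch below illustrates that generic pattern in PyTorch; the bilinear scorer, the top-k sparsification, and the class name are illustrative assumptions, not the method of this paper.

```python
import torch
import torch.nn as nn

class MetricGSL(nn.Module):
    """Toy graph structure learner: score node pairs with a learned
    bilinear similarity, then keep the top-k neighbors per node."""
    def __init__(self, in_dim: int, k: int = 10):
        super().__init__()
        self.W = nn.Linear(in_dim, in_dim, bias=False)
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, in_dim) node features
        scores = torch.sigmoid(self.W(x) @ x.t())      # dense edge scores
        topk = scores.topk(self.k, dim=-1)
        adj = torch.zeros_like(scores).scatter_(-1, topk.indices, topk.values)
        return 0.5 * (adj + adj.t())                   # symmetric refined graph
```

A downstream GNN can then be trained on the returned adjacency, typically jointly with the edge scorer.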
no code implementations • 2 Jan 2024 • Li Sun, Junda Ye, Jiawei Zhang, Yong Yang, Mingsheng Liu, Feiyang Wang, Philip S. Yu
To address the aforementioned issues, we propose a novel Contrastive model for Sequential Interaction Network learning on Co-Evolving RiEmannian spaces, CSINCERE.
no code implementations • 6 May 2023 • Junda Ye, Zhongbao Zhang, Li Sun, Yang Yan, Feiyang Wang, Fuxin Ren
To explore these issues for sequential interaction networks, we propose SINCERE, a novel method representing Sequential Interaction Networks on Co-Evolving RiEmannian manifolds.
no code implementations • 5 May 2023 • Li Sun, Feiyang Wang, Junda Ye, Hao Peng, Philip S. Yu
On the other hand, contrastive learning boosts deep graph clustering but usually struggles with either graph augmentation or hard sample mining.
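For reference, the sketch below shows the standard InfoNCE objective that much of graph contrastive learning builds on: two augmented views of the same node form the positive pair, and all other nodes in the batch serve as negatives. It is a generic baseline, not this paper's loss.

```python
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    # z1, z2: (num_nodes, dim) embeddings of two augmented graph views
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau                   # pairwise cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)       # positives on the diagonal
```

The two difficulties named above live in exactly these two places: how the views z1 and z2 are generated (augmentation) and how the off-diagonal negatives are selected or weighted (hard sample mining).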
no code implementations • 30 Nov 2022 • Li Sun, Junda Ye, Hao Peng, Feiyang Wang, Philip S. Yu
On the one hand, existing methods work in the zero-curvature Euclidean space and largely ignore the fact that curvature varies over the incoming graph sequence.
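To make the contrast concrete, the geodesic distance takes a different form in each of the three constant-curvature geometries (textbook formulas, independent of this paper):

```latex
\begin{align*}
d_{\mathbb{E}}(\mathbf{x},\mathbf{y}) &= \lVert \mathbf{x}-\mathbf{y}\rVert_2
  && \kappa = 0 \text{ (Euclidean)}\\
d_{\mathbb{S}}(\mathbf{x},\mathbf{y}) &= \arccos\langle \mathbf{x},\mathbf{y}\rangle
  && \kappa = +1 \text{ (unit hypersphere)}\\
d_{\mathbb{H}}(\mathbf{x},\mathbf{y}) &= \operatorname{arcosh}\!\bigl(-\langle \mathbf{x},\mathbf{y}\rangle_{\mathcal{L}}\bigr)
  && \kappa = -1 \text{ (hyperboloid, Lorentzian inner product } \langle\cdot,\cdot\rangle_{\mathcal{L}})
\end{align*}
```

A model hard-wired to the first formula cannot adapt when the evolving graph is better described by the other two.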
no code implementations • 30 Aug 2022 • Li Sun, Junda Ye, Hao Peng, Philip S. Yu
To bridge this gap, we make the first attempt to study self-supervised temporal graph representation learning in general Riemannian space, allowing the time-varying curvature to shift among hyperspherical, Euclidean, and hyperbolic spaces.
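One standard way to support a curvature that shifts across all three regimes is the kappa-stereographic model (Bachmann et al., 2020), in which a single distance formula interpolates among the hyperspherical (kappa > 0), Euclidean (kappa = 0), and hyperbolic (kappa < 0) cases. The sketch below is that generic construction, assumed here for illustration; the paper's actual parametrization may differ.

```python
import torch

def artan_k(u: torch.Tensor, k: float) -> torch.Tensor:
    # inverse of tan_kappa: atan for positive curvature, atanh for negative
    return torch.atan(u) if k > 0 else torch.atanh(u)

def mobius_add(x: torch.Tensor, y: torch.Tensor, k: float) -> torch.Tensor:
    # Moebius addition in the kappa-stereographic model
    xy = (x * y).sum(-1, keepdim=True)
    x2 = (x * x).sum(-1, keepdim=True)
    y2 = (y * y).sum(-1, keepdim=True)
    num = (1 - 2 * k * xy - k * y2) * x + (1 + k * x2) * y
    den = 1 - 2 * k * xy + k ** 2 * x2 * y2
    return num / den      # den > 0 for points in the valid domain

def dist_k(x: torch.Tensor, y: torch.Tensor, k: float) -> torch.Tensor:
    """Geodesic distance for curvature k; recovers 2 * ||x - y|| as k -> 0."""
    if k == 0:
        return 2.0 * (y - x).norm(dim=-1)
    r = abs(k) ** 0.5
    return (2.0 / r) * artan_k(r * mobius_add(-x, y, k).norm(dim=-1), k)
```

With k = -1.0 this yields the Poincare-ball distance and with k = 1.0 the projected-sphere distance, so a time-varying k can move smoothly between geometries.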
no code implementations • 10 Dec 2021 • Li Sun, Zhongbao Zhang, Junda Ye, Hao Peng, Jiawei Zhang, Sen Su, Philip S. Yu
Instead of working in a single constant-curvature space, we construct a mixed-curvature space as the Cartesian product of multiple Riemannian component spaces and design hierarchical attention mechanisms to learn and fuse representations across these component spaces.
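For intuition, distance in such a product manifold decomposes over the components, d^2 = sum_i d_i^2, and the fusion step can be any learned weighting over the per-space representations. The sketch below shows this standard decomposition together with a simple single-level attention fusion; the two-level "hierarchical" design and all module names here are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

def product_dist(xs, ys, component_dists):
    # distance in a Cartesian product of Riemannian spaces:
    # d((x_1..x_m), (y_1..y_m)) = sqrt(sum_i d_i(x_i, y_i)^2)
    per = torch.stack([d(x, y) for d, x, y in zip(component_dists, xs, ys)], dim=-1)
    return per.norm(dim=-1)

class AttentionFusion(nn.Module):
    """Fuse per-component-space embeddings with learned attention weights."""
    def __init__(self, dims, hidden: int):
        super().__init__()
        self.proj = nn.ModuleList([nn.Linear(d, hidden) for d in dims])
        self.score = nn.Linear(hidden, 1, bias=False)

    def forward(self, components):
        # components: list of (batch, dim_i) tensors, one per component space
        h = torch.stack([p(c) for p, c in zip(self.proj, components)], dim=1)
        w = torch.softmax(self.score(torch.tanh(h)), dim=1)  # weight per space
        return (w * h).sum(dim=1)                            # fused embedding
```

Here `product_dist` can take any per-component geodesic distances (Euclidean, spherical, hyperbolic), and the attention weights decide how much each component geometry contributes to the fused representation.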