no code implementations • 15 Apr 2024 • Guoming Li, Jian Yang, Shangsong Liang, Dongsheng Luo
Spectral Graph Neural Networks (GNNs) have attracted great attention due to their capacity to capture patterns in the frequency domain with essential graph filters.
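The core operation behind such filters can be sketched in a few lines of NumPy: project a node signal onto the eigenbasis of the normalized graph Laplacian (the graph Fourier transform), rescale each component by a frequency response, and project back. The function names and the low-pass exponential response below are illustrative choices, not taken from the paper.

```python
import numpy as np

def spectral_filter(adj, signal, response):
    """Filter a graph signal in the spectral domain.

    adj: (n, n) symmetric adjacency matrix (no isolated nodes assumed)
    signal: (n,) node signal
    response: maps Laplacian eigenvalues to filter gains
    """
    deg = adj.sum(axis=1)
    d_inv_sqrt = deg ** -0.5
    # symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    eigvals, eigvecs = np.linalg.eigh(lap)        # graph Fourier basis
    coeffs = eigvecs.T @ signal                   # graph Fourier transform
    return eigvecs @ (response(eigvals) * coeffs) # rescale and invert

# low-pass filtering of a high-frequency signal on a 4-node path graph
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = np.array([1.0, -1.0, 1.0, -1.0])   # alternating (high-frequency) signal
smoothed = spectral_filter(adj, x, lambda lam: np.exp(-lam))
```

With the identity response the signal passes through unchanged; the exponential response attenuates high-eigenvalue (high-frequency) components, shrinking the alternating signal.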
2 code implementations • Proceedings of the AAAI Conference on Artificial Intelligence 2021 • Jinpeng Wang, Bin Chen, Qiang Zhang, Zaiqiao Meng, Shangsong Liang, Shu-Tao Xia
Deep quantization methods have shown high efficiency on large-scale image retrieval.
no code implementations • 6 Apr 2024 • Guoming Li, Jian Yang, Shangsong Liang, Dongsheng Luo
Spectral Graph Neural Networks (GNNs) have achieved tremendous success in graph learning.
1 code implementation • 7 Mar 2024 • Jiyong Li, Dilshod Azizov, Yang Li, Shangsong Liang
Motivated by the high-quality representations produced by contrastive learning, rehearsal-based contrastive continual learning has recently been proposed to continually learn transferable representation embeddings while avoiding the catastrophic forgetting of traditional continual settings.
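The "rehearsal" component refers to replaying a small memory of past examples alongside new data. A common way to maintain such a memory under a fixed budget is reservoir sampling, sketched below; this is a generic illustration of the mechanism, not the buffer policy used in the paper.

```python
import random

class RehearsalBuffer:
    """Fixed-capacity memory filled by reservoir sampling, so every
    example seen so far has equal probability of being retained."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            # keep the new example with probability capacity / seen
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, k):
        """Draw a rehearsal mini-batch from memory."""
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))
```

During training, each incoming batch is mixed with `sample(k)` from the buffer so the model keeps seeing old tasks as it learns new ones.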
1 code implementation • 18 Feb 2024 • Junjian Lu, Siwei Liu, Dmitrii Kobylianski, Etienne Dreyer, Eilam Gross, Shangsong Liang
In high-energy physics, particles produced in collision events decay in the form of a hierarchical tree structure, where only the final decay products can be observed using detectors.
1 code implementation • 25 Oct 2023 • Guanzheng Chen, Xin Li, Zaiqiao Meng, Shangsong Liang, Lidong Bing
We generalise the PE scaling approaches to model the continuous dynamics by ordinary differential equations over the length scaling factor, thereby overcoming the constraints of current PE scaling methods designed for specific lengths.
1 code implementation • 1 Feb 2023 • Muhammad Arslan Manzoor, Sarah Albarri, Ziting Xian, Zaiqiao Meng, Preslav Nakov, Shangsong Liang
This survey presents a comprehensive review of the literature on the evolution and enhancement of deep learning multimodal architectures that handle textual, visual, and audio features for diverse cross-modal and modern multimodal tasks.
no code implementations • 7 Nov 2022 • Jiahang Cao, Jinyuan Fang, Zaiqiao Meng, Shangsong Liang
Particularly, we build a fine-grained classification to categorise the models based on three mathematical perspectives of the representation spaces: (1) Algebraic perspective, (2) Geometric perspective, and (3) Analytical perspective.
1 code implementation • 16 Feb 2022 • Guanzheng Chen, Fangyu Liu, Zaiqiao Meng, Shangsong Liang
Parameter-Efficient Tuning (PETuning) methods have been deemed by many as the new paradigm for using pretrained language models (PLMs).
no code implementations • NeurIPS 2021 • Jinyuan Fang, Qiang Zhang, Zaiqiao Meng, Shangsong Liang
Gaussian Processes (GPs) define distributions over functions and their generalization capabilities depend heavily on the choice of kernels.
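The dependence on the kernel is easy to see in plain GP regression: the kernel alone determines how observations generalize to new inputs. Below is a minimal NumPy sketch with an RBF (squared-exponential) kernel; the function names and hyperparameters are illustrative.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    """Squared-exponential kernel k(x, x') = exp(-(x - x')^2 / (2 l^2))."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_test, length_scale=1.0, noise=1e-6):
    """Posterior mean and pointwise variance of GP regression."""
    k_tt = rbf_kernel(x_train, x_train, length_scale) \
        + noise * np.eye(len(x_train))
    k_ts = rbf_kernel(x_train, x_test, length_scale)
    k_ss = rbf_kernel(x_test, x_test, length_scale)
    solve = np.linalg.solve(k_tt, k_ts)          # K_tt^{-1} K_ts
    mean = solve.T @ y_train
    var = np.diag(k_ss - k_ts.T @ solve)
    return mean, var

x = np.array([-1.0, 0.0, 1.0])
y = np.sin(x)
mean, var = gp_posterior(x, y, np.array([0.0, 2.0]))
```

Near the training points the posterior mean interpolates the data and the variance collapses; away from them (here at 2.0) the variance grows back toward the prior, and how fast it does so is entirely a property of the chosen kernel.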
no code implementations • NeurIPS 2021 • Qiang Zhang, Jinyuan Fang, Zaiqiao Meng, Shangsong Liang, Emine Yilmaz
Conventional meta-learning considers a set of tasks from a stationary distribution.
no code implementations • 19 May 2020 • Lu Yu, Shichao Pei, Chuxu Zhang, Shangsong Liang, Xiao Bai, Nitesh Chawla, Xiangliang Zhang
Pairwise ranking models have been widely used to address recommendation problems.
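The canonical example of such a model is Bayesian Personalized Ranking (BPR), which optimizes the probability that an observed (positive) item scores higher than an unobserved (negative) one for the same user. The abstract does not name a specific model, so the dot-product scoring below is a generic sketch.

```python
import numpy as np

def bpr_loss(user_vec, pos_item_vec, neg_item_vec):
    """BPR loss for one (user, positive, negative) triple:
    -log sigmoid(score(u, i+) - score(u, i-)), score = dot product."""
    diff = user_vec @ pos_item_vec - user_vec @ neg_item_vec
    return -np.log(1.0 / (1.0 + np.exp(-diff)))

u = np.array([1.0, 0.0])        # user embedding
good = np.array([2.0, 0.0])     # item the user interacted with
bad = np.array([-1.0, 0.0])     # sampled unobserved item
```

`bpr_loss(u, good, bad)` is small when the model already ranks the observed item above the negative one, and large when the ordering is reversed, which is exactly the pairwise preference the loss gradient pushes toward.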
1 code implementation • NeurIPS 2019 • Zaiqiao Meng, Shangsong Liang, Jinyuan Fang, Teng Xiao
Deep generative models (DGMs) have achieved remarkable advances.
no code implementations • 30 Dec 2018 • Qiang Zhang, Shangsong Liang, Emine Yilmaz
This paper proposes a variational self-attention model (VSAM) that employs variational inference to derive self-attention.
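The paper's exact variational derivation is not reproduced in this snippet. As an illustration of the general idea of treating attention as a latent variable, the sketch below perturbs deterministic scaled dot-product attention scores with reparameterized Gaussian noise; the function names and the placement of the noise are assumptions, not VSAM's actual model.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def stochastic_attention(q, k, v, log_var, rng):
    """Scaled dot-product attention with a reparameterized Gaussian
    perturbation on the scores (illustrative stand-in for a latent
    attention variable; not the paper's exact formulation)."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)              # deterministic scores
    eps = rng.standard_normal(scores.shape)    # reparameterization trick
    scores = scores + np.exp(0.5 * log_var) * eps
    return softmax(scores) @ v
```

As the learned `log_var` goes to minus infinity the noise vanishes and the layer reduces to ordinary deterministic self-attention, which makes the deterministic model a special case of the stochastic one.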
no code implementations • 12 Oct 2018 • Teng Xiao, Shangsong Liang, Hong Shen, Zaiqiao Meng
Specifically, we consider both the generative processes of users and items, and the prior of latent factors of users and items to be side-information-specific, which enables our model to alleviate matrix sparsity and learn better latent representations of users and items.
2 code implementations • 31 Aug 2018 • Xisen Jin, Wenqiang Lei, Zhaochun Ren, Hongshen Chen, Shangsong Liang, Yihong Zhao, Dawei Yin
However, the expensive nature of state labeling and the weak interpretability make dialogue state tracking a challenging problem for both task-oriented and non-task-oriented dialogue generation. For task-oriented dialogues, state tracking is usually learned from manually annotated corpora, where the human annotation is expensive; for non-task-oriented dialogues, most existing work neglects explicit state tracking due to the unlimited number of dialogue states.