Search Results for author: Meiyu Liang

Found 16 papers, 2 papers with code

Dynamic Self-adaptive Multiscale Distillation from Pre-trained Multimodal Large Model for Efficient Cross-modal Representation Learning

no code implementations 16 Apr 2024 Zhengyang Liang, Meiyu Liang, Wei Huang, Yawen Li, Zhe Xue

Our methodology streamlines pre-trained multimodal large models using only their output features and original image-level information, requiring minimal computational resources.

Cross-Modal Retrieval, Representation Learning
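As a rough illustration of the feature-level distillation idea described in the abstract above, the sketch below (PyTorch, not the authors' released code) aligns a lightweight student's output features with those of a frozen pre-trained multimodal teacher. The paper's multiscale and self-adaptive weighting components are omitted, and all module names and dimensions are placeholders.

```python
# Minimal feature-distillation sketch: only the frozen teacher's output
# features supervise the student, mirroring the abstract's description.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StudentEncoder(nn.Module):
    # hypothetical lightweight student; sizes are illustrative
    def __init__(self, in_dim=2048, out_dim=512):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(in_dim, 1024), nn.ReLU(),
                                  nn.Linear(1024, out_dim))

    def forward(self, x):
        return self.proj(x)

def distillation_loss(student_feat, teacher_feat):
    # align normalized student features with the frozen teacher's features
    student_feat = F.normalize(student_feat, dim=-1)
    teacher_feat = F.normalize(teacher_feat, dim=-1)
    return 1.0 - F.cosine_similarity(student_feat, teacher_feat, dim=-1).mean()

# toy usage: in practice the teacher features come from the frozen pre-trained model
student = StudentEncoder()
images = torch.randn(8, 2048)            # stand-in for image-level features
with torch.no_grad():
    teacher_feat = torch.randn(8, 512)   # stand-in for teacher output features
loss = distillation_loss(student(images), teacher_feat)
loss.backward()
```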

Improving Expressive Power of Spectral Graph Neural Networks with Eigenvalue Correction

no code implementations 28 Jan 2024 Kangkang Lu, Yanhua Yu, Hao Fei, Xuan Li, Zixuan Yang, Zirui Guo, Meiyu Liang, Mengran Yin, Tat-Seng Chua

Moreover, we theoretically establish that the number of distinguishable eigenvalues plays a pivotal role in determining the expressive power of spectral graph neural networks.

Node Classification
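To make the notion of "distinguishable eigenvalues" concrete, here is a small NumPy sketch that computes the normalized Laplacian spectrum of a toy graph, counts distinct eigenvalues, and applies a naive correction that interpolates the spectrum toward evenly spaced values. This is only an illustration of the quantity discussed in the abstract, not the correction scheme proposed in the paper.

```python
# Count distinguishable eigenvalues of the normalized Laplacian and
# apply a naive (illustrative) eigenvalue correction.
import numpy as np

def normalized_laplacian(adj):
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    return np.eye(adj.shape[0]) - (d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :])

adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)   # toy undirected graph

eigvals, eigvecs = np.linalg.eigh(normalized_laplacian(adj))
distinguishable = len(np.unique(np.round(eigvals, 6)))
print("distinguishable eigenvalues:", distinguishable)

# naive correction: blend the spectrum with equally spaced values so that
# (near-)repeated eigenvalues become distinguishable -- illustration only
corrected = 0.5 * eigvals + 0.5 * np.linspace(eigvals.min(), eigvals.max(), len(eigvals))
```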

Topic model based on co-occurrence word networks for unbalanced short text datasets

no code implementations 5 Nov 2023 Chengjie Ma, Junping Du, Meiyu Liang, Zeli Guan

We propose a straightforward solution for detecting scarce topics in unbalanced short-text datasets.

Federated Topic Model and Model Pruning Based on Variational Autoencoder

no code implementations 1 Nov 2023 Chengjie Ma, Yawen Li, Meiyu Liang, Ang Li

The first method applies slow pruning throughout the entire model training process; it offers limited acceleration of training but ensures that the pruned model achieves higher accuracy.

Semantic Representation Learning of Scientific Literature based on Adaptive Feature and Graph Neural Network

no code implementations 1 Nov 2023 Hongrui Gao, Yawen Li, Meiyu Liang, Zeli Guan, Zhe Xue

At the same time, in order to enrich the features of scientific literature, a semantic representation learning method for scientific literature based on adaptive features and a graph neural network is proposed.

Graph Attention, Representation Learning

Semantic Structure Enhanced Contrastive Adversarial Hash Network for Cross-media Representation Learning

2 code implementations ACM Multimedia 2022 Meiyu Liang, Junping Du, Xiaowen Cao, Yang Yu, Kangkang Lu, Zhe Xue, Min Zhang

Secondly, to further improve the learning of implicit cross-media semantic associations, a semantic label association graph is constructed, and a graph convolutional network is used to mine the implicit semantic structures, thereby guiding the learning of discriminative features across modalities.

Representation Learning
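A minimal sketch of the "semantic label association graph + graph convolutional network" component mentioned above, assuming a toy label co-occurrence matrix and illustrative layer sizes; it shows only a single normalized graph-convolution layer producing label embeddings that could guide modality-specific features.

```python
# One graph-convolution layer over a label association (co-occurrence) graph.
import torch
import torch.nn as nn

def normalize_adj(adj):
    # symmetric normalization D^-1/2 (A + I) D^-1/2, as in standard GCNs
    adj = adj + torch.eye(adj.shape[0])
    deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]

class LabelGCN(nn.Module):
    def __init__(self, in_dim=300, out_dim=128):
        super().__init__()
        self.w1 = nn.Linear(in_dim, out_dim)

    def forward(self, label_feat, adj):
        return torch.relu(normalize_adj(adj) @ self.w1(label_feat))

num_labels = 5
adj = (torch.rand(num_labels, num_labels) > 0.5).float()  # stand-in co-occurrence graph
adj = ((adj + adj.t()) > 0).float()
label_feat = torch.randn(num_labels, 300)                  # e.g. label word embeddings
label_emb = LabelGCN()(label_feat, adj)                    # (num_labels, 128)
```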

Cross-modal Search Method of Technology Video based on Adversarial Learning and Feature Fusion

no code implementations 11 Oct 2022 Xiangbin Liu, Junping Du, Meiyu Liang, Ang Li

The proposed method uses an adversarial learning framework, constructing a video multimodal feature fusion network and a feature mapping network as the generator and a modality discrimination network as the discriminator.

Cross-Modal Retrieval, Retrieval +1
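The adversarial setup described above can be sketched roughly as follows (PyTorch). The fusion and mapping networks are collapsed into a single placeholder linear layer, and the losses shown are a generic generator/discriminator pair rather than the paper's exact objectives.

```python
# Generator maps video/text features into a common space; the discriminator
# tries to predict the modality, and the generator tries to fool it.
import torch
import torch.nn as nn
import torch.nn.functional as F

generator = nn.Linear(1024, 256)   # stand-in for the fusion + mapping networks
discriminator = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 2))

video_feat = torch.randn(8, 1024)
text_feat = torch.randn(8, 1024)

common_video = generator(video_feat)
common_text = generator(text_feat)

# discriminator loss: classify the modality of each common-space feature
logits = discriminator(torch.cat([common_video, common_text]))
modality = torch.cat([torch.zeros(8, dtype=torch.long), torch.ones(8, dtype=torch.long)])
d_loss = F.cross_entropy(logits, modality)

# generator loss: make modalities indistinguishable by flipping the labels
g_loss = F.cross_entropy(discriminator(torch.cat([common_video, common_text])),
                         1 - modality)
```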

Robust Diversified Graph Contrastive Network for Incomplete Multi-view Clustering

1 code implementation ACM International Conference on Multimedia 2022 Zhe Xue, Junping Du, Hai Zhu, Zhongchao Guan, Yunfei Long, Yu Zang, Meiyu Liang

To address these issues, we propose a Robust Diversified Graph Contrastive Network (RDGC) for incomplete multi-view clustering, which integrates multi-view representation learning and diversified graph contrastive regularization into a unified framework.

Clustering, Contrastive Learning +2
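As a hedged illustration of the contrastive component only, the snippet below computes a standard InfoNCE loss between two views of the same samples; RDGC's diversified graph regularization and its handling of missing views are not reproduced here.

```python
# InfoNCE: same-sample representations from two views are positives,
# all other samples in the batch are negatives.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.2):
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature        # (N, N) similarity matrix
    targets = torch.arange(z1.shape[0])       # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

view1 = torch.randn(16, 128)   # embeddings of samples under view 1
view2 = torch.randn(16, 128)   # embeddings of the same samples under view 2
loss = info_nce(view1, view2)
```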

Unsupervised Semantic Representation Learning of Scientific Literature Based on Graph Attention Mechanism and Maximum Mutual Information

no code implementations 7 Oct 2022 Hongrui Gao, Yawen Li, Meiyu Liang, Zeli Guan

Therefore, an unsupervised semantic representation learning method for scientific literature based on a graph attention mechanism and maximum mutual information (GAMMI) is proposed.

Contrastive Learning, Graph Attention +2

Embedding Representation of Academic Heterogeneous Information Networks Based on Federated Learning

no code implementations 7 Oct 2022 Junfu Wang, Yawen Li, Meiyu Liang, Ang Li

To address these challenges, focusing on the data of scientific research teams closely related to science and technology, we propose an academic heterogeneous information network embedding representation learning method based on federated learning (FedAHE), which uses node attention and meta-path attention mechanisms to learn low-dimensional, dense, real-valued vector representations while preserving the rich topological and meta-path-based semantic information of nodes in the network.

Blocking, Federated Learning +1
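For the federated-learning side of FedAHE, a plain FedAvg-style parameter aggregation (sketched below with a toy embedding model standing in for the full network model) conveys the basic mechanism: each participant trains locally and only model parameters are shared and averaged. The paper's attention mechanisms and the specifics of its aggregation are not reflected here.

```python
# FedAvg-style aggregation: weighted average of client model parameters.
import copy
import torch
import torch.nn as nn

def fed_avg(client_models, client_sizes):
    total = sum(client_sizes)
    global_state = copy.deepcopy(client_models[0].state_dict())
    for key in global_state:
        global_state[key] = sum(
            m.state_dict()[key] * (n / total) for m, n in zip(client_models, client_sizes)
        )
    return global_state

# toy clients: simple embedding tables trained locally (local training omitted)
clients = [nn.Embedding(100, 32) for _ in range(3)]
sizes = [1200, 800, 500]                    # e.g. number of local nodes per institution
global_state = fed_avg(clients, sizes)
for c in clients:
    c.load_state_dict(global_state)         # broadcast the aggregated model back
```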

Cross-media Scientific Research Achievements Query based on Ranking Learning

no code implementations 26 Apr 2022 Benzhi Wang, Meiyu Liang, Ang Li

With the advent of the information age, the scale of data on the Internet continues to grow, encompassing text, images, videos, and other information.

Decision Making

Research on Domain Information Mining and Theme Evolution of Scientific Papers

no code implementations 18 Apr 2022 Changwei Zheng, Zhe Xue, Meiyu Liang, Feifei Kou, Zeli Guan

In recent years, with growing societal investment in scientific research, the number of research results in various fields has increased significantly.

Representation Learning

Research topic trend prediction of scientific papers based on spatial enhancement and dynamic graph convolution network

no code implementations 30 Mar 2022 Changwei Zheng, Zhe Xue, Meiyu Liang, Feifei Kou

To simultaneously capture the spatial dependencies and temporal changes between research topics, we propose a deep neural network-based research topic hotness prediction algorithm built on a spatiotemporal convolutional network model.

Cross-Media Scientific Research Achievements Retrieval Based on Deep Language Model

no code implementations 29 Mar 2022 Benzhi Wang, Meiyu Liang, Feifei Kou, Mingying Xu

Science and technology big data contain a large amount of cross-media information; scientific papers include both images and text, and single-modal search methods cannot fully meet the needs of researchers. This paper proposes a cross-media scientific research achievements retrieval method based on a deep language model (CARDL). The method learns the semantic associations between data of different modalities to obtain a unified cross-media semantic representation, applies it to generate text semantic vectors for scientific research achievements, and then realizes cross-media retrieval through semantic similarity matching across modalities. Experimental results show that the proposed CARDL method achieves better cross-modal retrieval performance than existing methods.

Cross-Modal Retrieval, Language Modelling +3
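The final retrieval step of CARDL, semantic similarity matching between modalities, can be sketched as follows; the deep language model that produces the semantic vectors is assumed to exist and is not shown, and all dimensions are illustrative.

```python
# Cross-media retrieval by cosine similarity in a shared semantic space.
import torch
import torch.nn.functional as F

def cross_media_search(query_vec, gallery_vecs, top_k=5):
    # similarity between one query vector and all gallery items of another modality
    sims = F.cosine_similarity(query_vec.unsqueeze(0), gallery_vecs, dim=-1)
    return sims.topk(min(top_k, gallery_vecs.shape[0]))

text_query = torch.randn(512)           # semantic vector of a text query
image_gallery = torch.randn(1000, 512)  # semantic vectors of images
scores, indices = cross_media_search(text_query, image_gallery)
```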
