Graph Representation Learning
380 papers with code • 1 benchmark • 6 datasets
The goal of Graph Representation Learning is to construct a set of features (‘embeddings’) representing the structure of the graph and the data on it. We can distinguish node-wise embeddings, which represent each node of the graph; edge-wise embeddings, which represent each edge; and graph-wise embeddings, which represent the graph as a whole.
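As a minimal illustration of these three embedding granularities (a toy sketch, not taken from any of the papers below), one propagation step over a small adjacency matrix can produce node embeddings, which are then combined into edge and graph embeddings. The graph, features, and aggregation scheme here are all hypothetical.

```python
import numpy as np

# Toy undirected graph on 4 nodes (hypothetical example).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

X = np.eye(4)  # one-hot node features for illustration

# Node-wise embeddings: one round of mean-neighbour aggregation,
# a heavily simplified message-passing step.
deg = A.sum(axis=1, keepdims=True)
node_emb = (A @ X) / deg            # shape (4, 4): one vector per node

# Edge-wise embeddings: combine the two endpoint embeddings
# (here by concatenation; other choices are common).
edges = [(0, 1), (1, 2), (2, 3)]
edge_emb = np.stack([np.concatenate([node_emb[u], node_emb[v]])
                     for u, v in edges])  # shape (3, 8): one vector per edge

# Graph-wise embedding: mean-pool node embeddings into a single vector.
graph_emb = node_emb.mean(axis=0)   # shape (4,): one vector for the graph
```

Real GNN libraries replace the mean-neighbour step with learned, multi-layer aggregation, but the three output shapes — per node, per edge, and per graph — are the same.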
Libraries
Use these libraries to find Graph Representation Learning models and implementations.

Most implemented papers
A Survey of Pretraining on Graphs: Taxonomy, Methods, and Applications
Pretrained Language Models (PLMs) such as BERT have revolutionized the landscape of Natural Language Processing (NLP).
Algorithm and System Co-design for Efficient Subgraph-based Graph Representation Learning
Subgraph-based graph representation learning (SGRL) has been recently proposed to deal with some fundamental challenges encountered by canonical graph neural networks (GNNs), and has demonstrated advantages in many important data science applications such as link, relation and motif prediction.
Recipe for a General, Powerful, Scalable Graph Transformer
We propose a recipe on how to build a general, powerful, scalable (GPS) graph Transformer with linear complexity and state-of-the-art results on a diverse set of benchmarks.
A Generalization of ViT/MLP-Mixer to Graphs
The proposed graph generalizations of ViT/MLP-Mixer capture long-range dependencies and mitigate the issue of over-squashing, as demonstrated on the Long Range Graph Benchmark and TreeNeighbourMatch datasets.
Simplifying Subgraph Representation Learning for Scalable Link Prediction
Link prediction on graphs is a fundamental problem.
Harnessing Explanations: LLM-to-LM Interpreter for Enhanced Text-Attributed Graph Representation Learning
With the advent of powerful large language models (LLMs) such as GPT or Llama2, which demonstrate an ability to reason and to utilize general knowledge, there is a growing need for techniques which combine the textual modelling abilities of LLMs with the structural learning capabilities of GNNs.
Transitivity-Preserving Graph Representation Learning for Bridging Local Connectivity and Role-based Similarity
In this paper, we propose Unified Graph Transformer Networks (UGT) that effectively integrate local and global structural information into fixed-length vector representations.
Learning to Make Predictions on Graphs with Autoencoders
We examine two fundamental tasks associated with graph representation learning: link prediction and semi-supervised node classification.
Open Domain Question Answering Using Early Fusion of Knowledge Bases and Text
In this paper we look at a more practical setting, namely QA over the combination of a KB and entity-linked text, which is appropriate when an incomplete KB is available with a large text corpus.
Adaptive Sampling Towards Fast Graph Representation Learning
Graph Convolutional Networks (GCNs) have become a crucial tool for learning representations of graph vertices.
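A single GCN propagation layer, as introduced by Kipf and Welling, can be sketched in a few lines of NumPy; the function name and toy inputs below are illustrative, and the ReLU nonlinearity is one common choice among several.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # degrees of the self-looped graph
    D_inv_sqrt = np.diag(d ** -0.5)         # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy usage: 2 nodes, identity features, a hypothetical weight matrix.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
H = np.eye(2)
W = np.ones((2, 3))
out = gcn_layer(A, H, W)                    # shape (2, 3)
```

Sampling-based methods such as the adaptive sampling of this paper approximate the full `A_hat @ H` product by aggregating over a sampled subset of neighbours, which is what makes training scale to large graphs.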