Node Classification
795 papers with code • 121 benchmarks • 69 datasets
Node Classification is a machine learning task in graph-based data analysis, where the goal is to assign labels to nodes in a graph based on the properties of nodes and the relationships between them.
Node Classification models aim to predict missing node properties (known as the target property) from other node properties and the graph structure. Typical models belong to the large family of graph neural networks. Model performance is measured on benchmark datasets such as Cora, Citeseer, and Pubmed, typically using accuracy and F1 score.
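To make the setup concrete, here is a minimal sketch of one graph-convolution propagation step followed by a per-node class prediction, written with NumPy on a hypothetical 4-node toy graph (the graph, weights, and function names are illustrative assumptions, not tied to any benchmark dataset or specific paper):

```python
import numpy as np

def normalize_adjacency(adj):
    """Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    a_hat = adj + np.eye(adj.shape[0])
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(deg ** -0.5)
    return d_inv_sqrt @ a_hat @ d_inv_sqrt

def gcn_layer(a_norm, features, weights):
    """Aggregate neighbor features, then apply a linear map and ReLU."""
    return np.maximum(a_norm @ features @ weights, 0.0)

# Toy path graph: 4 nodes, edges 0-1, 1-2, 2-3 (illustrative only).
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
features = np.eye(4)               # one-hot node features
rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 2))  # maps features to 2 class scores

logits = gcn_layer(normalize_adjacency(adj), features, weights)
pred = logits.argmax(axis=1)       # predicted class label per node
print(pred.shape)  # (4,)
```

In practice one would stack two or three such layers, train the weights with cross-entropy on a labeled subset of nodes, and report accuracy or F1 on the held-out nodes.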
(Image credit: Fast Graph Representation Learning With PyTorch Geometric)
Libraries
Use these libraries to find Node Classification models and implementations.
Subtasks
Latest papers
HyperBERT: Mixing Hypergraph-Aware Layers with Language Models for Node Classification on Text-Attributed Hypergraphs
In this paper, we propose a new architecture, HyperBERT, a mixed text-hypergraph model which simultaneously models hypergraph relational structure while maintaining the high-quality text encoding capabilities of a pre-trained BERT.
Rethinking Node-wise Propagation for Large-scale Graph Learning
However, (i) most scalable GNNs tend to treat all nodes in a graph with the same propagation rules, neglecting their topological uniqueness; and (ii) existing node-wise propagation optimization strategies are insufficient on web-scale graphs with intricate topology, where a full portrayal of nodes' local properties is required.
Classifying Nodes in Graphs without GNNs
Recently, distillation methods have succeeded in eliminating the use of GNNs at test time, but they still require them during training.
Similarity-based Neighbor Selection for Graph LLMs
Our research further underscores the significance of graph structure integration in LLM applications and identifies key factors for their success in node classification.
Masked Graph Autoencoder with Non-discrete Bandwidths
Inspired by these understandings, we explore non-discrete edge masks, which are sampled from a continuous and dispersive probability distribution instead of the discrete Bernoulli distribution.
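The contrast the snippet draws can be sketched in a few lines: a discrete Bernoulli mask zeroes edges out entirely, while a non-discrete mask drawn from a continuous distribution assigns each edge a soft weight in [0, 1]. This is an illustrative sketch only; the paper's actual bandwidth distribution and masking scheme may differ:

```python
import numpy as np

rng = np.random.default_rng(0)
n_edges = 5

# Discrete Bernoulli mask: each edge is kept (1) or dropped (0).
discrete_mask = rng.binomial(1, p=0.5, size=n_edges).astype(float)

# Non-discrete mask: each edge gets a continuous weight in [0, 1]
# (uniform here purely for illustration).
continuous_mask = rng.uniform(0.0, 1.0, size=n_edges)
```

The continuous mask preserves a gradient signal on every edge rather than a hard keep/drop decision, which is the property the quoted sentence is pointing at.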
No Need to Look Back: An Efficient and Scalable Approach for Temporal Network Representation Learning
This strategy is implemented using a GPU-executable size-constrained hash table for each node, recording down-sampled recent interactions, which enables rapid response to queries with minimal inference latency.
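The size-constrained table of down-sampled recent interactions described above can be approximated with a per-node reservoir sample, which keeps a bounded, uniformly down-sampled set of neighbors no matter how many interactions arrive. The class name, capacity, and reservoir rule here are illustrative assumptions, not the paper's exact data structure:

```python
import random

class RecentNeighborTable:
    """Bounded per-node table of down-sampled interactions (illustrative sketch)."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.slots = []
        self.rng = random.Random(seed)
        self.seen = 0  # total interactions observed so far

    def record(self, neighbor):
        """Reservoir-style down-sampling: memory stays O(capacity)."""
        self.seen += 1
        if len(self.slots) < self.capacity:
            self.slots.append(neighbor)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.slots[j] = neighbor

table = RecentNeighborTable(capacity=3)
for nbr in range(10):       # stream of 10 interactions for one node
    table.record(nbr)
print(len(table.slots))     # 3
```

A query for a node then reads its fixed-size table directly, which is what makes the constant-time, low-latency lookups the snippet mentions possible.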
L2G2G: a Scalable Local-to-Global Network Embedding with Graph Autoencoders
For analysing real-world networks, graph representation learning is a popular tool.
IGCN: Integrative Graph Convolutional Networks for Multi-modal Data
Addressing these restrictions, we introduce a novel integrative neural network approach for multi-modal data networks, named Integrative Graph Convolutional Networks (IGCN).
DGNN: Decoupled Graph Neural Networks with Structural Consistency between Attribute and Graph Embedding Representations
To obtain a more comprehensive embedding representation of nodes, a novel GNNs framework, dubbed Decoupled Graph Neural Networks (DGNN), is introduced.
Cross-Space Adaptive Filter: Integrating Graph Topology and Node Attributes for Alleviating the Over-smoothing Problem
To this end, various methods have been proposed to create an adaptive filter by incorporating an extra filter (e.g., a high-pass filter) extracted from the graph topology.