Search Results for author: Morteza Ramezani

Found 6 papers, 2 papers with code

Learn Locally, Correct Globally: A Distributed Algorithm for Training Graph Neural Networks

no code implementations ICLR 2022 Morteza Ramezani, Weilin Cong, Mehrdad Mahdavi, Mahmut T. Kandemir, Anand Sivasubramaniam

To address this performance degradation, we propose applying Global Server Corrections on the server to refine the locally learned models.
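This entry lists no code, so the following is only a minimal sketch of the "learn locally, correct globally" idea: each worker trains a copy of a one-layer GCN on its own graph partition with cross-partition edges ignored, and a server periodically averages the local models and takes one full-graph gradient step as a correction. The toy graph, partitioning, learning rates, and correction schedule are illustrative assumptions, not the authors' algorithm or implementation.

```python
# Hedged sketch of local training with periodic global server corrections.
# All shapes, hyperparameters, and the toy data are illustrative assumptions.
import torch

torch.manual_seed(0)
n, d, c, workers = 40, 8, 3, 2
A = (torch.rand(n, n) < 0.1).float()
A = ((A + A.t()) > 0).float() + torch.eye(n)       # symmetric adjacency + self-loops
A_hat = A / A.sum(dim=1, keepdim=True)             # row-normalized propagation matrix
X = torch.randn(n, d)
y = torch.randint(0, c, (n,))

def gcn_loss(W, A_prop, X_feat, labels):
    logits = A_prop @ X_feat @ W                   # one propagation step + linear layer
    return torch.nn.functional.cross_entropy(logits, labels)

parts = torch.chunk(torch.arange(n), workers)      # disjoint node partitions, one per worker
W_global = torch.zeros(d, c)

for round_ in range(20):
    local_Ws = []
    for idx in parts:                              # local steps on each partition only
        W = W_global.clone().requires_grad_(True)
        for _ in range(5):
            loss = gcn_loss(W, A_hat[idx][:, idx], X[idx], y[idx])
            g, = torch.autograd.grad(loss, W)
            W = (W - 0.5 * g).detach().requires_grad_(True)
        local_Ws.append(W.detach())
    W_global = torch.stack(local_Ws).mean(dim=0)   # server: average the local models
    W_global.requires_grad_(True)                  # server: one full-graph correction step
    g, = torch.autograd.grad(gcn_loss(W_global, A_hat, X, y), W_global)
    W_global = (W_global - 0.5 * g).detach()

print("final full-graph loss:", gcn_loss(W_global, A_hat, X, y).item())
```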

On Provable Benefits of Depth in Training Graph Convolutional Networks

1 code implementation NeurIPS 2021 Weilin Cong, Morteza Ramezani, Mehrdad Mahdavi

Graph Convolutional Networks (GCNs) are known to suffer from performance degradation as the number of layers increases, which is usually attributed to over-smoothing.
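As a small, self-contained illustration of the over-smoothing phenomenon referenced above (not the paper's analysis), the sketch below repeatedly applies a row-normalized propagation matrix to random node features and prints how quickly the features collapse toward a common value; the toy graph and the spread metric are assumptions made for the example.

```python
# Minimal numerical illustration of over-smoothing: repeated propagation
# drives node features toward a common value. Toy graph is an assumption.
import torch

torch.manual_seed(0)
n, d = 30, 16
A = (torch.rand(n, n) < 0.2).float()
A = ((A + A.t()) > 0).float() + torch.eye(n)       # undirected graph + self-loops
A_hat = A / A.sum(dim=1, keepdim=True)             # row-normalized propagation
H = torch.randn(n, d)

for layer in range(1, 33):
    H = A_hat @ H                                  # one propagation step (no weights/activation)
    spread = (H - H.mean(dim=0)).norm().item()     # how different node features still are
    if layer in (1, 2, 4, 8, 16, 32):
        print(f"after {layer:2d} propagation steps, feature spread = {spread:.4f}")
```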

On the Importance of Sampling in Training GCNs: Tighter Analysis and Variance Reduction

1 code implementation 3 Mar 2021 Weilin Cong, Morteza Ramezani, Mehrdad Mahdavi

In this paper, we describe and analyze a general doubly variance reduction schema that can accelerate any sampling method under the memory budget.

Node Classification
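The abstract excerpt does not spell out the estimator, so the sketch below only illustrates the general variance-reduction principle behind a doubly variance reduction schema: when a full-batch quantity is estimated from a sampled mini-batch, subtracting stale (historical) per-node values inside the sample and adding back their full average keeps the estimator unbiased while sharply reducing its variance whenever the stale values track the fresh ones. The per-node quantities below are synthetic toy gradients, not the paper's construction.

```python
# Hedged sketch of variance reduction via historical (stale) control variates.
# "fresh" and "stale" are toy per-node quantities, not the paper's estimator.
import torch

torch.manual_seed(0)
n, trials, batch = 10_000, 2_000, 64
fresh = torch.randn(n)                     # current per-node contributions (e.g. gradients)
stale = fresh + 0.1 * torch.randn(n)       # historical values, close to the fresh ones
true_mean = fresh.mean()

plain_err, cv_err = [], []
for _ in range(trials):
    idx = torch.randint(0, n, (batch,))
    plain = fresh[idx].mean()                                # naive sampled estimate
    cv = (fresh[idx] - stale[idx]).mean() + stale.mean()     # control-variate estimate
    plain_err.append((plain - true_mean) ** 2)
    cv_err.append((cv - true_mean) ** 2)

print("naive sampling MSE:        ", torch.stack(plain_err).mean().item())
print("with historical correction:", torch.stack(cv_err).mean().item())
```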

On the Importance of Sampling in Training GCNs: Convergence Analysis and Variance Reduction

no code implementations 1 Jan 2021 Weilin Cong, Morteza Ramezani, Mehrdad Mahdavi

In this paper, we describe and analyze a general doubly variance reduction schema that can accelerate any sampling method under the memory budget.

GCN meets GPU: Decoupling “When to Sample” from “How to Sample”

no code implementations NeurIPS 2020 Morteza Ramezani, Weilin Cong, Mehrdad Mahdavi, Anand Sivasubramaniam, Mahmut Kandemir

Sampling-based methods promise scalability improvements when paired with stochastic gradient descent in training Graph Convolutional Networks (GCNs).
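The entry lists no code, so here is a minimal sketch of the decoupling idea in the title: "how to sample" is kept fixed (plain uniform node sampling), while "when to sample" becomes an independent knob, with each sampled subgraph recycled for several gradient steps so that expensive CPU-side sampling is amortized over cheap GPU-side updates. The recycling period, model, and data below are illustrative assumptions, not the authors' method.

```python
# Hedged sketch of decoupling "when to sample" (recycling period) from
# "how to sample" (uniform node sampling). Everything below is illustrative.
import torch

torch.manual_seed(0)
n, d, c = 200, 16, 4
A = (torch.rand(n, n) < 0.05).float()
A = ((A + A.t()) > 0).float() + torch.eye(n)       # undirected graph + self-loops
A_hat = A / A.sum(dim=1, keepdim=True)             # row-normalized propagation
X, y = torch.randn(n, d), torch.randint(0, c, (n,))

W = torch.zeros(d, c, requires_grad=True)
opt = torch.optim.SGD([W], lr=0.5)
batch, recycle_period = 32, 4                      # "when to sample": redraw every 4 steps

for step in range(100):
    if step % recycle_period == 0:                 # sampling happens only here
        idx = torch.randperm(n)[:batch]            # "how to sample": uniform node sampling
        A_sub, X_sub, y_sub = A_hat[idx][:, idx], X[idx], y[idx]
    logits = A_sub @ X_sub @ W                     # one-layer GCN on the recycled subgraph
    loss = torch.nn.functional.cross_entropy(logits, y_sub)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final sampled-batch loss:", loss.item())
```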
