Quadratic Graph Attention Network (Q-GAT) for Robust Construction of Gene Regulatory Networks

24 Mar 2023 · Hui Zhang, Xuexin An, Qiang He, YuDong Yao, Yudong Zhang, Feng-Lei Fan, Yueyang Teng

Gene regulatory relationships can be abstracted as a gene regulatory network (GRN), which plays a key role in characterizing complex cellular processes and pathways. Recently, graph neural networks (GNNs), a class of deep learning models, have emerged as a useful tool for inferring gene regulatory relationships from gene expression data. However, deep learning models are known to be vulnerable to noise, which greatly hinders their adoption for constructing GRNs, because high noise is often unavoidable when measuring gene expression. Can we design a robust GNN for constructing GRNs? In this paper, we give a positive answer by proposing a Quadratic Graph Attention Network (Q-GAT) with a dual attention mechanism. We study the changes in predictive accuracy of Q-GAT and 9 state-of-the-art baselines under different levels of adversarial perturbation. Experiments on the E. coli and S. cerevisiae datasets suggest that Q-GAT outperforms the state-of-the-art models in robustness. Lastly, we dissect why Q-GAT is robust through signal-to-noise ratio (SNR) and interpretability analyses. The former shows that the nonlinear aggregation of quadratic neurons can amplify useful signals and suppress unwanted noise, thereby facilitating robustness, while the latter reveals that Q-GAT can leverage more features in prediction thanks to the dual attention mechanism, which endows Q-GAT with the ability to withstand adversarial perturbation. Our code is available at https://github.com/Minorway/Q-GAT_for_Robust_Construction_of_GRN for readers' evaluation.
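To make the "nonlinear aggregation of quadratic neurons" concrete, below is a minimal sketch of one common quadratic-neuron formulation from the literature (Fan et al.), in which the neuron multiplies two linear responses of the input and adds a quadratic power term. This is an illustration of the general idea only; the exact parameterization used inside Q-GAT's attention layers may differ, and all weight names here are hypothetical.

```python
import numpy as np

def quadratic_neuron(x, Wr, br, Wg, bg, Wb, cb):
    """Sketch of a quadratic neuron (one common form, Fan et al.):
        f(x) = (Wr @ x + br) * (Wg @ x + bg) + Wb @ (x * x) + cb
    '*' is element-wise. A conventional neuron keeps only the first
    linear response, Wr @ x + br; the multiplicative interaction and
    power term are what give the neuron its nonlinear aggregation.
    Exact Q-GAT parameterization may differ; names are illustrative.
    """
    return (Wr @ x + br) * (Wg @ x + bg) + Wb @ (x * x) + cb

# Toy usage: a 2-dimensional input mapped to a single output.
x  = np.array([1.0, 2.0])
Wr = np.array([[1.0, 1.0]]); br = np.array([0.0])
Wg = np.array([[1.0, 0.0]]); bg = np.array([1.0])
Wb = np.array([[0.0, 0.0]]); cb = np.array([0.0])
out = quadratic_neuron(x, Wr, br, Wg, bg, Wb, cb)
# (1+2) * (1+1) + 0 + 0 = 6
```

The multiplicative term lets the neuron's response depend on interactions between input features, which is the mechanism the SNR analysis credits with amplifying correlated signal and suppressing uncorrelated noise.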

