Subgraph Neural Networks

SubGNN is a general framework for subgraph representation learning. It learns meaningful representations of subgraphs and supports prediction of subgraph properties.

We present SubGNN, a general method for subgraph representation learning. It addresses a fundamental gap in current graph neural network (GNN) methods, which are not yet optimized for subgraph-level predictions.

SubGNN implements three distinct channels within a neural message passing scheme, each capturing a key property of subgraphs: neighborhood, structure, and position. We generated four synthetic datasets, each highlighting one specific subgraph property, and an ablation study over the channels demonstrates that the performance of individual channels aligns closely with their inductive biases.

SubGNN outperforms baseline methods across both synthetic and real-world datasets. Along with SubGNN, we release eight new datasets spanning a diverse collection of tasks to pave the way for future research on subgraph neural networks.

Publication

Subgraph Neural Networks
Emily Alsentzer, Samuel G. Finlayson, Michelle M. Li, Marinka Zitnik
NeurIPS 2020 [arXiv] [poster]

@inproceedings{alsentzer2020subgraph,
  title={Subgraph Neural Networks},
  author={Alsentzer, Emily and Finlayson, Samuel G and Li, Michelle M and Zitnik, Marinka},
  booktitle={Proceedings of Neural Information Processing Systems, NeurIPS},
  year={2020}
}

Motivation

Encoding subgraphs with GNNs is neither well studied nor widely used. Current GNN methods are optimized for node-, edge-, and graph-level predictions, but not for subgraph-level predictions.

Representation learning for subgraphs presents unique challenges.

  • Subgraphs require that we make joint predictions over structures of varying sizes. They do not necessarily cluster together, and can even be composed of multiple disparate components that are far apart from each other in the graph.
  • Subgraphs contain rich higher-order connectivity patterns, both internally and externally with the rest of the graph. The challenge is to inject information about border and external subgraph structure into the GNN’s neural message passing.
  • Subgraphs can be localized or distributed throughout the graph. We must effectively learn about subgraph positions within the underlying graph.

The following figure depicts a simple base graph and five subgraphs, each with a different structure. For instance, subgraphs S2, S3, and S5 consist of a single connected component in the graph, whereas subgraphs S1 and S4 each form two isolated components. Colors indicate subgraph labels. The right panel illustrates an example of internal connectivity versus border structure.

SubGNN framework

SubGNN takes as input a base graph and subgraph information to train embeddings for each subgraph. Each channel’s output embeddings are then concatenated to generate one final subgraph embedding.

The figure above depicts SubGNN’s architecture. The left panel illustrates how property-specific messages are propagated from anchor node patches to subgraph components. The right panel shows the three channels, each designed to capture a distinct subgraph property.
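
To make the channel design concrete, below is a minimal PyTorch sketch of how three channel embeddings can be concatenated into one subgraph embedding. The channel names follow the paper, but SubgraphChannel and SubGNNSketch are placeholder modules written for illustration; SubGNN’s actual channels propagate property-specific messages from sampled anchor patches to subgraph components, which this sketch does not reproduce.

import torch
import torch.nn as nn

class SubgraphChannel(nn.Module):
    # Placeholder for one property-specific channel (neighborhood, structure,
    # or position). Here we simply encode and pool per-component features so
    # the overall shape of the architecture is visible.
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hidden_dim)

    def forward(self, component_features):
        # component_features: (num_components, in_dim)
        h = torch.relu(self.encoder(component_features))
        return h.mean(dim=0)  # one embedding per channel

class SubGNNSketch(nn.Module):
    # Runs the three channels and concatenates their outputs into a single
    # subgraph embedding, followed by a classification head.
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.channels = nn.ModuleDict({
            name: SubgraphChannel(in_dim, hidden_dim)
            for name in ("neighborhood", "structure", "position")
        })
        self.classifier = nn.Linear(3 * hidden_dim, num_classes)

    def forward(self, component_features):
        z = torch.cat([ch(component_features) for ch in self.channels.values()])
        return self.classifier(z)

For example, SubGNNSketch(in_dim=16, hidden_dim=32, num_classes=4)(torch.randn(3, 16)) returns class logits for a subgraph with three components.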

Datasets

We present four new synthetic datasets and four real-world social and biological datasets to stimulate research on subgraph representation learning.

Synthetic datasets

Each synthetic dataset tests a method’s ability to capture a specific subgraph property (see the sketch after this list):

  • DENSITY: Internal structure of subgraph topology.
  • CUT RATIO: Border structure of subgraph topology.
  • CORENESS: Border structure and position of subgraph topology.
  • COMPONENT: Internal and external position of subgraph topology.
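
As a rough illustration of the first two properties, the sketch below computes density and cut ratio for a node set with NetworkX. The function names and the Barabasi-Albert toy graph are ours; the exact definitions and generators used to construct the synthetic datasets may differ.

import networkx as nx

def subgraph_density(G, nodes):
    # Edge density of the induced (internal) subgraph.
    return nx.density(G.subgraph(nodes))

def cut_ratio(G, nodes):
    # Fraction of possible boundary edges (edges between the subgraph and the
    # rest of the graph) that actually exist.
    nodes = set(nodes)
    outside = set(G) - nodes
    boundary = sum(1 for u, v in G.edges(nodes) if (u in nodes) ^ (v in nodes))
    return boundary / (len(nodes) * len(outside))

G = nx.barabasi_albert_graph(100, 3, seed=0)
S = [0, 1, 2, 3, 4]
print(subgraph_density(G, S), cut_ratio(G, S))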

Real-world datasets

  • PPI-BP is a molecular biology dataset, where each subgraph is a group of genes and its label is the genes’ collective cellular function. The base graph is a human protein-protein interaction network.
  • UDN-METAB is a clinical dataset, where each subgraph is a collection of phenotypes associated with a rare monogenic disease and its label is the subcategory of metabolic disorder most consistent with those phenotypes. The base graph is a knowledge graph containing phenotype and genotype information about rare diseases.
  • UDN-NEURO is similar to UDN-METAB but covers one or more neurological disorders (multilabel classification) and shares the same base graph.
  • EM is a social dataset, where each subgraph is the workout history of a user and its label is the user’s gender. The base graph is a social fitness network from Endomondo.
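
The toy example below shows one way such a dataset can be represented in code: a base graph plus a list of (node set, label) pairs. This layout and the example labels are hypothetical and are not the on-disk format used in the SubGNN repository.

import networkx as nx

# Base graph (a stand-in for, e.g., the protein-protein interaction network).
base_graph = nx.karate_club_graph()

# Each subgraph is a set of node IDs in the base graph; its components may be
# disconnected and far apart. Labels are task-specific (e.g., cellular function).
subgraphs = [
    ({0, 1, 2, 3}, "label_a"),
    ({5, 6, 16, 25, 26}, "label_b"),
]

for nodes, label in subgraphs:
    n_components = nx.number_connected_components(base_graph.subgraph(nodes))
    print(label, len(nodes), n_components)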

Code

Source code is available in the GitHub repository.

Authors

Emily Alsentzer, Samuel G. Finlayson, Michelle M. Li, Marinka Zitnik