Subgraph Neural Networks

SubGNN is a general framework for subgraph representation learning. It learns meaningful representations for subgraphs and supports prediction of any subgraph property.

We present SubGNN, a general method for subgraph representation learning. It addresses a fundamental gap in current graph neural network (GNN) methods, which are not yet optimized for subgraph-level predictions.

Our method implements three distinct channels in a neural message-passing scheme, each capturing a key property of subgraphs: neighborhood, structure, and position. We generated four synthetic datasets, each highlighting a specific subgraph property. An ablation study over the channels demonstrates that the performance of individual channels aligns closely with their inductive biases.
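For illustration, the channel ablation can be pictured as toggling each channel on or off before training. This is a minimal sketch only; the flag names below are hypothetical and do not correspond to the released configuration keys.

```python
# Hypothetical ablation configuration: each flag enables one property-specific
# channel. Turning a flag off corresponds to one arm of the ablation study.
ablation_arms = [
    {"neighborhood": True,  "structure": False, "position": False},  # neighborhood only
    {"neighborhood": False, "structure": True,  "position": False},  # structure only
    {"neighborhood": False, "structure": False, "position": True},   # position only
    {"neighborhood": True,  "structure": True,  "position": True},   # full SubGNN
]

for arm in ablation_arms:
    active = [name for name, on in arm.items() if on]
    print("training with channels:", ", ".join(active))
```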

SubGNN outperforms baseline methods across both synthetic and real-world datasets. Along with SubGNN, we release eight new datasets representing a diverse collection of tasks to pave the way for future research on subgraph neural networks.

Publication

Subgraph Neural Networks
Emily Alsentzer, Samuel G. Finlayson, Michelle M. Li, Marinka Zitnik
NeurIPS 2020 [arXiv] [poster]

@inproceedings{alsentzer2020subgraph,
  title={Subgraph Neural Networks},
  author={Alsentzer, Emily and Finlayson, Samuel G and Li, Michelle M and Zitnik, Marinka},
  booktitle={Proceedings of Neural Information Processing Systems, NeurIPS},
  year={2020}
}

Motivation

Encoding subgraphs with GNNs is neither well studied nor common practice. Current GNN methods are optimized for node-, edge-, and graph-level predictions, but not for subgraph-level predictions.

Representation learning for subgraphs presents unique challenges.

  • Subgraphs require that we make joint predictions over structures of varying sizes. They do not necessarily cluster together, and can even be composed of multiple disparate components that are far apart from each other in the graph.
  • Subgraphs contain rich higher-order connectivity patterns, both internally and externally with the rest of the graph. The challenge is to inject information about border and external subgraph structure into the GNN’s neural message passing.
  • Subgraphs can be localized or distributed throughout the graph. We must effectively learn about subgraph positions within the underlying graph.

The following figure depicts a simple base graph and five subgraphs, each with a different structure. For instance, subgraphs S2, S3, and S5 each consist of a single connected component in the graph, whereas subgraphs S1 and S4 each form two isolated components. Colors indicate subgraph labels. The right panel illustrates an example of internal connectivity versus border structure.

SubGNN framework

SubGNN takes as input a base graph and subgraph information and trains an embedding for each subgraph. The output embeddings of the three channels are then concatenated into one final subgraph embedding.

The figure above depicts SubGNN’s architecture. The left panel illustrates how property-specific messages are propagated from anchor node patches to subgraph components. The right panel shows the three channels, each designed to capture a distinct subgraph property.
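To make the channel-and-concatenate design concrete, here is a minimal PyTorch sketch. It is an illustration under assumptions, not the released implementation: the module names (`ChannelEncoder`, `SubGNNSketch`), dimensions, and the way anchor-patch messages are aggregated (a simple mean) are all hypothetical.

```python
import torch
import torch.nn as nn

class ChannelEncoder(nn.Module):
    """One property-specific channel (neighborhood, structure, or position).

    Propagates messages from sampled anchor-patch embeddings to a subgraph
    component and returns a component-level embedding.
    """
    def __init__(self, dim):
        super().__init__()
        self.message = nn.Linear(2 * dim, dim)

    def forward(self, component_emb, anchor_embs):
        # component_emb: (dim,); anchor_embs: (num_anchors, dim)
        expanded = component_emb.unsqueeze(0).expand_as(anchor_embs)
        msgs = torch.relu(self.message(torch.cat([anchor_embs, expanded], dim=-1)))
        return msgs.mean(dim=0)  # aggregate anchor-patch messages

class SubGNNSketch(nn.Module):
    """Concatenate the three channel embeddings into one subgraph embedding."""
    def __init__(self, dim, num_classes):
        super().__init__()
        self.channels = nn.ModuleDict({
            name: ChannelEncoder(dim)
            for name in ("neighborhood", "structure", "position")
        })
        self.classifier = nn.Linear(3 * dim, num_classes)

    def forward(self, component_emb, anchors_per_channel):
        # anchors_per_channel: dict mapping channel name -> (num_anchors, dim) tensor
        channel_embs = [
            enc(component_emb, anchors_per_channel[name])
            for name, enc in self.channels.items()
        ]
        subgraph_emb = torch.cat(channel_embs, dim=-1)  # final subgraph embedding
        return self.classifier(subgraph_emb)
```

The released model additionally distinguishes internal and border variants of each channel and handles subgraphs with multiple components; this sketch collapses those details into a single component and one message-passing step.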

Datasets

We present four new synthetic datasets and four novel real-world social and biological datasets to stimulate subgraph representation learning research.

Synthetic datasets

Each synthetic dataset challenges the ability of our methods to capture a specific property (a sketch for computing two of these properties follows the list):

  • DENSITY: Internal structure of subgraph topology.
  • CUT RATIO: Border structure of subgraph topology.
  • CORENESS: Border structure and position of subgraph topology.
  • COMPONENT: Internal and external position of subgraph topology.
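As a concrete reference for two of these properties, the sketch below computes subgraph density and cut ratio with NetworkX. The formulas (internal edge density, and border edges divided by the number of possible border edges) are the standard definitions and are assumed here; this is not the dataset-generation code.

```python
import networkx as nx

def density(G: nx.Graph, S: set) -> float:
    """Internal edge density of S: edges inside S over possible internal edges."""
    k = len(S)
    if k < 2:
        return 0.0
    internal_edges = G.subgraph(S).number_of_edges()
    return internal_edges / (k * (k - 1) / 2)

def cut_ratio(G: nx.Graph, S: set) -> float:
    """Border edges leaving S divided by the number of possible border edges."""
    n, k = G.number_of_nodes(), len(S)
    if k == 0 or k == n:
        return 0.0
    border_edges = sum(1 for u, v in G.edges() if (u in S) != (v in S))
    return border_edges / (k * (n - k))

# Usage on a small example graph
G = nx.karate_club_graph()
S = {0, 1, 2, 3}
print(density(G, S), cut_ratio(G, S))
```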

Real-world datasets

  • PPI-BP is a molecular biology dataset, where each subgraph is a group of genes and its label is the genes’ collective cellular function. The base graph is a human protein-protein interaction network.
  • UDN-METAB is a clinical dataset, where each subgraph is a collection of phenotypes associated with a rare monogenic disease and its label is the subcategory of metabolic disorder most consistent with those phenotypes. The base graph is a knowledge graph containing phenotype and genotype information about rare diseases.
  • UDN-NEURO is similar to UDN-METAB but for one or more neurological disorders (multilabel classification), and shares the same base graph.
  • EM is a social dataset, where each subgraph is the workout history of a user and its label is the user’s gender. The base graph is a social fitness network from Endomondo.

Code

Source code is available in the GitHub repository.

Authors

Emily Alsentzer, Samuel G. Finlayson, Michelle M. Li, Marinka Zitnik
