Graph Meta Learning

G-Meta is a broadly applicable, theoretically motivated, and scalable graph neural network framework for few-shot and meta learning.

Graph few-shot and meta learning

Prevailing methods for graphs require abundant label and edge information for learning, yet many real-world graphs have only a few labels available. This presents a new challenge: how can we make accurate predictions in low-data regimes?

When data for a new task are scarce, meta learning can draw on prior experience to form the inductive biases needed for fast adaptation to new tasks. However, a systematic way to formulate meta learning problems on graph-structured data has been missing. In this work, we first formulate three important but distinct graph meta learning problems. The main idea is to adapt to the graph or label set of interest by learning from related graphs or label sets.
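To make the setup concrete, below is a minimal sketch of how one meta learning episode (an N-way, k-shot task) can be sampled; the three problems differ in whether episodes draw from a single graph with disjoint label sets, multiple graphs with a shared label set, or multiple graphs with disjoint label sets. The helper and its data layout are illustrative, not the released implementation.

import random

def sample_episode(graphs, n_way=3, k_shot=3, n_query=5):
    """Sample one N-way, k-shot episode for graph meta learning.
    `graphs` is a list of (graph, labels) pairs, where `labels`
    maps node id -> class; both are hypothetical containers."""
    graph, labels = random.choice(graphs)            # one task graph
    classes = random.sample(sorted(set(labels.values())), n_way)
    support, query = [], []
    for c in classes:
        nodes = [v for v, y in labels.items() if y == c]
        chosen = random.sample(nodes, k_shot + n_query)
        support += [(graph, v, c) for v in chosen[:k_shot]]
        query += [(graph, v, c) for v in chosen[k_shot:]]
    return support, query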

G-Meta algorithm

G-Meta is a meta learning algorithm that excels at all three graph meta learning problems. In contrast to prevailing methods that propagate messages through the entire graph, G-Meta uses local subgraphs to transfer subgraph-specific information and learns transferable knowledge faster via meta gradients.
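Below is a first-order, MAML-style sketch of the meta-gradient loop over local subgraphs: adapt a copy of the GNN on each task's support subgraphs, compute the query loss with the adapted weights, and accumulate the resulting gradients into a meta update. The interfaces here (gnn(subgraph) returning logits of shape (1, C), labels as LongTensors of shape (1,)) are assumptions, not the released G-Meta code.

import copy
import torch
import torch.nn.functional as F

def meta_train_step(gnn, meta_opt, episodes, inner_lr=1e-2, inner_steps=5):
    """One meta-training step; `episodes` is a list of (support, query),
    each a list of (subgraph, label) pairs. Sketch only."""
    meta_opt.zero_grad()
    for support, query in episodes:
        learner = copy.deepcopy(gnn)                   # task-specific copy
        inner_opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)
        for _ in range(inner_steps):                   # adapt on support set
            loss = sum(F.cross_entropy(learner(sg), y) for sg, y in support)
            inner_opt.zero_grad(); loss.backward(); inner_opt.step()
        learner.zero_grad()
        q_loss = sum(F.cross_entropy(learner(sg), y) for sg, y in query)
        q_loss.backward()                              # query-set gradients
        for p, lp in zip(gnn.parameters(), learner.parameters()):
            p.grad = lp.grad.clone() if p.grad is None else p.grad + lp.grad
    meta_opt.step()                                    # meta gradient update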

Attractive properties of G-Meta

(1) Theoretically justified: We show theoretically that the evidence for a prediction can be found in the local subgraph surrounding the target node or edge.
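One way to phrase the intuition behind this result (the notation and constants below are schematic, not the paper's exact theorem statement): define the influence of a node u on the learned representation of a target node v, and observe that for message-passing GNNs this influence decays with graph distance, so an h-hop subgraph retains nearly all of the evidence.

% Influence of node u on the representation h_v of node v:
\[
  I(v, u) \;=\; \Bigl\| \frac{\partial h_v}{\partial x_u} \Bigr\|
\]
% For message-passing GNNs, influence decays with the
% shortest-path distance d(u, v), schematically
\[
  I(v, u) \;\le\; C \, \lambda^{\, d(u, v)}, \qquad 0 < \lambda < 1,
\]
% so a radius-h subgraph around v misses only an exponentially
% small share of the total influence on h_v.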

(2) Inductive: Because the GNN receives a different subgraph as input for each propagation during meta-training, it can generalize to never-before-seen subgraphs, such as those encountered during meta-testing. This is in contrast to previous works, where inductiveness means taking weights trained on a single graph and applying them to a never-before-seen, structurally different graph.

(3) Scalable: In a typical graph meta learning setting, we have many graphs, each with large numbers of nodes and edges, yet each task involves only a few data points scattered across the graphs. Previous works propagate through all of the graphs to generate embeddings for those few nodes, which is wasteful. In contrast, G-Meta simply extracts the small subgraphs around the few data points in each task, and is thus not restricted by the number of nodes, edges, or graphs.
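To illustrate the cost argument, the extraction step touches only small neighborhoods rather than the full graph. A minimal sketch using networkx (feature handling omitted, helper name hypothetical):

import networkx as nx

def local_subgraphs(graph, targets, h=2):
    # h-hop neighborhood around each target node; a full implementation
    # would carry node/edge features along with the structure.
    return {v: nx.ego_graph(graph, v, radius=h) for v in targets}

# Even on a large graph, a task touches only a few small subgraphs.
G = nx.barabasi_albert_graph(10_000, 3)
subs = local_subgraphs(G, targets=[0, 42, 7331], h=2)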

(4) Broadly applicable: G-Meta uses an individual subgraph for each data point, which breaks the dependency across graphs and labels. While previous works excel at only one of the graph meta learning problems, for either node classification or link prediction, G-Meta works for all three graph meta learning problems and for both node classification and link prediction tasks.

G-Meta excels at graph meta learning

Empirically, experiments on seven datasets against nine baseline methods show that G-Meta outperforms existing methods by up to 16.3%. Unlike previous methods, G-Meta successfully learns in challenging few-shot settings that require generalization to completely new graphs and never-before-seen labels. Finally, G-Meta scales to large graphs, which we demonstrate on a new Tree-of-Life dataset comprising 1,840 graphs, a two-order-of-magnitude increase over the number of graphs used in prior work.

Publication

Graph Meta Learning via Local Subgraphs
Kexin Huang, Marinka Zitnik
NeurIPS 2020 [arXiv] [poster]

@inproceedings{huangG-Meta2020,
  title={Graph Meta Learning via Local Subgraphs},
  author={Huang, Kexin and Zitnik, Marinka},
  booktitle={Proceedings of Neural Information Processing Systems, NeurIPS},
  year={2020}
}

Code

Source code is available in the GitHub repository.

Datasets

ML-ready datasets used in the paper are provided in the HU data repository and, alternatively, in the Microsoft repository.

Authors

Kexin Huang, Marinka Zitnik