Defending Graph Neural Networks against Adversarial Attacks

GNNGuard is a model-agnostic approach that can defend any Graph Neural Network against a variety of adversarial attacks.

Deep learning methods for graphs achieve remarkable performance on many tasks. However, despite their proliferation and success, recent findings indicate that even the strongest and most popular Graph Neural Networks (GNNs) are highly vulnerable to adversarial attacks, in which an attacker injects small but carefully designed perturbations into the graph structure to degrade the performance of a GNN classifier.

This vulnerability is a significant obstacle to deploying GNNs in real-world applications. For example, under adversarial attack, small and unnoticeable perturbations of the graph structure (e.g., adding two edges to the poisoned node) can catastrophically reduce performance (panel A in the figure).

We develop GNNGuard, a general algorithm to defend against a variety of training-time attacks that perturb the discrete graph structure. GNNGuard can be straightforwardly incorporated into any GNN. By integrating GNNGuard, the GNN classifier can make correct predictions even when trained on the attacked graph (panel B in the figure).

GNNGuard algorithm

Most damaging attacks add fake edges between nodes that have different features and labels. The key idea of GNNGuard is therefore to detect and quantify the relationship between graph structure and node features, if one exists, and then exploit that relationship to mitigate the negative effects of the attack. GNNGuard learns how to assign higher weights to edges connecting similar nodes while pruning edges between unrelated nodes. Specifically, instead of the neural message passing of a typical GNN (panel A in the figure), GNNGuard (panel B in the figure) controls the message stream, blocking messages from irrelevant neighbors while strengthening messages from highly related ones.
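To make this concrete, below is a minimal sketch of the edge-reweighting step, not the authors' exact implementation: it scores each edge by the cosine similarity of its endpoint features, prunes edges whose similarity falls below a hypothetical threshold p0, and normalizes the surviving scores into per-edge message weights (tensor shapes follow PyTorch Geometric conventions).

import torch
import torch.nn.functional as F

def guard_edge_weights(x, edge_index, p0=0.1):
    """Sketch of GNNGuard-style edge pruning and reweighting.

    x          : [num_nodes, num_features] node feature (or embedding) matrix
    edge_index : [2, num_edges] source/target node indices of each edge
    p0         : hypothetical similarity threshold below which an edge is pruned
    """
    src, dst = edge_index
    # Score every edge by the cosine similarity of its two endpoints.
    sim = F.cosine_similarity(x[src], x[dst], dim=1).clamp(min=0.0)
    # Prune likely-fake edges: those connecting dissimilar nodes.
    keep = sim >= p0
    edge_index, sim, dst = edge_index[:, keep], sim[keep], dst[keep]
    # Normalize per target node so each node's incoming message weights sum to 1.
    norm = torch.zeros(x.size(0), device=x.device).scatter_add_(0, dst, sim)
    weights = sim / norm[dst].clamp(min=1e-12)
    return edge_index, weights

The returned weights can be passed to any message-passing layer that accepts per-edge weights; the full method additionally smooths the weights across layers (layer-wise graph memory), which this sketch omits.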

Remarkably, GNNGuard can effectively restore state-of-the-art performance of GNNs in the face of various adversarial attacks, including targeted and non-targeted attacks, and can defend against attacks on both homophily and heterophily graphs.

Attractive properties of GNNGuard

  • Defense against a variety of attacks: GNNGuard is a general defense approach that is effective against a variety of training-time attacks, including directly targeted, influence, and non-targeted attacks.
  • Integrates with any GNN: GNNGuard can defend any modern GNN architecture against adversarial attacks (see the integration sketch after this list).
  • State-of-the-art performance on clean graphs: In real-world settings, we do not know whether a graph has been attacked or not. GNNGuard can restore state-of-the-art performance of a GNN when the graph is attacked as well as sustain the original performance on non-attacked graphs.
  • Homophily and heterophily graphs: GNNGuard is the first technique that can defend GNNs against attacks on homophily and heterophily graphs. GNNGuard can be easily generalized to graphs with abundant structural equivalences, where connected nodes have different node features yet similar structural roles.
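As an illustration of the plug-in design, here is a hypothetical two-layer wrapper that recomputes the defense weights from the current node representations before every layer; it assumes PyTorch Geometric's GCNConv and the guard_edge_weights sketch above.

import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv

class GuardedGCN(nn.Module):
    # Any layer that accepts per-edge weights (e.g., via the edge_weight
    # argument of GCNConv) can be guarded the same way.
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        # Recompute defense weights at every layer so deeper layers
        # use learned embeddings rather than raw input features.
        ei, w = guard_edge_weights(x, edge_index)
        x = torch.relu(self.conv1(x, ei, edge_weight=w))
        ei, w = guard_edge_weights(x, edge_index)
        return self.conv2(x, ei, edge_weight=w)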

Publication

GNNGuard: Defending Graph Neural Networks against Adversarial Attacks
Xiang Zhang and Marinka Zitnik
NeurIPS 2020 [arXiv] [poster]

@inproceedings{zhang2020gnnguard,
  title     = {GNNGuard: Defending Graph Neural Networks against Adversarial Attacks},
  author    = {Zhang, Xiang and Zitnik, Marinka},
  booktitle = {Proceedings of Neural Information Processing Systems, NeurIPS},
  year      = {2020}
}

Code and datasets

A PyTorch implementation of GNNGuard and all datasets are available in the GitHub repository.

Authors

Xiang Zhang and Marinka Zitnik