Defending Graph Neural Networks against Adversarial Attacks

GNNGuard is a model-agnostic approach that can defend any Graph Neural Network against a variety of adversarial attacks.

Deep learning methods for graphs achieve remarkable performance on many tasks. However, despite their proliferation and success, recent findings indicate that even the strongest and most popular Graph Neural Networks (GNNs) are highly vulnerable to adversarial attacks, in which an attacker injects small but carefully designed perturbations into the graph structure to degrade the performance of GNN classifiers.

This vulnerability is a significant obstacle to using GNNs in real-world applications. For example, under adversarial attack, small and unnoticeable perturbations of the graph structure (e.g., adding two edges to the poisoned node) can catastrophically reduce performance (panel A in the figure).

We develop GNNGuard, a general algorithm to defend against a variety of training-time attacks that perturb the discrete graph structure. GNNGuard can be straightforwardly incorporated into any GNN. By integrating GNNGuard, the GNN classifier can make correct predictions even when trained on the attacked graph (panel B in the figure).
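To make the integration concrete, here is a minimal sketch, in the style of PyTorch Geometric, of how a defense step could be inserted into a standard GCN's forward pass. The GuardedGCN class and the guard_edge_weights helper (sketched under "GNNGuard algorithm" below) are illustrative names for this sketch, not the repository's actual API.

import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GuardedGCN(torch.nn.Module):
    # A two-layer GCN whose message passing is filtered by a defense step.
    # Illustrative sketch only; guard_edge_weights is sketched below.
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        # Re-weight edges from the input features before the first layer.
        w = guard_edge_weights(x, edge_index)
        h = F.relu(self.conv1(x, edge_index, edge_weight=w))
        # Re-estimate edge weights from the learned hidden representations.
        w = guard_edge_weights(h, edge_index)
        return self.conv2(h, edge_index, edge_weight=w)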

GNNGuard algorithm

Most damaging attacks add fake edges between nodes that have different features and labels. The key idea of GNNGuard is therefore to detect and quantify the relationship between the graph structure and node features, if one exists, and then exploit that relationship to mitigate the negative effects of the attack. GNNGuard learns to assign higher weights to edges connecting similar nodes while pruning edges between unrelated nodes. Specifically, instead of the neural message passing of a typical GNN (panel A in the figure), GNNGuard (panel B in the figure) controls the message stream, blocking messages from irrelevant neighbors while strengthening messages from highly related ones.
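The core re-weighting step can be sketched with plain cosine similarity between the endpoints of every edge, a fixed pruning threshold, and per-node normalization. This is a simplified illustration under stated assumptions: the function name and threshold value below are hypothetical, and the paper's full method adds refinements (such as smoothing edge weights across layers) that are omitted here.

import torch
import torch.nn.functional as F

def guard_edge_weights(x, edge_index, prune_threshold=0.1):
    # x:          [num_nodes, num_features] node features (or hidden states)
    # edge_index: [2, num_edges] COO edge list, as in PyTorch Geometric
    src, dst = edge_index
    # Cosine similarity between the two endpoints of every edge.
    sim = F.cosine_similarity(x[src], x[dst], dim=-1).clamp(min=0.0)
    # Prune edges between unrelated nodes, which are likely adversarial.
    sim = torch.where(sim < prune_threshold, torch.zeros_like(sim), sim)
    # Normalize per target node so surviving messages form a weighted average.
    denom = torch.zeros(x.size(0), device=x.device).scatter_add_(0, dst, sim)
    return sim / denom[dst].clamp(min=1e-12)

Because the weights are recomputed at every layer from the current representations, an edge that looks plausible in raw feature space but connects unrelated nodes in the learned embedding space can still be down-weighted or pruned.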

Remarkably, GNNGuard can effectively restore state-of-the-art performance of GNNs in the face of various adversarial attacks, including targeted and non-targeted attacks, and can defend against attacks on both homophily and heterophily graphs.

Attractive properties of GNNGuard

  • Defense against a variety of attacks: GNNGuard is a general defense approach that is effective against a variety of training-time attacks, including directly targeted, influence, and non-targeted attacks.
  • Integrates with any GNN: GNNGuard can defend any modern GNN architecture against adversarial attacks.
  • State-of-the-art performance on clean graphs: In real-world settings, we do not know whether a graph has been attacked. GNNGuard restores state-of-the-art performance of a GNN when the graph is attacked while sustaining the original performance on non-attacked graphs.
  • Homophily and heterophily graphs: GNNGuard is the first technique that can defend GNNs against attacks on homophily and heterophily graphs. GNNGuard can be easily generalized to graphs with abundant structural equivalences, where connected nodes have different node features yet similar structural roles.

Publication

GNNGuard: Defending Graph Neural Networks against Adversarial Attacks
Xiang Zhang and Marinka Zitnik
NeurIPS 2020 [arXiv]

@inproceedings{zhang2020gnnguard,
  title     = {GNNGuard: Defending Graph Neural Networks against Adversarial Attacks},
  author    = {Zhang, Xiang and Zitnik, Marinka},
  booktitle = {Proceedings of Neural Information Processing Systems, NeurIPS},
  year      = {2020}
}

Code and datasets

The PyTorch implementation of GNNGuard, together with all datasets, is available in the GitHub repository.

Authors

Xiang Zhang and Marinka Zitnik
