Defending Graph Neural Networks against Adversarial Attacks

GNNGuard is a model-agnostic approach that can defend any Graph Neural Network against a variety of adversarial attacks.

Deep learning methods for graphs achieve remarkable performance on many tasks. However, despite the proliferation and success of such methods, recent findings indicate that even the strongest and most popular Graph Neural Networks (GNNs) are highly vulnerable to adversarial attacks, in which an attacker injects small but carefully designed perturbations into the graph structure to degrade the performance of GNN classifiers.

This vulnerability is a significant obstacle to using GNNs in real-world applications. For example, under an adversarial attack, small and unnoticeable perturbations of the graph structure (e.g., adding two edges to the poisoned node) can catastrophically reduce performance (panel A in the figure).

We develop GNNGuard, a general algorithm that defends against a variety of training-time attacks that perturb the discrete graph structure. GNNGuard can be straightforwardly incorporated into any GNN. With GNNGuard integrated, a GNN classifier makes correct predictions even when trained on an attacked graph (panel B in the figure).

GNNGuard algorithm

The most damaging attacks add fake edges between nodes that have different features and labels. Because of this, the key idea of GNNGuard is to detect and quantify the relationship between the graph structure and node features, if one exists, and then exploit that relationship to mitigate the negative effects of the attack. GNNGuard learns how to assign higher weights to edges connecting similar nodes while pruning edges between unrelated nodes. Specifically, instead of the neural message passing of a typical GNN (panel A in the figure), GNNGuard (panel B in the figure) controls the message stream, blocking messages from irrelevant neighbors while strengthening messages from highly related ones.
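
The sketch below illustrates this idea in PyTorch. It is a deliberately simplified illustration rather than the implementation from the paper or the repository: similarity is computed on raw input features (GNNGuard operates on layer-wise hidden representations and additionally maintains a layer-wise memory of edge weights), the pruning threshold is an arbitrary example value, and the function names are hypothetical.

import torch
import torch.nn.functional as F

def defended_edge_weights(x, edge_index, prune_threshold=0.1):
    # x: [num_nodes, num_features] node features; edge_index: [2, num_edges] source/target indices.
    src, dst = edge_index
    # Similarity between the two endpoints of every edge.
    sim = F.cosine_similarity(x[src], x[dst], dim=-1)
    # Prune (zero out) edges between dissimilar nodes; otherwise keep the similarity as the edge weight.
    return torch.where(sim >= prune_threshold, sim, torch.zeros_like(sim))

def defended_message_passing(x, edge_index, weight):
    # One round of neighbor averaging in which each message is scaled by its defended edge weight.
    src, dst = edge_index
    out = torch.zeros_like(x)
    norm = torch.zeros(x.size(0), device=x.device)
    out.index_add_(0, dst, weight.unsqueeze(-1) * x[src])
    norm.index_add_(0, dst, weight)
    return out / norm.clamp(min=1e-12).unsqueeze(-1)

# Toy example: the fake edge between nodes 0 and 2 connects dissimilar nodes and is pruned,
# so node 2 no longer receives node 0's message.
x = torch.tensor([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
edge_index = torch.tensor([[0, 1, 0, 2],
                           [1, 0, 2, 0]])
weights = defended_edge_weights(x, edge_index)
h = defended_message_passing(x, edge_index, weights)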

Remarkably, GNNGuard can effectively restore state-of-the-art performance of GNNs in the face of various adversarial attacks, including targeted and non-targeted attacks, and can defend against attacks on both homophily and heterophily graphs.

Attractive properties of GNNGuard

  • Defense against a variety of attacks: GNNGuard is a general defense approach that is effective against a variety of training-time attacks, including directly targeted, influence, and non-targeted attacks.
  • Integrates with any GNN: GNNGuard can defend any modern GNN architecture against adversarial attacks.
  • State-of-the-art performance on clean graphs: In real-world settings, we do not know whether a graph has been attacked. GNNGuard restores state-of-the-art performance of a GNN when the graph is attacked, and sustains the original performance on non-attacked graphs.
  • Homophily and heterophily graphs: GNNGuard is the first technique that can defend GNNs against attacks on both homophily and heterophily graphs. GNNGuard can be easily generalized to graphs with abundant structural equivalences, where connected nodes have different node features yet similar structural roles (see the sketch after this list).
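
To illustrate the heterophily case in the last bullet, similarity can be measured between structural roles rather than raw features. The snippet below is a hypothetical variant of the earlier sketch that compares crude degree-based structural descriptors; the paper's actual construction of structural similarity may differ.

import torch
import torch.nn.functional as F

def structural_descriptors(edge_index, num_nodes):
    # A crude structural-role descriptor per node: [in-degree, log(1 + in-degree)].
    deg = torch.zeros(num_nodes)
    deg.index_add_(0, edge_index[1], torch.ones(edge_index.size(1)))
    return torch.stack([deg, torch.log1p(deg)], dim=-1)

def structural_edge_weights(edge_index, num_nodes, prune_threshold=0.1):
    # Edge weights from cosine similarity of structural descriptors (heterophily setting).
    s = structural_descriptors(edge_index, num_nodes)
    src, dst = edge_index
    sim = F.cosine_similarity(s[src], s[dst], dim=-1)
    return torch.where(sim >= prune_threshold, sim, torch.zeros_like(sim))

# Toy usage on a 4-node graph.
edge_index = torch.tensor([[0, 1, 2, 3], [1, 0, 3, 2]])
w = structural_edge_weights(edge_index, num_nodes=4)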

Publication

GNNGuard: Defending Graph Neural Networks against Adversarial Attacks
Xiang Zhang and Marinka Zitnik
NeurIPS 2020 [arXiv] [poster]

@inproceedings{zhang2020gnnguard,
  title     = {GNNGuard: Defending Graph Neural Networks against Adversarial Attacks},
  author    = {Zhang, Xiang and Zitnik, Marinka},
  booktitle = {Proceedings of Neural Information Processing Systems, NeurIPS},
  year      = {2020}
}

Code and datasets

A PyTorch implementation of GNNGuard, together with all datasets, is available in the GitHub repository.

Authors

Xiang Zhang and Marinka Zitnik