Defending Graph Neural Networks against Adversarial Attacks

GNNGuard is a model-agnostic approach that can defend any Graph Neural Network against a variety of adversarial attacks.

Deep learning methods for graphs achieve remarkable performance on many tasks. However, despite the proliferation of such methods and their success, recent findings indicate that even the strongest and most popular Graph Neural Networks (GNNs) are highly vulnerable to adversarial attacks. In an adversarial attack, an attacker injects small but carefully designed perturbations into the graph structure in order to degrade the performance of GNN classifiers.

This vulnerability is a significant barrier to using GNNs in real-world applications. For example, under an adversarial attack, small and unnoticeable perturbations of the graph structure (e.g., adding two edges to the poisoned node) can catastrophically reduce performance (panel A in the figure).

We develop GNNGuard, a general algorithm that defends against a variety of training-time attacks that perturb the discrete graph structure. GNNGuard can be straightforwardly incorporated into any GNN. With GNNGuard integrated, a GNN classifier makes correct predictions even when trained on an attacked graph (panel B in the figure).

GNNGuard algorithm

Most damaging attacks add fake edges between nodes that have different features and labels. The key idea of GNNGuard is therefore to detect and quantify the relationship between the graph structure and node features, if one exists, and then exploit that relationship to mitigate the negative effects of the attack. GNNGuard learns how to assign higher weights to edges connecting similar nodes while pruning edges between unrelated nodes. Specifically, instead of the neural message passing of a typical GNN (panel A in the figure), GNNGuard (panel B in the figure) controls the message stream, blocking messages from irrelevant neighbors while strengthening messages from highly related ones. A minimal code sketch of this idea follows.
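To make the edge-weighting idea concrete, here is a minimal sketch in PyTorch Geometric. The helper guard_edge_weights, the GuardedGCN model, the cosine-similarity measure, and the pruning threshold of 0.1 are illustrative assumptions for this sketch; they are not the exact implementation released with GNNGuard (see the GitHub repository for that).

import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

def guard_edge_weights(x, edge_index, prune_threshold=0.1):
    # Defense coefficient per edge: cosine similarity between the features of the
    # two endpoint nodes; edges between dissimilar nodes are pruned (weight 0).
    src, dst = edge_index                              # edge_index has shape [2, num_edges]
    sim = F.cosine_similarity(x[src], x[dst], dim=-1)  # one score per edge
    sim = sim.clamp(min=0.0)                           # ignore negative similarity
    return torch.where(sim < prune_threshold, torch.zeros_like(sim), sim)

class GuardedGCN(torch.nn.Module):
    # Two-layer GCN whose messages are scaled by similarity-based edge weights,
    # so that messages along suspicious (dissimilar) edges are blocked.
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        w = guard_edge_weights(x, edge_index)            # weights from input features
        h = F.relu(self.conv1(x, edge_index, edge_weight=w))
        w = guard_edge_weights(h, edge_index)            # re-estimate on hidden features
        return self.conv2(h, edge_index, edge_weight=w)

In this sketch the edge weights are re-estimated at every layer from the current node representations, so pruning decisions adapt as the representations are refined; the full GNNGuard method additionally stabilizes these coefficients across layers, which the sketch omits.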

Remarkably, GNNGuard can effectively restore state-of-the-art performance of GNNs in the face of various adversarial attacks, including targeted and non-targeted attacks, and can defend against attacks on both homophily and heterophily graphs.

Attractive properties of GNNGuard

  • Defense against a variety of attacks: GNNGuard is a general defense approach that is effective against a variety of training-time attacks, including directly targeted, influence, and non-targeted attacks.
  • Integrates with any GNN: GNNGuard can defend any modern GNN architecture against adversarial attacks.
  • State-of-the-art performance on clean graphs: In real-world settings, we do not know whether a graph has been attacked or not. GNNGuard can restore the state-of-the-art performance of a GNN when the graph is attacked and sustains the original performance on non-attacked graphs.
  • Homophily and heterophily graphs: GNNGuard is the first technique that can defend GNNs against attacks on homophily and heterophily graphs. GNNGuard can be easily generalized to graphs with abundant structural equivalences, where connected nodes have different node features yet similar structural roles.

Publication

GNNGuard: Defending Graph Neural Networks against Adversarial Attacks
Xiang Zhang and Marinka Zitnik
NeurIPS 2020 [arXiv] [poster]

@inproceedings{zhang2020gnnguard,
  title     = {GNNGuard: Defending Graph Neural Networks against Adversarial Attacks},
  author    = {Zhang, Xiang and Zitnik, Marinka},
  booktitle = {Proceedings of Neural Information Processing Systems, NeurIPS},
  year      = {2020}
}

Code and datasets

The PyTorch implementation of GNNGuard and all datasets are available in the GitHub repository.

Authors

Xiang Zhang and Marinka Zitnik

Latest News

Apr 2025:   ATOMICA - A Universal Model of Molecular Interactions

Mar 2025:   On Biomedical AI in Harvard Gazette

Read about AI in medicine in the latest Harvard Gazette and New York Times.

Mar 2025:   TxAgent: AI Agent for Therapeutic Reasoning

TxAgent is an AI agent for therapeutic reasoning that consolidates 211 tools from trusted sources, including all US FDA-approved drugs since 1939 and validated clinical insights. [Project website] [TxAgent] [ToolUniverse]

Mar 2025:   Multimodal AI predicts clinical outcomes of drug combinations from preclinical data

Mar 2025:   KGARevion: AI Agent for Knowledge-Intensive Biomedical QA

KGARevion is an AI agent designed for complex biomedical QA that integrates the non-codified knowledge of LLMs with the structured, codified knowledge found in knowledge graphs. [ICLR 2025 publication]

Feb 2025:   MedTok: Unlocking Medical Codes for GenAI

Meet MedTok, a multimodal medical code tokenizer that transforms how AI understands structured medical data. By integrating textual descriptions and relational contexts, MedTok enhances tokenization for transformer-based models—powering everything from EHR foundation models to medical QA. [Project website]

Feb 2025:   What If You Could Rewrite Biology? Meet CLEF

What if we could anticipate molecular and medical changes before they happen? Introducing CLEF, an approach for counterfactual generation in biological and medical sequence models. [Project website]

Feb 2025:   Digital Twins as Global Health and Disease Models

Jan 2025:   LLM and KG+LLM agent papers at ICLR

Jan 2025:   Artificial Intelligence in Medicine 2

Excited to share our new graduate course on Artificial Intelligence in Medicine 2.

Jan 2025:   ProCyon AI Highlighted by Kempner

Thanks to Kempner Institute for highlighting our latest research, ProCyon, our protein-text foundation model for modeling protein functions.

Jan 2025:   AI Design of Proteins for Therapeutics

Dec 2024:   Unified Clinical Vocabulary Embeddings

New paper: A unified resource provides a new representation of clinical knowledge by unifying medical vocabularies. (1) Phenotype risk score analysis across 4.57 million patients, (2) Inter-institutional clinician panels evaluate alignment with clinical knowledge across 90 diseases and 3,000 clinical codes.

Dec 2024:   SPECTRA in Nature Machine Intelligence

Are biomedical AI models truly as smart as they seem? SPECTRA is a framework that evaluates models by considering the full spectrum of cross-split overlap: train-test similarity. SPECTRA reveals gaps in benchmarks for molecular sequence data across 19 models, including LLMs, GNNs, diffusion models, and conv nets.

Nov 2024:   Ayush Noori Selected as a Rhodes Scholar

Congratulations to Ayush Noori on being named a Rhodes Scholar! Such an incredible achievement!

Nov 2024:   PocketGen in Nature Machine Intelligence

Oct 2024:   Activity Cliffs in Molecular Properties

Oct 2024:   Knowledge Graph Agent for Medical Reasoning
