Defense Against Adversarial Attacks

Published: Jun 17, 2020

GNNGuard defends graph neural networks (GNNs) against a variety of training-time (poisoning) adversarial attacks. Remarkably, GNNGuard can restore the state-of-the-art performance of any GNN, even when the underlying graph has been adversarially perturbed.
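The intuition behind GNNGuard's defense is that poisoning attacks tend to insert edges between dissimilar nodes, so edges can be re-weighted by the feature similarity of their endpoints before message passing. A minimal sketch of that idea (the function name and threshold are illustrative, not GNNGuard's actual API):

```python
import numpy as np

def defense_edge_weights(X, edges, threshold=0.1):
    """Weight each edge by cosine similarity of its endpoint features.

    X       : (num_nodes, num_features) node feature matrix
    edges   : list of (u, v) index pairs
    threshold : edges with similarity below this are pruned (weight 0);
                the value here is a placeholder, not a recommended setting
    """
    weights = []
    for u, v in edges:
        xu, xv = X[u], X[v]
        denom = np.linalg.norm(xu) * np.linalg.norm(xv)
        sim = float(xu @ xv / denom) if denom > 0 else 0.0
        # Dissimilar endpoints suggest a potentially adversarial edge.
        weights.append(sim if sim >= threshold else 0.0)
    return np.array(weights)

X = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])
edges = [(0, 1), (0, 2)]
print(defense_edge_weights(X, edges))  # edge (0, 2) is pruned
```

In the full method, such weights modulate neighbor aggregation at every GNN layer during training, so the model progressively ignores suspicious edges.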

Zitnik Lab  ·  Artificial Intelligence in Medicine and Science  ·  Harvard  ·  Department of Biomedical Informatics