Unified Framework for Fair and Stable Graph Representation Learning

As the representations output by Graph Neural Networks (GNNs) are increasingly employed in real-world applications, it becomes important to ensure that these representations are fair and stable. We establish a key connection between counterfactual fairness and stability and use it to develop a novel framework, NIFTY (uNIfying Fairness and stabiliTY), which can be used with any GNN to learn fair and stable representations.

We introduce a novel objective function that simultaneously accounts for fairness and stability and develop layer-wise weight normalization based on the Lipschitz constant to enhance neural message passing in GNNs. In doing so, we enforce fairness and stability both in the objective function and in the GNN architecture. Further, we show theoretically that our layer-wise weight normalization promotes counterfactual fairness and stability in the resulting representations.

We introduce three new graph datasets comprising high-stakes decisions in the criminal justice and financial lending domains. Extensive experiments on these datasets demonstrate the efficacy of our framework.

Publication

Towards a Unified Framework for Fair and Stable Graph Representation Learning
Chirag Agarwal, Himabindu Lakkaraju*, Marinka Zitnik*
Conference on Uncertainty in Artificial Intelligence, UAI 2021 [arXiv] [poster]

@inproceedings{agarwal2021towards,
  title={Towards a Unified Framework for Fair and Stable Graph Representation Learning},
  author={Agarwal, Chirag and Lakkaraju, Himabindu and Zitnik, Marinka},
  booktitle={Proceedings of Conference on Uncertainty in Artificial Intelligence, UAI},
  year={2021}
}

Motivation

Over the past decade, there has been a surge of interest in leveraging GNNs for graph representation learning. GNNs have been used to learn powerful representations that enable critical predictions in downstream applications, e.g., predicting protein-protein interactions, repurposing drugs, forecasting crime, and recommending news and products.

As GNNs are increasingly deployed in real-world applications, it becomes important to ensure that these models and the resulting representations are safe and reliable. More specifically, we need to ensure that:

  • these models and the representations they produce do not perpetuate undesirable discriminatory biases (i.e., they are fair), and
  • these models and the representations they produce are robust to attacks resulting from small perturbations to the graph structure and node attributes (i.e., they are stable).

NIFTY framework

We first identify a key connection between counterfactual fairness and stability. While stability accounts for robustness w.r.t. small random perturbations to node attributes and/or edges, counterfactual fairness accounts for robustness w.r.t. modifications of the sensitive attribute.

We leverage this connection to develop NIFTY, which can be used with any existing GNN model to learn fair and stable representations. The framework enforces fairness and stability both in the objective function and in the GNN architecture.

More specifically, we introduce a novel objective function that simultaneously optimizes for counterfactual fairness and stability by maximizing the similarity between representations of the original nodes in the graph and their counterparts in the augmented graph. Nodes in the augmented graph are generated either by slightly perturbing the original node attributes and edges or by considering counterfactuals of the original nodes in which the value of the sensitive attribute is modified. We also develop a novel method for improving neural message passing that carries out layer-wise weight normalization using the Lipschitz constant.
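To make this concrete, below is a minimal PyTorch-style sketch of the two kinds of augmented views and a combined classification-plus-similarity loss. The function names, the cosine-similarity choice, and the loss weighting are illustrative assumptions rather than the exact NIFTY implementation.

import torch
import torch.nn.functional as F

def counterfactual_view(x, sens_idx):
    # Counterfactual augmentation: flip a binary sensitive-attribute column.
    x_cf = x.clone()
    x_cf[:, sens_idx] = 1 - x_cf[:, sens_idx]
    return x_cf

def perturbed_view(x, noise_scale=0.1):
    # Stability augmentation: add small random noise to node attributes
    # (random edge dropping plays the same role for the graph structure).
    return x + noise_scale * torch.randn_like(x)

def fair_stable_loss(logits, y, z, z_aug, lambda_sim=0.5):
    # Supervised node-classification term.
    cls_loss = F.binary_cross_entropy_with_logits(logits, y.float())
    # Similarity term: push representations of original nodes (z) and their
    # augmented counterparts (z_aug) together.
    sim_loss = (1.0 - F.cosine_similarity(z, z_aug, dim=1)).mean()
    return cls_loss + lambda_sim * sim_loss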

We theoretically show that this normalization promotes counterfactual fairness and stability of learned representations. To the best of our knowledge, this work is the first to tackle the problem of learning node representations that are both fair and stable.
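As a rough illustration of the layer-wise normalization, the sketch below rescales the weight matrix of a linear (message-passing) transform by its spectral norm, i.e., its Lipschitz constant under the 2-norm, so that the transform is 1-Lipschitz. This is a simplified stand-in for the normalization analyzed in the paper, not a verbatim reproduction of it.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LipschitzNormalizedLinear(nn.Linear):
    # Divide the weights by their largest singular value, the Lipschitz
    # constant of the linear map, before applying the transform.
    def forward(self, x):
        lipschitz = torch.linalg.matrix_norm(self.weight, ord=2)
        w = self.weight / lipschitz.clamp(min=1e-12)
        return F.linear(x, w, self.bias)

Used in place of the standard linear transform inside each message-passing layer, a normalization of this kind keeps small input perturbations from being amplified as they propagate through the network.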

The figure above gives an overview of NIFTY. NIFTY can learn node representations that are both fair and stable (i.e., invariant to the sensitive attribute value and perturbations to the graph structure and non-sensitive attributes) by maximizing the similarity between representations from diverse augmented graphs.

Datasets

We introduce and experiment with three new graph datasets comprising critical decisions in the criminal justice (whether a defendant should be released on bail) and financial lending (whether an individual should be granted a loan) domains.

  • German credit graph has 1,000 nodes representing clients of a German bank, connected based on the similarity of their credit accounts (a construction sketched below the list). The task is to classify clients into good vs. bad credit risks, with gender as the sensitive attribute.
  • Recidivism graph has 18,876 nodes representing defendants who were released on bail by U.S. state courts during 1990-2009. Defendants are connected based on the similarity of their past criminal records and demographics. The task is to classify defendants into bail (i.e., unlikely to commit a violent crime if released) vs. no bail (i.e., likely to commit a violent crime), with race as the protected attribute.
  • Credit defaulter graph has 30,000 nodes representing individuals connected based on the similarity of their spending and payment patterns. The task is to predict whether an individual will default on a credit card payment, with age as the sensitive attribute.
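All three graphs connect records by feature similarity. The snippet below sketches one standard way to build such a graph, a k-nearest-neighbor graph over the tabular features; the feature matrix, the value of k, and the distance metric are placeholders, not the settings used to construct the released datasets.

import numpy as np
from sklearn.neighbors import kneighbors_graph

# Placeholder feature matrix standing in for the tabular records
# (e.g., credit attributes); the real datasets use the actual records.
X = np.random.rand(1000, 27)

# Connect each record to its k most similar records.
A = kneighbors_graph(X, n_neighbors=10, mode="connectivity", metric="euclidean")

# 2 x num_edges array of (source, target) indices, ready for a GNN library.
edge_index = np.vstack(A.nonzero())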

Code

Source code is available in the GitHub repository.
