Unified Framework for Fair and Stable Graph Representation Learning

As the representations output by Graph Neural Networks (GNNs) are increasingly employed in real-world applications, it becomes important to ensure that these representations are fair and stable. We establish a key connection between counterfactual fairness and stability and use it to develop a novel framework, NIFTY (uNIfying Fairness and stabiliTY), which can be used with any GNN to learn fair and stable representations.

We introduce a novel objective function that simultaneously accounts for fairness and stability, and we develop layer-wise weight normalization using the Lipschitz constant to enhance neural message passing in GNNs. In doing so, we enforce fairness and stability both in the objective function and in the GNN architecture. Further, we show theoretically that our layer-wise weight normalization promotes counterfactual fairness and stability in the resulting representations.

We introduce three new graph datasets comprising high-stakes decisions in the criminal justice and financial lending domains. Extensive experiments on these datasets demonstrate the efficacy of our framework.

Publication

Towards a Unified Framework for Fair and Stable Graph Representation Learning
Chirag Agarwal, Himabindu Lakkaraju*, Marinka Zitnik*
Conference on Uncertainty in Artificial Intelligence, UAI 2021 [arXiv] [poster]

@inproceedings{agarwal2021towards,
  title={Towards a Unified Framework for Fair and Stable Graph Representation Learning},
  author={Agarwal, Chirag and Lakkaraju, Himabindu and Zitnik, Marinka},
  booktitle={Proceedings of Conference on Uncertainty in Artificial Intelligence, UAI},
  year={2021}
}

Motivation

Over the past decade, there has been a surge of interest in leveraging GNNs for graph representation learning. GNNs have been used to learn powerful representations that enable critical predictions in downstream applications, e.g., predicting protein-protein interactions, repurposing drugs, forecasting crime, and recommending news and products.

As GNNs are increasingly implemented in real-world applications, it becomes important to ensure that these models and the resulting representations are safe and reliable. More specifically, it is important to ensure that:

  • these models and the representations they produce do not perpetuate undesirable discriminatory biases (i.e., they are fair), and
  • these models and the representations they produce are robust to attacks based on small perturbations of the graph structure and node attributes (i.e., they are stable).

NIFTY framework

We first identify a key connection between counterfactual fairness and stability. While stability accounts for robustness w.r.t. small random perturbations to node attributes and/or edges, counterfactual fairness accounts for robustness w.r.t. modifications of the sensitive attribute.
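
These two notions of robustness correspond to two kinds of augmented views of the input graph. The sketch below illustrates them in PyTorch; the function names, the binary encoding of the sensitive attribute, and the perturbation magnitudes are illustrative assumptions rather than the paper's exact settings (node features x are an N x d tensor and edge_index follows the 2 x |E| convention).

import torch

def counterfactual_view(x, sens_idx):
    # Counterfactual fairness: flip the (assumed binary) sensitive attribute,
    # leaving all other node attributes and edges unchanged.
    x_cf = x.clone()
    x_cf[:, sens_idx] = 1.0 - x_cf[:, sens_idx]
    return x_cf

def perturbed_view(x, edge_index, attr_noise=0.01, edge_drop=0.05):
    # Stability: add small random noise to node attributes and drop a small
    # random fraction of edges.
    x_noisy = x + attr_noise * torch.randn_like(x)
    keep = torch.rand(edge_index.size(1), device=edge_index.device) >= edge_drop
    return x_noisy, edge_index[:, keep]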

We leverage this connection to develop NIFTY, which can be used with any existing GNN model to learn fair and stable representations. The framework enforces fairness and stability both in the objective function and in the GNN architecture.

More specifically, we introduce a novel objective function which simultaneously optimizes for counterfactual fairness and stability by maximizing the similarity between representations of the original nodes in the graph and their counterparts in the augmented graph. Nodes in the augmented graph are generated by slightly perturbing the original node attributes and edges or by considering counterfactuals of the original nodes where the value of the sensitive attribute is modified. We also develop a novel method for improving neural message passing by carrying out layer-wise weight normalization using the Lipschitz constant.
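
To make the objective concrete, here is a minimal PyTorch sketch under stated assumptions: similarity_loss and lipschitz_normalize_ are hypothetical names, cosine similarity is one natural choice of similarity measure, and the spectral norm is used as an upper bound on a linear layer's Lipschitz constant; the paper's exact formulation may differ.

import torch
import torch.nn.functional as F

def similarity_loss(z, z_aug):
    # Encourage each node's representation z[i] to stay close to its
    # counterpart z_aug[i] computed from an augmented (perturbed or
    # counterfactual) view of the graph.
    return (1.0 - F.cosine_similarity(z, z_aug, dim=1)).mean()

def lipschitz_normalize_(linear, eps=1e-8):
    # Layer-wise weight normalization: rescale the weight matrix by its
    # spectral norm, an upper bound on the Lipschitz constant of the linear
    # map, so message passing cannot amplify small input changes arbitrarily.
    with torch.no_grad():
        sigma = torch.linalg.matrix_norm(linear.weight, ord=2)
        linear.weight.div_(sigma + eps)

In training, the similarity term would be added to the standard node-classification loss, and the normalization applied to each GNN layer after every weight update, so that invariance to augmentations is rewarded by the objective and bounded sensitivity is built into the architecture.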

We theoretically show that this normalization promotes counterfactual fairness and stability of learned representations. To the best of our knowledge, this work is the first to tackle the problem of learning node representations that are both fair and stable.

The figure above gives an overview of NIFTY, which learns node representations that are both fair and stable (i.e., invariant to the value of the sensitive attribute and to perturbations of the graph structure and non-sensitive attributes) by maximizing the similarity between representations from diverse augmented graphs.

Datasets

We introduce and experiment with three new graph datasets comprising critical decisions in the criminal justice (whether a defendant should be released on bail) and financial lending (whether an individual should be granted a loan) domains.

  • German credit graph has 1,000 nodes representing clients of a German bank, connected based on the similarity of their credit accounts. The task is to classify clients into good vs. bad credit risks, with clients’ gender as the sensitive attribute.
  • Recidivism graph has 18,876 nodes representing defendants who were released on bail by U.S. state courts during 1990-2009. Defendants are connected based on the similarity of their past criminal records and demographics. The task is to classify defendants into bail (i.e., unlikely to commit a violent crime if released) vs. no bail (i.e., likely to commit a violent crime), with race as the sensitive attribute.
  • Credit defaulter graph has 30,000 nodes representing individuals connected based on the similarity of their spending and payment patterns (a construction sketched below). The task is to predict whether an individual will default on their credit card payment, with age as the sensitive attribute.
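
For concreteness, here is a hedged sketch of the similarity-based graph construction used in these datasets: connect each node to its most similar peers in feature space. The node and feature counts, neighbor count, and cosine metric below are placeholders, not the datasets' released construction.

import numpy as np
from sklearn.neighbors import kneighbors_graph

# Toy stand-in for real (non-sensitive) node attributes.
features = np.random.rand(1000, 27)

# Connect each node to its 10 nearest neighbors under the cosine metric,
# then symmetrize to obtain an undirected similarity graph.
adj = kneighbors_graph(features, n_neighbors=10, metric="cosine", include_self=False)
adj = adj.maximum(adj.T)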

Code

Source code is available in the GitHub repository.
