Unified Framework for Fair and Stable Graph Representation Learning

As the representations output by Graph Neural Networks (GNNs) are increasingly employed in real-world applications, it becomes important to ensure that these representations are fair and stable. We establish a key connection between counterfactual fairness and stability and use it to develop a novel framework, NIFTY (uNIfying Fairness and stabiliTY), which can be used with any GNN to learn fair and stable representations.

We introduce a novel objective function that simultaneously accounts for fairness and stability and develop a layer-wise weight normalization using the Lipschitz constant to enhance neural message passing in GNNs. In doing so, we enforce fairness and stability both in the objective function as well as in the GNN architecture. Further, we show theoretically that our layer-wise weight normalization promotes counterfactual fairness and stability in the resulting representations.

We introduce three new graph datasets comprising high-stakes decisions in the criminal justice and financial lending domains. Extensive experiments on these datasets demonstrate the efficacy of our framework.

Publication

Towards a Unified Framework for Fair and Stable Graph Representation Learning
Chirag Agarwal, Himabindu Lakkaraju*, Marinka Zitnik*
Conference on Uncertainty in Artificial Intelligence, UAI 2021 [arXiv] [poster] [ICML 2021 Socially Responsible ML]

@inproceedings{agarwal2021towards,
  title={Towards a Unified Framework for Fair and Stable Graph Representation Learning},
  author={Agarwal, Chirag and Lakkaraju, Himabindu and Zitnik, Marinka},
  booktitle={Proceedings of Conference on Uncertainty in Artificial Intelligence, UAI},
  year={2021}
}

Motivation

Over the past decade, there has been a surge of interest in leveraging GNNs for graph representation learning. GNNs have been used to learn powerful representations that enable critical predictions in downstream applications—e.g., predicting protein-protein interactions, drug repurposing, crime forecasting, and news and product recommendations.

As GNNs are increasingly implemented in real-world applications, it becomes important to ensure that these models and the resulting representations are safe and reliable. More specifically, it is important to ensure that:

  • these models and the representations they produce are not perpetuating undesirable discriminatory biases (i.e., they are fair), and
  • these models and the representations they produce are robust to attacks resulting from small perturbations to the graph structure and node attributes (i.e., they are stable).

NIFTY framework

We first identify a key connection between counterfactual fairness and stability. While stability accounts for robustness w.r.t. small random perturbations to node attributes and/or edges, counterfactual fairness accounts for robustness w.r.t. modifications of the sensitive attribute.

We leverage this connection to develop NIFTY, which can be used with any existing GNN model to learn fair and stable representations. The framework enforces fairness and stability both in the objective function and in the GNN architecture.

More specifically, we introduce a novel objective function which simultaneously optimizes for counterfactual fairness and stability by maximizing the similarity between representations of the original nodes in the graph, and their counterparts in the augmented graph. Nodes in the augmented graph are generated by slightly perturbing the original node attributes and edges or by considering counterfactuals of the original nodes where the value of the sensitive attribute is modified. We also develop a novel method for improving neural message passing by carrying out layer-wise weight normalization using the Lipschitz constant.
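The objective described above can be illustrated with a minimal sketch in plain Python. The function and parameter names below are illustrative, not the paper's implementation: the idea is to compare a node's embedding against two augmented views — one with small random noise on non-sensitive attributes (stability) and one with the sensitive attribute flipped (counterfactual fairness) — and penalize dissimilarity.

```python
import random

def cosine_similarity(u, v):
    """Cosine similarity between two vectors given as lists of floats."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def augment(features, sensitive_idx, noise=0.01, flip_sensitive=False):
    """Perturb non-sensitive attributes with small noise; optionally flip
    the (binary) sensitive attribute to obtain a counterfactual node."""
    out = []
    for i, x in enumerate(features):
        if i == sensitive_idx:
            out.append(1.0 - x if flip_sensitive else x)
        else:
            out.append(x + random.uniform(-noise, noise))
    return out

def stability_fairness_loss(embed, features, sensitive_idx):
    """Sum of dissimilarities between a node's embedding and the embeddings
    of its perturbed (stability) and counterfactual (fairness) views."""
    z = embed(features)
    z_pert = embed(augment(features, sensitive_idx))
    z_cf = embed(augment(features, sensitive_idx, flip_sensitive=True))
    return (1 - cosine_similarity(z, z_pert)) + (1 - cosine_similarity(z, z_cf))
```

In NIFTY this comparison is made between node representations produced by the GNN on the original and augmented graphs (edges can be perturbed as well); the sketch uses a single feature vector and a generic `embed` function only to keep the idea self-contained.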

We theoretically show that this normalization promotes counterfactual fairness and stability of learned representations. To the best of our knowledge, this work is the first to tackle the problem of learning node representations that are both fair and stable.
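The core of the layer-wise normalization can be sketched as follows, under the assumption that each layer's weight matrix is rescaled by its spectral norm — its Lipschitz constant as a linear map — estimated here by power iteration. This plain-Python sketch only illustrates the idea; NIFTY applies the normalization per layer inside neural message passing.

```python
def matvec(W, v):
    """Multiply matrix W (list of rows) by vector v."""
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def transpose(W):
    return [list(col) for col in zip(*W)]

def spectral_norm(W, iters=50):
    """Estimate the largest singular value of W by power iteration on W^T W."""
    n = len(W[0])
    v = [1.0] * n
    for _ in range(iters):
        u = matvec(W, v)
        v = matvec(transpose(W), u)
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    u = matvec(W, v)
    return sum(x * x for x in u) ** 0.5

def lipschitz_normalize(W):
    """Rescale W so its Lipschitz constant (spectral norm) is at most 1."""
    s = spectral_norm(W)
    return [[w / s for w in row] for row in W]
```

Bounding each layer's Lipschitz constant ensures that small changes to the input — a perturbed attribute or a flipped sensitive feature — cannot be arbitrarily amplified as messages propagate through the network, which is the mechanism behind the stability and fairness guarantees.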

The figure above gives an overview of NIFTY. NIFTY can learn node representations that are both fair and stable (i.e., invariant to the sensitive attribute value and perturbations to the graph structure and non-sensitive attributes) by maximizing the similarity between representations from diverse augmented graphs.

Datasets

We introduce and experiment with three new graph datasets comprising high-stakes decisions in the criminal justice (whether a defendant should be released on bail) and financial lending (whether an individual should be granted a loan) domains.

  • German credit graph has 1,000 nodes representing clients in a German bank that are connected based on the similarity of their credit accounts. The task is to classify clients into good vs. bad credit risks considering clients’ gender as the sensitive attribute.
  • Recidivism graph has 18,876 nodes representing defendants who were released on bail in U.S. state courts during 1990-2009. Defendants are connected based on the similarity of past criminal records and demographics. The task is to classify defendants into bail (i.e., unlikely to commit a violent crime if released) vs. no bail (i.e., likely to commit a violent crime), considering race as the protected attribute.
  • Credit defaulter graph has 30,000 nodes representing individuals, connected based on the similarity of their spending and payment patterns. The task is to predict whether an individual will default on a credit card payment, considering age as the sensitive attribute.

Code

Source code is available in the GitHub repository.

Zitnik Lab  ·  Artificial Intelligence in Medicine and Science  ·  Harvard  ·  Department of Biomedical Informatics