Unified Clinical Vocabulary Embeddings for Advancing Precision Medicine

Integrating clinical knowledge into AI remains challenging despite standardized medical protocols and vocabularies. Medical codes, central to healthcare systems, often reflect operational patterns shaped by geographic factors, national policies, insurance frameworks, and physician practices rather than the precise representation of clinical knowledge. This disconnect hampers AI in representing clinical relationships, raising concerns about bias, transparency, and generalizability.

Here, we developed a resource of 67,124 clinical vocabulary embeddings derived from a clinical knowledge graph tailored to electronic health record vocabularies, spanning over 1.3 million edges. Using graph transformer neural networks, we generated clinical vocabulary embeddings that provide a new representation of clinical knowledge unified across seven medical vocabularies. We validated these embeddings through a phenotype risk score analysis involving 4.57 million patients from Clalit Healthcare Services, demonstrating their ability to stratify individuals by survival outcomes. Inter-institutional panels of clinicians evaluated the alignment of the embeddings with established clinical knowledge across 90 diseases and 3,000 clinical codes, confirming their robustness and transferability.

This resource addresses the gap in integrating clinical vocabularies into AI models and training datasets and supports population and patient models in precision medicine.

Introduction

Medicine is grounded in centuries of medical knowledge and the pursuit of individualized patient care through meticulous reasoning and evidence-based practice. The extensive development of standardized vocabularies and ontologies in the medical field has fostered a unified representation of clinical information, enabling interoperability and data exchange across healthcare systems globally. These standardized coding systems provide a consistent framework for effectively representing clinical knowledge, forming the backbone of precision medicine initiatives. With the widespread adoption of electronic health records (EHRs) and the standardization of medical data, precision medicine has shifted toward large-scale, data-driven approaches that directly leverage structured EHR data. More than half of healthcare foundation models now rely exclusively on structured clinical codes, such as billing data and medication records.

We developed a resource that constructs embeddings for 67,124 medical codes, defining a unifying latent space of clinical knowledge. Using state-of-the-art relational graph transformers and a clinical knowledge graph, we created a cohesive, machine-readable map that captures relationships among seven clinical vocabularies, including laboratory tests, diagnosis codes, and medications, without requiring manual curation. By integrating verified knowledge bases and medical ontologies into a knowledge graph of standardized EHR codes, this resource reduces the risk of propagating inaccuracies while ensuring interpretability and transparency.
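As a rough illustration of how such embeddings can be produced, the sketch below runs a heterogeneous graph transformer over a toy clinical knowledge graph using PyTorch Geometric's HGTConv. The node types, edge types, dimensions, and two-layer architecture are illustrative assumptions, not the exact configuration used to build this resource.

# Hypothetical sketch: encoding a heterogeneous clinical knowledge graph with
# a heterogeneous graph transformer (PyTorch Geometric's HGTConv). Node types,
# edge types, dimensions, and layer count are illustrative assumptions only.
import torch
import torch_geometric.transforms as T
from torch_geometric.data import HeteroData
from torch_geometric.nn import HGTConv

graph = HeteroData()
graph["diagnosis"].x = torch.randn(1000, 128)    # e.g., diagnosis codes
graph["medication"].x = torch.randn(500, 128)    # e.g., medication codes
graph["lab_test"].x = torch.randn(300, 128)      # e.g., laboratory test codes
graph["diagnosis", "treated_by", "medication"].edge_index = torch.stack([
    torch.randint(0, 1000, (5000,)),             # source diagnosis indices
    torch.randint(0, 500, (5000,)),              # target medication indices
])
graph["diagnosis", "measured_by", "lab_test"].edge_index = torch.stack([
    torch.randint(0, 1000, (4000,)),
    torch.randint(0, 300, (4000,)),
])
graph = T.ToUndirected()(graph)  # add reverse relations so every node type receives messages

class KGEncoder(torch.nn.Module):
    def __init__(self, metadata, hidden_dim=128, out_dim=64, heads=4):
        super().__init__()
        self.conv1 = HGTConv(128, hidden_dim, metadata, heads=heads)
        self.conv2 = HGTConv(hidden_dim, out_dim, metadata, heads=heads)

    def forward(self, x_dict, edge_index_dict):
        x_dict = self.conv1(x_dict, edge_index_dict)
        x_dict = {node_type: x.relu() for node_type, x in x_dict.items()}
        return self.conv2(x_dict, edge_index_dict)  # one embedding per clinical code

model = KGEncoder(graph.metadata())
embeddings = model(graph.x_dict, graph.edge_index_dict)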

Our resource provides a hypothesis-free approach to generating clinically insightful representations of medical codes. It offers three main applications:

  • Integrating clinical knowledge into precision medicine patient models,
  • Enabling patient-agnostic generalizable models of populations that can be safely exchanged across institutions, and
  • Providing insights into the organization of clinical knowledge.

The latent space of medical codes reveals patterns consistent with human anatomy and disease presentations, capturing symptomatic and clinical presentations of diseases that can be decomposed into symptom embeddings. We demonstrate the predictive utility of these embeddings through a large-scale phenotype risk score analysis for three chronic diseases across 4.57 million patients from Clalit Healthcare Services.
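A minimal sketch of how an embedding-based phenotype risk score could be computed, under the simplifying assumption that a patient's score for a disease is the mean cosine similarity between the patient's recorded code embeddings and the target disease embedding; the code identifiers and the aggregation rule are illustrative, not the exact formulation used in the study.

# Hypothetical sketch: an embedding-based phenotype risk score, computed as the
# mean cosine similarity between a patient's recorded clinical codes and a
# target disease code. Identifiers and the aggregation rule are illustrative.
import numpy as np

def phenotype_risk_score(patient_codes, disease_code, embeddings):
    """embeddings: dict mapping a clinical code to a unit-normalized vector."""
    disease_vec = embeddings[disease_code]
    sims = [embeddings[code] @ disease_vec
            for code in patient_codes if code in embeddings]
    return float(np.mean(sims)) if sims else 0.0

# Stratify a cohort by its scores for one target disease code, e.g., to compare
# survival outcomes between the top and bottom score deciles.
# scores = {patient_id: phenotype_risk_score(codes, "PheCode:250.2", embeddings)
#           for patient_id, codes in cohort.items()}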

An expert clinical evaluation across 90 diseases and 3,000 clinical codes, conducted with clinician panels in the United States and Israel, validates the alignment of these embeddings with established medical knowledge. Our findings establish unified medical code embeddings as a foundational resource for advancing AI-driven healthcare.

Unified clinical vocabulary embeddings can facilitate collaborative, scalable efforts in clinical AI and offer a tool for deepening our understanding of disease relationships, laboratory tests, diagnosis codes, medications, and their underlying mechanisms.
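As one hedged example of that kind of exploration, the sketch below ranks the codes closest to a query disease in the shared embedding space; the placeholder identifiers and the assumption of unit-normalized vectors are illustrative, not prescribed by the resource.

# Hypothetical sketch: ranking clinical codes by cosine similarity to a query
# disease code in the unified embedding space. Vectors are assumed to be
# unit-normalized; identifiers are placeholders.
import numpy as np

def nearest_codes(query_code, embeddings, k=10):
    """Return the k codes most similar to query_code across all vocabularies."""
    query_vec = embeddings[query_code]
    scored = [(code, float(vec @ query_vec))
              for code, vec in embeddings.items() if code != query_code]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:k]

# e.g., nearest_codes("PheCode:428.2", embeddings) might surface related
# diagnoses, laboratory tests, and medications from different vocabularies.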

Clinical embeddings capture knowledge of human anatomy and clinical subspecialties

Latent embedding space reflects patterns of disease presentation and diagnostic processes

Unified vocabulary embeddings enable disease risk stratification and severity prediction

Clinical vocabulary embeddings capture medical knowledge consensus across institutions

Publication

Unified Clinical Vocabulary Embeddings for Advancing Precision Medicine
Ruth Johnson, Uri Gottlieb, Galit Shaham, Lihi Eisen, Jacob Waxman, Stav Devons-Sberro, Curtis R. Ginder, Peter Hong, Raheel Sayeed, Ben Y. Reis, Ran D. Balicer, Noa Dagan, and Marinka Zitnik
In Review 2024 [medRxiv]

@article{johnson2024unified,
  title={Unified Clinical Vocabulary Embeddings for Advancing Precision Medicine},
  author={Johnson, Ruth and Gottlieb, Uri and Shaham, Galit and Eisen, Lihi and Waxman, Jacob and Devons-Sberro, Stav and Ginder, Curtis R. and Hong, Peter and Sayeed, Raheel and Reis, Ben Y. and Balicer, Ran D. and Dagan, Noa and Zitnik, Marinka},
  journal={medRxiv},
  url={https://www.medrxiv.org/content/10.1101/2024.12.03.24318322},
  year={2024}
}

Code and Data Availability

The PyTorch implementation is available in the GitHub repository.

Data and clinical concept embeddings, as well as the PheKG knowledge graph, are available via Harvard Dataverse. Due to national and organizational data privacy regulations, individual-level data from Clalit Healthcare Services (CHS) used in this study cannot be shared publicly.

Authors


This research would not be possible without the generous support of The Ivan and Francesca Berkowitz Family Living Laboratory Collaboration at Harvard Medical School and the Clalit Research Institute.

Latest News

Dec 2024:   Unified Clinical Vocabulary Embeddings

New paper: a unified resource that provides a new representation of clinical knowledge by unifying medical vocabularies, validated through (1) a phenotype risk score analysis across 4.57 million patients and (2) inter-institutional clinician panels evaluating alignment with clinical knowledge across 90 diseases and 3,000 clinical codes.

Dec 2024:   SPECTRA in Nature Machine Intelligence

Are biomedical AI models truly as smart as they seem? SPECTRA is a framework that evaluates models across the full spectrum of cross-split overlap, i.e., train-test similarity. SPECTRA reveals gaps in benchmarks for molecular sequence data across 19 models, including LLMs, GNNs, diffusion models, and convolutional networks.

Nov 2024:   Ayush Noori Selected as a Rhodes Scholar

Congratulations to Ayush Noori on being named a Rhodes Scholar! Such an incredible achievement!

Nov 2024:   PocketGen in Nature Machine Intelligence

Oct 2024:   Activity Cliffs in Molecular Properties

Oct 2024:   Knowledge Graph Agent for Medical Reasoning

Sep 2024:   Three Papers Accepted to NeurIPS

Exciting projects include a unified multi-task time series model, a flow-matching approach for generating protein pockets using geometric priors, and a tokenization method that produces invariant molecular representations for integration into large language models.

Sep 2024:   TxGNN Published in Nature Medicine

Aug 2024:   Graph AI in Medicine

Excited to share a new perspective on Graph Artificial Intelligence in Medicine in Annual Reviews.

Aug 2024:   How Proteins Behave in Context

Harvard Medicine News on our new AI tool that captures how proteins behave in context. Kempner Institute on how context matters for foundation models in biology.

Jul 2024:   PINNACLE in Nature Methods

The PINNACLE contextual AI model is published in Nature Methods; see the paper, Research Briefing, and project website.

Jul 2024:   Digital Twins as Global Health and Disease Models of Individuals

Paper on digital twins outlining strategies that leverage molecular and computational techniques to construct dynamic digital twins at scales ranging from populations to individuals.

Jul 2024:   Three Papers: TrialBench, 3D Structure Design, LLM Editing

Jun 2024:   TDC-2: Multimodal Foundation for Therapeutics

Therapeutics Data Commons 2.0 (TDC-2) is an overhaul of the Therapeutics Data Commons that catalyzes research in multimodal models for drug discovery by unifying the single-cell biology of diseases, the biochemistry of molecules, and the effects of drugs through multimodal datasets, AI-powered API endpoints, and new tasks and benchmarks. See our paper.

May 2024:   Broad MIA: Protein Language Models

Apr 2024:   Biomedical AI Agents

Mar 2024:   Efficient ML Seminar Series

We started a Harvard University Efficient ML Seminar Series. Congrats to Jonathan for spearheading this initiative. Harvard Magazine covered the first meeting focusing on LLMs.

Zitnik Lab  ·  Artificial Intelligence in Medicine and Science  ·  Harvard  ·  Department of Biomedical Informatics