Our paper on an artificial intelligence foundation for therapeutic science has been published in Nature Chemical Biology.
The recording of our tutorial on using graph AI to advance precision medicine is now available. Tune in to four hours of interactive lectures on state-of-the-art graph AI methods and applications in precision medicine.
New preprint! We introduce a resource for broad evaluation of the quality and reliability of GNN explanations, addressing challenges and providing solutions for GNN explainability. Project website.
We are excited to host the AI4Science meeting at NeurIPS discussing AI-driven scientific discovery, implementation and verification of AI in science, the influence AI has on the conduct of science, and more.
Check out our half-day tutorial with resources on methods and applications in graph representation learning for precision medicine.
Welcoming research fellow Julia Balla and three summer students: Nicholas Ho, Satvik Tripathi, and Isuru Herath.
Excited to share a preprint on a self-supervised method for pre-training. The project website includes evaluations on eight datasets, spanning electrodiagnostic testing, human daily activity recognition, and health state monitoring.
Excited to welcome George Dasoulas and Huan He, new postdocs joining us this summer.
Congratulations to George Dasoulas, our incoming postdoctoral fellow, on being named the 2022 Wojcicki Troper HDSI Postdoctoral Fellow. We are delighted to welcome George to our group.
Webster is on the cover of the April issue of Cell Systems. Webster uses cell viability changes following gene perturbation to automatically learn cellular functions and pathways from data.
Yasha won the National Defense Science and Engineering Graduate (NDSEG) Fellowship. Congratulations!
Owen has been selected to present our research on explainable biomedical AI to members of the US Congress at the “Posters on the Hill” symposium. Congrats Owen!
Excited to present a tutorial at ISMB 2022 on graph representation learning for precision medicine. Congratulations, Michelle!
Marissa Sumathipala is among the 23 outstanding US scholars selected to be part of the 2022 class of Gates Cambridge Scholars at the University of Cambridge. Congratulations, Marissa!
Hot off the press in Cell Systems. Webster is a tool to infer gene multifunctionality from high-dimensional gene perturbation data by applying sparse representation learning to large CRISPR-Cas9 fitness screens. Explore Webster’s web portal.
Our paper probing GNN explainers through rigorous theoretical and empirical analysis of GNN explanation methods has been accepted to AISTATS. Congratulations, Chirag!
Marissa Sumathipala has been selected for the prestigious Churchill Scholarship. Congratulations, Marissa!
Human space exploration beyond low Earth orbit will involve missions of significant distance and duration. To effectively mitigate myriad space health hazards, paradigm shifts in data and space health systems are necessary to enable Earth independence. Delighted to be working with NASA and can share our recommendations!
Hot off the press in Nature Communications. Excited to share an ML approach for predicting functional interactions between human genes using phylogenetic profiles across 1,154 eukaryotic species.
The COVID-19 pandemic has reshaped health and medicine in ways both dramatic and subtle. Some of the less obvious shifts can only emerge from analysis of millions of pieces of data—patient records, medical notes, clinical encounter reports. Read the story in Harvard Medicine News highlighting our research.
New preprint! We introduce Raindrop, a graph-guided network for learning representations of irregularly sampled multivariate time series.
Hot off the press in Nature Computational Science! We develop an algorithmic approach for massive analysis of drug adverse events. Our analyses of 10,443,476 adverse event reports have implications for safe medication use and public health policy, and enable comparison of the COVID-19 pandemic to other health emergencies.
Hot off the press in Nature Communications! We developed OnClass, an algorithm and accompanying software for automatically classifying cells into cell types that are part of the controlled vocabulary that forms the Cell Ontology.
We will be organizing a meeting on Trustworthy AI for Healthcare at AAAI 2022. Stay tuned for details and call for papers.
Our latest paper on Therapeutics Data Commons: Machine Learning Datasets and Tasks for Drug Discovery and Development will appear at NeurIPS. We are excited to contribute novel datasets and benchmarks in the broad area of therapeutics.
We are organizing the AI for Science workshop at NeurIPS 2021 and have a stellar lineup of invited speakers.
Our short paper on Interactive Visual Explanations for Deep Drug Repurposing received the Best Paper Award at the ICML Interpretable ML in Healthcare Workshop. Stay tuned for more news on this evolving project.
We are excited to be at ICML 2021, where we will present one paper at the Workshop on Socially Responsible Machine Learning, one paper at the Workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI, two papers at the Workshop on Interpretable Machine Learning in Healthcare, and one paper at the Workshop on Computational Biology. Congratulations to our fantastic students!
We introduce the first axiomatic framework for theoretically analyzing, evaluating, and comparing GNN explanation methods. We formalize key properties that all methods should satisfy to generate reliable explanations: faithfulness, stability, and fairness.
New preprint on contextualized protein embeddings aims to characterize genes with disease-specific interactions and elucidate disease manifestation in specific cell types.
Our unified framework for fair and stable graph representation learning has just been accepted at UAI. We establish a theoretical connection between counterfactual fairness and stability and leverage it in a framework that works with any GNN to learn fair and stable embeddings.