Domain Adaptation for Time Series Under Feature and Label Shifts

Unsupervised domain adaptation (UDA) transfers models trained on a labeled source domain to unlabeled target domains. For complex time series, however, transfer is challenging: differences in dynamic temporal structure between domains induce feature shifts and gaps in time and frequency representations. In addition, label distributions in the source and target domains can differ substantially, making it difficult for UDA to handle label shifts and to recognize labels that exist only in the target domain.

We introduce Raincoat, a domain adaptation method for time series that can handle both feature and label shifts.

Although several recent methods address time series UDA under the assumption of feature shift, none account for changes in the frequency domain that act as an implicit feature shift. Moreover, universal domain adaptation for time series, which makes no assumptions about label overlap between the source and target domains, remains unexplored.

Raincoat Approach

Raincoat has three steps, as illustrated in the following figure:

  • Step 1: Align - It uses time- and frequency-based encoders to learn time series representations and aligns source and target features with the Sinkhorn divergence, since frequency features may not share the same support (a minimal alignment sketch follows this list).

  • Step 2: Correct - It retrains an encoder on the target domain to correct any potential misalignments.

  • Step 3: Inference - It computes the difference between the aligned and corrected representations of target samples and identifies unknown target samples via a bi-modality test and a binary classification task.
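
As a rough illustration of Step 1, the sketch below trains a shared encoder with a source classification loss plus a Sinkhorn divergence between source and target features. It assumes the geomloss package for the Sinkhorn divergence; the placeholder encoder, classifier, and hyperparameters are illustrative and not Raincoat's actual architecture.

```python
import torch
import torch.nn as nn
from geomloss import SamplesLoss  # entropic-OT Sinkhorn divergence between point clouds

# Placeholder modules; Raincoat's actual encoder and classifier differ.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
classifier = nn.Linear(32, 5)  # 5 source classes, chosen arbitrarily

sinkhorn = SamplesLoss(loss="sinkhorn", p=2, blur=0.05)  # tolerates features with different supports
ce = nn.CrossEntropyLoss()
opt = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3)

def align_step(x_src, y_src, x_tgt):
    """One Step-1 update: source classification + Sinkhorn source-target feature alignment."""
    z_src, z_tgt = encoder(x_src), encoder(x_tgt)
    loss = ce(classifier(z_src), y_src) + sinkhorn(z_src, z_tgt)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return float(loss)

# Example usage with random stand-in data (batches of univariate series of length 128).
x_src, y_src = torch.randn(16, 1, 128), torch.randint(0, 5, (16,))
x_tgt = torch.randn(16, 1, 128)
print(align_step(x_src, y_src, x_tgt))
```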

Properties of Raincoat

To address feature shift, Raincoat accounts for implicit frequency feature shift by incorporating a frequency feature inductive bias in the encoder, uncovering potential invariant features across domains and improving transferability.
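
One way this inductive bias can be realized is sketched below: amplitude and phase features from a discrete Fourier transform are concatenated with features from a 1D convolutional time encoder. The module names, layer sizes, and number of retained frequency bins are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class TimeFreqEncoder(nn.Module):
    """Illustrative encoder: CNN features over time + amplitude/phase features over frequency."""

    def __init__(self, in_channels=1, feat_dim=32, n_freq=16):
        super().__init__()
        self.time_net = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=8, stride=2, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, feat_dim),
        )
        # Two numbers (amplitude, phase) per retained frequency bin and channel.
        self.freq_net = nn.Linear(2 * n_freq * in_channels, feat_dim)
        self.n_freq = n_freq

    def forward(self, x):                                       # x: (batch, channels, length)
        spec = torch.fft.rfft(x, dim=-1)[..., : self.n_freq]    # keep low-frequency bins
        amp, phase = spec.abs(), torch.angle(spec)
        z_freq = self.freq_net(torch.cat([amp, phase], dim=-1).flatten(1))
        z_time = self.time_net(x)
        return torch.cat([z_time, z_freq], dim=-1)              # (batch, 2 * feat_dim)

# Example: batch of 8 univariate series of length 128 -> 64-dimensional representations.
z = TimeFreqEncoder()(torch.randn(8, 1, 128))
print(z.shape)  # torch.Size([8, 64])
```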

To address label shifts, it employs target-specific feature encoders that retain the semantic meaning of the target domain, enabling inference without prior knowledge of the target label distribution.
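
To make the correct-and-inference stages concrete, the sketch below scores each target sample by the distance between its aligned (Step 1) and corrected (Step 2) representations and checks whether the scores form two modes. The two-component Gaussian mixture used here is a stand-in for the paper's bi-modality test and binary classification; the function name and threshold logic are hypothetical.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def detect_unknowns(z_aligned, z_corrected, bic_margin=0.0):
    """Step 3 sketch: drift scores + a simple bi-modality check.

    z_aligned, z_corrected: (n_target, d) arrays of target representations before
    and after the Step-2 target-only correction.
    """
    drift = np.linalg.norm(z_aligned - z_corrected, axis=1).reshape(-1, 1)

    gmm1 = GaussianMixture(n_components=1).fit(drift)
    gmm2 = GaussianMixture(n_components=2).fit(drift)

    # If a single mode explains the drift scores, assume no target-private (unknown) labels.
    if gmm2.bic(drift) + bic_margin >= gmm1.bic(drift):
        return np.zeros(len(drift), dtype=bool)

    # Otherwise, samples assigned to the high-drift component are flagged as unknown.
    high = np.argmax(gmm2.means_.ravel())
    return gmm2.predict(drift) == high

# Example with synthetic drift patterns: half of the target samples drift strongly.
rng = np.random.default_rng(0)
z_a = rng.normal(size=(200, 32))
z_c = z_a + np.vstack([rng.normal(0.05, 0.01, (100, 32)), rng.normal(1.0, 0.1, (100, 32))])
print(detect_unknowns(z_a, z_c).sum())  # roughly 100 samples flagged as unknown
```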

Our experiments on 5 datasets, comparing against 13 state-of-the-art domain adaptation methods, show that Raincoat outperforms these methods in the presence of both feature and label shifts. The following figure illustrates Raincoat's advantage for closed-set domain adaptation.

Publication

Domain Adaptation for Time Series Under Feature and Label Shifts
Huan He, Owen Queen, Teddy Koker, Consuelo Cuevas, Theodoros Tsiligkaridis, Marinka Zitnik
In Review 2023 [arXiv]

@article{he2023domain,
  title   = {Domain Adaptation for Time Series Under Feature and Label Shifts},
  author  = {He, Huan and Queen, Owen and Koker, Teddy and Cuevas, Consuelo and Tsiligkaridis, Theodoros and Zitnik, Marinka},
  journal = {arXiv preprint arXiv:2302.03133},
  year    = {2023}
}

Datasets

Code

A PyTorch implementation, together with documentation and usage examples, is available in the GitHub repository.

Authors
