foundationmodels

We believe that this new class of models, called foundation models, may lead to more affordable, easily adaptable health AI.

In a blog post at HAI, we discuss how foundation models offer a better paradigm for doing “AI in healthcare.” First, we outline what foundation models are and their relevance to healthcare. Then we highlight what we believe are key opportunities provided by the next generation of medical foundation models, specifically:

  • AI Adaptability with Fewer Manually Labeled Examples
  • Modular, Reusable, and Robust AI
  • Making Multimodality the New Normal
  • New Interfaces for Human-AI Collaboration
  • Easing the Cost of Developing, Deploying, and Maintaining AI in Hospitals

See the full post, “How Foundation Models Can Advance AI in Healthcare.” To support the claims made in the post, we have built and released two foundation models:

  1. CLMBR (clinical language modeling based representations) is a 141 million parameter autoregressive foundation model pretrained on 2.57 million deidentified EHRs from Stanford Medicine, originally described in Steinberg et al. 2021. As input, the model expects a sequence of coded medical events mapped to Standard Concepts in the OMOP-CDM vocabulary; it generates patient representations that can then be used for downstream prediction tasks (a sketch of this linear-probe setup appears after this list). These representations yield a 3.5% mean improvement in AUROC on five prediction tasks over standard baselines, and the average improvement rises to 19% when only a small number of patient records are available for training the clinical prediction model. The model is available at https://huggingface.co/StanfordShahLab/clmbr-t-base.
  2. MOTOR (Many Outcome Time Oriented Representations) is a self-supervised, time-to-event (TTE) foundation model with 143 million parameters, pretrained on timestamped event sequences from 55 million electronic health records (EHRs) comprising 9 billion clinical events, originally described in Steinberg et al. 2024. We evaluate MOTOR on 19 tasks across 3 patient databases (a private EHR system, MIMIC-IV, and Merative claims data). Task-specific models adapted from MOTOR improve time-dependent C statistics by 4.6% over the state of the art, improve label efficiency by up to 95%, and are more robust to temporal distribution shifts (the second sketch below illustrates what a concordance statistic measures). The model is available at https://huggingface.co/StanfordShahLab/motor-t-base.
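
To make the downstream-prediction workflow concrete, below is a minimal sketch of a linear probe trained on frozen patient representations. Everything here is synthetic and hypothetical: the embedding matrix and its dimension stand in for CLMBR output vectors, and the labels stand in for a downstream task such as 30-day readmission. In practice the vectors would come from running the pretrained checkpoint over each patient's OMOP-coded event sequence; see the model card at the link above for the exact loading code.

  # Illustration only: a linear probe on frozen patient representations.
  # The embeddings below are random stand-ins for CLMBR output vectors.
  import numpy as np
  from sklearn.linear_model import LogisticRegression
  from sklearn.metrics import roc_auc_score
  from sklearn.model_selection import train_test_split

  rng = np.random.default_rng(0)
  n_patients, dim = 1000, 768                        # hypothetical embedding size
  embeddings = rng.normal(size=(n_patients, dim))    # stand-in for CLMBR vectors

  # Synthetic labels for a hypothetical binary prediction task
  weights = rng.normal(size=dim)
  labels = (embeddings @ weights + rng.normal(size=n_patients) > 0).astype(int)

  X_train, X_test, y_train, y_test = train_test_split(
      embeddings, labels, test_size=0.2, random_state=0)

  # Fit a lightweight task head on top of the frozen representations
  clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
  auroc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
  print(f"Downstream AUROC: {auroc:.3f}")

The point of this setup is that the expensive pretrained model is run once per patient, after which many task-specific heads can be trained cheaply on the cached representations; this is what makes the approach label-efficient.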
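The headline metric for MOTOR is the time-dependent C statistic. As a rough intuition for what a concordance statistic measures, here is a sketch of the simpler Harrell's C-index on synthetic survival data; the time-dependent variant used in the paper is a related but more involved metric. The helper function and all data below are illustrative and are not part of the released model.

  # Illustration only: Harrell's C-index on synthetic survival data.
  # A TTE model is scored on how well its predicted risk ordering
  # matches the observed ordering of events.
  import numpy as np

  def harrell_c_index(times, events, risk_scores):
      """Fraction of comparable patient pairs whose predicted risk ordering
      agrees with the observed event-time ordering. A pair (i, j) is
      comparable when patient i has an observed event before patient j's
      observed time."""
      concordant, comparable = 0.0, 0
      n = len(times)
      for i in range(n):
          if not events[i]:
              continue                  # censored patients cannot anchor a pair
          for j in range(n):
              if times[i] < times[j]:   # i's event precedes j's follow-up time
                  comparable += 1
                  if risk_scores[i] > risk_scores[j]:
                      concordant += 1
                  elif risk_scores[i] == risk_scores[j]:
                      concordant += 0.5
      return concordant / comparable

  rng = np.random.default_rng(0)
  n = 500
  true_risk = rng.normal(size=n)
  times = rng.exponential(scale=np.exp(-true_risk))      # higher risk -> earlier event
  events = rng.random(n) < 0.7                           # ~30% right-censoring
  predicted = true_risk + rng.normal(scale=0.5, size=n)  # a noisy model's risk scores

  print(f"C-index: {harrell_c_index(times, events, predicted):.3f}")

A C statistic of 0.5 corresponds to random risk ordering and 1.0 to perfect ordering, so the 4.6% improvement reported for MOTOR is measured on this kind of scale.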