Foundation Models



We believe that a new class of models – called foundation models – may lead to more affordable, easily adaptable health AI.

In a blog post at HAI we discuss the opportunities foundation models offer as a better paradigm for doing “AI in healthcare.” First, we outline what foundation models are and why they are relevant to healthcare. Then we highlight what we believe are the key opportunities offered by the next generation of medical foundation models, specifically:

  • AI Adaptability with Fewer Manually Labeled Examples
  • Modular, Reusable, and Robust AI
  • Making Multimodality the New Normal
  • New Interfaces for Human-AI Collaboration
  • Easing the Cost of Developing, Deploying, and Maintaining AI in Hospitals

See the full post at How Foundation Models Can Advance AI in Healthcare. To support the claims made in the post, we have built and released two foundation models:

  1. CLMBR (Clinical Language Modeling Based Representations) is a 141-million-parameter autoregressive foundation model pretrained on 2.57 million deidentified EHRs from Stanford Medicine. The model is based on the CLMBR architecture originally described in Steinberg et al. 2021. As input, it expects a sequence of coded medical events that have been mapped to Standard Concepts in the OMOP-CDM vocabulary; it generates patient representations that can then be used for downstream prediction tasks (a schematic sketch of this flow follows the list). The model is available at https://huggingface.co/StanfordShahLab/clmbr-t-base
  2. MOTOR (Many Outcome Time Oriented Representations) is a self-supervised, time-to-event (TTE) foundation model with 143 million parameters, pretrained on timestamped sequences of events from 55 million electronic health records (EHRs) comprising 9 billion clinical events (a sketch of a TTE objective of this kind also follows the list).
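
To make CLMBR's input and output concrete, here is a minimal sketch, assuming a toy autoregressive encoder: a sequence of events coded as OMOP-CDM Standard Concepts goes in, and a fixed-size patient representation comes out. The SimpleEHRTransformer class, the concept_to_id mapping, and the example concept IDs are illustrative assumptions, not CLMBR's actual code or API; consult the Hugging Face model card for real usage.

  # Illustrative sketch only, not the CLMBR API: OMOP-coded events ->
  # autoregressive encoder -> patient representation vector.
  import torch
  import torch.nn as nn

  class SimpleEHRTransformer(nn.Module):
      """A toy autoregressive encoder over coded medical events."""
      def __init__(self, vocab_size, dim=128, heads=4, layers=2):
          super().__init__()
          self.embed = nn.Embedding(vocab_size, dim)
          layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
          self.encoder = nn.TransformerEncoder(layer, layers)

      def forward(self, codes):
          x = self.embed(codes)  # (batch, seq, dim)
          seq_len = codes.size(1)
          # Causal mask so each position only attends to earlier events.
          mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), 1)
          h = self.encoder(x, mask=mask)
          return h[:, -1, :]  # state at the last event = patient representation

  # Hypothetical mapping from OMOP-CDM Standard Concept IDs to vocabulary slots.
  concept_to_id = {201826: 0, 4329847: 1, 255848: 2}  # e.g. T2DM, MI, pneumonia
  patient_codes = torch.tensor([[0, 2, 1]])  # one patient's coded event sequence

  model = SimpleEHRTransformer(vocab_size=len(concept_to_id))
  with torch.no_grad():
      representation = model(patient_codes)  # (1, 128) patient embedding
  print(representation.shape)

In practice such a representation would be fed to a lightweight head (e.g., logistic regression) trained per downstream task, which is what makes the pretrained encoder reusable across tasks.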
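
MOTOR's pretraining objective is time-to-event prediction rather than next-token prediction. As a rough illustration of what such an objective looks like, the sketch below implements a piecewise-constant-hazard (piecewise exponential) negative log-likelihood for a single outcome task. The function name, the binning scheme, and the simplification of charging a full bin of exposure in the event bin are assumptions for illustration, not MOTOR's actual implementation.

  # Minimal sketch of a piecewise-exponential time-to-event (TTE) loss; all
  # names and the binning scheme are illustrative, not MOTOR's actual code.
  import torch

  def piecewise_exponential_nll(log_hazard, event_bin, event_occurred, bin_widths):
      """Negative log-likelihood for one task under a piecewise-constant hazard.

      log_hazard:     (batch, num_bins) predicted log hazard rate per time bin
      event_bin:      (batch,) index of the bin with the event / censoring
      event_occurred: (batch,) 1.0 if the event was observed, 0.0 if censored
      bin_widths:     (num_bins,) duration of each time bin
      """
      hazard = log_hazard.exp()  # (batch, num_bins)
      # Survival term: cumulative hazard over all bins up to and including
      # the event/censoring bin (later bins contribute nothing).
      bins = torch.arange(log_hazard.size(1))
      exposed = (bins.unsqueeze(0) <= event_bin.unsqueeze(1)).float()
      cumulative_hazard = (hazard * bin_widths * exposed).sum(dim=1)
      # Event term: log hazard in the event bin, counted only if observed.
      event_log_hazard = log_hazard.gather(1, event_bin.unsqueeze(1)).squeeze(1)
      return (cumulative_hazard - event_occurred * event_log_hazard).mean()

  # Toy usage: 2 patients, 4 time bins; patient 0 has the event in bin 1,
  # patient 1 is censored in bin 3.
  log_hazard = torch.randn(2, 4)
  loss = piecewise_exponential_nll(
      log_hazard,
      event_bin=torch.tensor([1, 3]),
      event_occurred=torch.tensor([1.0, 0.0]),
      bin_widths=torch.ones(4),
  )
  print(loss)

A pretraining setup in this style would sum such a loss over many outcome tasks at once, with the per-bin log hazards predicted from the patient representation; censored patients still contribute supervision through the survival term.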