We believe that this new class of models – called foundation models – may lead to more affordable, easily adaptable health AI.
In a blog post at Stanford HAI, we discuss the opportunities foundation models offer as a better paradigm for AI in healthcare. First, we outline what foundation models are and their relevance to healthcare. Then we highlight what we believe are key opportunities provided by the next generation of medical foundation models, specifically:
AI Adaptability with Fewer Manually Labeled Examples
Modular, Reusable, and Robust AI
Making Multimodality the New Normal
New Interfaces for Human-AI Collaboration
Easing the Cost of Developing, Deploying, and Maintaining AI in Hospitals
See the full post at How Foundation Models Can Advance AI in Healthcare. To support the claims made in the post, we have built and released two foundation models:
CLMBR (Clinical Language Modeling Based Representations) is a 141-million-parameter autoregressive foundation model pretrained on 2.57 million deidentified EHRs from Stanford Medicine. It is based on the CLMBR architecture originally described in Steinberg et al. 2021. As input, the model expects a sequence of coded medical events mapped to Standard Concepts in the OMOP-CDM vocabulary. It produces representations of patients that can then be used for downstream prediction tasks.
For details, see https://huggingface.co/StanfordShahLab/clmbr-t-base.
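As a rough illustration of the workflow described above, the sketch below trains a lightweight logistic-regression head on frozen patient representations for a downstream binary prediction task. The embeddings here are synthetic stand-ins; in a real pipeline they would come from the pretrained CLMBR model, and the task labels (e.g., a clinical outcome) would come from the EHR. This is a minimal sketch, not the released evaluation code.

```python
import numpy as np

# Synthetic "patient representations": two clusters standing in for
# outcome-positive and outcome-negative patients. A real workflow
# would obtain these vectors from the pretrained foundation model.
rng = np.random.default_rng(0)
n, d = 200, 16
X_pos = rng.normal(loc=0.5, scale=1.0, size=(n // 2, d))
X_neg = rng.normal(loc=-0.5, scale=1.0, size=(n // 2, d))
X = np.vstack([X_pos, X_neg])
y = np.concatenate([np.ones(n // 2), np.zeros(n // 2)])

# Logistic-regression head trained by gradient descent on the
# frozen embeddings; the foundation model itself is never updated.
w = np.zeros(d)
b = 0.0
lr = 0.1
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= lr * (X.T @ (p - y) / n)            # gradient step on weights
    b -= lr * np.mean(p - y)                 # gradient step on bias

preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
accuracy = np.mean(preds == y)
```

Because the representations are reused across tasks, each new prediction problem only requires training a small head like this one rather than a full model, which is one of the cost advantages the post highlights.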