foundationmodels

See the full post at [[https://hai.stanford.edu/news/how-foundation-models-can-advance-ai-healthcare|How Foundation Models Can Advance AI in Healthcare]]. To support the claims made in the post, we have built and released two foundation models:
  
  - [[https://clmbr.stanford.edu/ | CLMBR (clinical language modeling based representations)]] is a 141-million-parameter autoregressive foundation model pretrained on 2.57 million deidentified EHRs from Stanford Medicine. The model is based on the CLMBR architecture originally described in [[https://www.sciencedirect.com/science/article/pii/S1532046420302653 | Steinberg et al. 2021]]. As input, it expects a sequence of coded medical events that have been mapped to Standard Concepts in the OMOP-CDM vocabulary; it generates patient representations that can then be used for downstream prediction tasks (a usage sketch follows this list). The model is available at https://huggingface.co/StanfordShahLab/clmbr-t-base
  - [[https://goto.stanford.edu/motor | MOTOR (Many Outcome Time Oriented Representations)]] is a self-supervised, time-to-event (TTE) foundation model with 143 million parameters, pretrained on timestamped event sequences from 55 million electronic health records (EHRs) comprising 9 billion clinical events.
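Since CLMBR consumes coded event sequences and emits patient-level representations, a brief sketch of that workflow may help. This is a minimal illustration, not the lab's documented pipeline: it assumes the Hugging Face checkpoint can be loaded through the generic transformers AutoModel interface (the model card at the link above documents the actual supported loader), and the integer event IDs below are placeholders rather than a real OMOP-CDM concept mapping.

<code python>
# Hedged sketch: obtain a CLMBR patient representation for downstream use.
# Assumes the checkpoint loads via the generic transformers interface;
# consult https://huggingface.co/StanfordShahLab/clmbr-t-base for the
# supported API and the real preprocessing steps.
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "StanfordShahLab/clmbr-t-base",
    trust_remote_code=True,  # the repository may ship custom model code
)
model.eval()

# Illustrative only: one patient as a sequence of integer IDs, each standing
# in for an OMOP-CDM Standard Concept (real pipelines map source codes to
# concept IDs and then to the model's vocabulary indices).
event_ids = torch.tensor([[101, 2045, 3190, 88, 412]])

with torch.no_grad():
    outputs = model(input_ids=event_ids)

# Pool the final hidden states into one fixed-length patient vector.
patient_repr = outputs.last_hidden_state.mean(dim=1)  # shape: (1, hidden_size)
print(patient_repr.shape)
</code>

In practice, such pooled vectors would typically serve as frozen features for a lightweight downstream model, for example a logistic-regression head trained separately for each clinical prediction task.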
  