foundationmodels [2024/03/07 17:51] nigam
  * Easing the Cost of Developing, Deploying, and Maintaining AI in Hospitals

See the full post at [[https://hai.stanford.edu/news/how-foundation-models-can-advance-ai-healthcare|How Foundation Models Can Advance AI in Healthcare]]. To support the claims made in the post, we have built and released two foundation models:
  - [[https://clmbr.stanford.edu/|CLMBR (clinical language modeling based representations)]] is a 141 million parameter autoregressive foundation model pretrained on 2.57 million deidentified EHRs from Stanford Medicine. The model is based on the CLMBR architecture originally described in [[https://www.sciencedirect.com/science/article/pii/S1532046420302653|Steinberg et al. 2021]]. As input, it expects a sequence of coded medical events that have been mapped to Standard Concepts within the OMOP-CDM vocabulary. The model generates representations of patients which can then be used for downstream prediction tasks. Such patient representation schemes enable a 3.5% mean improvement in AUROC on five prediction tasks compared to standard baselines, with the average improvement rising to 19% when only a small number of patient records are available for training the clinical prediction model. The model is available at https://huggingface.co/StanfordShahLab/clmbr-t-base.
  - [[https://goto.stanford.edu/motor|MOTOR (Many Outcome Time Oriented Representations)]] is a self-supervised, time-to-event (TTE), 143 million parameter foundation model pretrained on timestamped sequences of events in 55 million electronic health records (EHRs) comprising 9 billion clinical events. We evaluate MOTOR's performance on 19 tasks across 3 patient databases (a private EHR system, MIMIC-IV, and Merative claims data). Task-specific models adapted from MOTOR improve time-dependent C statistics by 4.6% over the state of the art, improve label efficiency by up to 95%, and are more robust to temporal distribution shifts. The model is available at https://huggingface.co/StanfordShahLab/motor-t-base.
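To illustrate how frozen patient representations feed a downstream prediction task (the "linear probe" setup described for CLMBR above), here is a minimal sketch using entirely synthetic embeddings and labels — the array shapes, labels, and data are invented for illustration and do not come from the actual CLMBR pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for patient representations: 200 patients,
# 16-dimensional embeddings, one binary outcome label per patient.
# (Illustrative only; real embeddings come from the pretrained model.)
n, d = 200, 16
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w + 0.1 * rng.normal(size=n) > 0).astype(float)

# Linear probe: logistic regression trained by gradient descent on the
# frozen embeddings, the usual way such representations are adapted to a
# downstream clinical prediction task.
w = np.zeros(d)
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probabilities
    w -= lr * (X.T @ (p - y)) / n        # gradient of log-loss

pred = (X @ w > 0).astype(float)
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Because only the small probe is trained while the representation stays frozen, this setup needs far less labeled data — which is the regime where the 19% improvement with few patient records applies.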
| |
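The time-dependent C statistic reported for MOTOR is a concordance measure for time-to-event predictions. As a rough illustration of the underlying idea, the sketch below computes Harrell's C-index (a simpler, non-time-dependent variant) on fully synthetic risk scores, event times, and censoring indicators — none of this data or code comes from the MOTOR evaluation itself:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a TTE evaluation: each patient has a risk
# score (as an adapted survival model would produce), an observed
# time, and an event indicator (True = event, False = censored).
n = 300
risk = rng.normal(size=n)
# Higher risk -> shorter time, plus noise; ~30% of patients censored.
time = np.exp(-risk + 0.5 * rng.normal(size=n))
event = rng.random(n) < 0.7

def harrell_c(risk, time, event):
    """Harrell's concordance index: among comparable pairs (the
    earlier time ended in an observed event), the fraction where the
    earlier-failing patient also has the higher predicted risk;
    tied risks count half."""
    concordant, comparable = 0.0, 0
    for i in range(len(time)):
        if not event[i]:
            continue  # censored patients cannot anchor a pair
        for j in range(len(time)):
            if time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

c = harrell_c(risk, time, event)
print(f"C-index: {c:.2f}")
```

A C of 0.5 is chance-level ranking and 1.0 is perfect; the time-dependent variant used in the MOTOR evaluation additionally lets the concordance vary with the prediction horizon.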