clmbr [2023/12/13 10:47] (current) nigam
====== CLMBR ======

CLMBR (clinical language modeling based representations) is a 141 million parameter autoregressive foundation model pretrained on 2.57 million deidentified EHRs from Stanford Medicine. This is the model from ([[https:// ]]).
As input, this model expects a sequence of coded medical events that have been mapped to Standard Concepts within the OMOP-CDM vocabulary. The model generates representations of patients, which can then be used for downstream prediction tasks.
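To make the downstream-prediction step concrete, here is a minimal sketch of fitting a linear probe on top of patient representations. The representation matrix below is a random stand-in (the page does not document the model's API or embedding size, so the dimension of 768 and the synthetic labels are assumptions for illustration); in practice each row would be the representation CLMBR produces for one patient's coded event sequence.

```python
# Hedged sketch: a linear probe over precomputed patient representations.
# X is a random stand-in for CLMBR outputs; y is a synthetic binary label.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_patients, dim = 500, 768  # dim is an assumption, not the model's documented size

# Stand-in patient representations and a synthetic label correlated with one feature
X = rng.normal(size=(n_patients, dim))
y = (X[:, 0] + 0.5 * rng.normal(size=n_patients) > 0).astype(int)

# Fit a logistic-regression probe and evaluate on held-out patients
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"held-out AUROC: {auc:.3f}")
```

The design point this illustrates is that the foundation model is frozen: only a lightweight task-specific head is trained per downstream prediction task, so many tasks can reuse the same representations.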
For details see -- https://