CLMBR (clinical language modeling based representations)

This is a 141-million-parameter autoregressive foundation model pretrained on 2.57 million deidentified EHRs from Stanford Medicine. It is the model from Wornow et al. (2023), and is based on the CLMBR architecture originally described in Steinberg et al. (2021).

As input, this model expects a sequence of coded medical events that have been mapped to Standard Concepts within the OMOP-CDM vocabulary. The model generates representations of patients which can then be used for downstream prediction tasks.
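The expected input format can be sketched in a few lines of Python. This is an illustrative toy, not the model's actual tokenizer or API: the concept codes are real OMOP-style Standard Concept identifiers chosen as plausible examples, and the tiny vocabulary stands in for the model's real one.

```python
# Hypothetical sketch of the input CLMBR consumes: a patient's timeline of
# coded medical events, each mapped to a Standard Concept in the OMOP-CDM
# vocabulary. The codes and the toy vocabulary below are illustrative only.

# A patient's timeline as (timestamp, OMOP standard concept code) pairs.
timeline = [
    ("2019-03-01", "SNOMED/44054006"),  # type 2 diabetes mellitus
    ("2019-03-01", "RxNorm/860975"),    # a metformin product (illustrative)
    ("2020-07-15", "LOINC/4548-4"),     # hemoglobin A1c measurement
]

# Toy code-to-token-ID vocabulary standing in for the model's real one.
vocab = {code: idx for idx, (_, code) in enumerate(timeline)}

def encode(timeline, vocab):
    """Map a coded event timeline to the integer ID sequence a model consumes."""
    return [vocab[code] for _, code in timeline]

token_ids = encode(timeline, vocab)
print(token_ids)  # [0, 1, 2]
```

In practice the mapping from raw EHR codes to Standard Concepts and token IDs is handled by the tooling distributed with the model (see the Hugging Face model card); the sketch only shows the shape of the data.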

For details, see https://huggingface.co/StanfordShahLab/clmbr-t-base

clmbr.txt · Last modified: 2023/12/13 10:47 by nigam