Responsible AI in Healthcare

In healthcare, “Standard AI” models estimate the risk that a patient has an underlying condition or will develop it in the future. Whether a model is useful depends on the interplay between the model's output, the intervention it triggers, and the intervention’s benefits and harms. We study this interplay to bring AI to the clinic safely, cost-effectively, and ethically, and to inform the work of the Data Science Team at Stanford Healthcare.
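
As a back-of-the-envelope illustration (a minimal sketch with made-up benefit and harm numbers, not values from any deployed model), the expected utility of acting on a prediction can be written in terms of the predicted risk, the benefit of intervening on a patient who has the condition, and the harm of intervening on one who does not:

<code python>
# Sketch: expected utility of triggering an intervention on a model's
# prediction. The benefit/harm values are illustrative placeholders.

def expected_utility(p_risk, benefit_if_sick=1.0, harm_if_healthy=0.2):
    """Expected utility of intervening on a patient whose predicted
    probability of having the condition is p_risk."""
    return p_risk * benefit_if_sick - (1.0 - p_risk) * harm_if_healthy

# Intervening is worthwhile only when the expected utility is positive,
# i.e. when p_risk exceeds harm / (benefit + harm).
threshold = 0.2 / (1.0 + 0.2)
print(expected_utility(0.30), threshold)   # 0.16 vs. threshold ~0.167
</code>

The same structure underlies how decision thresholds are chosen: the threshold follows from the intervention's benefit-to-harm trade-off, not from model accuracy alone.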

Blog posts at HAI summarize our work in an easily accessible manner. Our research stemmed from an effort to improve palliative care using machine learning. Ensuring that machine learning models are clinically useful requires estimating the hidden deployment costs of predictive models, quantifying the impact of work-capacity constraints on achievable benefit, estimating individualized utility, and learning optimal decision thresholds. Pre-empting ethical challenges often requires keeping humans in the loop and examining the consequences of model-guided decision making in the presence of clinical care guidelines.
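
The effect of a work-capacity constraint can be illustrated with a small simulation (synthetic risk scores and a hypothetical capacity, not data from our studies): once the care team can act on only a fixed number of patients, the achievable benefit comes from the highest-risk patients that fit within that capacity, not from everyone above the decision threshold.

<code python>
# Sketch: how a work-capacity constraint limits achievable benefit.
# The risk scores, threshold, and capacity below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
risks = rng.beta(2, 8, size=1000)        # synthetic predicted risks
threshold = 0.30                         # chosen decision threshold
capacity = 25                            # patients the team can act on

flagged = np.where(risks >= threshold)[0]            # unconstrained policy
order = flagged[np.argsort(risks[flagged])[::-1]]    # highest risk first
acted_on = order[:capacity]                          # capacity-constrained policy

# Using predicted risk as a stand-in for the true event rate, the expected
# number of true positives reached shrinks under the constraint.
print(len(flagged), risks[flagged].sum(), risks[acted_on].sum())
</code>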



Given the high interest in using large language models (LLMs) in medicine, their creation and use need to be actively shaped by provisioning relevant training data, specifying the desired benefits, and evaluating those benefits through testing in real-world deployments.

We build clinical foundation models such as CLMBR and MOTOR and verify benefits such as robustness across time, populations, and sites. In addition, we release de-identified datasets such as EHRSHOT for few-shot evaluation of foundation models and for benchmarking instruction following by commercial LLMs (MedAlign). We also conduct research to assess whether commercial language models meet real-world needs.
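
As a rough sketch of what few-shot evaluation on frozen representations looks like (the file names and the value of k are hypothetical placeholders, not part of the EHRSHOT release or its benchmark code), one can fit a small linear probe on k labeled examples per class and score the remaining patients:

<code python>
# Sketch: few-shot ("k-shot") evaluation of frozen patient representations.
# patient_embeddings.npy and task_labels.npy are hypothetical file names.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

embeddings = np.load("patient_embeddings.npy")   # (n_patients, dim)
labels = np.load("task_labels.npy")              # (n_patients,) in {0, 1}

k = 16                                           # labeled examples per class
pos = np.where(labels == 1)[0][:k]
neg = np.where(labels == 0)[0][:k]
train_idx = np.concatenate([pos, neg])
test_idx = np.setdiff1d(np.arange(len(labels)), train_idx)

# Light-weight linear probe on top of the frozen foundation-model embeddings.
clf = LogisticRegression(max_iter=1000).fit(embeddings[train_idx], labels[train_idx])
auc = roc_auc_score(labels[test_idx], clf.predict_proba(embeddings[test_idx])[:, 1])
print(f"k={k} shots per class, AUROC={auc:.3f}")
</code>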


Russ Altman and Nigam Shah taking an in-depth look at the growing influence of “data-driven medicine.”
