====== Stanford Medicine Program for Artificial Intelligence in Healthcare ======

In healthcare, predictive models play a role not unlike that of blood tests, X-rays, or MRIs: they influence decisions about whether an intervention is appropriate. Whether a model is useful depends on the interplay between the model's output, the intervention it triggers, and the intervention's benefits and harms. We are working on a set of efforts collectively referred to as the [[https://stanfordhealthcare.org/stanford-health-now/ceo-report/advancing-precision-health-takes-real-smarts.html|Stanford Medicine Program for Artificial Intelligence in Healthcare]], with the mission of bringing AI technologies to the clinic safely, cost-effectively, and ethically.

{{::model-interplay.png?nolink&400|}}

Our research evolved from an [[http://stanmed.stanford.edu/2018summer/artificial-intelligence-puts-humanity-health-care.html|effort]] to improve palliative care using machine learning. [[https://jamanetwork.com/journals/jama/fullarticle/2748179?guestAccessKey=8cef0271-616d-4e8e-852a-0fddaa0e5101|Ensuring that machine learning models are clinically useful]] requires [[https://www.nature.com/articles/s41591-019-0651-8|estimating the hidden deployment cost of predictive models]], quantifying the [[http://academic.oup.com/jamia/article/28/6/1149/6045012|impact of work capacity constraints]] on achievable benefit, estimating [[https://www.sciencedirect.com/science/article/pii/S1532046421001544|individualized utility]], and learning [[https://pubmed.ncbi.nlm.nih.gov/34350942/|optimal decision thresholds]]. Pre-empting [[https://www.nejm.org/doi/full/10.1056/NEJMp1714229|ethical challenges]] often requires keeping [[https://hai.stanford.edu/news/when-algorithmic-fairness-fixes-fail-case-keeping-humans-loop|humans in the loop]].
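
The toy sketch below illustrates this benefit/harm accounting: it computes the expected per-patient benefit of intervening on every positive prediction at a given operating point. The function, its parameters, and all numbers are illustrative assumptions, not methods or results from the papers linked above.

<code python>
# A minimal, illustrative sketch (not the program's actual method) of how a model's
# usefulness depends on the interplay between its output, the intervention it
# triggers, and that intervention's benefits and harms.

def expected_benefit_per_patient(sensitivity, specificity, prevalence,
                                 benefit_tp, harm_fp):
    """Expected net benefit, per patient screened, of intervening on every
    positive prediction at a given operating point."""
    tp_rate = sensitivity * prevalence              # patients correctly flagged
    fp_rate = (1 - specificity) * (1 - prevalence)  # patients unnecessarily flagged
    return tp_rate * benefit_tp - fp_rate * harm_fp

# Hypothetical numbers: 70% sensitivity, 90% specificity, 5% event rate,
# a benefit of 1.0 (arbitrary utility units) per treated true positive,
# and a harm/cost of 0.05 per treated false positive.
print(expected_benefit_per_patient(0.70, 0.90, 0.05, benefit_tp=1.0, harm_fp=0.05))

# Sweeping operating points and keeping the one that maximizes this quantity is
# one simple way to frame "learning optimal decision thresholds".
</code>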

----

{{youtube>GNTIoEADfY4?small | Artificial Intelligence transforms health care}}

Russ Altman and Nigam Shah take an in-depth look at the growing influence of “data-driven medicine.”

----

{{youtube>gQu2HbusrGQ?small&start=39 | Keeping the Human in the Loop for Equitable and Fair Use of ML in Healthcare}}

Keeping the Human in the Loop for Equitable and Fair Use of ML in Healthcare, at AIMiE 2018

----

{{youtube>xW3drA3ijRc?small | Building a Machine Learning Healthcare System, at XLDB 2018}}

Building a Machine Learning Healthcare System, at XLDB, April 30, 2018