===== Making machine learning models clinically useful =====
  
Whether a classifier or prediction [[ https://jamanetwork.com/journals/jama/article-abstract/2748179 | model is useful]] in guiding care depends on the interplay between the model's output, the intervention it triggers, and the intervention's benefits and harms. Our work stemmed from the effort [[http://stanmed.stanford.edu/2018summer/artificial-intelligence-puts-humanity-health-care.html|to improve palliative care]] using machine learning. [[https://www.tinyurl.com/hai-blogs | Blog posts at HAI]] summarize our work in an easily accessible manner.
  
{{  :model-interplay.png?400&nolink&  }}
We study how to quantify the [[https://www.sciencedirect.com/science/article/pii/S1532046423000400|impact of work capacity constraints]] on achievable benefit, estimate [[https://www.sciencedirect.com/science/article/pii/S1532046421001544|individualized utility]], and learn [[https://pubmed.ncbi.nlm.nih.gov/34350942/|optimal decision thresholds]]. We question conventional wisdom on whether models [[https://tinyurl.com/donot-explain | need to be explainable]] and [[https://www.nature.com/articles/s41591-023-02540-z |generalizable]]. We examine whether the consequences of [[https://hai.stanford.edu/news/when-algorithmic-fairness-fixes-fail-case-keeping-humans-loop | algorithm-guided care are fair]] and how to [[https://hai.stanford.edu/news/how-do-we-ensure-healthcare-ai-useful | ensure that healthcare models are useful]]. We study this interplay to guide the work of the [[https://dsatshc.stanford.edu/ | Data Science Team at Stanford Healthcare]].
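The interplay of decision thresholds and capacity constraints can be sketched in a few lines of Python. This is a minimal illustration, not the method of any paper linked above: the utility values, patient counts, and capacity are all hypothetical, and the threshold is the standard expected-utility break-even point at which intervening and not intervening have equal expected value.

```python
import numpy as np

# Hypothetical utilities for each outcome of an intervention decision
# (illustrative values, not drawn from the cited papers).
U_TP, U_FP, U_TN, U_FN = 0.8, -0.1, 0.0, -1.0

# Treat when predicted risk p satisfies
#   p*U_TP + (1-p)*U_FP >= p*U_FN + (1-p)*U_TN,
# which rearranges to the break-even threshold below.
threshold = (U_TN - U_FP) / ((U_TN - U_FP) + (U_TP - U_FN))

rng = np.random.default_rng(0)
risks = rng.uniform(size=1000)  # model-predicted risks for 1000 patients

# Unconstrained policy: act on everyone above the threshold.
flagged = risks >= threshold

# Capacity-constrained policy: the care team can act on only 50 patients,
# so act on the 50 highest-risk ones.
capacity = 50
top_k = np.argsort(risks)[::-1][:capacity]
```

The gap between `flagged.sum()` and `capacity` is one way to see why work-capacity constraints, not just model accuracy, determine the benefit a deployed model can actually deliver.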
  
  
  
  
rail.txt · Last modified: 2024/05/02 17:27 by acallaha