====== Standing on FURM ground - A framework for evaluating Fair, Useful, and Reliable AI Models in healthcare systems ======

The impact of using artificial intelligence (AI) to guide patient care or operational processes is an interplay of the AI model’s output, the decision-making protocol based on that output, and the capacity of the stakeholders involved to take the necessary subsequent action. Estimating the effects of this interplay before deployment, and studying it in real time afterwards, are essential to bridge the chasm between AI model development and achievable benefit. To accomplish this, the Data Science team at Stanford Health Care has developed a mechanism to identify fair, useful, and reliable AI models (FURM) by conducting an ethical review to identify potential value mismatches, simulations to estimate usefulness, financial projections to assess sustainability, as well as analyses to determine IT feasibility, design a deployment strategy, and recommend a prospective monitoring and evaluation plan. We report on FURM assessments done to evaluate six AI-guided solutions for potential adoption, spanning clinical and operational settings, each with the potential to impact from several dozen to tens of thousands of patients each year. We describe the assessment process, summarize the six assessments, and share our framework to enable others to conduct similar assessments. Of the six solutions we assessed, two have moved into a planning and implementation phase. Our novel contributions – usefulness estimates by simulation, a process to do ethical assessments, and financial projections to quantify sustainability – as well as their underlying methods and open source tools, are available for other healthcare systems to conduct actionable evaluations of candidate AI solutions.

  * arXiv abstract – http://arxiv.org/abs/2403.07911
  * direct link to PDF – https://arxiv.org/ftp/arxiv/papers/2403/2403.07911.pdf
  * Underlying tools – https://tinyurl.com/Stanford-APLUS
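
As a rough, self-contained illustration of what "usefulness estimates by simulation" means in practice, the Python sketch below simulates the interplay the abstract describes: a model flags patients, a decision protocol works the flag list, and a capacity-limited team acts on it. This is a toy, not the APLUS tooling linked above; every parameter (daily volume, prevalence, sensitivity, specificity, team capacity) is an invented placeholder.

<code python>
import random

random.seed(0)

# Hypothetical, invented parameters -- placeholders, not FURM/APLUS values.
DAYS = 365
PATIENTS_PER_DAY = 100   # patients screened by the model each day
PREVALENCE = 0.05        # fraction who would truly benefit from action
SENSITIVITY = 0.80       # P(model flags | patient needs intervention)
SPECIFICITY = 0.90       # P(model silent | patient does not need it)
DAILY_CAPACITY = 8       # follow-ups the care team can complete per day

flags_raised = 0
patients_helped = 0

for _ in range(DAYS):
    # Model output: flag each patient according to the error rates above.
    flagged = []
    for _ in range(PATIENTS_PER_DAY):
        needs_action = random.random() < PREVALENCE
        if needs_action:
            flag = random.random() < SENSITIVITY   # true positive
        else:
            flag = random.random() > SPECIFICITY   # false positive
        if flag:
            flagged.append(needs_action)
    flags_raised += len(flagged)

    # Decision protocol: work the flag list until capacity runs out.
    for needs_action in flagged[:DAILY_CAPACITY]:
        if needs_action:
            patients_helped += 1

print(f"flags raised per year:    {flags_raised}")
print(f"flags team could work:    {min(flags_raised, DAYS * DAILY_CAPACITY)}")
print(f"patients helped per year: {patients_helped}")
</code>

Even in this toy setting, the number of patients actually helped is bounded by the team's capacity to act rather than by the model's accuracy alone, which is the kind of effect the FURM simulations are meant to surface before deployment.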
  
  