To address the gaps found, we developed [[https://academic.oup.com/jamia/article/28/6/1149/6045012 | a framework to estimate usefulness]] and a way to assess fairness in terms of the [[https://informatics.bmj.com/content/29/1/e100460 | consequences of using a model to guide care]]. To translate this to practice, we conducted a [[https://www.frontiersin.org/articles/10.3389/fdgth.2022.943768/full | fairness audit]], which required 115 person-hours over 8–10 months. To disseminate our work, we joined the founding team of [[https://www.coalitionforhealthai.org/ | The Coalition for Health AI]], whose mission is to provide guidelines for an ever-evolving landscape of health AI tools to ensure high-quality care, increase credibility amongst users, and meet health care needs.

The research that underpins our work is supported by the [[:aihc | Stanford Medicine Program for AI in Healthcare]], and the framework guiding the development and evaluation of **F**air, **U**seful, and **R**eliable **M**odels (**FURM**) is below.

{{ ::furm-4-steps.png?nolink&600 |}}

The four stages are: 1) problem specification and clarification, 2) development and validation of the model, 3) analysis of utility and impacts on the clinical workflow that is triggered by the model, and 4) monitoring and maintenance of the deployed model, as well as evaluation of the running system comprised of the model-triggered workflow.