====== Stanford Medicine Program for Artificial Intelligence in Healthcare ======

Consider the case of Vera, a 60-year-old woman of Asian descent with a history of hypertension and asthma, entering the doctor's office with symptoms consistent with a diagnosis of pneumonia. Her primary care physician must diagnose and treat the acute illness, but also manage risk for chronic diseases such as heart attack, stroke, renal failure, and osteoporosis. Ideally, treatment decisions for Vera are guided by risk stratification to decide whether to treat, and by evidence-based selection of how to treat (perhaps [[:inf-consult | learning from similar patients]]).

However, without accurate risk stratification to decide whom to treat, and when, Vera's care remains reactive and suboptimal. Imagine how Vera's experience would change if we could predict the risks of specific events and take proactive action.

====== Our efforts ======

We are working on a set of efforts, collectively referred to as the [[https://stanfordhealthcare.org/stanford-health-now/ceo-report/advancing-precision-health-takes-real-smarts.html | Stanford Medicine Program for Artificial Intelligence in Healthcare]], with the mission of bringing AI technologies to the clinic safely, cost-effectively, and ethically. {{ ::shc_program_for_ai_in_healthcare.pdf |See brochure}}. The four key components are:
  - **Training**: We pair informatics experts with clinician domain experts, who provide guidance on the clinical workflow and input on what they would do differently if they had a prediction in hand.
  - **Implementation**: We partner with analytics and IT teams at Stanford Hospital to deploy predictive models in care delivery workflows. See our [[http://stanmed.stanford.edu/2018summer/artificial-intelligence-puts-humanity-health-care.html | first effort]] in improving palliative care.
  - **Rethinking Utility**: The clinical utility of making a prediction and acting on it depends on factors beyond model accuracy, such as the lead time offered by the prediction, the existence of a mitigating action, the cost and ease of intervening, the logistics of the intervention, and the incentives of the providers. We are creating a utility assessment framework that examines prediction-action pairs.
  - **Safety, ethics, and health system effects**: We map the multiple groups involved in executing a prediction-action pair and study their varying perspectives, positions, stakes, and commitments to pre-empt ethical challenges. We believe that the use of AI can lead to good decisions if we keep human intelligence in the loop. Read our perspective on [[https://www.nejm.org/doi/full/10.1056/NEJMp1714229 | addressing ethical challenges]].

If done right, the adoption of efficacious prediction-action pairs can massively improve a health system's ability to find patients at risk and act early. As part of this effort, we search for clinical situations where risk stratification and proactive action can provide cost-effective, health-system-level benefits.
paihc.1563287380.txt.gz · Last modified: 2019/07/16 07:29 by nigam