paihc [2019/03/05 17:08] nigam
paihc [2020/09/21 10:59] nigam
====== Stanford Medicine Program for Artificial Intelligence in Healthcare ======
Consider the case of Vera, a 60-year-old woman of Asian descent with a history of hypertension and asthma, entering the doctor's office.
However, without support to decide who to treat, and when, Vera's care remains reactive and suboptimal. Imagine how Vera's experience would change if we could predict the risks of specific events and take proactive action.
====== Our efforts ======
We are working on a set of efforts, collectively referred to as the Stanford Medicine Program for Artificial Intelligence in Healthcare, with the mission of bringing AI technologies to the clinic safely, cost-effectively, and ethically. Our work spans four areas:
  - **Training**
  - **Human-in-the-loop implementations**
  - **Utility assessment**
  - **Safety, ethics, and health system effects**: We map the multiple groups involved in executing a prediction-action loop and study their varying perspectives.
If done right, the adoption of efficacious prediction-action loops can massively improve the ability of a health system to find patients at risk and act early. As part of this effort, we search for clinical situations where risk stratification and proactive action can provide cost-effective care.