Stanford Medicine Program for Artificial Intelligence in Healthcare

We are working on a set of efforts collectively referred to as the Stanford Medicine Program for Artificial Intelligence in Healthcare, with the mission of bringing AI technologies to the clinic safely, cost-effectively, and ethically. See the brochure. The four key components are:

  1. Implementation: We partner with the Data Science team in Technology and Digital Solutions at Stanford Healthcare to deploy predictive models in care delivery workflows. See our effort in improving palliative care and its coverage in Statnews.
  2. Ensuring that models are useful: The utility of making a prediction and acting on it depends on factors beyond model accuracy, such as the lead time the prediction offers, the existence of a mitigating action, the cost and ease of intervening, the logistics of the intervention, and the incentives for intervening.
    1. Read about our views on the need for rigorous evaluation and guardrails when creating and using clinical AI tools, and how explainability is overrated.
    2. Read about our framework for quantifying the impact of work capacity constraints on achieved benefit, estimating individualized utility, and learning optimal decision thresholds from aggregate clinician behavior.
  3. Safety, ethics, and health system effects: We map the multiple groups involved in taking action in response to a prediction and study their varying perspectives, positions, stakes, and commitments to pre-empt ethical challenges. Read our perspective on addressing ethical challenges. We believe that the use of AI can lead to good decisions if we keep human intelligence in the loop.
  4. Training and Partnerships: We partner with multiple groups to figure out “what would we do differently” if we had a prediction from a model, and to investigate the pros and cons of using AI to guide care. For example, we examined the accuracy-vs-coverage trade-off in patient-facing diagnosis models, and partnered with Google on efforts to enable scalable and accurate deep learning with electronic health records. Stanford students, check out the AI for Healthcare Bootcamp.
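The interplay of decision thresholds and work-capacity constraints described above can be made concrete with a toy calculation. The sketch below is illustrative only and is not the program's actual methodology; the function name, the benefit/cost values, and the patient data are all hypothetical assumptions introduced for this example.

```python
# Hedged sketch: achieved benefit of a prediction-action pair at a given
# decision threshold, when a capacity cap limits how many flagged patients
# the care team can actually act on. All numbers are illustrative.

def achieved_benefit(risk_scores, labels, threshold, capacity,
                     benefit_per_true_positive=1.0, cost_per_intervention=0.2):
    """Flag patients whose risk score meets `threshold`, intervene on at most
    `capacity` of the highest-risk flagged patients, and tally net benefit."""
    flagged = sorted(
        (i for i, s in enumerate(risk_scores) if s >= threshold),
        key=lambda i: risk_scores[i],
        reverse=True,
    )
    acted_on = flagged[:capacity]  # the capacity constraint truncates the list
    true_positives = sum(labels[i] for i in acted_on)
    return (benefit_per_true_positive * true_positives
            - cost_per_intervention * len(acted_on))

# Toy cohort: 6 patients, but capacity for only 2 interventions.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1, 0, 1, 0, 1, 0]  # 1 = patient would have the adverse outcome
print(achieved_benefit(scores, labels, threshold=0.5, capacity=2))  # → 0.6
```

Note that with capacity for 3 interventions the same threshold would capture a second true positive; this is why, as argued above, a threshold chosen purely from model accuracy can differ from the one that maximizes achieved benefit under real workload limits.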

If done right, the adoption of efficacious prediction-action pairs can massively improve a health system's ability to find patients at risk and act early. As part of this effort, we search for clinical situations where risk stratification and proactive action can provide cost-effective, health-system-level benefits.


Russ Altman and Nigam Shah taking an in-depth look at the growing influence of “data-driven medicine.”


Keeping the Human in the Loop for Equitable and Fair Use of ML in Healthcare, at AIMiE 2018


Building a Machine Learning Healthcare System, at XLDB, April 30, 2018

aihc.1649371802.txt.gz · Last modified: 2022/04/07 15:50 by nigam