  * **Safety, ethics, and health system effects**: We map the multiple groups involved in taking action in response to a prediction and study their varying perspectives, positions, stakes, and commitments to pre-empt ethical challenges. Read our perspective on [[https://www.nejm.org/doi/full/10.1056/NEJMp1714229|addressing ethical challenges]]. We believe that the use of AI can lead to good decisions if we keep [[https://hai.stanford.edu/news/when-algorithmic-fairness-fixes-fail-case-keeping-humans-loop|human intelligence in the loop]].
  * **Training and Partnerships**: We partner with multiple groups to figure out "what would we do differently" if we had a prediction from a model, and to investigate the pros and cons of using AI to guide care. For example, we [[https://www.researchgate.net/publication/341829909_The_accuracy_vs_coverage_trade-off_in_patient-facing_diagnosis_models|examined the accuracy vs. coverage trade-off in patient-facing diagnosis models]], and partnered with Google on efforts to enable [[https://www.nature.com/articles/s41746-018-0029-1|scalable and accurate deep learning with electronic health records]]. Stanford students, check out the [[https://stanfordmlgroup.github.io/programs/aihc-bootcamp/|AI for Healthcare Bootcamp]].
We believe that the adoption of efficacious prediction-action pairing can massively improve the ability of a health system to find patients at risk and act early. As part of this effort, we search for clinical situations where risk stratification and proactive action can provide cost-effective, health-system-level benefits.
----