Project Summary

Increased availability of healthcare data collected during patient care, combined with advances in artificial intelligence (AI) methods, has made healthcare organizations increasingly interested in how AI tools can be used to improve care, ease clinical staff workload and improve efficiency. However, existing uses of AI, both in health care and in other settings, have made clear that significant ethical problems can arise. These issues include bias (uneven model performance for different groups of patients), unintended use and overreliance by human users on model output, as well as questions about informed consent, communication of model results, privacy and conflicts of interest.

Identifying ethical concerns with healthcare AI tools before problems arise has become a stated goal of design oversight groups and regulatory agencies, such as the US Food and Drug Administration. Yet the lack of an accepted, workable methodology for ethical analysis of proposed uses of AI in health care is a critical obstacle to achieving this goal. Other promising medical technologies, like gene therapy, have led to patient harm in clinical research because of failures to identify and address ethical concerns early. If unaddressed, ethical problems with AI tools could threaten the ability to ensure that all patients benefit from safe, useful, equitable applications of AI in their care.

The study team developed a structured assessment process called FURM (Fair, Useful, Reliable Model) in response to a request to provide ethical review of AI tools proposed for use across Stanford Health Care. Building on FURM, the team will develop a robust, practical method for ethical review of healthcare AI tools that a variety of healthcare organizations can employ to identify ethical concerns before they become consequential.

The study’s aims are to: 

  1. Develop a timely, practicable process for healthcare organizations to identify and address ethical concerns with proposed uses of AI in patient care. Building on pilot work at Stanford, the study team will develop and deploy a rapid-cycle, structured process for eliciting and comparing the perspectives of patients and other stakeholders on potential ethical and values issues arising from healthcare AI use cases (proposed applications of an AI-based model to solve a particular clinical, diagnostic or operational problem). For each use case, researchers will identify “values collisions” among stakeholder groups, use them to identify high-priority ethical issues and make recommendations for resolving these issues. They will then assess and refine the ethical review process to make it as robust, practical and useful as possible for other healthcare organizations. Researchers will also identify what different use cases have in common, to help streamline ethical review for future use cases, and will develop a playbook to guide healthcare organizations in reviewing AI.
  2. Develop and share a computer modeling tool that measures biases in healthcare AI that arise from multiple sources. To determine whether particular uses of healthcare AI might benefit (or harm) some subgroups of patients more than others, the team will develop a novel computer tool called FairFlow that simulates how an AI model is likely to affect patients in a particular setting. While most existing bias measurements focus on the performance of the model itself, this tool will integrate information about biases that can arise because of how models are deployed (i.e., how care providers use the model within the flow of their daily work) and because of the specific mix of patients affected. The research team will identify subgroups of patients who may be at increased risk of not sharing equitably in the benefits of a particular AI use and measure differences in outcomes for these subgroups. The team will then use FairFlow’s findings to explore strategies that the teams implementing AI can use to reduce any biases identified. 
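The core idea behind the second aim, that deployment-level factors can widen or narrow a model's performance gap between patient subgroups, can be illustrated with a small simulation. This is a hypothetical sketch for illustration only, not the FairFlow tool itself: the subgroup names, sensitivity values and follow-up rates below are invented assumptions, and the real tool would draw on site-specific model and workflow data.

```python
import random

random.seed(0)

def simulate_outcomes(n, model_sensitivity, followup_rate):
    """Fraction of at-risk patients in a subgroup who actually receive an
    intervention: the model must flag them AND a clinician must act on the
    flag. The second factor is a crude stand-in for workflow-dependent
    (deployment) bias, separate from model performance itself."""
    treated = 0
    for _ in range(n):
        flagged = random.random() < model_sensitivity  # model performance
        acted = random.random() < followup_rate        # workflow effect
        if flagged and acted:
            treated += 1
    return treated / n

# Hypothetical numbers: the model is slightly less sensitive for subgroup B,
# and clinicians in B's care setting follow up on its flags less often.
rate_a = simulate_outcomes(10_000, model_sensitivity=0.85, followup_rate=0.90)
rate_b = simulate_outcomes(10_000, model_sensitivity=0.80, followup_rate=0.75)

print(f"Subgroup A treated: {rate_a:.2f}")
print(f"Subgroup B treated: {rate_b:.2f}")
print(f"Combined model + deployment gap: {rate_a - rate_b:.2f}")
```

Even in this toy setup, the outcome gap between subgroups is larger than the gap in model sensitivity alone, which is why a simulation that integrates deployment behavior and patient mix, rather than model metrics in isolation, is proposed.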

The expected outcome of this research is an ethical assessment methodology, playbook and quantitative tool that healthcare organizations can use to identify and proactively address ethical concerns arising from proposed AI deployments.

Project Information

Danton Char, M.D., M.S.
Stanford University School of Medicine
$1,050,000 *

Key Dates

36 months *
April 2024

*All proposed projects, including requested budgets and project periods, are approved subject to a programmatic and budget review by PCORI staff and the negotiation of a formal award contract.


Last updated: April 23, 2024