Project Summary

Machine learning (ML) algorithms have become important tools for leveraging healthcare data to improve patient outcomes and streamline hospital processes. Nevertheless, concerns remain about the reliability of these algorithms because they can perform poorly in certain patient populations. ML algorithms can also become “outdated,” gradually decaying in performance as the medical system evolves over time. When an ML algorithm’s performance does not match specifications, it is important to understand the major reasons why. Once the main causes are identified, hospitals and data scientists can deploy strategies to close the performance gap, such as updating the ML algorithm and/or its input data.
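For intuition, the following is a minimal sketch, not the project’s actual methodology, of how such a performance gap might be quantified: a model trained on a development cohort is re-evaluated on a simulated, shifted deployment cohort, overall and by site. The simulated data, the site subgroup, and the 0.05 alert threshold are all illustrative assumptions.

```python
# Hypothetical sketch: quantify a performance gap by comparing a model's
# AUROC on its development cohort vs. a later, shifted deployment cohort.
# The drift mechanism, subgroup column, and alert threshold are illustrative.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_cohort(n, drift=0.0):
    """Simulate a cohort; `drift` shifts the feature-outcome relationship."""
    X = rng.normal(size=(n, 3))
    logits = X @ np.array([1.0, -0.5, 0.8 - drift]) + drift * X[:, 0] ** 2
    y = rng.binomial(1, 1 / (1 + np.exp(-logits)))
    site = rng.integers(0, 2, size=n)  # e.g., two hospital sites
    return pd.DataFrame(X, columns=["x1", "x2", "x3"]).assign(y=y, site=site)

dev = make_cohort(5000, drift=0.0)     # development data
deploy = make_cohort(5000, drift=0.7)  # later, shifted deployment data

model = LogisticRegression().fit(dev[["x1", "x2", "x3"]], dev["y"])

def auroc(df):
    """AUROC of the fitted model on a given cohort."""
    return roc_auc_score(df["y"], model.predict_proba(df[["x1", "x2", "x3"]])[:, 1])

gap = auroc(dev) - auroc(deploy)
print(f"overall AUROC gap: {gap:.3f}")
for site, grp in deploy.groupby("site"):
    print(f"site {site}: deployment AUROC = {auroc(grp):.3f}")
if gap > 0.05:  # hypothetical alert threshold
    print("performance gap exceeds specification; investigate causes")
```

A monitoring check like this flags that a gap exists, but it cannot by itself say why; the project summarized here aims to provide precisely those explanations.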

There are currently no tools that provide sufficiently precise explanations for a performance gap. This project will develop new computational tools to address this methodological gap. The methods will be thoroughly validated through theoretical analyses, computer simulations, and evaluation on real-world datasets. The research team will regularly consult its stakeholder engagement team, which includes a clinical informatician, clinicians, regulatory experts, a biostatistician, and a bioethicist. Open-source software for running the methods will be published online.

Methods to Support Innovative Research on AI and Large Language Models Supplement
This study received supplemental funding to build on existing PCORI-funded comparative clinical effectiveness research (CER) methods studies to improve understanding of emerging innovations in large language models (LLMs).

Project Information

Principal Investigator: Jean Feng, PhD, MS
Organization: University of California, San Francisco
Award Amount: $1,049,842

Key Dates

Project Duration: 36 months
Award Date: November 2022
Year Awarded: 2022

Tags

Award Type
State
Last updated: February 28, 2024