In circumstances in which randomized controlled trials are not practical or ethical, large-scale data collected from “real-world” clinical settings are fertile ground for patient-centered outcomes research. Analyses of these databases have important clinical implications for the recommendations clinicians make among therapeutic and surgical interventions. Traditional statistical methods for analyzing clinical databases have generally focused on binary treatment options (e.g., a treatment versus a control). However, patients are often offered several treatment or surgical options, and current methods can be limited in these settings, particularly with respect to the assumptions underlying study conclusions. Ideally, researchers working with these databases would be equipped to contrast several pairs of treatments simultaneously to identify an optimal choice, and to better understand whether and how unmeasured factors could affect results.
In this project, we propose a new approach that leverages Bayesian machine learning techniques to estimate the effects of multiple treatments. Through extensive simulation studies, we examine the operating characteristics of our proposed approach under a variety of contextually motivated settings and compare its performance against existing methods. We further develop a novel, interpretable Bayesian framework to evaluate the sensitivity of our method to unmeasured variables. Finally, we apply the proposed methods for estimating causal effects and handling the underlying assumptions to Surveillance, Epidemiology, and End Results-Medicare (SEER-Medicare) data on lung cancer, comparing the effectiveness of multiple surgical management approaches, including novel approaches using robotic systems, on perioperative mortality and the occurrence of adverse events, an important and emerging question in cancer research.
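The core idea of contrasting several treatments at once can be sketched with a simple regression-adjustment example on simulated data. This is not the proposed Bayesian procedure: as a stand-in for the Bayesian machine-learning outcome model, the sketch below uses gradient boosting, and the covariates, treatment coding, and effect sizes are all invented for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Simulated observational data: three treatment options (0, 1, 2),
# measured confounders x, and true additive treatment effects 0, 1, 2.
rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 3))
t = rng.integers(0, 3, size=n)
true_effect = np.array([0.0, 1.0, 2.0])
y = x[:, 0] + true_effect[t] + rng.normal(scale=0.5, size=n)

# Flexible outcome model: regress y on covariates plus the treatment code.
model = GradientBoostingRegressor(random_state=0)
model.fit(np.column_stack([x, t]), y)

# Predict each patient's outcome under every treatment, then average the
# pairwise contrasts over the full sample (regression adjustment).
potential = np.column_stack(
    [model.predict(np.column_stack([x, np.full(n, a)])) for a in range(3)]
)
ate = {(a, b): (potential[:, a] - potential[:, b]).mean()
       for a in range(3) for b in range(3) if a < b}
```

Here `ate[(a, b)]` estimates the average outcome difference between treatments `a` and `b`; in this simulation it should be close to the true contrasts of -1 and -2. A Bayesian outcome model would additionally yield posterior uncertainty for each contrast, which this frequentist stand-in does not.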
The proposed methods will provide investigators with advanced yet easy-to-implement tools for estimating causal effects from complex healthcare databases and allow them to use their clinical insight to gauge the plausible extent of unobserved confounding and re-evaluate treatment effects accordingly. Additionally, our proposed framework yields coherent estimates of uncertainty for treatment comparisons. We expect to engage a variety of stakeholders, from clinicians to data experts, by making user-friendly tools publicly available and by supplying detailed guidelines for the proposed methods.