Patients, providers, and other stakeholders need to know when they can trust observational analyses (i.e., analyses that do not rely on random experimental treatment assignment) to learn about the effects of different treatments. To determine when observational analyses can be trusted, one commonly used strategy is to compare the results of observational analyses designed to closely emulate existing randomized trials against the results of those trials. We refer to such comparisons as benchmarking the observational analyses against the trial.
Benchmarking comparisons can have important practical implications: if observational analyses can produce estimates with the same interpretation as those of trials for research questions where trials are indeed available, then patients and providers may be able to trust observational analyses enough to rely on them when trials are infeasible or inadequate. Furthermore, when benchmarking comparisons suggest that observational analyses (for a particular clinical topic) can be trusted, it may be desirable to jointly analyze the trial data with the observational data to improve efficiency. For example, joint analysis may be beneficial when examining heterogeneity of treatment effects for critical, patient-relevant outcomes, for which trials are typically underpowered.
However, there is currently no established framework for conducting and interpreting benchmarking comparisons. Furthermore, methods for joint analysis of trial and observational data are not well-developed and have not been carefully tested. Our long-term goal is to support patient-centered outcomes research and care by developing observational analysis methods that can be trusted when trials are infeasible or inadequate. To move toward this goal, we seek to accomplish four Specific Aims:
(1) Develop a framework for benchmarking observational analyses against trials.
(2) Develop statistical methods for joint analysis of trial and observational data.
(3) Evaluate the methods developed under Aims 1 & 2 in simulation studies.
(4) Implement the methods developed for Aims 1 & 2 in two use cases involving comparative effectiveness studies of treatments for myocardial infarction.
To achieve these Aims, with input from two stakeholder panels (a Technical Expert Panel and a Patient & Provider Panel), we propose to develop a comprehensive benchmarking framework by combining state-of-the-science causal inference and statistical methods. The framework will allow investigators to conduct and interpret comparisons between trials and observational analyses designed to emulate the trials.
Building on this framework, we will develop novel methods for the joint analysis of trial and observational data. The methods will be appropriate for outcomes that patients care about, including outcomes assessed at the end of the study (e.g., status at 30 days after treatment), outcomes assessed over time (e.g., repeat measurements of functional status), and time-to-event outcomes (e.g., overall survival or recurrence-free survival).
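To illustrate the general idea (not the specific methods proposed here), a minimal form of joint analysis is inverse-variance pooling of the trial and observational effect estimates after a benchmarking comparison shows no evidence of disagreement. The sketch below assumes the two data sources are independent and uses hypothetical effect estimates, standard errors, and a conventional 1.96 agreement threshold:

```python
import math

def benchmark_z(est_trial, se_trial, est_obs, se_obs):
    # Z-statistic for the difference between the trial and observational
    # estimates (assumes the two estimates are independent).
    return (est_obs - est_trial) / math.sqrt(se_trial**2 + se_obs**2)

def pooled_estimate(est_a, se_a, est_b, se_b):
    # Inverse-variance weighting: a standard way to combine two
    # independent estimates of the same quantity; the pooled standard
    # error is never larger than either input's, so efficiency improves.
    w_a, w_b = 1 / se_a**2, 1 / se_b**2
    est = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    se = math.sqrt(1 / (w_a + w_b))
    return est, se

# Hypothetical numbers: a trial risk difference of -0.030 (SE 0.015)
# and an observational estimate of -0.025 (SE 0.010).
z = benchmark_z(-0.030, 0.015, -0.025, 0.010)
if abs(z) < 1.96:  # no evidence of disagreement at the 5% level
    est, se = pooled_estimate(-0.030, 0.015, -0.025, 0.010)
    print(f"pooled estimate {est:.4f} (SE {se:.4f})")
```

In practice, the proposed methods would go well beyond this simple pooling, accommodating end-of-study, repeated-measures, and time-to-event outcomes; the sketch only shows why agreement in benchmarking can justify combining the data sources.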
Next, we will evaluate the benchmarking and joint analysis methods in large-scale simulation studies that will resemble real-world data structures. Finally, we will implement the methods to analyze data from two large, registry-based trials of treatments for myocardial infarction and their corresponding observational analyses (conducted with registry data, including individuals who were eligible for but did not participate in the trials). These applied analyses have the potential to directly influence clinical practice and will also aid the dissemination of the methods to practitioners.