Results Summary
What was the research about?
Comparative effectiveness research compares two or more treatments to see which one works better for certain patients. For example, researchers can see whether medicines or stents work better for people with heart problems. Such research may include:
- Observational studies. A research team studies what happens when patients and their clinicians choose the treatments. Traits, such as age or health, may affect patients’ treatment choices. These traits may also affect patients’ responses to treatments. It may be hard for the team to tell if a patient’s traits, the treatment, or a mix of the two affected how well the treatment worked.
- Clinical trials. The team assigns patients to a treatment by chance. Traits may affect a patient’s ability to join a clinical trial.
In this study, the team tested ways to improve understanding of which treatment works better. First, the team compared different methods that account for factors, such as patients’ traits, that could affect the results of observational studies. In the second part of the study, the team worked on ways to use all available data with a method called meta-analysis, which combines data from both study types.
What were the results?
The research team first looked at combined observational data for patients who could and patients who couldn’t take part in clinical trials. Patients who could take part in clinical trials had, on average, better survival than patients who couldn’t. The team found similar results about how well treatments worked even when it used different methods to account for factors that could have affected the results.
The team found that meta-analysis results were more precise when it combined data from groups of patients with data from other individual patients. Having more data improved precision.
What did the research team do?
The research team looked at medical data for patients with heart disease. These patients received medicine, stents, or open-heart surgery. Data came from 23,247 patients in the Duke Databank for Cardiovascular Disease. Data for 75,225 other patients came from published clinical trials and observational studies.
The team compared different statistical methods to account for things that could affect treatment results. The team also looked at data from people who could and who couldn’t take part in a published clinical trial for heart disease treatments. Some patients, for example, may have been too sick to take part. Finally, the team used new meta-analysis methods to combine data from multiple studies.
What were the limits of the study?
The research team used patient data from only one disease database that had some missing patient data. The methods the team used may not fully account for this problem.
Future research could look at how well these methods work in studies of other common health problems.
How can people use the results?
Researchers may want to use this study’s results to improve research methods for observational studies and meta-analyses.
Professional Abstract
Objective
To advance statistical methodologies for comparative effectiveness research by (1) comparing direct and indirect risk adjustments to observational data for causal inference and (2) developing network meta-analytic methods that integrate data from randomized controlled trials (RCTs) and observational studies, including the data from the first aim, for evidence synthesis
Study Design
Design Element | Description
---|---
Design | Empirical analysis
Data Sources and Data Sets | Duke Databank for Cardiovascular Disease for patients treated with the 3 most common interventions: medical therapy, percutaneous coronary intervention, and coronary artery bypass surgery; published clinical trials and observational studies
Analytic Approach | Comparison of 5 regression methods adjusting for confounding in observational survival data; cumulative network meta-analysis integrating RCT, study-level, and individual-level data
Outcomes | Survival
Observational studies and RCTs can compare treatment benefits and harms. However, these alternative comparative effectiveness research study designs may yield conflicting conclusions due to differences in eligibility criteria and other confounding factors. This empirical study compared and developed improved statistical methods for comparative effectiveness research.
Researchers obtained patient data from the Duke Databank for Cardiovascular Disease for three coronary artery disease treatments. First, researchers studied methods to adjust for treatment selection bias with missing data in observational survival data to see if the methods altered causal inferences about relative treatment benefits. To control for confounders in individual-level registry data, researchers compared five regression methodologies: a Cox proportional hazards model, plus two propensity score methods (optimal full propensity matching and inverse probability of treatment weighting), each paired with two methods for combining imputations. Researchers investigated whether these methods led to different comparative effectiveness estimates of the three treatments and whether patient eligibility for trial participation could partly explain such differential effects.
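Inverse probability of treatment weighting (IPTW), one of the adjustment methods compared above, can be sketched in a few lines. The patients, propensity scores, and outcomes below are purely illustrative (not from the Duke Databank), and the function name is our own:

```python
# Minimal IPTW sketch. Each patient is weighted by the inverse
# probability of the treatment they actually received, so that
# measured confounders are balanced across the two groups.
# Data are hypothetical: (treated, propensity_of_treatment, survived).
patients = [
    (1, 0.8, 1),
    (1, 0.6, 1),
    (1, 0.4, 0),
    (0, 0.7, 1),
    (0, 0.3, 0),
    (0, 0.2, 1),
]

def iptw_means(data):
    """Weighted mean outcome in the treated and control groups."""
    num_t = den_t = num_c = den_c = 0.0
    for treated, ps, outcome in data:
        # weight = 1/ps for treated patients, 1/(1-ps) for controls
        w = 1.0 / ps if treated else 1.0 / (1.0 - ps)
        if treated:
            num_t += w * outcome
            den_t += w
        else:
            num_c += w * outcome
            den_c += w
    return num_t / den_t, num_c / den_c

mean_t, mean_c = iptw_means(patients)
print(round(mean_t - mean_c, 3))  # weighted risk difference
```

In practice the propensity scores themselves would be estimated from baseline covariates (e.g., with a logistic model), and the weighted outcome comparison would use survival methods rather than a simple risk difference.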
Next, researchers developed a comprehensive evidence synthesis method called cumulative network meta-analysis that integrated RCT evidence with study- and individual-level registry and observational data. This approach continuously summarized and updated evidence as new studies emerged, offering improvements in power to detect treatment effects and to generalize inferences.
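The cumulative updating idea can be illustrated with a minimal pairwise, fixed-effect, inverse-variance pooling sketch; the report's actual method is a network meta-analysis, and the effect sizes below are hypothetical log hazard ratios, not study results:

```python
# Cumulative meta-analysis sketch: re-pool the evidence after each
# new study arrives. The pooled standard error shrinks as information
# accumulates, which is the precision gain described in the report.
import math

# Hypothetical studies: (log hazard ratio, standard error).
studies = [
    (-0.20, 0.25),  # RCT
    (-0.15, 0.20),  # RCT
    (-0.25, 0.15),  # study-level observational data
    (-0.18, 0.10),  # individual-level registry data
]

def cumulative_pool(studies):
    """Inverse-variance pooled estimate and SE after each study."""
    results, sum_w, sum_wy = [], 0.0, 0.0
    for effect, se in studies:
        w = 1.0 / se**2          # inverse-variance weight
        sum_w += w
        sum_wy += w * effect
        pooled = sum_wy / sum_w
        pooled_se = math.sqrt(1.0 / sum_w)
        results.append((pooled, pooled_se))
    return results

for pooled, se in cumulative_pool(studies):
    print(f"pooled log-HR {pooled:+.3f}, SE {se:.3f}")
```

Note how the standard error falls with each added study: this is the mechanism behind the finding that adding observational and individual patient data increased estimate precision without changing the direction of the treatment comparisons.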
A stakeholder panel met periodically to review results and provide feedback on methodological approaches. The panel included physicians, frequentist and Bayesian statisticians, database researchers, journal editors, and members of the PCORI Methodology Committee.
Results
Causal inference
- RCT-eligible patients differed significantly from RCT-ineligible patients.
- In pairwise treatment effect comparisons, all five regression methods produced similar survival curves for all patients.
- For survival curves separated by RCT eligibility, the five methods were generally equivalent. RCT-eligible patients sometimes showed evidence of survival advantage over RCT-ineligible patients.
Evidence synthesis
- Integrating observational and RCT studies in cumulative network meta-analysis did not change comparative survival results for these treatments.
- Including information from observational studies and individual patient data did not affect the relative comparative effectiveness of the treatments but did increase estimate precision. Newly performed head-to-head treatment comparisons resulted in the greatest increase in precision.
Limitations
Researchers used individual-level data from only one registry, which may have residual unmeasured confounding. Regarding evidence synthesis, researchers applied a single base case empirical estimate for weighting observational data with RCT data.
Conclusions and Relevance
The five regression methods produced essentially equivalent results when considering all patients, suggesting similar causal inferences when using observational data with these five methods to control for confounding factors.
Using cumulative network meta-analysis to combine information from RCTs and observational studies, and integrating individual patient-level data adjusted with the studied regression methods, may increase the power to detect treatment benefits and inform the consistency and generalizability of results across diverse settings and populations.
Future Research Needs
Future research could test these methodologies with different diseases and data sets to examine the generalizability of findings, along with sensitivity analyses to assess the impact of recent findings when treatment and practice patterns evolve over time.
Peer-Review Summary
Peer review of PCORI-funded research helps make sure the report presents complete, balanced, and useful information about the research. It also assesses how the project addressed PCORI’s Methodology Standards. During peer review, experts read a draft report of the research and provide comments about the report. These experts may include a scientist focused on the research topic, a specialist in research methods, a patient or caregiver, and a healthcare professional. These reviewers cannot have conflicts of interest with the study.
The peer reviewers point out where the draft report may need revision. For example, they may suggest ways to improve descriptions of the conduct of the study or to clarify the connection between results and conclusions. Sometimes, awardees revise their draft reports twice or more to address all of the reviewers’ comments.
Peer reviewers commented and the researchers made changes or provided responses. Those comments and responses included the following:
- The reviewers suggested that the researchers expand their limitations section to list the caveats to consider when applying the methods described in the report. The researchers expanded this section; for example, they added a caveat that a rigorous systematic review is a necessary starting point for methods involving evidence synthesis as well as value-of-information analyses.
- The reviewers made several comments about the usability of this information for other researchers. The researchers expanded their discussion of the potential for study uptake, including information about real-world applications for these methods. The investigators also responded that their goal was to provide an example of how to apply these analytic methods to real-world data.
- The reviewers commended the researchers’ use of robust data sets to test their methods but expressed concern that results from these methods would be less reliable when applied to clinical conditions not as well documented in the literature as the one studied, coronary heart disease. The researchers responded that the report attempts to demonstrate the feasibility of their approach for both emerging data with few publications and mature data with many publications in the literature.