Results Summary

What was the research about?

A randomized trial is one of the best ways to learn if one treatment works better than another. Randomized trials assign patients to different treatments by chance. But they are not always affordable, and they take a long time to complete.

When randomized trials aren’t possible, researchers can use observational studies to learn how treatments work. In observational studies, researchers look at what happens when patients and their doctors choose the treatments. Traits such as age or health may affect treatment choices. These traits may also affect patients’ responses to treatment, making it hard to know if the treatment or the traits affected the patients’ responses.

Some study designs and statistical methods may help address this problem and make results from observational studies more useful. These methods can give researchers better evidence about whether treatments work and about how the same treatment can affect groups of patients differently.

The research team conducted three studies to test different methods of designing and analyzing observational studies. They wanted to know if observational studies that used these methods produced results similar to randomized trials.

What were the results?

In study 1, the research team found that observational studies designed and analyzed with these methods produced estimates of how well a treatment worked that were similar to the results of randomized trials.

In study 2, the research team used statistical methods to divide patients into groups based on their risks of getting an illness. The methods helped show how patients at high risk responded differently to the same treatment than patients at lower risk did.

In study 3, some methods were better than others at finding out how the same treatment affects patients differently.

What did the research team do?

In the first study, the research team used methods to design observational studies to look like randomized trials. For example, they used data from health records to assess the effectiveness of two medicines for high blood pressure. Using separate data from 11 randomized trials, the team then estimated how effective the medicines were. The team compared the results from the observational study with the randomized trial results.
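The report does not include the team's actual analysis code, but the core idea of making observational data behave more like a randomized trial can be sketched in a few lines. The example below is purely illustrative: it simulates patients whose age affects both which treatment they receive and their outcome, then removes that confounding with inverse-probability weighting using the known treatment-assignment probabilities (in a real study these would be estimated, for example with a regression model). All names and numbers here are invented for the sketch.

```python
import random

random.seed(0)

# Hypothetical illustration (not the project's actual analysis): older
# patients are both more likely to receive the treatment and more likely
# to improve, so the naive comparison is confounded.  The treatment itself
# has no effect in this simulation.
patients = []
for _ in range(20000):
    older = random.random() < 0.5
    p_treat = 0.8 if older else 0.2            # age influences treatment choice
    treated = random.random() < p_treat
    p_improve = 0.8 if older else 0.5          # age influences outcome
    improved = random.random() < p_improve
    patients.append((older, treated, improved))

def weighted_mean(rows):
    """Outcome mean, weighting each patient by 1 / P(received their treatment | age)."""
    num = den = 0.0
    for older, treated, improved in rows:
        p = (0.8 if older else 0.2) if treated else (0.2 if older else 0.8)
        w = 1.0 / p
        num += w * improved
        den += w
    return num / den

treated_rows = [r for r in patients if r[1]]
control_rows = [r for r in patients if not r[1]]

naive = (sum(r[2] for r in treated_rows) / len(treated_rows)
         - sum(r[2] for r in control_rows) / len(control_rows))
adjusted = weighted_mean(treated_rows) - weighted_mean(control_rows)
print(f"naive difference: {naive:.2f}, weighted difference: {adjusted:.2f}")
```

The naive comparison suggests a sizable benefit, while the weighted comparison correctly shows roughly no effect, because weighting balances the age mix of the two groups, much as randomization would.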

In the second study, the research team used randomized trial data to figure out patients’ predicted risk of getting an illness. Then the team looked at how groups of patients with different levels of predicted risk responded to the same treatment.
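The general recipe in the second study, predicting each patient's baseline risk and then comparing treatments within risk groups, can be sketched as follows. This is a hypothetical simulation, not the team's trial data or risk model: the "risk score" is a random number standing in for a real prediction model, and the simulated treatment is made to help high-risk patients more than low-risk ones.

```python
import random

random.seed(1)

# Hypothetical sketch of risk-based subgrouping (illustrative only):
# predict each patient's baseline risk, split patients into risk
# quartiles, then compare treated vs. untreated event rates within
# each quartile.
def simulate_patient():
    risk = random.random()                     # stand-in for a predicted risk score
    treated = random.random() < 0.5            # randomized, as in trial data
    benefit = 0.3 * risk if treated else 0.0   # treatment effect grows with risk
    event = random.random() < max(0.0, risk - benefit)
    return risk, treated, event

patients = [simulate_patient() for _ in range(40000)]
patients.sort(key=lambda r: r[0])              # order by predicted risk
n = len(patients) // 4

effects = []
for q in range(4):
    stratum = patients[q * n:(q + 1) * n]
    treated = [r for r in stratum if r[1]]
    control = [r for r in stratum if not r[1]]
    rate_t = sum(r[2] for r in treated) / len(treated)
    rate_c = sum(r[2] for r in control) / len(control)
    effects.append(rate_c - rate_t)            # absolute risk reduction in this stratum
    print(f"risk quartile {q + 1}: absolute risk reduction {effects[-1]:.3f}")
```

In this simulation the absolute risk reduction grows from the lowest to the highest risk quartile, which is the kind of pattern the risk-based grouping is designed to reveal.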

In the third study, the research team used a computer program to create data. They used the data to compare the different methods of finding out how the same treatment can affect patients differently. They also compared the methods using real data.

What were the limits of the study?

The methods used in this study may work only when data include patient traits such as age and other health problems.

Future research could test these methods using data from different data sources on different health problems and treatments.

How can people use the results?

Researchers can consider using these methods to design and analyze studies using observational data when randomized trials aren’t possible.

Final Research Report

View this project's final research report.

Peer-Review Summary

Peer review of PCORI-funded research helps make sure the report presents complete, balanced, and useful information about the research. It also assesses how the project addressed PCORI’s Methodology Standards. During peer review, experts read a draft report of the research and provide comments about the report. These experts may include a scientist focused on the research topic, a specialist in research methods, a patient or caregiver, and a healthcare professional. These reviewers cannot have conflicts of interest with the study.

The peer reviewers point out where the draft report may need revision. For example, they may suggest ways to improve descriptions of the conduct of the study or to clarify the connection between results and conclusions. Sometimes, awardees revise their draft reports twice or more to address all of the reviewers’ comments. 

Peer reviewers commented and the researchers made changes or provided responses. Those comments and responses included the following:

  • The reviewers brought attention to several past reports that they suggested were relevant to this work and should be cited, particularly related to the validity of using observational studies to make causal inferences. The researchers noted that the idea of using observational data in place of experimental studies goes back at least to the 1940s and has been especially widely discussed in fields where randomizing study participants was not feasible for practical or ethical reasons. They added language to the report acknowledging the potential use of observational studies, particularly when there were ethical or methodological barriers to randomized controlled trials. However, the researchers said that they would not want to frame their views as recommendations because providing recommendations was not within the scope of this project.
  • The reviewers questioned how the researchers estimated standard errors in the models they developed, given the importance of estimating the standard error correctly when aiming to compare weighted with nonweighted study results. The researchers revised the report to describe the nonparametric bootstrap techniques they used to estimate standard errors. However, in their response to reviewers the researchers also noted that it would be useful to compare different methods of estimating standard errors to test the variance and validity of the estimates. The researchers chose not to report all of the alternative approaches to standard errors because the number of such approaches would be fairly large.
  • The reviewers suggested the report address additional issues, for example, the implications of varying sample sizes, including using small samples in observational studies. The researchers added some discussion to the report in response to the various suggestions. On the issue of sample size, the researchers said they believe that inference from observational data generally requires large amounts of data, especially to assess heterogeneity of treatment effects.
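The nonparametric bootstrap mentioned in the reviewer exchange above is a general-purpose technique, and a minimal sketch of it is easy to give. The example below is generic and not the project's code: it resamples a dataset with replacement many times, recomputes the estimate (here, a simple mean) on each resample, and uses the spread of those estimates as the standard error, comparing it against the textbook analytic formula.

```python
import random
import statistics

random.seed(2)

# Generic nonparametric bootstrap for a standard error (illustration only).
data = [random.gauss(10.0, 2.0) for _ in range(500)]

def estimate(sample):
    # The statistic of interest; a mean here, but it could be any estimator.
    return statistics.mean(sample)

B = 1000  # number of bootstrap resamples
boot_estimates = [
    estimate(random.choices(data, k=len(data)))  # resample with replacement
    for _ in range(B)
]
bootstrap_se = statistics.stdev(boot_estimates)

# Analytic standard error of the mean, for comparison.
analytic_se = statistics.stdev(data) / len(data) ** 0.5
print(f"bootstrap SE: {bootstrap_se:.3f}, analytic SE: {analytic_se:.3f}")
```

For a simple mean the two agree closely; the appeal of the bootstrap is that the same recipe works for estimators, such as weighted comparisons, that have no simple analytic standard-error formula.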

Project Information

Principal Investigator: Issa J. Dahabreh, MD, MS
Institution: Brown University
Total Award Amount: $1,110,298
DOI: 10.25302/06.2020.ME.130603758
Project Title: Evaluating Observational Data Analyses: Confounding Control and Treatment Effect Heterogeneity

Key Dates

December 2013
September 2019


Last updated: January 20, 2023