Results Summary

What was the project about?

Cluster randomized trials, or CRTs, are studies that compare treatments across different groups of patients, or clusters. An example of a cluster is people who receive care at one clinic.

To reduce bias in CRT results, researchers assign clusters by chance to different treatments. But what happens after they assign treatment can lead to differences across clusters and bias the results. For example, patients who visit clinics assigned to a treatment may be older than patients who visit clinics not assigned to that treatment. Current statistical methods for analyzing data from CRTs don’t work well to account for these differences.

In this study, the research team developed new methods to account for differences across clusters after treatment assignment.

What did the research team do?

The research team created new methods to address three types of bias that can lead to differences across clusters in CRTs after treatment assignment:

  • Identification bias. This type of bias happens when the treatment affects which patients are eligible to take part in a trial. For example, if clinics assigned to a treatment diagnose more patients after treatment assignment, then more patients at those clinics become eligible to take part in the trial.
  • Recruitment bias. This type of bias occurs when the treatment affects who enrolls in the trial. For example, patients may choose to go to a clinic because it offers a certain treatment.
  • Noncompliance bias. This type of bias happens when patients don’t stay on their assigned treatment during the trial. They may stop treatment or switch to a clinic assigned to a different treatment.
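The recruitment bias described above can be made concrete with a toy simulation. This sketch is for intuition only; it is not the methods developed in this project, and all numbers in it are made up. Clusters are assigned by chance, but the treatment changes who enrolls, so a naive comparison of enrolled patients mixes the true treatment effect with selection.

```python
import random

random.seed(0)

TRUE_EFFECT = 2.0          # treatment improves the outcome by 2 units (made up)
N_CLUSTERS = 200           # clusters per arm
PATIENTS_PER_CLUSTER = 50

def simulate_arm(treated):
    """Simulate enrolled patients' outcomes for one trial arm."""
    outcomes = []
    for _ in range(N_CLUSTERS):
        for _ in range(PATIENTS_PER_CLUSTER):
            health = random.gauss(0, 1)  # latent patient health
            # Recruitment bias: healthier patients are more likely to
            # enroll at clinics assigned to the treatment.
            enroll_prob = 0.5 + (0.3 * health if treated else 0.0)
            if random.random() < min(max(enroll_prob, 0.05), 0.95):
                outcomes.append(health + (TRUE_EFFECT if treated else 0.0))
    return outcomes

treated = simulate_arm(True)
control = simulate_arm(False)
naive_estimate = sum(treated) / len(treated) - sum(control) / len(control)

print(f"true effect:    {TRUE_EFFECT:.2f}")
print(f"naive estimate: {naive_estimate:.2f}")  # inflated by selection
```

Because the treated arm enrolls healthier patients on average, the naive difference in means overstates the true effect; methods like those developed in this project aim to correct for exactly this kind of post-randomization selection.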

The research team developed new methods to estimate how well treatments work when identification or recruitment bias is present. The team applied the new methods to data from a completed CRT that looked at healthcare use among patients with opioid use disorder.

Then the research team developed two new methods to estimate how well treatments work when noncompliance bias is present. They applied both current and new methods to data from a completed CRT on heart disease, which compared the effect of low versus high doses of aspirin on the risk of heart-related health outcomes, and compared the results.

Two doctors and a statistician provided input during the study.

What were the results?

The new methods accounted for the three types of bias. In both completed CRTs, the new methods measured how well treatments worked more accurately than existing methods did.

The research team developed a software package for applying the new methods to data from CRTs.

What were the limits of the project?

The new methods worked with CRT data collected at one point in time. The methods may not work with data from CRTs collected over time.

Future research could test the methods with data collected over time.

How can people use the results?

Researchers can use the new methods to account for differences across clusters after treatment assignment in CRTs.

Final Research Report

This project's final research report is expected to be available by October 2024.

Peer-Review Summary

Peer review of PCORI-funded research helps make sure the report presents complete, balanced, and useful information about the research. It also assesses how the project addressed PCORI’s Methodology Standards. During peer review, experts read a draft report of the research and provide comments about the report. These experts may include a scientist focused on the research topic, a specialist in research methods, a patient or caregiver, and a healthcare professional. These reviewers cannot have conflicts of interest with the study.

The peer reviewers point out where the draft report may need revision. For example, they may suggest ways to improve descriptions of the conduct of the study or to clarify the connection between results and conclusions. Sometimes, awardees revise their draft reports twice or more to address all of the reviewers’ comments. 

The peer reviewers commented on the draft report, and the researchers made changes or provided responses. Those comments and responses included the following:

  • The statistical reviewer identified a number of notations in the report that required clarification or revision. The researchers addressed these concerns, explaining that many of the problems arose when transcribing information from their published research into this final research report.
  • The reviewers asked whether the analytic models the researchers developed were applicable only to cluster randomized trials (CRTs), as noted in the project title. The researchers acknowledged that post-randomization selection bias also applies to trials with individual-level randomization. The researchers added text to the report to indicate this finding and to note that the models would differ somewhat depending on the level of randomization (individual versus cluster).
  • The reviewers noted that the way in which the researchers labeled the racial groups in their data does not meet current guidelines and expectations related to the social construct of race. The researchers explained that since the data they used for this study were from already completed trials, they were only able to use the racial group categories from the original study and had no additional information to distinguish between biological origin and social construct of race. The researchers also explained that they used “white race” for the reference group in their statistical modeling because this was the largest racial subgroup.

Conflict of Interest Disclosures

Project Information

Fan Li, PhD
Duke University
New Causal Inference Methods for Cluster Randomized Trials with Post-Randomization Selection-Bias

Key Dates

November 2019
January 2024

Study Registration Information


Last updated: March 14, 2024