Results Summary

What was the project about?

Researchers can use experiments to learn what patients prefer. Discrete choice experiments, or DCEs, describe treatments with different features, such as out-of-pocket costs or wait times. Patients fill out surveys indicating which treatments they prefer. From their choices, researchers learn what matters most to patients and how they weigh the different features.

DCEs can be hard to design and analyze. When surveys are complex, patients may ignore information or take shortcuts, which leads to inaccurate results.

To make DCE results more accurate, researchers can

  • Change the design of the DCE
  • Apply statistical methods

But current knowledge of how to do this is limited.

In this project, the research team looked at improving methods to design and analyze DCEs.

What did the research team do?

First, the research team looked at how changes to the design of a DCE affect results. Using a computer program and data from two DCEs, the team created test data for 100,000 patients. With the test data, the team examined how design choices, such as the number of patients taking part, affected the accuracy of results. Because DCEs are complex, researchers often test the design in a small pilot study, which informs the design of the main study. The team therefore also looked at how changes in pilot study design affected the accuracy of results from the main studies.
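The report's simulation code is not reproduced here. As a rough, hypothetical sketch of the general idea, the Python snippet below generates synthetic choice data for simulated patients under assumed preference weights; the attribute levels, sample sizes, and weights are illustrative assumptions, not values from the study.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical design: each choice task offers 2 treatment options
    # described by 2 features, e.g., out-of-pocket cost and wait time.
    n_patients = 1_000                     # the study itself simulated 100,000
    n_tasks = 8                            # choice questions per patient
    true_beta = np.array([-0.04, -0.10])   # assumed weights for cost, wait time

    data = []
    for _ in range(n_patients * n_tasks):
        # Draw attribute levels for the two options in this task
        X = np.column_stack([
            rng.choice([10, 25, 50], size=2),  # cost in dollars
            rng.choice([1, 2, 4], size=2),     # wait time in weeks
        ])
        # Utility = preference-weighted features + Gumbel noise (the logit assumption)
        utility = X @ true_beta + rng.gumbel(size=2)
        data.append((X, int(np.argmax(utility))))  # patient picks the better option

Fitting a choice model to data like these and comparing the estimates with true_beta shows how accurately a given design recovers patient preferences; repeating that comparison across designs is, in spirit, how a simulation study can link design choices to accuracy.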

Next, the research team looked at one type of statistical method used in DCEs called random parameter logit estimation with Halton draws. This method lets researchers measure what patients prefer while accounting for differences in preferences across patients. Halton draws are sequences of quasi-random numbers that the method uses to average over those varying preferences; using more draws gives more accurate results but takes more computing time. The team tested the method under different conditions, such as how much preferences vary from patient to patient. Then they looked at how many Halton draws were needed to get accurate results in a DCE study.
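The report does not spell out the estimation code. The sketch below is a minimal, hypothetical Python illustration of the core idea, using scipy's quasi-Monte Carlo Halton generator; the function name, attribute values, preference means, and standard deviations are assumptions for illustration only.

    import numpy as np
    from scipy.stats import norm, qmc

    def simulated_choice_prob(X, mean, sd, n_draws):
        # X: (options, features) matrix for one choice question.
        # Each feature's preference weight is assumed to be normally
        # distributed across patients; Halton draws approximate the
        # average of logit choice probabilities over that distribution.
        halton = qmc.Halton(d=X.shape[1], scramble=True, seed=0)
        u = halton.random(n_draws)            # quasi-random points in (0, 1)
        betas = norm.ppf(u) * sd + mean       # map to normal preference draws
        v = betas @ X.T                       # utilities: (n_draws, options)
        p = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)  # logit per draw
        return p.mean(axis=0)                 # average probability per option

    # Hypothetical question: option A costs $10 with a 4-week wait;
    # option B costs $50 with a 1-week wait.
    X = np.array([[10.0, 4.0], [50.0, 1.0]])
    print(simulated_choice_prob(X, mean=np.array([-0.04, -0.10]),
                                sd=np.array([0.02, 0.05]), n_draws=200))

Because the average is only approximated, too few draws can make estimates unstable. Rerunning a calculation like this with different numbers of draws and watching when the results stop changing mirrors the kind of question the team studied.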

The research team worked with other DCE researchers to design this study.

What were the results?

When the DCE design included more patients, the results assessed patient preferences more accurately. When the pilot study had design errors, results from the main study were less accurate.

For random parameter logit estimation, the research team identified how many Halton draws were needed to get accurate DCE results.

What were the limits of the project?

The research team used two DCEs and varied a few study design aspects. Results may differ for other data sets and design changes.

Future research could test random parameter logit estimation with Halton draws using other data sets and designs.

How can people use the results?

Researchers can use the results to improve how they design and analyze DCEs.


Peer-Review Summary

Peer review of PCORI-funded research helps make sure the report presents complete, balanced, and useful information about the research. It also assesses how the project addressed PCORI’s Methodology Standards. During peer review, experts read a draft report of the research and provide comments about the report. These experts may include a scientist focused on the research topic, a specialist in research methods, a patient or caregiver, and a healthcare professional. These reviewers cannot have conflicts of interest with the study.

The peer reviewers point out where the draft report may need revision. For example, they may suggest ways to improve descriptions of the conduct of the study or to clarify the connection between results and conclusions. Sometimes, awardees revise their draft reports twice or more to address all of the reviewers’ comments. 

Peer reviewers commented and the researchers made changes or provided responses. Those comments and responses included the following:

  • The reviewers suggested asking stakeholders about the implications of the report's findings. The researchers did ask stakeholders for feedback on ideas used in the report and incorporated that feedback into their revisions. They explained that early stakeholder input led to more time spent reviewing the literature and investigating analysis methods, which they said strengthened the study but took time. Because of time constraints, the researchers were unable to get stakeholders' feedback on the study's implications before submitting the final report.
  • A reviewer took issue with the researchers' recommendation that published discrete choice experiments report more methodological detail, saying this was unlikely given journal space constraints and seemed to criticize most of the literature on discrete choice experiments. The researchers adjusted their language to avoid seeming to blame authors of published studies for leaving out information, but they reiterated that inadequate methodological reporting is a barrier to assessing study quality and that it is reasonable to expect more methodological detail in the supplementary material of published discrete choice experiments.
  • The reviewers questioned the validity and generalizability of the simulations in this methods study because the simulations were based on two choice-experiment studies that seemed unusual compared with other choice-experiment studies conducted in the last several years. According to the reviewers, these two studies seemed to differ from the rest of the literature in the number of choice questions asked, the number of study participants, and the types of attributes the choice questions measured. The researchers disagreed that both studies were unusual, pointing out that the second study had a sample size and goals similar to those of many comparable studies. However, the researchers did revise their systematic review discussion to include findings on choice tasks, alternatives, and attributes, and they compared the two studies used in their simulations with the studies discussed in the systematic review.
  • Reviewers said the researchers raised serious questions about the allocation of scarce research resources: although the researchers had to grapple with tradeoffs in their own study design given realistic time and funding constraints, they recommended that other researchers make laborious efforts to fine-tune their model designs. The researchers said they and the expert stakeholders they worked with agreed that this work points to an important resource tradeoff. They said they had backed away from implying that much of the literature in this area is fatally flawed, though they acknowledged that it may be, and they noted that the strategy proposed at the end of the discussion section may help analysts manage resource constraints without sacrificing study quality.


Project Information

Principal Investigator: Alan R. Ellis, PhD, MSW
Organization: North Carolina State University
Funding Amount: $123,674
DOI: 10.25302/03.2021.ME.160234572
Project Title: Improving Study Design and Reporting for Stated Choice Experiments

Key Dates

December 2016 to November 2020


Tags

Has Results
Last updated: March 14, 2024