Results Summary
What was the project about?
Researchers can use experiments to learn about what patients prefer. Discrete choice experiments, or DCEs, describe treatments with different features, such as out-of-pocket costs or wait times. Patients fill out surveys about which treatments they prefer. From their choices, researchers learn what is most important to patients and how they think about the different features.
DCEs can be hard to design and analyze. When surveys are complex, patients may ignore information or take shortcuts, which leads to inaccurate results.
To make DCE results more accurate, researchers can:
- Change the design of the DCE
- Apply statistical methods
But current knowledge of how to do this is limited.
In this project, the research team looked at improving methods to design and analyze DCEs.
What did the research team do?
First, the research team looked at how changes to the design of the DCE affected results. Using a computer program and data from two DCEs, the team created test data for 100,000 patients. The team used the test data to see how changes in DCE design, such as the number of patients taking part, affected results. DCEs are complex, so researchers often test the design in a small pilot study, which informs the design of the main study. The team also looked at how the changes in pilot study designs affected the accuracy of results from the main studies.
Next, the research team looked at one type of statistical method used in DCEs called random parameter logit estimation with Halton draws. This method lets researchers measure what patients prefer while accounting for different preferences across patients. The team tested the method under different conditions, such as how much preferences vary from patient to patient. Then they looked at how many Halton draws were needed to get accurate results in a DCE study.
The research team worked with other DCE researchers to design this study.
What were the results?
When the DCE design included more patients, estimates of patient preferences were more accurate. If the pilot study had design errors, results from the main study were less accurate.
In random parameter logit estimation with Halton draws, the research team figured out the number of Halton draws needed to improve the accuracy of DCE results.
What were the limits of the project?
The research team used two DCEs and varied a few study design aspects. Results may differ for other data sets and design changes.
Future research could test random parameter logit estimation with Halton draws using other data sets and designs.
How can people use the results?
Researchers can use the results to improve how they design and analyze DCEs.
Professional Abstract
Background
Researchers use discrete choice experiments (DCEs) to measure individual patient preferences. In DCEs, researchers give patients a survey describing scenarios with different options from which to choose. For example, a DCE might offer a choice between two healthcare interventions. The two interventions differ in their features, like out-of-pocket costs and wait times. Patients choose the intervention they prefer. Patients’ choices help researchers understand which features are most important to patients. Researchers also learn how patients think about the different options for each feature, such as different out-of-pocket costs.
Designing and analyzing a DCE is challenging. For example, in DCEs with complex options, patients may ignore information, which may lead to inconsistent responses and inaccurate analyses. Altering DCE design features and statistical model assumptions may increase accuracy of DCE results.
Objective
To improve understanding of the effects of selected DCE design features and statistical model assumptions on DCE results.
Study Design
Design Element | Description |
---|---|
Design | Simulations, empirical analysis |
Data Sources and Data Sets | Empirical data on 2 DCEs |
Analytic Approach | Simulations; random parameter logit estimation |
Outcomes | Estimates of bias, relative standard error, and D-error (measure of overall error in parameter estimation) |
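The D-error outcome listed above can be made concrete with a short sketch. For a multinomial logit design, D-error is the determinant of the inverse Fisher information matrix raised to the power 1/K, where K is the number of parameters. The attribute levels (cost, wait time) and preference weights below are illustrative assumptions for this sketch, not values from the study.

```python
import numpy as np

def d_error(design, beta):
    """D-error of a multinomial logit design: det(M^-1)^(1/K), where M is
    the Fisher information summed over choice sets and K is the number of
    parameters. design has shape (n_sets, n_alts, K) of attribute levels."""
    n_sets, n_alts, K = design.shape
    M = np.zeros((K, K))
    for X in design:                          # one choice set at a time
        p = np.exp(X @ beta)
        p /= p.sum()                          # logit choice probabilities
        xbar = p @ X                          # probability-weighted mean attributes
        # Information contribution: weighted second moment minus outer product
        M += (X * p[:, None]).T @ X - np.outer(xbar, xbar)
    return np.linalg.det(np.linalg.inv(M)) ** (1.0 / K)

# Illustrative comparison with made-up (cost, wait) levels and weights:
# a design whose alternatives contrast sharply should yield a lower
# D-error than one whose alternatives are nearly identical.
beta = np.array([-0.04, -0.10])
contrasting = np.array([[[10, 1], [100, 8]], [[100, 1], [10, 8]],
                        [[25, 2], [50, 4]], [[50, 2], [25, 4]]], dtype=float)
similar = np.array([[[50, 4], [55, 5]], [[40, 3], [42, 5]],
                    [[60, 5], [65, 6]], [[50, 2], [55, 3]]], dtype=float)
```

Comparing `d_error(contrasting, beta)` with `d_error(similar, beta)` shows why designs with informative attribute contrasts are preferred.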
Methods
First, the research team examined how different DCE designs affect study estimates. DCEs have two parts: a pilot study and a main study. The team created simulated DCE pilot and main studies by replicating two empirical DCEs in a simulated population of 100,000 individuals. They generated 864 simulations representing variations in DCE design such as sample size and the prevalence, correlations, and interactions of different variables. Using different analytic models, the team assessed estimation errors due to DCE design.
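As a rough illustration of this kind of simulation (not the team's actual code), the sketch below generates choice data for a two-alternative DCE with hypothetical cost and wait-time attributes; the preference weights `BETA_COST` and `BETA_WAIT` are made-up values, and choices are drawn from logit probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical preference weights (illustrative assumptions, not the
# study's values): negative weights mean higher cost or longer wait
# lowers an alternative's utility.
BETA_COST = -0.04   # per dollar of out-of-pocket cost
BETA_WAIT = -0.10   # per week of wait time

def simulate_dce(n_respondents=1000, n_tasks=8):
    """Simulate choices for a 2-alternative DCE with cost and wait attributes."""
    rows = []
    for i in range(n_respondents):
        for t in range(n_tasks):
            # Randomly drawn attribute levels for the two alternatives
            cost = rng.choice([10, 25, 50, 100], size=2)
            wait = rng.choice([1, 2, 4, 8], size=2)
            v = BETA_COST * cost + BETA_WAIT * wait      # systematic utility
            p = np.exp(v) / np.exp(v).sum()              # logit choice probabilities
            choice = rng.choice(2, p=p)                  # simulated choice (0 or 1)
            rows.append((i, t, cost[0], cost[1], wait[0], wait[1], choice))
    return rows

data = simulate_dce()
```

Fitting an analytic model to data generated this way, where the true weights are known, is what lets a simulation study quantify estimation error under different designs.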
Next, the research team examined the effects of using Halton draws on estimates from a random parameter logit model. Halton draws are a sampling technique that generates random data points simulating the overall population. The random parameter logit model assumes that parameters, such as the strength of preference for a certain healthcare feature, are random and vary across individuals. The team identified the number of Halton draws and the number of parameters for generating accurate results.
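A Halton sequence can be sketched in a few lines; the radix-inversion routine below is a generic illustration of the technique, not the study's code. Quasi-random uniform points are mapped through the inverse normal CDF, the standard way Halton draws stand in for random normal draws when simulating a random parameter logit model.

```python
import numpy as np
from statistics import NormalDist

def halton(n, base):
    """First n points of the Halton sequence in the given prime base, in (0, 1)."""
    seq = np.empty(n)
    for i in range(n):
        f, r = 1.0, 0.0
        k = i + 1                 # start at 1 to avoid a draw exactly at 0
        while k > 0:
            f /= base             # next digit's place value
            r += f * (k % base)   # radix-inverse of k in the given base
            k //= base
        seq[i] = r
    return seq

# Quasi-random uniform draws in base 2, transformed to standard-normal
# draws via the inverse normal CDF.
u = halton(1000, base=2)
z = np.array([NormalDist().inv_cdf(x) for x in u])
```

Because Halton points fill the unit interval more evenly than pseudo-random draws, fewer draws are usually needed to approximate the preference distribution, which is why the number of draws matters for accuracy.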
DCE researchers helped design the study.
Results
In simulations, increasing the sample size decreased random error. Random error in the main study increased when the pilot study had a small sample size (n=30), unmeasured interactions, or selection bias.
Random parameter logit estimates were more biased when model parameters were correlated: with correlations of 0.1, 0.2, and 0.3, bias reached 8%, 16%, and 24%, respectively. Too few Halton draws or too many random parameters violated model assumptions and produced inaccurate results. Estimates were more accurate with fewer than 10 random parameters, and up to 20,000 Halton draws were needed when the model had more than 15 random parameters.
Limitations
Simulation scenarios did not cover the full range of study designs. Random parameters followed normal distributions. Results may differ for other parameter distributions and data sets.
Conclusions and Relevance
Improving methods for designing and analyzing DCEs can help researchers study patient preferences. Using more Halton draws when more random parameters are present may increase accuracy of random parameter logit models for DCEs.
Future Research Needs
Future research could examine additional DCE design features with other data sets.
Peer-Review Summary
Peer review of PCORI-funded research helps make sure the report presents complete, balanced, and useful information about the research. It also assesses how the project addressed PCORI’s Methodology Standards. During peer review, experts read a draft report of the research and provide comments about the report. These experts may include a scientist focused on the research topic, a specialist in research methods, a patient or caregiver, and a healthcare professional. These reviewers cannot have conflicts of interest with the study.
The peer reviewers point out where the draft report may need revision. For example, they may suggest ways to improve descriptions of the conduct of the study or to clarify the connection between results and conclusions. Sometimes, awardees revise their draft reports twice or more to address all of the reviewers’ comments.
Peer reviewers commented and the researchers made changes or provided responses. Those comments and responses included the following:
- The reviewers suggested asking stakeholders about the implications of the report’s findings. The researchers had sought stakeholder feedback earlier in the project and incorporated those ideas into the report revisions. They explained that early stakeholder input led to more time spent reviewing the literature and investigating analysis methods, which they said strengthened the study but took time. Because of time constraints, they were unable to get stakeholders’ feedback on the study’s implications before submitting the final report.
- A reviewer took issue with the researchers’ recommendation that published discrete choice experiments present more methodological data, saying this was unlikely given journal space constraints and seemed to criticize the majority of literature on discrete choice experiments. The researchers adjusted their language to avoid seeming to blame authors of published studies for leaving out information, but they reiterated that the inadequacy of current methodologic reporting is a barrier to assessing study quality and that it is reasonable to expect more methodologic detail in supplementary material for published discrete choice experiments.
- The reviewers questioned the validity and generalizability of the simulations in this methods study because the simulations were based on two choice-experiment studies that seemed unusual compared with other choice experiments conducted in recent years. According to the reviewers, these two studies differed from the rest of the literature in the number of choice questions asked, the number of study participants, and the types of attributes the choice questions measured. The researchers disagreed that both studies were unusual, noting that the second study had a sample size and goals comparable to many studies in the literature. They did, however, revise their systematic review discussion to include findings on choice tasks, alternatives, and attributes, and compared the two studies used in their simulations with the studies covered by the systematic review.
- Reviewers said the researchers raised serious questions about the allocation of scarce research resources: although the researchers had to grapple with study design tradeoffs under realistic time and funding constraints, they recommended that other researchers make laborious efforts to fine-tune their model designs. The researchers said they and their expert stakeholders agreed that this work points to an important resource tradeoff. They said they had backed away from implying that much of the literature in this area may be fatally flawed, though that may be the case, and they noted that the strategy proposed at the end of the discussion section may help analysts manage resource constraints without sacrificing study quality.