Background and Significance: Researchers use stated choice experiments (SCEs) to measure patient preferences. SCEs present multiple tasks in which respondents choose among options. For example, in one study, patients chose between two hypothetical organ donation recipients, each described by 10 attributes (organ donor status, presence of dependents, etc.). SCEs make it possible to describe not only patients’ preferences between attribute levels (e.g., favoring recipients with vs. without dependents), but also the weights patients assign to different attributes (e.g., presence of dependents vs. organ donor status). SCEs can describe perceptions about trade-offs, assign dollar values to choices, and predict use of new treatments. However, most SCEs do not follow appropriate procedures for determining sample-size requirements, and most measure preferences less precisely than their reported results suggest.
Sample-Size Determination: Describing patient preferences reliably requires a large sample, but the complexity of SCEs makes it hard to know how large. One promising solution works only with certain statistical models, and its usefulness may depend on the nature of patient preferences and the way the SCE is designed. Testing the limits of this approach and generalizing it to other statistical models would strengthen SCEs as a tool for measuring patient preferences.
Measuring Precision: When a few people each complete many choice tasks, the responses vary less than they would if a different person completed each task. This makes preferences look more consistent than they really are. Most SCE analyses do not account for this clustering, so they may report preference estimates more precisely than the data warrant. Bootstrapping, in which whole respondents are resampled rather than individual tasks, could correct this problem. Demonstrating this method could help researchers report more accurately how well they were able to measure patient preferences.
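The clustering problem above can be illustrated with a minimal sketch. The code below is not the proposal's actual method; it is a hypothetical toy example assuming a simple setting (30 simulated respondents, 10 tasks each, a made-up person-level effect, and the sample mean as the quantity of interest). It contrasts a respondent-level (cluster) bootstrap confidence interval with a naive bootstrap that resamples tasks as if they were independent:

```python
import numpy as np

rng = np.random.default_rng(0)

def cluster_bootstrap_ci(values, cluster_ids, n_boot=2000, alpha=0.05):
    """Percentile CI for the mean, resampling whole respondents
    (clusters) so the CI reflects between-person variability."""
    clusters = np.unique(cluster_ids)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        picked = rng.choice(clusters, size=len(clusters), replace=True)
        rows = np.concatenate([np.flatnonzero(cluster_ids == c) for c in picked])
        stats[b] = values[rows].mean()
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

def naive_bootstrap_ci(values, n_boot=2000, alpha=0.05):
    """Ignores clustering: resamples single tasks as if independent."""
    idx = rng.integers(0, len(values), size=(n_boot, len(values)))
    stats = values[idx].mean(axis=1)
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

# Toy data (assumption): 30 respondents x 10 tasks, with a strong
# person-level effect that makes tasks within a person correlated.
n_people, n_tasks = 30, 10
person_effect = rng.normal(0.0, 1.0, size=n_people)
y = (person_effect[:, None] + rng.normal(0.0, 0.3, size=(n_people, n_tasks))).ravel()
ids = np.repeat(np.arange(n_people), n_tasks)

lo_c, hi_c = cluster_bootstrap_ci(y, ids)
lo_n, hi_n = naive_bootstrap_ci(y)
```

In this toy setting the cluster-bootstrap interval comes out wider than the naive one, which is the point of the paragraph above: ignoring clustering overstates precision.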
Significance and Aims: To help researchers design better SCEs and better describe patients’ preferences, we propose a simulation study to address the following aims: 1) provide guidance for sample-size determination and study design in pilot and follow-up SCE studies, and 2) develop bootstrapped confidence intervals for SCE parameter estimates.
Study Description: We will conduct realistic simulations of large populations (n = 100,000) with preferences similar to those of real patients in four preference studies. We will simulate actual steps in SCE studies: conducting a pilot study (randomly selecting a few “people” from the simulated population and measuring their preferences), choosing a statistical model, setting up choice tasks, selecting a sample, and conducting the main study. We will simulate realistic study design choices—for example, we will try different sample-size formulas and set up choice tasks in different ways. By simulating each scenario 10,000 times, we will be able to test how well the sample-size formula and the bootstrapping method work, and how study conditions affect them.
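The simulate-and-repeat logic described above can be sketched in miniature. This is not the proposal's simulation design; it is a hypothetical, stripped-down example assuming a single binary attribute, a one-parameter logit choice rule, made-up values for the true preference weight and its between-person spread, and a simple inverted-logit estimator. Repeating each scenario many times shows how the spread of the estimate shrinks as the simulated sample grows:

```python
import numpy as np

rng = np.random.default_rng(1)

TRUE_BETA = 1.0   # assumed population-mean preference weight (made up)
POP_SD = 0.5      # assumed between-person heterogeneity (made up)

def simulate_study(n_people, n_tasks=10):
    """One simulated SCE: draw people, simulate their choices, estimate beta.
    Each task pits an option with the attribute against one without it,
    so P(choose the attribute) = logistic(beta_i) for person i."""
    beta_i = rng.normal(TRUE_BETA, POP_SD, size=n_people)
    p = 1.0 / (1.0 + np.exp(-beta_i))
    choices = rng.random((n_people, n_tasks)) < p[:, None]
    share = choices.mean()
    return np.log(share / (1.0 - share))  # inverted-logit estimate of beta

# Monte Carlo: repeat each scenario many times to see estimator spread
small = [simulate_study(n_people=50) for _ in range(500)]
large = [simulate_study(n_people=500) for _ in range(500)]
```

With a fixed seed, the estimates from the larger simulated samples cluster much more tightly around the true weight, which is the kind of comparison (across sample sizes, designs, and formulas) the 10,000-replication simulations would make at scale.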
Other Stakeholder Partners
- Emily Lancsar, Monash University, Clayton, Victoria, Australia
- Esther de Bekker-Grob, Erasmus MC, Rotterdam, Netherlands
- Mandy Ryan, University of Aberdeen, Foresterhill, Aberdeen, United Kingdom