Evaluating the PCORI Way: First Steps

January 24, 2014 by Michele Orza, ScD, and Laura Forsythe, PhD, MPH

As proponents of evidence-based practice, we at PCORI are eager to find out the extent to which our patient-centered approach to comparative effectiveness research leads to more useful healthcare information. To explore this issue, we recently established the PCORI Evaluation Group (PEG), a task force that will advise us on how best to measure and evaluate the effectiveness of our work—from the usefulness of the studies we fund, to the extent to which the findings from these studies affect health decisions. We plan to report on the development and progress of our evaluation activities in a series of blog posts.

The first steps were the PEG kickoff meeting on December 13, 2013, and discussions of evaluation held during meetings of the PCORI Advisory Panels on January 13–14, 2014. View the slide presentation from the December 13 meeting.

At the PEG kickoff meeting, the task force focused on two initial tasks. It identified important evaluation questions, prioritizing them by their potential to improve PCORI’s work and contribute to the field of patient-centered outcomes research (PCOR). The PEG also began specifying and developing ways to measure the three goals identified in PCORI’s strategic plan:

  • increasing the availability of information that is useful for making decisions about health
  • speeding the use of that information
  • influencing all clinical research to be more patient-centered

Identifying and Prioritizing Evaluation Questions

The PEG task force generated myriad questions encompassing all aspects of PCORI’s work. The categories of questions it considered most important include:

  • how PCORI’s approach to topic generation, prioritization, and selection affects development of a portfolio of high-impact research.
  • the efficiency and effectiveness of the “PCORI way” of funding clinical comparative effectiveness research (including PCORI’s approach to merit review and active portfolio management, as well as PCORI’s Methodology Standards).
  • the effect of patient-centeredness and engagement of patients and other healthcare stakeholders on the usefulness and use of PCORI study results.
  • the extent and impact of PCORI’s infrastructure and capacity-building efforts (e.g., PCORnet).

The group stressed that to answer questions in those categories, PCORI must collect comprehensive data, both qualitative and quantitative, about our activities and results.

At its January meeting, the PCORI Patient Engagement Advisory Panel also discussed the onset of evaluation activities. The panel, which includes PEG member Kim Bailey, emphasized the necessity of identifying metrics for PCORI’s success that reflect the views of patients, caregivers, and communities. Because this task will be both essential and complex, the panel created a working group to help us develop such metrics.

Measuring Our Goals

At its kickoff meeting, the PEG also considered the primary measures of success proposed in PCORI’s strategic plan (see table).

Determining PCORI’s Success and Monitoring Progress

Goal: Increase Information
  Key Question: Are we producing high-quality, timely, useful, trustworthy information?
  Primary Measure: Numbers (and proportions) of PCORI-funded studies that have “usable” results

Goal: Speed Implementation
  Key Question: Is the information we produce being used?
  Primary Measure: Numbers (and proportions) of study results implemented within 5 years

Goal: Influence Research
  Key Question: Are other funders of research following our lead?
  Primary Measure: Amount (and proportion) of total PCOR funding that comes from funders other than PCORI

The discussion of increasing useful information centered on two tasks: further clarifying and refining the criteria (see box) that PCORI staff have proposed for defining usefulness, and considering how those criteria might apply to different parts of our portfolio—for example, to clinical trials versus studies to develop decision tools.

Draft Criteria to Assess the Potential Usefulness of Information from PCORI-funded Studies

• Question arises from people who would use the information
• People who would use the information have helped to shape or have vetted the question
• People who would use the information have been identified
• Specific uses for the information have been identified

• Results could provide a clear answer to the question
• Results could help to choose among relevant options
• Results could be acted upon by relevant decision makers

• Results are feasible for people outside of the study setting to apply
• Results could be tailored to individuals or subgroups
• Results could be scaled up or spread beyond the study setting

The PCORI Advisory Panels, in a plenary session at their January meeting, also discussed usefulness criteria. The panelists underscored the importance of identifying research that can meaningfully affect decision making, emphasized the need to think about usefulness from the perspective of a variety of end users (e.g., patients, clinicians, policy makers), and suggested adding a criterion to capture the durability of information.

The PEG also discussed measuring the speed of implementation of findings from PCORI-funded studies. In response to points raised in that discussion, PCORI staff are working to clarify which points on the continuum of use we hope our study findings will reach within five years; to expand our measures of uptake beyond those reflecting primarily the perspective of clinicians (e.g., to measure uptake by patients and payers); and to identify a subset of our studies for which we would track the impact of findings all the way to national health outcomes.

The PEG discussion of influencing research funded by other organizations encouraged us to count incremental changes (e.g., adoption of key components of PCORI’s approach) as measures of success, rather than equating success with wholesale adoption of the entire PCORI approach. This discussion also highlighted the use of PCORnet for future studies as a particular area in which PCORI will influence the way others conduct health research.

Across all three goals, it was clear that PCORI must consider a variety of outcome measures to fully assess its progress.

Next Steps

We continue to digest the rich discussion and follow the many leads that the PEG and the advisory panels provided for additional sources of information to guide the development of our evaluation framework. Next steps for the PEG include:

• Considering how we would go about answering the questions that are emerging as the highest priorities. What is the appropriate role for PCORI in the evaluation? Should we perform an internal study or use an external evaluator? What are the best methods for the evaluation? What are the key considerations for conducting it?
• Specifying further how we will measure both the critical components of our approach (e.g., patient-centeredness and engagement) and the results (e.g., useful information) we hope to have. How might we validate these measures?
• Planning the collection of data. What data do we need? What are the sources of these data?

As our evaluation framework takes shape, we will continue to seek input from our internal and external stakeholders in the design and conduct of our evaluation activities, the interpretation of their results, and the response to what we are learning. We encourage you to share your thoughts about potential questions, our draft criteria, and other aspects of evaluation. You can send your ideas to info@pcori.org.

Orza is the Senior Advisor to PCORI’s Executive Director.

Forsythe is a Senior Program Officer in PCORI’s Research Integration and Evaluation Program.
