Results Summary
PCORI funded the Pilot Projects to explore how to conduct and use patient-centered outcomes research in ways that can better serve patients and the healthcare community.
Background
Community health centers offer a variety of services—such as transportation and health education—to help low-income patients access and use the health care they need. These services are called enabling services. They are an important piece of the care that community health centers offer. However, researchers don’t know how well these services work from the perspectives of patients and healthcare providers.
Project Purpose
The research team wanted to test a way for patients and health center staff to rate the effectiveness of enabling services.
Study Design
The research team adapted an existing approach for obtaining effectiveness ratings from experts so that it also included patients’ and clinic staff members’ opinions, and then tested the adapted approach.
The project had three phases:
Phase 1: Developing the list of enabling services
The research team brought together four patients and 13 staff and health professionals from community health centers in California for a one-day group meeting. Researchers asked participants to list all the enabling services that fell into six categories: case management, language interpretation, outreach, financial advice, social services, and health education. The final list included 276 enabling services.
Phase 2: Rating the enabling services
The research team brought together a panel of four patients, five experts in health policy, and four healthcare workers and leaders from community health centers for a one-day meeting. All of the participants had knowledge about enabling services. Researchers asked the panel to rate the 276 services identified in Phase 1 on how effective they were at improving patients’ ability to access, use, and understand their health care. Participants rated each service on a scale of 1–9, in which 1–3 meant ineffective, 4–6 meant uncertain, and 7–9 meant effective. Panelists did not factor a service’s cost into their ratings. Panelists rated the services twice: first, they rated each of the 276 enabling services on their own; then, after a group discussion of the ratings, they re-rated 181 of the services.
Phase 3: After the ratings
The research team calculated a score for each service. This score represented what the panel as a whole thought about how helpful each service was for patients in community health centers. The scores helped the research team identify the most and least effective services.
Findings
Each of the six categories of services received an overall rating: the average of the effectiveness scores that panelists gave, after the group discussion, to all of the services in that category. From most effective to least effective, the panelists rated the categories of enabling services as follows:
- Social services, 8.1
- Outreach, 7.5
- Financial advice, 7.4
- Case management, 7.1
- Language interpretation, 6.9
- Health education, 6.6
Panelists’ ratings after the group discussion were more similar than their ratings before the discussion. This finding indicates that the discussion changed the way panelists thought about the effectiveness of services.
Each panelist group (patients, community health center staff, and health policy experts) rated the services similarly within the group but differently from the other groups of panelists. Overall, patients rated enabling services as more effective than did the health professionals or staff.
Limitations
The selected rating method required panelists to review research, but there is little previous research about the effectiveness of enabling services. Without much research to discuss, panelists mostly discussed their own thoughts about the enabling services. The panel was also small. The discussion and ratings might be different with other groups of people. Ratings also might have been different if the panel had had more than one day for discussion.
Conclusions
The research team adapted and tested an existing method for obtaining ratings from study participants so that it included input from patients and healthcare professionals. The team found that patients, health center leaders and staff, and policy experts have different opinions about the effectiveness of enabling services. This finding supports involving patients in research about the delivery of health care. The results also produced a ranking of enabling services by how effective patients, providers, and experts perceived them to be. This list could guide further research about enabling services.
Sharing the Results
The research team published a journal article about the research.
Professional Abstract
PCORI funded the Pilot Projects to explore how to conduct and use patient-centered outcomes research in ways that can better serve patients and the healthcare community.
Background
Enabling services (ES) are nonmedical services (e.g., transportation, health education) that primary care practices provide to help low-income patients access health care. Community health centers have made ES a core component of their care. However, there is little evidence about patient preferences for ES or methods to effectively engage patients in health care planning and delivery.
Project Purpose
The goal of this study was to test a novel approach for eliciting the viewpoints of diverse stakeholders—particularly those of patients—about the effectiveness of ES among individuals served by community health centers.
Study Design
The investigators developed and tested a “stakeholder-engaged” adaptation of the RAND-UCLA appropriateness method, a way of combining the best available scientific evidence about a procedure, service, or intervention with the collective judgment of experts. In the adapted version, the “experts” included patients and clinic staff. The overarching goal of the exercise was to systematically elicit and examine participants’ ratings to inform potential improvements to ES. The study was designed and conducted in three phases: (1) pre-ratings, (2) ratings (modified Delphi panel), and (3) post-ratings.
Participants, Interventions, Settings, and Outcomes
For the pre-ratings phase, the investigators convened a one-day advisory council composed of four patients and 13 staff members and health professionals from California community health centers (including executives, clinicians, and health navigators, among others). Participants were asked to develop a framework that defined ES in six categories (case management, interpretation, outreach, financial counseling, transportation, and health education). Using appreciative inquiry techniques, participants brainstormed the services that existed within each of the six categories. This yielded a list of 112 services and a number of variables related to service intensity. The combination of a service and an intensity variable formed a scenario; these scenarios made up the survey used in the ratings phase.
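To make the scenario construction concrete, the sketch below pairs each service with its category’s intensity variables. The categories echo the framework above, but the specific service names and intensity levels are invented for illustration; they are not the study’s actual list.

```python
# Minimal sketch of building scenarios as (service x intensity) combinations.
# Service names and intensity levels below are hypothetical examples.
from itertools import product

services = {
    "case management": ["referral tracking", "care plan follow-up"],
    "interpretation": ["medical visit interpretation"],
}
# Hypothetical intensity variables per category, e.g., who delivers the service.
intensity = {
    "case management": ["licensed professional", "community health worker", "peer"],
    "interpretation": ["in person", "by phone"],
}

# Each service paired with each intensity level in its category forms a scenario.
scenarios = [
    f"{svc} ({level})"
    for cat, svc_list in services.items()
    for svc, level in product(svc_list, intensity[cat])
]
for s in scenarios:
    print(s)
```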
For the ratings phase, a 13-member panel knowledgeable about ES was convened to rate the effectiveness of the scenarios. The panel included stakeholders from around the United States: four patients, four community health center providers and executives, and five health policy experts. Panelists based their effectiveness ratings on their assessment of how well a service, at its proposed level of intensity, increased patients’ access to, use of, and understanding of their medical care. Ratings took place in two rounds. In the first round, each panelist independently rated all 276 scenarios. In the second round, the panelists convened to discuss their first-round ratings (and their distribution) and then individually re-rated 181 of the scenarios. Participants were instructed not to consider cost in any of their ratings.
Quality of Data and Analysis
For the post-ratings phase, investigators analyzed the panelists’ ratings and identified the group’s recommendations. For each scenario, a median effectiveness score across the 13 panelists was calculated. Based on the median effectiveness scores, investigators examined patterns to identify outlying variables or services.
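As a rough illustration of this post-ratings arithmetic, the sketch below computes a median score across hypothetical panelist ratings and maps it onto the 1–9 effectiveness bands described in the lay summary (1–3 ineffective, 4–6 uncertain, 7–9 effective). The scenario names and ratings are made up; only the median-across-13-panelists calculation follows the abstract.

```python
# Minimal sketch of the post-ratings calculation: median effectiveness score
# per scenario across 13 panelists, banded on the study's 1-9 scale.
# Scenario names and ratings below are hypothetical, not study data.
from statistics import median

ratings = {
    "interpretation (by phone, trained interpreter)": [7, 8, 6, 7, 9, 7, 8, 7, 6, 7, 8, 7, 7],
    "outreach (home visit, community health worker)": [8, 9, 7, 8, 8, 9, 7, 8, 8, 9, 8, 7, 8],
    "health education (printed material only)":       [4, 5, 3, 4, 6, 5, 4, 3, 5, 4, 6, 5, 4],
}

def band(score: float) -> str:
    """Map a median score onto the 1-9 effectiveness bands."""
    if score >= 7:
        return "effective"
    if score >= 4:
        return "uncertain"
    return "ineffective"

# Median across the 13 panelists, sorted to surface the most and
# least effective scenarios.
scores = {name: median(vals) for name, vals in ratings.items()}
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:>4.1f}  {band(score):<11}  {name}")
```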
Findings
Method produced a robust and comprehensive categorization of a previously undefined field of healthcare services: The expert panel discussed and rated ES in the following categories: health education and supportive counseling (14 services); case management (20); outreach (22); interpretation (9); and financial counseling and eligibility assistance (12). Each category of services had between two and four stratifying variables; for example, social case management had three variables to distinguish the effectiveness of services delivered by a licensed professional, unlicensed professional, promotora, community health worker, or peer. In descending order, the average median effectiveness rating by category was: social services = 8.1; outreach = 7.5; financial counseling and eligibility assistance = 7.4; social case management = 7.1; interpretation = 6.9; health education and supportive counseling = 6.6.
Adapted method passed preliminary tests of validity: For modified Delphi panel methods, a key indicator of validity is the degree of convergence between the first and final rounds of ratings. Convergence between rounds indicates that the group’s discussion influenced the individual panelists’ ratings. In the adapted method, investigators found a statistically significant level of convergence between the first and second rounds of rating; that is, there was a nonrandom narrowing in overall panelist agreement, disagreement, and uncertainty (p = 0.003).
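The abstract does not specify the convergence statistic, so the sketch below shows one plausible way to test for narrowing: compare per-scenario dispersion (mean absolute deviation from the scenario median) between rounds with a paired Wilcoxon signed-rank test. Both the dispersion measure and the choice of test are assumptions for illustration, and the data are simulated, not the study’s.

```python
# Illustrative (not the study's) convergence check between rating rounds.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
n_scenarios, n_panelists = 181, 13  # 181 scenarios were re-rated in round 2

# Simulated round-1 ratings on the 1-9 scale.
round1 = rng.integers(1, 10, size=(n_scenarios, n_panelists)).astype(float)
# Simulate discussion pulling each panelist partway toward the scenario median.
medians = np.median(round1, axis=1, keepdims=True)
round2 = round1 + 0.5 * (medians - round1) + rng.normal(0, 0.5, round1.shape)

def dispersion(r):
    """Per-scenario spread: mean absolute deviation from the scenario median."""
    return np.abs(r - np.median(r, axis=1, keepdims=True)).mean(axis=1)

d1, d2 = dispersion(round1), dispersion(round2)
stat, p = wilcoxon(d1, d2)  # paired test: did spread narrow after discussion?
print(f"mean spread round 1: {d1.mean():.2f}, round 2: {d2.mean():.2f}, p = {p:.4f}")
```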
Ratings varied by panelist type, with patients holding statistically significantly different beliefs: The panel was composed of community health center patients, providers, executives, and policy stakeholders who did not work directly in health centers. Investigators found that the mean effectiveness rating for ES differed significantly across the panelist types (p = 0.02). They also found that patient panelists rated the services as more effective than did executive (p = 0.009) or policy (p = 0.03) stakeholders.
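The abstract reports mean differences and p values but not the test used; the sketch below illustrates one conventional approach (a one-way ANOVA followed by pairwise t tests) on simulated ratings. The group means mirror only the reported direction of the effect (patients rating ES higher); all numbers are invented.

```python
# Illustrative between-group comparison of mean ES effectiveness ratings.
# The test choice and all data are assumptions, not the study's analysis.
import numpy as np
from scipy.stats import f_oneway, ttest_ind

rng = np.random.default_rng(1)
# Hypothetical per-scenario ratings by panelist type (100 samples each).
patients   = rng.normal(7.8, 0.8, 100)
executives = rng.normal(7.0, 0.8, 100)
policy     = rng.normal(7.2, 0.8, 100)

_, p_overall = f_oneway(patients, executives, policy)  # across all three groups
_, p_exec    = ttest_ind(patients, executives)         # patients vs executives
_, p_policy  = ttest_ind(patients, policy)             # patients vs policy experts
print(f"overall p = {p_overall:.3f}; patients vs executives p = {p_exec:.3f}; "
      f"patients vs policy p = {p_policy:.3f}")
```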
Limitations
The reliability of the panel process has been shown to depend on the quality of evidence. As the evidence for ES is limited, discussion on this subject was particularly dependent on panelists’ perspectives. Moreover, the methods used depend on sufficient time for discussion during the expert panel, and having more time for full discussion may have changed the ratings. Findings were based on the perspectives of a small expert sample; the panel was not a statistically representative group of raters. Finally, full validation of the adaptation requires testing the clinical effectiveness of the various types of ES and correlating that effectiveness with the median panel rating. That analysis was beyond the scope of this project, whose goal was to provide proof of concept for a patient-centered approach.
Conclusions
There are two high-level conclusions from this project. First, with respect to the development of a patient-centered research method: the traditional appropriateness method is a validated approach for soliciting the input of scientific experts to determine the appropriateness of clinical interventions, but adapting it to include underserved patients and their representatives yielded important insights and substantively changed the ratings. The project provided some evidence about how providers and consumers of ES differentially rate their effectiveness. The insight that patients rated the services with higher effectiveness scores than health center executives or policy stakeholders validates the previously identified value of including the patient voice in research on delivery system redesign.
Second, the project generated a broad set of data about the effectiveness of ES, representing a significant contribution to a field that had limited peer-reviewed literature. The two patient and stakeholder panels defined six broad categories, 112 granular-level services, and various service intensity variables as a framework for evaluating the effectiveness of these services. Although not exhaustive, this list can serve as an important, initial classification scheme to guide further research. The distributions of ratings for the service intensity variables can guide the allocation of resources to the services where they are most needed.