Results Summary
PCORI funded the Pilot Projects to explore how to conduct and use patient-centered outcomes research in ways that can better serve patients and the healthcare community.
Background
Sometimes patients ask for—and doctors order—medical tests that do not give useful information. Researchers call these “low-value tests.” More than 70 doctor organizations say that reducing the number of unneeded tests could improve health care in the United States.
Project Purpose
This study evaluated a program to train doctors on how to respond when a patient requests a low-value test.
Methods
The researchers randomly assigned 61 primary care doctors to one of two groups. All of the doctors practiced internal medicine or family medicine at a medical school in northern California.
Medical coaches acted the part of patients in visits with doctors in both groups. In the appointment, they asked for a specific kind of test that would be low-value for that patient.
After the appointment was over, the medical coaches gave the doctors in the first group feedback about how well they did in responding to patient concerns without ordering a low-value test. They discussed six methods for responding.
In the other group, the doctors didn’t receive any feedback. Instead, they got an email with guidelines on when the test requested would be considered low value and when it would be needed.
The coaches made up to three more unannounced visits to each doctor during a nine-month period, asking for a low-value test each time. In total, the medical coaches made 155 of these visits.
Researchers wanted to see if the medical coaches’ feedback made a difference in the way doctors responded to patient requests for low-value tests. To do this, the researchers compared the number of times doctors in each group ordered a low-value test for the medical coaches.
The researchers listened to recordings of the visits to hear whether the doctor used one or more of the six ways to handle requests for low-value tests. They also asked the medical coaches to rate how satisfied they would have been as a patient, on a scale of 0 to 10. Finally, the researchers looked at electronic medical records to see how many low-value tests the doctors ordered for their real patients.
Findings
Doctors ordered low-value tests in one of every four visits with the medical coaches. That was fewer low-value tests than the researchers had expected.
Doctors who got feedback from coaches were just as likely to order low-value tests as those who did not. This was true for all three visits. The doctors’ rate of ordering low-value tests for real patients did not change over the study time in either group.
The medical coaches rated their visits with doctors who had gotten feedback as more satisfying than their visits with doctors who had not.
Limitations
Doctors might have figured out which of their patients were actually medical coaches. Doctors might have changed their behavior if they thought they were being tested.
All the doctors worked at one of two clinics. Those who got coaching might have shared what they learned with those who did not get coaching, so that everyone’s behavior might have changed.
The clinics had supervising doctors on staff. These supervisors may have talked with the study doctors about which tests to order, in addition to the feedback the doctors got from the medical coaches.
There were fewer medical coach visits than the researchers had planned. The doctors also ordered fewer low-value tests than the researchers thought they would. Thus, the researchers might not have been able to detect differences between the groups.
Researchers conducted the study with only one type of doctor in only two clinics. The results may not be the same with other kinds of doctors at more clinics.
Conclusions
The feedback did not appear to have an effect on the way that doctors ordered tests. Thus, the researchers cannot recommend that doctors get this type of short-term coaching to reduce the number of low-value tests ordered.
Sharing the Results
The researchers presented the results at a meeting and in journal articles.
Professional Abstract
PCORI funded the Pilot Projects to explore how to conduct and use patient-centered outcomes research in ways that can better serve patients and the healthcare community.
Background
As part of the Choosing Wisely initiative, more than 70 physician specialty societies have issued “Top Five” lists of clinical practice changes that physicians could enact immediately to improve the value of US health care. Low-value diagnostic tests have been included on primary care specialty societies’ Choosing Wisely Top Five lists.
Project Purpose
To evaluate the effectiveness of a standardized patient (SP)-based intervention designed to enhance primary care physician (PCP) patient-centeredness and skill in handling inappropriate patient requests for low-value diagnostic tests.
Study Design
Randomized controlled trial.
Participants, Interventions, Settings, and Outcomes
Participants were general internal medicine or family medicine resident physicians (N = 61) at two residency-affiliated primary care clinics at an academic medical center in Northern California.
Interventions consisted of two simulated visits with SP instructors portraying patients requesting inappropriate spinal magnetic resonance imaging (MRI) for low back pain or screening dual-energy x-ray absorptiometry (DXA). SP instructors provided personalized feedback to residents regarding use of six patient-centered techniques to address patient concerns without ordering low-value tests. Control physicians received SP visits without feedback and were emailed relevant clinical guidelines.
The primary outcome was whether residents ordered SP-requested low-value tests during up to three unannounced SP clinic visits over 3–12 months follow-up, with patients requesting spinal MRI, screening DXA, or headache neuroimaging (the latter to explore potential generalization of intervention effects to other clinical contexts). Secondary outcomes included PCP patient-centeredness and use of targeted techniques (both coded from visit audio recordings), SP satisfaction with the visit (0–10 scale), and actual testing among real patients seen by study physicians.
Data Sources
Test ordering was assessed by standardized chart review. PCP patient-centeredness and use of targeted techniques were assessed by coding audio recordings of visits with unannounced SPs. For analyses of intervention impacts on actual testing among real patients, researchers abstracted electronic medical record data on diagnostic testing by study physicians during one-year pre- and post-intervention phases.
Data Analysis
Analysts were blinded to resident allocation. To assess intervention effects during SP visits, researchers used generalized linear mixed models (GLMMs) that included main effects for study arm (intervention versus control), SP visit number (first, second, or third), and SP case (back pain, DXA, or headache), along with resident-level random effects. Because of the randomized design, researchers did not adjust for physician characteristics in primary analyses. They used the fitted GLMM to predict testing probabilities by study arm and SP case while adjusting for SP visit number.
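For illustration only, a logistic GLMM of this general form could be fit with standard mixed-model software. The sketch below is not the study’s code; it uses Python’s statsmodels Bayesian mixed GLM with a small synthetic dataset, and the variable names (ordered_test, arm, visit_num, sp_case, resident_id) are assumptions made for the example.

    import numpy as np
    import pandas as pd
    from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

    # Synthetic, illustrative data: one row per unannounced SP visit.
    rng = np.random.default_rng(0)
    n_residents, visits_per_resident = 60, 3
    df = pd.DataFrame({
        "resident_id": np.repeat(np.arange(n_residents), visits_per_resident),
        "visit_num": np.tile([1, 2, 3], n_residents),
        "sp_case": rng.choice(["back_pain", "dxa", "headache"], n_residents * visits_per_resident),
        "ordered_test": rng.integers(0, 2, n_residents * visits_per_resident),  # 1 = low-value test ordered
    })
    df["arm"] = df["resident_id"] % 2  # 0 = control, 1 = intervention (constant within resident)

    # Fixed effects for study arm, SP visit number, and SP case;
    # a random intercept for each resident enters as a variance component.
    model = BinomialBayesMixedGLM.from_formula(
        "ordered_test ~ C(arm) + C(visit_num) + C(sp_case)",
        {"resident": "0 + C(resident_id)"},
        df,
    )
    result = model.fit_vb()  # variational Bayes fit
    print(result.summary())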
For outcomes among actual patients, researchers used similar GLMMs with Poisson links to model counts of diagnostic tests per visit with study residents. Along with resident-level random effects, the models included study arm, a binary variable indicating whether the visit occurred before or after the two SP intervention (SPI) visits, and an interaction term between study arm and period (pre- versus post-SPI visits). The intervention effect was assessed by testing the significance of this interaction term.
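Analogously, a minimal sketch of a Poisson GLMM of this kind (again with synthetic data and assumed names such as n_tests and post_spi, not the study’s actual variables) might look like the following; the arm-by-period interaction is the quantity of interest.

    import numpy as np
    import pandas as pd
    from statsmodels.genmod.bayes_mixed_glm import PoissonBayesMixedGLM

    # Synthetic, illustrative data: one row per real-patient visit with a study resident.
    rng = np.random.default_rng(1)
    n_visits = 500
    df = pd.DataFrame({"resident_id": rng.integers(0, 60, n_visits)})
    df["arm"] = df["resident_id"] % 2              # 0 = control, 1 = intervention
    df["post_spi"] = rng.integers(0, 2, n_visits)  # 0 = before the SPI visits, 1 = after
    df["n_tests"] = rng.poisson(0.5, n_visits)     # diagnostic tests ordered at the visit

    # "C(arm) * C(post_spi)" expands to both main effects plus the
    # arm-by-period interaction term used to assess the intervention effect.
    model = PoissonBayesMixedGLM.from_formula(
        "n_tests ~ C(arm) * C(post_spi)",
        {"resident": "0 + C(resident_id)"},  # resident-level random intercept
        df,
    )
    result = model.fit_vb()
    print(result.summary())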
Findings
Of 61 randomized residents, 59 had encounters with SPs during follow-up, totaling 155 SP visits. The intervention was not associated with significantly improved patient-centeredness or use of targeted techniques (see table 1). Residents ordered low-value tests in 26.5 percent of SP encounters (95% CI: 19.7–34.1%), with no significant difference in the odds of test ordering for intervention PCPs relative to controls (adjusted OR 1.07 [95% CI: 0.49–2.32]). Rates of test ordering among intervention and control residents were similar for all three SP cases. SPs rated visit satisfaction higher for intervention than for control residents (8.5 versus 7.8; adjusted mean difference 0.6 [95% CI: 0.1–1.1]). There were no significant intervention effects on actual diagnostic test ordering among real patients seen by intervention and control residents (p = 0.27 for the period-by-study-arm interaction term for the outcome of any diagnostic testing during patient visits).
Limitations
High rates of SP detection may have altered overall results because residents may change their behavior when they suspect they are seeing an SP. Because intervention and control residents practiced in the same settings, they may have discussed the intervention, introducing contamination. In addition, the researchers lacked precision in estimating the relative odds of requested test ordering in intervention versus control encounters because of a smaller-than-planned number of SP visits and lower-than-anticipated rates of test ordering. Attending teaching physicians may have influenced resident ordering or counseling behaviors. The study included only two academic practices in a single institution. Only resident physicians were studied, and results could differ among physicians in community practice.
Conclusions
An SPI aiming to improve resident skill in handling inappropriate patient requests for low-value tests had no impact on ordering of low-value tests during subsequent unannounced SP visits, nor did the intervention influence resident patient-centeredness, the use of targeted counseling techniques, or diagnostic testing among actual patients. Although the intervention was theoretically grounded and was rated favorably by residents, an SPI with such limited scope and duration cannot be recommended as a means of improving the value of diagnostic testing in primary care.