Results Summary
What was the research about?
Patients and their healthcare providers, such as doctors and nurses, can use survey scores to track the symptoms of illnesses like rheumatoid arthritis, or RA, over time. Tracking symptoms in this way can help them understand if a treatment is working well for a patient.
When researchers create and test these surveys, they want to be sure that patients’ survey scores match how severe patients feel their symptoms are. Researchers also want to know what changes in survey results show that symptoms have changed so much that patients might want to change treatment.
In this study, the research team had patients with RA and providers read stories that described how patients felt at higher and lower scores for two symptoms:
- Fatigue, or lack of energy
- Pain interference, or how much pain interferes with their lives
Patients and providers decided whether each story showed a mild, moderate, or severe level of symptoms. They also gave their views about how large a change in scores would need to be to show that pain or fatigue was getting better or worse.
What were the results?
The research team found that patients varied in their opinions about which stories showed mild, moderate, or severe symptoms; providers varied less. When deciding which stories showed mild, moderate, or severe symptoms, patients chose stories with higher scores than providers did.
Patients and providers also agreed on the amount of change in scores that showed:
- Pain and fatigue were getting better
- Fatigue was getting worse
But compared with providers, patients felt that a larger change in scores would be needed to show that pain was getting worse.
Who was in the study?
The study included 11 patients receiving treatment for RA and eight RA providers. Of the patients, six were white, three were black, one was Asian, and one was mixed race. The average age was 55. On average, patients had RA for 20 years. Of the providers, all were white, and the average age was 49.
What did the research team do?
First, the research team created a series of written stories about patients with RA and their symptoms. Each story represented a specific level of symptoms and included four or five descriptions of symptoms at that level.
The research team sorted the stories from least to most severe symptoms and then presented these stories to patients and providers. Patients and providers categorized which stories showed mild, moderate, or severe symptoms.
Next, patients and providers viewed a story that showed a patient with severe symptoms, followed by stories showing less severe symptoms. They then decided how much the symptoms would have to improve to show that a treatment for RA was working. They also viewed a story of a patient with mild symptoms, followed by other stories showing more severe symptoms. Patients and providers said how much symptoms would have to worsen to show that a patient needed a change in treatment.
Patients with RA and patient advocates helped design the study, analyze data, and interpret results.
What were the limits of the study?
The study included patients who’d had RA for a long time. Results might differ for patients who were diagnosed with RA more recently.
Future research could include patients who have more recent diagnoses of RA to understand how these patients think about symptom severity.
How can people use the results?
Researchers who develop surveys to track RA symptoms can take into account this study's finding that patients and providers differed in the levels of symptoms they considered mild, moderate, or severe.
Professional Abstract
Objective
To estimate thresholds for mild, moderate, and severe levels of rheumatoid arthritis (RA) symptoms as measured by the PROMIS® Pain Interference and Fatigue scores and to estimate changes in PROMIS scores that indicate clinically meaningful changes in symptoms based on patient and provider assessment
Study Design
Design Elements | Description |
---|---|
Design | Empirical analysis |
Data Sources and Data Sets | Patient and provider reactions to vignettes representing a range of PROMIS scores |
Analytic Approach | Bookmarking method to identify thresholds for PROMIS scores; evaluation of patient and provider choice of thresholds for dividing vignettes into categories of mild, moderate, and severe symptoms; evaluation of meaningful changes in PROMIS scores identified by patients and providers |
Outcomes | Thresholds for mild, moderate, and severe PROMIS Pain Interference and Fatigue scores; clinically meaningful changes in scores that indicate improvement or worsening of symptoms |
Currently, researchers do not have a good understanding of how patients with specific conditions, like RA, and their providers characterize symptoms in terms of different levels of PROMIS scores. This empirical analysis sought to understand how patients and providers classify PROMIS Pain Interference and Fatigue scores and what they consider meaningful changes in scores.
First, the research team created a series of patient vignettes describing the level of symptoms that a hypothetical patient would experience if they had a specific PROMIS score. PROMIS scores have a mean of 50 and a standard deviation of 10 in the general US population. For example, a vignette for a PROMIS fatigue score of 62 might describe a person as rarely having enough energy for strenuous exercise and often feeling too fatigued to plan for future activities. The team created one set of nine vignettes for PROMIS measures of pain interference and another for measures of fatigue.
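To make the score scale concrete, the sketch below (not part of the study) uses only the facts above: the PROMIS T-score metric has a population mean of 50 and a standard deviation of 10, and the example vignette score is 62.

```python
# Minimal sketch: PROMIS T-scores are standardized so that the general US
# population has a mean of 50 and a standard deviation of 10.
def promis_t_score(z: float) -> float:
    """Convert a standardized (z) score to the PROMIS T-score metric."""
    return 50 + 10 * z

# The fatigue score of 62 in the vignette example is (62 - 50) / 10 = 1.2
# standard deviations above the population mean.
print(promis_t_score(1.2))  # 62.0
```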
The research team had patients and providers go through three scenarios. First, the team sorted the vignettes from least to most severe and presented the set of vignettes to the patients and providers. Patients and providers then indicated where they felt transitions between vignettes showed an increase in symptom severity (for example, from moderate to severe symptoms).
Second, patients and providers viewed another set of vignettes arranged from high to low levels of either pain or fatigue. Starting with a vignette with high symptom levels, patients and providers suggested which subsequent vignette would show that a treatment was working. Third, patients and providers viewed a vignette representing a low level of either pain or fatigue and suggested which subsequent vignette would indicate symptoms worsened so much that a change in treatment should be considered.
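As a rough illustration of the bookmarking step described above, the sketch below uses entirely hypothetical vignette scores and bookmark placements (the study's actual nine vignette scores are not reproduced here); the midpoint between the vignettes on either side of a bookmark serves as a candidate severity threshold.

```python
# Hypothetical illustration of the bookmarking idea: a participant places
# "bookmarks" between adjacent vignettes where severity seems to step up,
# and the midpoint between the bracketing vignette scores is taken as a
# candidate threshold. The scores and bookmark positions below are invented.

vignette_scores = [45, 50, 55, 58, 62, 65, 68, 72, 75]  # sorted, least to most severe
bookmarks_after = [2, 5]  # bookmarks after the 3rd and 6th vignettes (0-based indices)

def thresholds(scores: list, bookmarks: list) -> list:
    """Midpoint between the vignettes on either side of each bookmark."""
    return [(scores[i] + scores[i + 1]) / 2 for i in bookmarks]

# Candidate mild/moderate and moderate/severe cut points on the T-score scale
print(thresholds(vignette_scores, bookmarks_after))  # [56.5, 66.5]
```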
The research team recruited 11 patients receiving treatment for RA and eight rheumatology providers. Of the patients, six were white, three were black, one was Asian, and one was mixed race. The average age was 55. On average, patients had RA for 20 years. Providers included physicians, nurses, and psychologists. All providers were white, with an average age of 49.
Patients with RA and patient advocates helped design the study, analyze data, and interpret results.
Results
Patients varied widely in the thresholds they selected for mild, moderate, and severe symptoms; providers varied less. Patients also selected higher thresholds than providers for mild, moderate, and severe pain or fatigue.
When considering meaningful reduction in pain or fatigue, both patients and providers selected a vignette representing a 10-point change in PROMIS scores. For an increase in pain, patients selected a 10-point change, while providers suggested only a 5-point change. For an increase in fatigue, both patients and providers suggested a 5-point change.
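The sketch below shows one way the reported change sizes could be applied to a patient's follow-up scores. The helper function and example scores are hypothetical; it assumes, as is the case for the PROMIS Pain Interference and Fatigue measures, that higher scores mean worse symptoms, so improvement is a decrease and worsening is an increase.

```python
# Hypothetical helper applying the change sizes reported in this study
# (in T-score points).
IMPROVEMENT_THRESHOLD = 10  # both patients and providers, pain and fatigue
WORSENING_THRESHOLDS = {
    ("pain", "patient"): 10,
    ("pain", "provider"): 5,
    ("fatigue", "patient"): 5,
    ("fatigue", "provider"): 5,
}

def meaningful_change(baseline: float, follow_up: float, domain: str, perspective: str) -> str:
    """Label a change in T-scores as improvement, worsening, or neither."""
    delta = follow_up - baseline
    if delta <= -IMPROVEMENT_THRESHOLD:
        return "meaningful improvement"
    if delta >= WORSENING_THRESHOLDS[(domain, perspective)]:
        return "meaningful worsening"
    return "no meaningful change"

# A 6-point increase in pain interference crosses the providers' threshold
# but not the patients', reflecting the difference reported above.
print(meaningful_change(62, 68, "pain", "provider"))  # meaningful worsening
print(meaningful_change(62, 68, "pain", "patient"))   # no meaningful change
```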
Limitations
The study included patients who had had RA for a long time. Results might differ for patients diagnosed with RA more recently.
Conclusions and Relevance
The research team found that patients and providers differed in the levels of symptoms they considered mild, moderate, and severe. Providers were more sensitive than patients to changes in levels of pain interference.
Future Research Needs
Future research could include patients who have more recent diagnoses of RA to understand how these patients think about symptom severity.
Peer-Review Summary
Peer review of PCORI-funded research helps make sure the report presents complete, balanced, and useful information about the research. It also assesses how the project addressed PCORI’s Methodology Standards. During peer review, experts read a draft report of the research and provide comments about the report. These experts may include a scientist focused on the research topic, a specialist in research methods, a patient or caregiver, and a healthcare professional. These reviewers cannot have conflicts of interest with the study.
The peer reviewers point out where the draft report may need revision. For example, they may suggest ways to improve descriptions of the conduct of the study or to clarify the connection between results and conclusions. Sometimes, awardees revise their draft reports twice or more to address all of the reviewers’ comments.
Peer reviewers commented, and the researchers made changes or provided responses. The comments and responses included the following:
- Reviewers said the research described did not establish minimal important differences (MID): it provided only descriptive statistics about change scores associated with perceived change, rather than using the statistical approaches noted in the literature to establish MID. The reviewers suggested that the authors focus on providing meaningful interpretations of change scores rather than MID, to avoid a lengthy explanation of why they did not use some of the expected measures for calculating MID. The researchers explained that the methods and results related to MID were part of their funded research plan and therefore could not be deleted from the report. However, the researchers did add a statement acknowledging that some observers might have technical objections to various approaches used in the study. They also stated that they would examine other methods for estimating MID beyond the methods used in the report.
- Reviewers objected to the use of distribution-based methods for interpreting meaningful change, saying such methods indicate the amount of change but say nothing about its clinical relevance. The researchers explained that their intention in this methods-focused study was to evaluate different methods of estimating change, distribution-based methods among them, rather than to determine the clinical relevance of that change. (A brief sketch of what a distribution-based estimate looks like follows this list.)
- Reviewers said it was not clear why the study did not ask patients about anxiety and depression separately but instead combined them into one domain, emotions. The researchers said that they could not change the way they had asked the question in their survey but agreed that, in retrospect, perhaps they should have separated anxiety and depression. They noted that they collected information on depression and anxiety in a different part of the study, aim 2.
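For readers unfamiliar with the term, the sketch below illustrates the reviewers' point with two common distribution-based estimators: the half-standard-deviation rule of thumb and the standard error of measurement. The reliability value is hypothetical, and neither formula says anything about whether the resulting change matters clinically.

```python
# Two common distribution-based estimates of "meaningful" change, shown only
# to illustrate the reviewers' point: these quantities describe change
# relative to score variability and say nothing about clinical relevance.
import math

POPULATION_SD = 10  # standard deviation of PROMIS T-scores

def half_sd_estimate(sd: float = POPULATION_SD) -> float:
    """Half-standard-deviation rule of thumb."""
    return 0.5 * sd

def sem_estimate(sd: float = POPULATION_SD, reliability: float = 0.9) -> float:
    """Standard error of measurement; the reliability value here is hypothetical."""
    return sd * math.sqrt(1 - reliability)

print(half_sd_estimate())  # 5.0 T-score points
print(sem_estimate())      # about 3.16 T-score points
```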