Moving Beyond Averages
January 2017—As PCORI-funded studies produce results of interest to patients and those who care for them, we are updating the stories of those projects. Here is one such update.
The current public and professional buzz about precision, or personalized, medicine arises from the vision of treatments tailored to the individual patient. Published results from a PCORI-funded project suggest one way to make that goal more attainable.
As summarized in several articles in major medical journals, the project shows how data from large clinical studies can, through a statistical analysis, provide not just the average effect of a treatment, as most studies now do, but indicate which patients are likely to benefit—or not.
On the basis of his findings, David M. Kent, MD, MS, of Tufts Medical Center in Boston, recommends that all large trials consider applying the statistical method he and his colleagues developed.
“Clinical evidence comes from groups of patients, but medical decisions must be made for individuals,” Kent says. “Individuals differ in so many characteristics that can influence their potential benefits and risks. Selecting the best treatment for an individual is a fundamentally different problem from estimating which treatment is better on average.”
The idea behind Kent’s approach to reducing that mismatch has been around for more than a decade. But PCORI funding allowed his group to test it rigorously—and document its usefulness.
“Clinical evidence comes from groups of patients, but medical decisions must be made for individuals.” —David M. Kent, MD, MS
Taking a Closer Look
Other researchers have tried to personalize data from large studies by considering subgroups one characteristic at a time—male/female, old/young, with/without a certain condition. That approach is unlikely to lead to clinically useful results, Kent says. His method, called risk modeling, aims to consider all the relevant variables at the same time—providing more patient-centered estimates of what effects a treatment will have.
“Clinical trials typically lump everyone together. But patients in a clinical trial frequently have very different risks of having the bad outcome. And we expect the harm-benefit trade-offs for patients with very different risks to also be different,” Kent says.
In reanalyzing data from each trial, the team combined personal characteristics of the patients to create a mathematical model that predicts the risk of an undesirable health outcome. They then divided the patients into groups based on their predicted outcome risk.
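The procedure described above, combining baseline characteristics into a single risk prediction and then comparing trial arms within strata of predicted risk, can be sketched in code. This is a minimal illustration on synthetic data, not the team's actual model: the risk factors, score coefficients, and simulated treatment effect are all invented for the example.

```python
import random

random.seed(0)

# Synthetic trial data (entirely invented for illustration): two baseline
# risk factors, a randomized treatment assignment, and a binary outcome.
def simulate_patient():
    age = random.uniform(40, 75)
    glucose = random.uniform(90, 125)
    treated = random.random() < 0.5
    # Outcome risk rises with both factors; in this simulation the
    # treatment helps high-risk patients far more in absolute terms.
    risk = 0.02 + 0.004 * (age - 40) + 0.006 * (glucose - 90)
    if treated:
        risk *= 0.4
    return {"age": age, "glucose": glucose, "treated": treated,
            "outcome": random.random() < risk}

patients = [simulate_patient() for _ in range(4000)]

# Step 1: combine characteristics into one risk score. (A real reanalysis
# would fit a multivariable model such as logistic regression; a fixed
# linear score keeps this sketch dependency-free.)
def risk_score(p):
    return 0.004 * (p["age"] - 40) + 0.006 * (p["glucose"] - 90)

# Step 2: divide patients into quartiles of predicted risk.
ranked = sorted(patients, key=risk_score)
quartiles = [ranked[i * 1000:(i + 1) * 1000] for i in range(4)]

# Step 3: compare outcome rates between arms within each quartile.
def event_rate(group):
    return sum(p["outcome"] for p in group) / len(group)

arrs = []  # absolute risk reduction per quartile, lowest risk first
for q in quartiles:
    treated_arm = [p for p in q if p["treated"]]
    control_arm = [p for p in q if not p["treated"]]
    arrs.append(event_rate(control_arm) - event_rate(treated_arm))

for i, arr in enumerate(arrs, start=1):
    print(f"Risk quartile {i}: absolute risk reduction = {arr:.3f}")
```

A conventional analysis would collapse these four numbers into one overall average; the quartile breakdown shows where the benefit actually concentrates.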
In their most recent publication, Kent and colleagues reported major differences in disease progression or treatment effectiveness in several of the 32 trials they examined, all large trials chosen because their data are publicly available. Where there were differences, the benefits of the therapies went largely to the highest-risk group.
“In several trials, the differences between risk groups were so large that clinical decision making should really be tailored to individual risk. We described these results in separate clinical papers,” Kent says. One of these (presented in our original feature below) reanalyzed data from a diabetes-prevention study, while another reconsidered data on patients with a condition called benign prostatic hyperplasia. A third reanalysis asked which patients with heart failure benefit from the drug digitalis.
Applying the Test
In each of the three cases, the highest-risk group showed a strong benefit from the treatment. The lower-risk groups experienced little or no benefit, and even, in one case, harm. For patients in those groups, clinicians can recommend against such treatments, with their costs and potential side effects, Kent says.
Kent now recommends that clinical trials routinely include these risk-based statistical analyses: “This approach should be widely feasible.”
He adds, “There are many ways that results of such analyses can be made usable to clinicians—either through an easily calculable score, a risk diagram, or a tool embedded in a website or electronic health record.” Clinicians could then quickly check whether a treatment is right for an individual patient.
With a new PCORI award, Kent and his team will enable clinicians to do just that. A tool using Kent’s predictions about diabetes risk will be put in place at 50 clinics, so that clinicians may preferentially refer high-risk patients to diabetes prevention programs.
With additional PCORI funding, Kent’s team is planning a symposium on evidence and the individual patient, to be held early in 2018.
ORIGINAL FEATURE (JULY 2015)
Bray Patrick-Lake, MFS, is not an average patient. When she participated in a clinical trial in 2008, she experienced nearly every adverse event on the checklist of possibilities, but she also received maximum benefit from the new therapy. So it’s not surprising that she has teamed up with a researcher who aims to move beyond averages to interpret clinical trial results in terms of what’s best for individuals.
Results of clinical trials are usually reported as averages for the entire group studied. This can mask both benefit and risk because patients with the same medical condition differ from one another in many other ways that can affect study outcomes, says David M. Kent, MD, MS, Director of the Predictive Analytics and Comparative Effectiveness (PACE) Center at Tufts Medical Center in Boston. The phenomenon of an intervention affecting study participants differently is known as “heterogeneity of treatment effect.”
In work funded by PCORI, Kent applies mathematical models to results of completed clinical trials to identify both the types of patient who are most likely to benefit from the treatment being studied and those apt to suffer more harm than good.
“Evidence from trials is always aggregated in groups, but doctors make decisions one patient at a time,” he says. “You can break down the group-based evidence in many ways that are potentially informative to give you different answers for particular patients.”
One of PCORI’s core principles is that study methods matter when it comes to generating valuable evidence that will help patients and the rest of the healthcare community make better-informed decisions. We require that the studies we fund adhere to our Methodology Standards, and we also support research designed to improve the techniques used in patient-centered outcomes research. Kent’s study is a good example of how research on study methodology can lead to better information that can be used to make treatment decisions and, potentially, improve patient outcomes.
Improving Clinical Trials
Patrick-Lake, who serves as a patient partner for Kent’s study, knows quite a bit about the heterogeneity phenomenon. She has a condition called patent foramen ovale (PFO), a hole between two chambers of her heart. The hole is normally present in fetuses but usually seals soon after birth. In about a quarter of people, it doesn’t completely close but typically causes no problems. However, in a few people, it leads to migraines or strokes. Patrick-Lake was one of the unlucky ones; she developed migraines so severe she would become paralyzed and unable to speak.
When doctors couldn’t help her, Patrick-Lake located a study investigating the relationship between migraines and PFO. She enrolled and was randomly assigned to receive an implanted heart device that fortunately stopped her migraines but unfortunately led to complications. The benefits ended up outweighing the harms in her opinion, but, Patrick-Lake says, “The risks and benefits as they were explained in the trial were very generalized and not specific to my personal characteristics.”
Patrick-Lake also notes that patients had not been consulted on the trial design, which had several undesirable aspects: a sham invasive procedure, restricted migraine medication use, and a burdensome record-keeping requirement. The study was ultimately stopped for low enrollment, and Patrick-Lake believes that engaging patients in the design phase could have resulted in a better trial in which more patients would have chosen to participate.
That study experience led Patrick-Lake to found the nonprofit PFO Research Foundation. She later became Director of Stakeholder Engagement for the Clinical Trials Transformation Initiative, an organization that identifies and promotes practices to increase the quality and efficiency of clinical trials. She had been aware of Kent’s research on PFO and stroke since 2009 and jumped at the chance to join his team.
“Dr. Kent’s work is really smart in that it’s helping patients answer the question, ‘Based on my personal characteristics, what can I expect?’ We can take methods like Dr. Kent’s and give patients better information. That was why I wanted to work with him,” she says.
Balancing Benefit and Risk
Kent’s approach begins with patient-specific factors that can affect the outcomes of interventions. His team compiles those factors into mathematical models and applies them to databases containing the results of completed clinical trials.
One trial the team examined is the landmark 2002 Diabetes Prevention Program (DPP), which measured participants’ progression over three years from prediabetes to diabetes. The trial found that intensive diet and exercise reduced that progression by 58 percent, and the diabetes drug metformin reduced it by 31 percent.
The DPP investigators had considered certain patient characteristics, such as age, weight, and blood sugar levels, at the start of the study. But, Kent says, they looked at just one variable at a time—a technique common in many clinical trials but that doesn’t adequately capture patient heterogeneity. For one thing, patients differ from one another on many variables. Also, the more single-variable comparisons that are done, the greater the likelihood of a result that looks significant but is actually just due to chance, Kent explains.
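The multiple-comparisons problem Kent describes is easy to quantify. Assuming independent tests at the conventional 0.05 significance level (a simplification, since subgroup tests on the same trial are correlated), the chance of at least one false-positive subgroup finding after k tests is 1 − 0.95^k:

```python
# Family-wise false-positive probability for k independent subgroup
# tests, each at the conventional alpha = 0.05 threshold.
alpha = 0.05
for k in (1, 5, 10, 20):
    fwer = 1 - (1 - alpha) ** k
    print(f"{k:2d} subgroup tests -> {fwer:.0%} chance of a spurious finding")
```

With 20 one-variable subgroup analyses, a trial with no true differences in treatment effect would still be expected to produce a “significant” subgroup well over half the time, which is one reason Kent's approach combines the variables into a single prespecified risk model instead.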
In an article published in February in the journal BMJ, Kent and his colleagues report that when they applied their model, which considered 17 variables to stratify patients by their likelihood of developing diabetes, they found that the diet-and-exercise intervention benefited everyone. However, its effect was much greater in the higher-risk participants (see graph).
What’s more, according to the article, metformin didn’t benefit the lower-risk individuals at all. Nearly all of the diabetes prevention associated with metformin was concentrated in the highest-risk quarter of participants. This finding focuses attention on the drug’s potential side effects. Although metformin is usually safe, in some people it causes a potentially serious side effect called lactic acidosis, Kent notes. In a person who is unlikely to benefit from the drug, he asks, why take that chance?
In weighing risks versus benefits, another consideration is what a patient is likely to face without any therapy. "Benefits might be more important in someone at higher risk of a bad outcome, but less so in someone with just a small chance of problems," Kent notes. "The risk-benefit trade-off in various groups of patients can be very different."
Examining More Studies
With PCORI’s support, Kent is now using risk models to analyze results from more than 30 clinical trials. Most of these involved treatments for heart disease and stroke. Learning which patients will respond best to clot-busting drugs, for instance, is crucial because the treatment in rare cases causes fatal bleeds.
Ultimately, Kent says, “we believe that all—or at least most—clinical trials should be analyzed and reported using risk models,” both to help healthcare providers recommend the best treatments for individual patients and to provide information for clinicians and patients to consider as they share decision making. “This is especially likely to be helpful when the bad outcomes we are trying to avoid are less common and more predictable—and also when treatments are associated with even a small amount of serious treatment-related harm.”
And to the extent possible, Patrick-Lake says, the approach can also help improve the patient experience in clinical trials. “It’s important to know before you enroll in a study whether you’d be expected to receive maximum benefit or do poorly, based on your personal characteristics. We can do better than wadding it into one ball and taking our best guess.”
Equally important, Patrick-Lake believes, is involving patients in study design from the beginning of the research process. If the investigators of the PFO migraine trial had done that, they might have avoided the flaws that led to the poor enrollment and subsequent termination. “A lot of studies are planned by statisticians and regulators. Having patients at the table can help identify how to overcome problems with study design and feasibility, develop realistic recruitment plans, and identify real-world applicability,” Patrick-Lake says.
And that’s what Patrick-Lake is doing in Kent’s project, beginning with helping to write a summary of the project for laypeople. “His work is highly technical, but once he could explain it to me in layperson language, I said ‘Oh, my gosh, this is so important to patients. We have to communicate what you’re doing to patients and the public.’ I feel very strongly that if we can pull together all of the pieces of the research system, we can do better research all together.”
At a Glance
<p><strong>Assessing and Reporting Heterogeneity of Treatment Effect in Clinical Trials</strong></p>
<p><strong>Principal Investigator:</strong> David M. Kent, MD, MS</p>
<p><strong>Goal</strong>: To use a mathematical model to risk-stratify patients in order to better interpret clinical trial results for individuals within the study population.</p>
<p><strong><a href="/node/4439">View Project Details</a> | <a href="/node/4439#toc-related-pcori-dissemination-and-implementation-projects">View Related Materials</a></strong></p>
Posted: July 30, 2015; Updated: March 28, 2017