Results Summary

What was the research about?

Systematic reviews combine the results of many studies. In health research, these reviews can help determine which treatments or types of care work best.

As part of a systematic review, researchers find and record important study information, such as design and results, from published journal articles. This process, called abstraction, takes time. If researchers make errors during this process, the systematic review may come to incorrect conclusions, which can affect healthcare decisions.

Researchers abstract information in different ways. In single abstraction and verification, one person abstracts information and a second person reviews it for accuracy. In dual abstraction, two people abstract information on their own and compare the results.

In this study, the research team created and tested a new software program to help with abstraction. The software displays a journal article next to a data collection form on a computer screen, and researchers place flags within the article so they can easily find the abstracted information. The team compared three approaches for abstracting information:

  • Single abstraction and verification with the new software
  • Single abstraction and verification without the new software
  • Dual abstraction without the new software

The research team looked at how accurate the abstractions were and how much time it took to do them.

What were the results?

All three approaches resulted in a similar proportion of errors, about 16 percent. Most errors resulted from abstracting the wrong information rather than leaving out correct information. Abstractors were more likely to leave out important study information when they used the new software than when they didn’t.

Single abstraction and verification with the new software was

  • 46 minutes faster than dual abstraction
  • 20 minutes slower than single abstraction without the software

Single abstraction and verification without the new software took the least time.

What did the research team do?

The research team invited 52 people with data abstraction experience to test the three approaches. These abstractors worked in 26 pairs; in each pair, the team matched a person with more abstraction experience with a person with less experience. Each pair reviewed six journal articles, using two articles for each approach. The articles covered different topics, such as preventing falls or treatments for depression.

  • Single abstraction and verification with the new software. The less experienced person used the new software to abstract information. Then the more experienced person reviewed the first person’s work for accuracy.
  • Single abstraction and verification without the new software. The less experienced person abstracted information without using the new software. Then the more experienced person reviewed the first person’s work for accuracy.
  • Dual abstraction without the new software. Each person completed the abstraction on their own. They then compared their work and settled any differences.

The research team created an answer key to look for errors. The abstractors also timed how long it took to complete abstraction with each approach.

Patients, policy makers, healthcare industry workers, and researchers with experience in systematic reviews helped design the study and analyze the results.

What were the limits of the study?

The error proportions may have been higher than usual because abstractors were unfamiliar with the software or the review topics.

Future research could test other ways to fit the new software into the systematic review process.

How can people use the results?

Researchers can use the results when considering methods to abstract data for a systematic review.

Final Research Report

View this project's final research report.

Peer-Review Summary

Peer review of PCORI-funded research helps make sure the report presents complete, balanced, and useful information about the research. It also assesses how the project addressed PCORI’s Methodology Standards. During peer review, experts read a draft report of the research and provide comments about the report. These experts may include a scientist focused on the research topic, a specialist in research methods, a patient or caregiver, and a healthcare professional. These reviewers cannot have conflicts of interest with the study.

The peer reviewers point out where the draft report may need revision. For example, they may suggest ways to improve descriptions of the conduct of the study or to clarify the connection between results and conclusions. Sometimes, awardees revise their draft reports twice or more to address all of the reviewers’ comments. 

Peer reviewers commented, and the researchers made changes or provided responses. The comments and responses included the following:

  • Reviewers said the level of experience of abstractors involved in the study may not reflect the level of expertise of those conducting actual systematic reviews. The reviewers suggested that not enough was done to validate the expertise of the volunteer abstractors. The researchers noted that they summarized the backgrounds of the trial participants in Tables 2 and 3. They pointed out that 90 percent of participants had previously abstracted data from 10 or more studies and all had received some form of training in systematic reviews. The researchers noted that nearly all participants described themselves as “somewhat or moderately experienced” or “very experienced.” The researchers did not feel the need to make changes to the report on the subject of abstractor expertise.
  • Reviewers pointed out that abstractors are normally well informed about the topics of the reviews they prepare and suggested that the volunteers in this study were not likely to be as motivated as coauthors of a review. The researchers agreed that the motivation of data abstractors in the trial may differ from that of abstractors preparing real-world systematic reviews, but they noted that some speculate that participating in a research study could increase motivation rather than decrease it. They added that the error rates and times reported in this study were consistent with those reported in other studies. Thus, the researchers did not feel there was enough evidence to support assumptions about abstractor motivation in either direction.
  • Reviewers suggested that it may have been better to recruit data abstractors who were planning to work on reviews for publication. The researchers said they chose their research design with the goal of recruiting a large number of participants who were relatively representative of data abstractors. The researchers said they would encourage future research that considers alternate designs.
  • Reviewers noted that aim 1 lacked a theoretical framework for technology adoption that could have guided the study design and led to exploration of a wider range of outcome measures. The researchers responded that aim 1 focused only on software development and usability. They added that, although additional work on promoting adoption of the technology is a good idea, it was beyond the scope of this project.

Project Information

Principal Investigator: Tianjing Li, MD, PhD
Organization: Johns Hopkins University
Award Amount: $1,114,165
DOI: 10.25302/04.2020.ME.131007009
Project Title: Develop, Test, and Disseminate a New Technology to Modernize Data Abstraction in Systematic Reviews

Key Dates

July 2014 to May 2019

Last updated: November 30, 2022