Results Summary

What was the project about?

Electronic health records, or EHRs, have information about a patient’s health such as test results, diagnoses, and treatments. EHRs also have clinical notes that doctors and patients can use to track goals and decisions.

Clinical notes may be useful for research or to help improve care. But it’s hard to get information from these notes across large groups of patients. The notes may use different ways to describe the same thing. For example, high blood pressure may be called hypertension. Also, the notes may use abbreviations or have spelling mistakes.

In this project, the research team designed and built a search engine to make EHR notes easier to search and use for patient care and research.

What did the research team do?

To develop a new method for searching clinical notes in EHRs, the research team used 66 million clinical notes from patient visits at Nationwide Children’s Hospital in Ohio from 2006 to 2016. Using the new method, the team built a search engine called QREK. QREK stands for Query Refinement by word Embedding and Knowledge base. QREK finds and pulls out EHR notes that are related to keywords entered into it. It can also suggest other relevant keywords and common alternatives.
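The report does not include QREK's implementation, but the general technique its name refers to, suggesting related search terms by finding words whose embedding vectors are close to the query's, can be sketched in a few lines. Everything below is invented for illustration: the vocabulary, the tiny 3-dimensional vectors, and the function names are not from the source (QREK's real embeddings were learned from millions of clinical notes).

```python
import math

# Toy word vectors -- invented for illustration only. Real embeddings are
# high-dimensional and trained on a large corpus of clinical notes.
EMBEDDINGS = {
    "hypertension":        (0.90, 0.10, 0.05),
    "high blood pressure": (0.88, 0.12, 0.07),
    "htn":                 (0.85, 0.15, 0.06),  # common abbreviation
    "diabetes":            (0.10, 0.90, 0.20),
    "asthma":              (0.05, 0.20, 0.90),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def suggest_terms(query, k=2):
    """Return the k vocabulary terms whose vectors are closest to the query."""
    qvec = EMBEDDINGS[query]
    scored = [(cosine(qvec, vec), term)
              for term, vec in EMBEDDINGS.items() if term != query]
    scored.sort(reverse=True)
    return [term for _, term in scored[:k]]

print(suggest_terms("hypertension"))  # ['high blood pressure', 'htn']
```

With these made-up vectors, a search for "hypertension" surfaces both the lay phrase and the abbreviation, which mirrors the synonym-and-abbreviation problem the summary describes.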

The research team tested QREK in two ways. First, they asked three doctors to rate the relevance of terms suggested by QREK across 11 searches. Second, the team looked at how often QREK correctly suggested a synonym for a known medical term.

The research team tested the final version of QREK under nine different scenarios with people at Nationwide Children’s Hospital to get feedback about its usefulness. For example, some people used QREK to do research; others used QREK to help improve care.

Patients, hospital administrators, health insurers, health information technology specialists, researchers, and clinicians provided input during the study.

What were the results?

The research team found that about 72 percent of the terms suggested by QREK were relevant to the original search term. Also, of the first 60 terms suggested by QREK, 54 percent matched synonyms on a standard list of known medical terms.
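The second evaluation, checking what fraction of the first 60 suggested terms appear on a standard synonym list, is essentially a precision-at-k calculation. A minimal sketch, assuming a simple case-insensitive string match against the reference list (the function name and the tiny example data are hypothetical, not from the report):

```python
def synonym_match_rate(suggested_terms, known_synonyms, k=60):
    """Fraction of the first k suggested terms that appear in a reference
    synonym list. Hypothetical reconstruction of the evaluation described
    in the summary; the matching rule is an assumption."""
    top_k = [t.lower() for t in suggested_terms[:k]]
    reference = {s.lower() for s in known_synonyms}
    hits = sum(1 for t in top_k if t in reference)
    return hits / len(top_k) if top_k else 0.0

# Tiny invented example: 2 of the 4 suggestions are on the reference list.
rate = synonym_match_rate(
    ["HTN", "high blood pressure", "headache", "fever"],
    ["htn", "high blood pressure", "elevated blood pressure"],
    k=4,
)
print(round(rate, 2))  # 0.5
```

In the study's terms, a rate of 0.54 over the first 60 suggestions corresponds to the reported 54 percent.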

In the nine scenarios tested, people reported that QREK improved their use of EHR notes.

What were the limits of the project?

Testing occurred at a children’s hospital using keywords for children’s care.

Future research could continue to refine QREK and test QREK with EHR notes from adult care settings.

How can people use the results?

Researchers and hospital staff can use QREK to search and use notes in EHRs. QREK is available free of charge.

How this project fits under PCORI’s Research Priorities

The research reported in this results summary was conducted using PCORnet®, the National Patient-Centered Clinical Research Network. PCORnet® is intended to improve the nation’s capacity to conduct health research, particularly comparative effectiveness research (CER), efficiently by creating a large, highly representative network for conducting clinical outcomes research. PCORnet® has been developed with funding from the Patient-Centered Outcomes Research Institute® (PCORI®).

Final Research Report

View this project's final research report.

Peer-Review Summary

Peer review of PCORI-funded research helps make sure the report presents complete, balanced, and useful information about the research. It also assesses how the project addressed PCORI’s Methodology Standards. During peer review, experts read a draft report of the research and provide comments about the report. These experts may include a scientist focused on the research topic, a specialist in research methods, a patient or caregiver, and a healthcare professional. These reviewers cannot have conflicts of interest with the study.

The peer reviewers point out where the draft report may need revision. For example, they may suggest ways to improve descriptions of the conduct of the study or to clarify the connection between results and conclusions. Sometimes, awardees revise their draft reports twice or more to address all of the reviewers’ comments. 

Peer reviewers commented and the researchers made changes or provided responses. Those comments and responses included the following:

  • The reviewers commended the researchers on an interesting study. They found many sections easy to understand but found other sections overly technical. The researchers addressed this issue by adding plain-language summaries at the end of the report’s technical sections and adding a glossary to explain unfamiliar terms.
  • The reviewers suggested that the source code for the language processing tool the researchers created should be made available publicly so the tool could be applied in other locations. The researchers added information to the report about how readers could request the source code, which would be made available without cost.
  • The reviewers noted that user feedback included in the report appeared to be anecdotal rather than systematic and asked the researchers to demonstrate their systematic approach to collecting feedback from usability testing. The researchers responded that they had completed systematic usability testing through other funding and reported on that work in a published article. Therefore, they obtained only informal feedback from individual research teams for the current report.

Project Information

Investigators: Yungui Huang, PhD, MBA; Huan Sun, PhD
Institution: Nationwide Children's Hospital/The Ohio State University
Award Amount: $1,047,525
DOI: 10.25302/11.2022.ME.2017C16413
Project Title: Unlocking Clinical Text in EMR by Query Refinement Using Both Knowledge Bases and Word Embedding

Key Dates

November 2017
December 2022

Last updated: March 14, 2024