Standardization, feedback improve images and data

With three shifts, weekend staff, and portable, offsite, and outpatient units all providing computed radiography services, the radiology department at Texas Children's Hospital has plenty of incentive to standardize and improve the quality of the images it collects. With 70% of its case volume coming from CR, the department has been using PACS to track rejected images and develop training for staff.

With some number crunching, the PACS administrator and applications trainer can determine which technologist is prone to overexpose images, who mispositions infants for chest studies, and who uses too much contrast. The department sets a benchmark reject rate and works with individual technologists on the skills they need to hit that mark.

Such projects showcase one of the often overlooked advantages of PACS: the ability to collate data that previously took more time than most departments could spare and to turn it into meaningful quality improvement feedback. That's only the beginning. By its nature, an informatics-driven department can also minimize the risk of clerical errors and lost films; efficiency is a major factor in many quality improvement endeavors. As hospital systems become more thoroughly linked, performance feedback could influence referring physicians' ordering patterns as well.

Updating a department can introduce as many errors as it eliminates. A filmless department no longer needs film librarians to track down missing jackets throughout the hospital. Instead, it needs PACS administrators to reconcile studies for Fry and Frye, who are the same patient with one incorrectly spelled name. Technologists can crop digital images to focus attention on the target anatomy, but in the process they might inadvertently edit out significant peripheral findings. A digital department can be more efficient, but only if the right systems are in place.
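
To see how that reconciliation work might be automated, here is a minimal sketch in Python of flagging near-duplicate patient records for manual review. The record layout and the 0.8 similarity threshold are illustrative assumptions, not any vendor's actual matching logic.

```python
# A minimal sketch of how a PACS analyst's tooling might flag likely
# duplicate patient records such as "Fry" vs. "Frye" for manual review.
# The record structure and 0.8 similarity threshold are illustrative
# assumptions, not any vendor's actual algorithm.
from difflib import SequenceMatcher
from itertools import combinations

patients = [
    {"mrn": "100234", "name": "FRY, JOHN", "dob": "2001-03-14"},
    {"mrn": "109871", "name": "FRYE, JOHN", "dob": "2001-03-14"},
    {"mrn": "100555", "name": "SMITH, JANE", "dob": "1999-07-02"},
]

def likely_duplicates(records, threshold=0.8):
    """Pair up records with near-identical names and matching birth dates."""
    flagged = []
    for a, b in combinations(records, 2):
        name_score = SequenceMatcher(None, a["name"], b["name"]).ratio()
        if name_score >= threshold and a["dob"] == b["dob"]:
            flagged.append((a["mrn"], b["mrn"], round(name_score, 2)))
    return flagged

for mrn_a, mrn_b, score in likely_duplicates(patients):
    print(f"Review MRNs {mrn_a} and {mrn_b}: name similarity {score}")
```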

At Texas Children's, rejected CR images get a new lease on life. By placing "None" in front of the name, identification number, and other fields, the rejecting technologist or radiologist ensures that the case will fail validation by the radiology information system (RIS) and attract the attention of a PACS analyst. By keeping some record that the images were acquired, the department can also track the cumulative radiation exposure of individual patients.
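
A minimal sketch of that reject-flagging idea, assuming simple dictionary records and a RIS check that requires exact identifier matches:

```python
# A minimal sketch of the reject-flagging idea described above: prefixing
# identifying fields with "None" so the study fails RIS validation and is
# routed to a PACS analyst. Field names and the validation rule are
# illustrative assumptions about how such a check could be written.
REVIEW_PREFIX = "None"

def flag_rejected(study):
    """Mark a rejected study so it cannot validate against the RIS."""
    for field in ("patient_name", "patient_id", "accession_number"):
        study[field] = f"{REVIEW_PREFIX} {study[field]}"
    return study

def ris_validates(study, ris_order):
    """A study only validates if its identifiers match the RIS order exactly."""
    return (study["patient_id"] == ris_order["patient_id"]
            and study["accession_number"] == ris_order["accession_number"])

study = {"patient_name": "DOE, JANE", "patient_id": "100234",
         "accession_number": "A5512", "exam": "chest 2 views"}
order = {"patient_id": "100234", "accession_number": "A5512"}

flag_rejected(study)
assert not ris_validates(study, order)   # fails validation -> analyst queue
print("Routed to PACS analyst queue:", study["patient_name"])
```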

Working with daily reports from PACS analysts and the reject archives, department applications trainers plan semiannual in-services to go over the most common errors department-wide (see table). Team leaders look at each technologist's reject files every month and work individually on needed skills. Some supervisors use this as a learning tool for new employees' 90-day probation period.

"We set a benchmark of a 4.36% reject rate for all CR technologists," said Stephanie Carr, the department applications trainer. "You can see that a new employee might be having problems with positioning. We work with pediatric patients, so there's a certain amount of movement, and the new employee needs to learn techniques to overcome that."

For now the department is limiting its efforts to CR, but Carr and PACS manager Melissa Blado have considered moving the tools to MR and CT once the process is more standardized. Other centers have focused on CR, too, and suggest that it may be more vulnerable to error than other modalities.

"Traditionally, other modalities have been digital from the start," said Ellen Charkot, a technologist at Toronto's Hospital for Sick Children, who promoted the idea of a radiology quality specialist at the 2004 Society for Computer Applications in Radiology meeting. "These technologists were trained on analog and aren't necessarily educated about how CR or DR are created. Things look different, and it takes time to learn how to produce image quality that is consistent with what one is used to seeing. After we started CR in Toronto, for a while the radiologists wondered why every patient looked like they had interstitial pneumonia."

Beyond positioning and exposure errors, the system is as vulnerable as plain film to misidentified or misfiled studies. PACS administrators make manual corrections daily from technologist and radiologist alerts to ensure that what ultimately ends up in the PACS matches what was requested on the RIS.
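
That daily reconciliation can be thought of as a field-by-field diff between the two systems. A minimal sketch, with illustrative field names standing in for the actual RIS and PACS schemas:

```python
# A minimal sketch of the daily reconciliation described above: comparing
# what landed in PACS against what the RIS ordered and listing mismatches
# for a PACS administrator. Field names are illustrative assumptions.
ris_orders = {
    "A5512": {"patient_id": "100234", "exam": "chest 2 views"},
    "A5513": {"patient_id": "100555", "exam": "abdomen 1 view"},
}

pacs_studies = [
    {"accession": "A5512", "patient_id": "100234", "exam": "chest 2 views"},
    {"accession": "A5513", "patient_id": "100556", "exam": "abdomen 1 view"},
]

def find_mismatches(studies, orders):
    """Yield (accession, field, pacs_value, ris_value) for every discrepancy."""
    for study in studies:
        order = orders.get(study["accession"])
        if order is None:
            yield study["accession"], "accession", study["accession"], None
            continue
        for field in ("patient_id", "exam"):
            if study[field] != order[field]:
                yield study["accession"], field, study[field], order[field]

for acc, field, got, expected in find_mismatches(pacs_studies, ris_orders):
    print(f"{acc}: {field} is {got!r} in PACS but {expected!r} on the RIS")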

Although Texas Children's reject system has been in use for several years, it is improvised within the limitations of a four-year-old PACS and an older CR system. Newer software packages allow technologists or radiologists to designate images for teaching files or make sticky notes on screen about specific problems or questions. They also include tools such as modality work lists that reduce the odds that a patient will be misidentified and the file incorrectly archived.

Such problems are serious and time-consuming enough that one healthcare system's radiology department set up a zero-tolerance policy. Tom Hasley, now a digital solutions advisor with Fuji Medical Systems, developed an error reporting and feedback mechanism during his years as a PACS analyst at Rex Healthcare in Raleigh, NC. Errors included images sent to PACS with the wrong accession number, the wrong name, or under the wrong exam title, as well as the wrong patient selected from the work list and sent to PACS.

Technologists who made errors faced the following escalating responses:

- first offense, verbal warning;

- second offense, written warning and review training;

- third offense, level 2 written warning and further review training; and

- fourth offense and beyond, disciplinary action and/or termination.

If employees go six weeks without further violations following a verbal warning, their record is cleared unless a pattern of repeat errors develops.
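
The ladder and its six-week reset amount to a small state machine. A minimal sketch, modeling only what the article states; the data structures and reset condition beyond that are illustrative assumptions:

```python
# A minimal sketch of the escalation ladder described above, including the
# six-week clean-record reset after a verbal warning. Anything beyond the
# stated rules (data structures, names) is an illustrative assumption.
from datetime import date, timedelta

ACTIONS = ["verbal warning",
           "written warning and review training",
           "level 2 written warning and further review training",
           "disciplinary action and/or termination"]

CLEAR_AFTER = timedelta(weeks=6)

def record_error(history, when):
    """Append an error date; clear the record if six clean weeks followed
    a lone verbal warning before this new error occurred."""
    if len(history) == 1 and when - history[-1] >= CLEAR_AFTER:
        history.clear()
    history.append(when)
    level = min(len(history), len(ACTIONS)) - 1
    return ACTIONS[level]

history = []
print(record_error(history, date(2004, 1, 5)))  # verbal warning
print(record_error(history, date(2004, 3, 1)))  # >6 clean weeks: record was
                                                # cleared, so verbal warning again
```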

In a department that sends 150,000 cases to PACS each year, the project rolled out gradually, modality by modality. Between 1999 and 2003, only one employee tested the limits of the program and was fired, Hasley said. The rate of errors dropped, and the system is still in place.

It's a model Dr. Paul Nagy is looking to put in place at the Medical College of Wisconsin, although he plans to take more of a carrot-and-stick approach than one that is strictly punitive.

"Technology isolates the user groups, so it's important to build communication skills," he said. "Radiologists depend on technologists, and technologists need to learn and get feedback. That system needs to be built in and not slow down either party. Radiologists can get frustrated and stop reporting problems if they don't hear that the problems are being addressed."

At Wisconsin, a radiologist who sees a problem with an annotation or technique can right-click it and save a pointer next to it, then put the study in a public area of the PACS with a note on what's wrong. A technologist supervisor in each section checks the reject folder throughout the day and resolves each issue with the appropriate technologist. A logging feature sends an e-mail to the original radiologist with a note that the problem has been addressed.
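
A minimal sketch of that closed feedback loop, with simple classes and a print statement standing in for the PACS annotation tools and the e-mail the logging feature sends:

```python
# A minimal sketch of the Wisconsin-style feedback loop described above:
# a radiologist files an issue against a study, a supervisor resolves it,
# and the logging step notifies the original radiologist. The classes and
# the print-based "e-mail" are illustrative assumptions, not the actual PACS.
from dataclasses import dataclass, field

@dataclass
class Issue:
    study_id: str
    radiologist: str
    note: str
    resolved: bool = False

@dataclass
class FeedbackQueue:
    issues: list = field(default_factory=list)

    def report(self, study_id, radiologist, note):
        self.issues.append(Issue(study_id, radiologist, note))

    def resolve(self, study_id, resolution):
        for issue in self.issues:
            if issue.study_id == study_id and not issue.resolved:
                issue.resolved = True
                # Stand-in for the e-mail the logging feature sends back.
                print(f"To {issue.radiologist}: issue on {study_id} "
                      f"addressed ({resolution})")

queue = FeedbackQueue()
queue.report("CR-1042", "dr_nagy", "marker obscures costophrenic angle")
queue.resolve("CR-1042", "technologist re-trained on marker placement")
```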

Such feedback mechanisms are only an addition to actual interpersonal relationships, not a substitute for them, Nagy said. Collaboration and education need to be part of the interaction between radiologist and technologist in the first place. That's easier in interventional radiology and ultrasound, which are collaborative by nature, but radiography can be fairly isolated.

The stakes are higher in informatics environments, because PACS is unforgiving of human error. Missing medical record numbers, patient names, or procedure types can render studies unviewable. Simple steps such as selecting a patient name from the RIS rather than typing it into the study can reduce transcription errors.
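
The difference between the two entry paths is easy to see in code. A minimal sketch, with an illustrative lookup table standing in for the RIS:

```python
# A minimal sketch contrasting the two entry paths described above: typing
# a name free-text (transcription errors possible) versus selecting it from
# the RIS work list (the stored value is copied verbatim). The lookup
# structure is an illustrative assumption.
ris_worklist = {"100234": "FRYE, JOHN", "100555": "SMITH, JANE"}

def enter_name_free_text(typed_name):
    """Whatever the technologist types is what PACS gets -- typos included."""
    return typed_name

def select_from_worklist(patient_id):
    """Copying from the RIS removes the transcription step entirely."""
    return ris_worklist[patient_id]

print(enter_name_free_text("FRY, JOHN"))   # typo slips through
print(select_from_worklist("100234"))      # always matches the RIS record
```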

SPREADING QUALITY

As radiology informatics evolves, the gaps for error are closing. A RIS can pull patient demographic data from hospital information systems when a referring physician orders a study. At the modality, the technologist can select the patient's name from a work list and have that data automatically placed into the appropriate fields of the image file. If an identification error does occur, many modalities allow the user to correct information, and newer RIS and PACS should offer options to repair errors as well. Such points are important considerations when purchasing new equipment, said Dr. David Channin, chief of imaging informatics at Northwestern University.
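
A minimal sketch of that work-list-driven field population, assuming the pydicom library is available; in a real system the work list entry would come from a DICOM Modality Worklist query, which is out of scope here:

```python
# A minimal sketch of work-list-driven field population as described above:
# demographics from a modality work list entry are copied into the image
# header rather than retyped. The worklist_entry() stub is an illustrative
# assumption standing in for a real DICOM Modality Worklist response.
from pydicom.dataset import Dataset

def worklist_entry():
    """Stand-in for one result of a Modality Worklist query."""
    entry = Dataset()
    entry.PatientName = "DOE^JANE"
    entry.PatientID = "100234"
    entry.AccessionNumber = "A5512"
    return entry

def new_image_header(entry):
    """Copy identifying fields verbatim from the work list into the image."""
    ds = Dataset()
    for keyword in ("PatientName", "PatientID", "AccessionNumber"):
        setattr(ds, keyword, getattr(entry, keyword))
    return ds

header = new_image_header(worklist_entry())
print(header.PatientName, header.PatientID, header.AccessionNumber)
```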

Below the surface, Integrating the Healthcare Enterprise (IHE) standardization efforts are trying to eliminate nonhuman glitches.

"You have to make sure that natural checks in human systems are supported by automated ones," Channin said. "You need to not only send a file from the CT workstation to PACS, but confirm that PACS got and stored it properly before the CT station can delete it."

Adding consistency to the hardware end and to the human side of radiology holds even greater promise for improving healthcare quality, according to Channin. More detailed clinical and demographic data pulled through the RIS will give radiologists better clues about what to look for in images. Structured reporting will provide referring physicians with more feedback about diagnostic probabilities. Standardizing radiology's language will allow even greater data mining from PACS for research. The RSNA is working on RadLex, a defined set of terms that can be used in conjunction with SNOMED, the general medical lexicon.

"These lexicons need to define ordering information as well as clinical terms. Something that is 'chest two views' at one facility may be 'x-ray chest PA and LAT' at another," Channin said. "After we get the terms fixed, we can move to develop a true ontology, which defines relationships in a lexicon. 'CT of the trunk' could be universally understood to be CT of the chest, pelvis, and abdomen."

As their products expand from the radiology department throughout the hospital, vendors have embraced the idea that feedback and data-gathering tools can be used to make healthcare more consistent.

"The Holy Grail is a clinical best practices guideline available at the point of order that respects the physician's judgment," said Jess Edwards, PACS product line manager at Eastman Kodak. "You want to help someone make the right decision about whether to order a plain chest film or a body CT."

A sophisticated RIS could incorporate those features, with decision support based on appropriateness criteria, and it could even learn from common ordering patterns at a facility over time. Referring physicians and radiologists would be free to use their best judgment about which study to order, but the preferred workup would be available to them, and hospital quality managers could look at patterns for educational detailing or outcomes research.
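
A minimal sketch of point-of-order decision support in that spirit: the preferred workup is surfaced, but the ordering physician keeps the final say. The appropriateness table is an illustrative assumption, not any published criteria.

```python
# A minimal sketch of point-of-order decision support as described above:
# the highest-scoring exam for an indication is suggested, but the order
# is never blocked. The scores are illustrative assumptions.
APPROPRIATENESS = {  # (indication, exam) -> score out of 9
    ("cough, 2 weeks", "XR CHEST PA AND LATERAL"): 8,
    ("cough, 2 weeks", "CT CHEST"): 3,
}

def suggest(indication, requested_exam):
    """Show the best-scoring exam for the indication; never block the order."""
    candidates = {exam: score for (ind, exam), score in APPROPRIATENESS.items()
                  if ind == indication}
    best = max(candidates, key=candidates.get)
    if requested_exam != best:
        print(f"Note: {best} scores {candidates[best]}/9 for this indication; "
              f"{requested_exam} scores {candidates.get(requested_exam, '?')}/9.")
    return requested_exam  # physician's judgment stands

suggest("cough, 2 weeks", "CT CHEST")
```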

For the moment, informatics and quality are merging in much less sweeping ways in places like Texas and North Carolina. The data generated with each rejected image let managers benchmark their own performance and customize education. Blado, in particular, is looking forward to a day when she can compare her internal data at Texas Children's Hospital with those from colleagues around the country to see how her department's efforts stack up.

"If you find someone else who's doing this let me know," she said.

--

Top 10 CR Error Types

The following are the most common causes of rejected CR studies at Texas Children's Hospital.

Technical: Patient mispositioned         46.37%
Quality: Low LGM/contrast                 9.09%
Technical: Inadequate inspiration         6.60%
Technical: Cassette mispositioned         5.94%
Quality: High LGM/contrast                4.90%
Appropriateness: Wrong exam performed     3.27%
Quality: Equipment artifact               3.05%
Quality: Patient artifact                 2.19%
Technical: Motion                         0.97%
Appropriateness: Not indicated            0.06%

Source: M. Blado
