PACS reading protocols tame information overload

The imaging data generated in a typical episode of patient care is too voluminous and generally too complex for radiologists to manage unassisted. Harnessing this information and converting it into better health outcomes is the objective of PACS developers, who believe that computerized protocols for image presentation will help break the logjam.

Dr. Keith Dreyer, vice chair of radiology computing and information sciences at Massachusetts General Hospital, considers the ability of PACS to show images of all types, as well as documents and other data, in a user-friendly format critical to keeping up with the onslaught of information. Radiologists could improve their efficiency by 50% by using workstations equipped with reading protocols that present 3D images and other pertinent patient data in an automated, consistent format, according to Dreyer.

In an interview, Dreyer explained what reading protocols can do to enhance radiographic diagnosis and how 3D imaging and other forms of advanced image processing will be integrated into PACS.

Ogle: Some people define reading protocols as "automated reading," while others think of them as electronic film-hanging protocols. What's your definition?

Dreyer: Just so people have a reference point, reading protocols contain within them the notion of hanging protocols. But they are much more than that, in that hanging protocols are a static concept in which you're typically dealing with just images from known modalities; CT and MR are the most common, although plain film and ultrasound also come into play. In thinking about reading protocols, you also need to consider the exam protocol; that is, how the exam is performed. Do you use contrast or not? Do you obtain thick cuts or multiple thin sections? How much contrast do you give, when do you give it, and when do you scan?

Reading protocols will be to some extent driven by this information, but they will also key into indicators from the patient and the patient's demographics. One of the main things to be included will be prior exams. What complementary imaging studies have been performed that the user needs to see at the same time? This is already done to some extent with hanging protocols, but additional information such as imaging studies done outside of radiology, including cardiology and endoscopy, has been pretty hard to come by. There will also soon be genomic data coming across that we'll want to see prior to rendering a diagnosis.
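As a rough illustration of how a reading protocol might key into this information, the following minimal Python sketch selects a protocol from the modality, the clinical indication, and the available priors. The data structures and field names are hypothetical; commercial PACS encode this differently.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical description of an incoming study and its clinical context.
@dataclass
class StudyContext:
    modality: str                       # e.g. "CT", "MR"
    indication: str                     # reason for exam, e.g. "rule out PE"
    prior_study_uids: List[str] = field(default_factory=list)
    outside_studies: List[str] = field(default_factory=list)  # cardiology, endoscopy, etc.

# A reading protocol pairs selection rules with a display layout (defined elsewhere).
@dataclass
class ReadingProtocol:
    name: str
    modality: str
    keywords: List[str]                 # indication keywords this protocol applies to
    wants_priors: bool = True

def select_protocol(ctx: StudyContext,
                    protocols: List[ReadingProtocol]) -> Optional[ReadingProtocol]:
    """Pick the first protocol whose modality and indication keywords match the study."""
    for proto in protocols:
        if proto.modality != ctx.modality:
            continue
        if any(kw in ctx.indication.lower() for kw in proto.keywords):
            return proto
    return None

protocols = [
    ReadingProtocol("CT PE rule-out", "CT", ["pe", "pulmonary embol"]),
    ReadingProtocol("CT chest routine", "CT", ["chest", "nodule", "cough"]),
]
ctx = StudyContext("CT", "rule out PE", prior_study_uids=["1.2.840.0001"])
match = select_protocol(ctx, protocols)
print(match.name if match else "no protocol matched")   # -> "CT PE rule-out"
```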

Ogle: Are you implying that without an electronic medical record system you can't do this?

Dreyer: Where they have EMRs, some people are leaning on them to fill this gap in reading protocols. If you need to know a patient's lab value, or you want some information about patient history, you currently have to go to the EMR or make an awful lot of phone calls. You can imagine a reading protocol of the future that gathers this information through Clinical Context Object Workgroup (CCOW) or some other mechanism and makes it available to radiologists without them having to leave their PACS reading environment. We need a way to share context-sensitive information at the point of care, which is what CCOW is about.
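The mechanics of that context sharing depend on the site's EMR and integration layer, but the idea can be sketched as follows. This hypothetical Python example assumes a simple REST-style clinical data service; the endpoint and field names are invented for illustration and are not part of CCOW or any particular EMR API.

```python
import requests

# Hypothetical endpoint exposed by an integration layer in front of the EMR.
CLINICAL_DATA_URL = "https://emr.example.org/api/patient-context"

def gather_clinical_context(patient_id: str) -> dict:
    """Pull labs, history, and prior reports so the radiologist never has to
    leave the PACS reading environment or pick up the phone.

    In a real deployment the patient identity would come from the shared
    CCOW context (or a successor mechanism), not be passed around by hand.
    """
    response = requests.get(
        CLINICAL_DATA_URL,
        params={"patient": patient_id, "sections": "labs,history,reports"},
        timeout=5,
    )
    response.raise_for_status()
    data = response.json()
    return {
        "labs": data.get("labs", []),        # e.g. creatinine before giving contrast
        "history": data.get("history", ""),  # brief clinical history
        "reports": data.get("reports", []),  # impressions from prior reports
    }

# Usage inside a reading-protocol hook:
# context = gather_clinical_context(current_patient_id)
```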

Ogle: What's driving the use of reading protocols?

Dreyer: The main reason is the need to respond to information overload. They make you far more efficient, they can make you far more consistent and, I believe, potentially more accurate.

Ogle: Can you break that down?

Dreyer: Reading protocols make you more efficient because you don't have to do a lot of the prep work to view images. In theory, you can set up a PACS viewing experience (how you want to have images presented) for now and going forward, for certain types of conditions. Say a patient has had a CT exam using a certain exam protocol. The reading protocols let you view an image in 3D in one area, let you look at 2D data in another quadrant of the screen, and perhaps allow you to examine a prior somewhere else. The protocol saves you the time of having to set that study up the way you like it, in the same way you save time when someone else hangs films for you on an alternator.

As for consistency, you are not likely to miss images doing things this way. If you know that you have an MR hanging protocol with images from four pulse sequences appearing in the four quadrants, you don't have to drag them up one at a time. You're consistently looking at the same thing in the same place every time. You know the enhanced image is to the left and the nonenhanced to the right, for example. You don't have to mentally retrain yourself every time you're looking at it.

Accuracy falls out of that because you're not actually looking in the wrong direction or missing an entire series coming up because it wasn't staged to come up that way for you. This is similar to what happens with an alternator if the filing person doesn't put the last three images up for you.
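To make the layout idea concrete, here is a minimal, hypothetical sketch of how a reading protocol might declare its screen layout so the same series lands in the same place every time, and so a missing series is flagged rather than silently left off the screen. The structure and field names are invented for illustration.

```python
# Hypothetical layout for a contrast-enhanced MR brain reading protocol.
MR_BRAIN_ENHANCED_LAYOUT = {
    "name": "MR brain with contrast",
    "viewports": [
        {"quadrant": "upper-left",  "series": "T1 post-contrast"},
        {"quadrant": "upper-right", "series": "T1 pre-contrast"},
        {"quadrant": "lower-left",  "series": "T2 FLAIR"},
        {"quadrant": "lower-right", "series": "DWI"},
    ],
    "secondary_monitor": "most recent comparable prior",
}

def hang(study_series: dict, layout: dict) -> list:
    """Resolve each viewport to an actual series, flagging anything missing:
    the digital equivalent of the film clerk leaving the last three films
    off the alternator."""
    assignments = []
    for vp in layout["viewports"]:
        series = study_series.get(vp["series"])
        assignments.append({**vp, "resolved": series, "missing": series is None})
    return assignments

# Example: the DWI series never arrived, so it is flagged instead of silently absent.
available = {"T1 post-contrast": "series-001", "T1 pre-contrast": "series-002",
             "T2 FLAIR": "series-003"}
for slot in hang(available, MR_BRAIN_ENHANCED_LAYOUT):
    print(slot["quadrant"], slot["series"], "MISSING" if slot["missing"] else "ok")
```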

Ogle: So you can customize your hanging protocols to suit your taste?

Dreyer: Yes, but that's both good and bad because you want to have consistency, and you may even want to have consistency among users in your department. One way to create a reading protocol is to say, we're going to create a master reading protocol for all neuroradiologists reading this type of exam, or all body radiologists reading that type of exam. To accomplish this, the PACS administrator meets with the subspecialized radiologists and asks them how they want to read their images. They seek consensus.

There are also individual preferences to take into account. People still flip images a certain way and look at things differently. Some like to have four up; some like to have one up. Report delivery would probably be more efficient and consistent if everyone did it the same way in a department, but that's not the real world. You have to allow for individual modifications. These individual changes should be persistent throughout the system, however.
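One simple way to reconcile a departmental master protocol with persistent individual preferences is to layer per-user overrides on top of the consensus default, along these hypothetical lines.

```python
import copy

# Consensus protocol agreed on by, say, the neuroradiology section.
DEPARTMENT_DEFAULT = {
    "layout": "2x2",
    "window_preset": "brain",
    "priors_shown": 1,
}

# Persistent per-user tweaks, stored with each radiologist's profile.
USER_OVERRIDES = {
    "jsmith": {"layout": "1x1", "priors_shown": 2},
}

def protocol_for(user: str) -> dict:
    """Start from the department default, then apply any saved individual overrides."""
    protocol = copy.deepcopy(DEPARTMENT_DEFAULT)
    protocol.update(USER_OVERRIDES.get(user, {}))
    return protocol

print(protocol_for("jsmith"))   # 1x1 layout, brain window, 2 priors
print(protocol_for("newhire"))  # unchanged department default
```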

Ogle: So is it through this customization that some of the progress will come in refining the product?

Dreyer: Undoubtedly. It's also true that nothing stays the same in medicine, so when someone develops three new pulse sequences for MR, someone else will have to figure out the best way to deal with this on the PACS viewing end.

Things become far more complicated and mission-critical when we get to 3D and volume visualization. The good news about having a multislice CT study with 3000 images of the chest is that at least you know what you've got: 3000 axial slices of the chest. But with enhanced image visualization, you can also have MIPs, MPRs, curved reformations, and other things like double oblique images coming in. And they're simply piled on top of the 2D image sets.

Now what's the protocol for adding 3D visualization and how do you make it consistent? And if that's not going to be well defined, then how does a hanging protocol work and should it be more like a reading protocol? Should there, for example, be a window dedicated to 3D that just runs the 3D viewer and lets the user figure out the MIPs and MPRs and whatever else they want?

Maybe the next stage will permit such things as, when we're looking for the circle of Willis, we have to be able to rotate the image at this angle and do a MIP with a thickness of 2 cm. All this needs to be defined, but currently there are no standards. Fortunately, I think the presence of protocols will allow standards to be useful. If we did not have reading protocols, it would be hard to implement standards.
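A reading protocol could capture such a 3D step declaratively, as a rendering recipe that the workstation or 3D lab executes the same way every time. The parameter names below are invented for illustration; as noted above, no standard currently defines them.

```python
# Hypothetical, declarative 3D step for a circle-of-Willis review.
CIRCLE_OF_WILLIS_STEP = {
    "description": "MIP through the circle of Willis",
    "rendering": "MIP",
    "slab_thickness_mm": 20,                       # the 2 cm slab from the example above
    "orientation": {"plane": "axial", "rotation_deg": 15},
    "source_series": "3D TOF MRA",
}

def run_step(volume, step: dict):
    """Placeholder: a real implementation would hand these parameters to the
    workstation's volume renderer; here we only validate and echo the recipe."""
    assert step["rendering"] in {"MIP", "MPR", "VR"}, "unsupported rendering type"
    print(f"{step['description']}: {step['rendering']}, "
          f"{step['slab_thickness_mm']} mm slab, {step['orientation']}")
    return None  # the rendered series would be returned here

run_step(None, CIRCLE_OF_WILLIS_STEP)
```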

Ogle: It sounds like we're still at the front edge of this?

Dreyer: That's true for 3D and volume visualization. We're right at the front door of this technology being present in most workstations.

Ogle: Do you think radiologists will respond to 3D as a good thing for them, or as just one more thing to worry about?

Dreyer: That's a good distinction. It's like PET/CT: if you really understand it, PET/CT is phenomenal, but if you don't, then it's probably another burden. If you use 3D appropriately and (big assumption) there's a 3D application that's available for you to use appropriately, then it will be a tremendous aid. This implies that it's easy to use, to repeat, to obtain data from, and it's very fast and easy to display. If it falls short in these areas, then it's going to be a burden.

There are ways in which 3D can speed you up but also ways it can slow you down. There are ways it can make you more accurate and ways to make you less accurate, depending on how you use it. It's one of those things you can't shy away from and expect to go away, however. It's here. We're waiting for that set of 3D applications that are very easy for radiologists to use that don't burden their time.

In our 3D imaging lab at Mass General, we have supertechs who have been trained to prepare the 3D protocols that we're talking about. It's their charge to deliver semiautomated, processed data in a consistent fashion for the radiologist to read. The goal is to relieve radiologists of the burden to do any 3D processing yet allow them to benefit from the advantages of 3D visualization.

Ogle: How long will that model last?

Dreyer: I don't know. There is a shortage of technologists too. The ideal way to do this is to have it automated and run right out of the scanners.

Ogle: Do reading protocols improve your performance as a radiologist?

Dreyer: I haven't seen any studies that prove it, but it seems intuitive that we have improved our accuracy by representing the data in a far more consistent fashion. I believe we can be more accurate in arriving at the correct diagnosis. This will eventually need to be documented.

Ogle: What other forms of automation could reading protocols embrace?

Dreyer: The capture of the indications for an exam would be very helpful. We want to know why a person is having an exam and exactly what exam protocol is being performed. This would give us a chance of automating 3D or of providing useful information to the user at the time of interpretation, such as display of differential diagnosis or the most recent information surrounding tumor staging, if someone needs to know that.

It would also be nice to have a reading protocol that includes a decision tree based on decisions or actions of the radiologist. They could say, "I don't see anything in the brain, so now I'm more concerned about something in the neck or in the ear," and the protocol would reconstruct the representative data. This is a lot of software work, but once it's ready, it's going to save a lot of time, so it seems inevitable.
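Such a decision tree might be expressed as conditional follow-on steps keyed to the radiologist's stated finding, as in this hypothetical sketch.

```python
# Hypothetical decision tree: the radiologist's finding triggers the next
# reconstructions the protocol should stage automatically.
FOLLOW_ON_STEPS = {
    "brain negative": [
        {"region": "neck",          "recon": "thin axial sections plus sagittal MPR"},
        {"region": "temporal bone", "recon": "0.5 mm bone-algorithm reconstruction"},
    ],
    "mass identified": [
        {"region": "brain", "recon": "post-contrast volume plus perfusion maps"},
    ],
}

def next_reconstructions(finding: str) -> list:
    """Return the reconstructions the protocol should queue for the stated finding."""
    return FOLLOW_ON_STEPS.get(finding, [])

for step in next_reconstructions("brain negative"):
    print(f"Queue {step['recon']} for the {step['region']}")
```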

Ogle: What are the implications for reading protocols that theoretically make it easier for nonradiologists to make a diagnosis?

Dreyer: In my mind, that's where we want to go. Making it difficult for people to use imaging is not a good place to be. If reading protocols enhance their ability to understand and use imaging, then the last thing we should be worried about is losing the business to somebody else. I would go further, to say our job is to make it easier for everybody else. If we do a 2D imaging exam, like MR, that has the capability of 3D reconstruction or volume visualization, it should be our task to provide QC of that work, to render our diagnosis, and then to make it easy, presumably using a different reading protocol, for the referring clinician to see relevant images after the case has been interpreted.

The way I look at the data might be far different because I need to go to every pixel, as opposed to the surgeon who's about to operate and wants to see a specific aspect of, say, the acetabulum. We need to create another set of easier-to-use reading protocols for our ordering physicians to generate a concise set of data. Otherwise, as CT exams grow to thousands of images, they're just not going to use us any more. They'll find some other way to get their answers.

Ogle: How does everyone ensure they're speaking the same language when using reading protocols, and how are radiologists going to get paid for using them?

Dreyer: There are different efforts being put forth through DICOM committees and the RSNA. Something called RadLex, which is a lexicon of terms the RSNA has put together, provides a uniform framework for indexing and retrieval of a variety of radiology information sources, including teaching files, research data, and radiology reports. A playbook like this helps us to define things in a consistent fashion for imaging. It's going to take a long time, but the RSNA understands that this is something that needs to be accomplished and is going about it very methodically.
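In practice, a shared lexicon means normalizing a site's free-text exam descriptions to controlled terms before protocols and indexes key off them. The sketch below is hypothetical, and the identifiers shown are placeholders rather than real RadLex codes.

```python
# Hypothetical mapping from site-specific exam descriptions to controlled terms.
# The identifiers are placeholders, not actual RadLex RIDs.
LEXICON_MAP = {
    "ct chest w/ contrast": {"term": "CT chest with IV contrast", "rid": "RID-placeholder-1"},
    "mri brain wo":         {"term": "MR brain without contrast", "rid": "RID-placeholder-2"},
}

def normalize_exam(description: str) -> dict:
    """Map a local exam description to a shared term so reading protocols,
    teaching files, and reports all index the same concept."""
    key = description.strip().lower()
    return LEXICON_MAP.get(key, {"term": description, "rid": None})

print(normalize_exam("CT Chest w/ Contrast"))  # resolves to the controlled term
print(normalize_exam("US abdomen"))            # unmapped descriptions pass through
```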

Totally separate are efforts to come up with payment for 3D and for other additional functionality and protocols. The American College of Radiology has taken the lead in defining these issues and what needs to be reimbursed. It's challenging, because technology is changing too fast for legislation and the payers to keep up.

Ogle: Will reading protocols increase the efficiency of users?

Dreyer: With PACS, we saw about a 25% increase in the efficiency of radiologists once it was fully deployed at Mass General. That included most users sticking with their old hanging protocol methods. It's difficult to say with all this additional information flooding in how much more efficient they could be. I think, however, that if people could see the information they now can't get and if it was available at the time they read, and if they have the 3D information they want when they want it, they could probably increase their efficiency by another 50%. As the data coming at you increase, it doesn't get any faster to read. Unfortunately, everything slows down as you get more and more data.

Ogle: Is there any way to anticipate the disruptive technology that could alter the course of image interpretation?

Dreyer: You have to keep your finger on the pulse of all the technologies that could possibly affect you. That's a very tall order, and there's a price to be paid when you jump on disruptive technology early, as we did at Mass General with voice recognition. I see the same potential with 3D. It may very well change how we think about and design PACS, how we display images, and how primary interpretations of images are made. The fact that we have workstations from most of the PACS vendors in our lab and have trained people to make 3D an integral part of our system speaks to the fact that we're anticipating that this is going to change us all. It's the biggest wave coming at us.

Processing is also getting faster while memory and displays are getting cheaper with greater resolution, which makes it easier to do on a standard desktop workstation the things that used to take dedicated hardware. Imaging is going to get more specific for diseases. An ordinary MR of the pelvis is not likely to be done in the future; instead, what will be ordered will be an MR of the prostate to rule out cancer, for example. All these changes are going to require us to define reading protocols. How we move through and automate our interpretations will be crucial to our success.

Mr. Ogle is a freelance healthcare writer and media consultant. This article first appeared in Insights & Images, published by Fuji Medical Systems.
