Is the continued rise of artificial intelligence (AI) driven by legitimate “machine learning,” or do the frequently “suspicious” chest X-rays and questionable detection of subtle findings on head CTs reveal a hype-driven parade of new products headed for “planned obsolescence”?
Have you heard that artificial intelligence (AI) is going to make radiologists obsolete?
I would wager that it is borderline impossible for anybody in our field not to have heard this notion. Panicking colleagues and non-rads in health care (the latter often with a taunting tone) have been spewing such doom and gloom for years. Social media gives them a means of repeating those forecasts to an ever-larger audience with ever-greater frequency.
I have a hard time believing it. I’m sure that, somewhere out there, AI is doing great things and people who previously did those things might have been displaced as a result, or realistically foresee it happening. In my radiological life, however, I am seeing no threat—or, indeed, even help—from AI.
That’s how it’s supposed to go, right? First, AI tools make our jobs easier, but we humans are still doing most of the work. Then AI insinuates itself further and further until at some point a human is no longer required at the workstation.
I have had computers supposedly doing part of my job since I was in residency. Back then, it was just for mammo. “Computer-aided detection” (CAD) looked over each case and highlighted areas of concern. The vast majority of the time, the CAD findings were irrelevant, whether it was a wad of normal tissue, or some calcifications that nobody competent would do anything about.
Once in a while, CAD technically found an abnormality, but the finding would be a former biopsy site or something that had been present on a dozen mammos from previous years. It was nothing the rad would have missed, and even if he or she had, it wouldn’t have been an issue. Then, of course, there were the rip-roaring genuine abnormalities that CAD identified, but so could a first-year medical student, or the janitor sweeping up on the far side of the reading room.
Most of the time, CAD just produced little boxes on the images that actually added to our workload. Rather than calling the case normal and moving on, we now had an extra few moments of “Wait, what did the machine find? Better take a second look there,” which never yielded anything other than, maybe, a fleeting worry that we had in fact missed something and needed to upgrade our skills.
A subsequent job had CAD for chest X-rays, which provided the same waste of time without benefit.
With the recent hype about AI, I felt motivated to ask on a radiology forum: Is this new AI stuff really any different from CAD, or is the industry just using fashionable buzzwords to get us (and the deeper pockets buying our equipment) more enthused? To me, it seemed that we might just be playing with semantics: stuff that’s now being called AI in radiology is still just a computer assisting us.
Folks savvier than I am answered that there is a difference beyond mere hype: AI learns as it goes along, whereas non-AI CAD won’t improve with time.
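For anyone who wants that distinction in concrete terms, here is a toy sketch of what “learning as it goes” would mean in software, using an online-learning classifier from scikit-learn as a stand-in. This is purely my own illustration (the feature, threshold, and data are made up, and none of it resembles any vendor’s product): a CAD-style rule is a fixed threshold that never changes, while a learning model adjusts its internal weights every time it is shown new labeled cases.

```python
# Toy illustration only: a fixed CAD-style rule versus a model that
# updates itself as new labeled cases arrive. All values are invented.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def cad_rule(lesion_density):
    # Fixed rule: flag anything above a hard-coded threshold, forever.
    return lesion_density > 0.7

# "Learning" system: a classifier whose weights change with every batch
# of new cases it is shown (online learning via partial_fit).
model = SGDClassifier(loss="log_loss", random_state=0)

for week in range(52):
    # Pretend each week brings 100 new cases with one feature and a truth label.
    X = rng.random((100, 1))
    y = (X[:, 0] > 0.7).astype(int)  # ground truth the model must discover
    model.partial_fit(X, y, classes=[0, 1])

# After a year of cases, the model's weights have shifted toward the truth;
# the CAD rule is exactly what it was on day one.
print(cad_rule(0.75), model.predict([[0.75]]))
```

In principle, the second approach should keep getting better the longer it runs. My complaint, as you’ll see below, is that the products I actually use don’t appear to behave that way.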
That sounded like a reasonable distinction to me. My problem with it, however, was that absolutely none of the AI products I’ve used has done better with age.
For instance, I am pretty sure that voice recognition wasn’t initially augmented by AI (I started seeing it almost 20 years ago), but part of the promise of voice rec was that it would “learn” as you used and corrected it. Folks who have read this blog for the long haul know that I have not had that experience at all, and I am far from the only one. Sometimes, it even worsens as I use it.
Further, maybe voice rec wasn’t AI-powered back then, but it’s got to be by now. That is at least the case for the dictation apps that come with every cell phone, which use an internet connection to lean on their hive mind. The accuracy of my cell phone dictations is okay, and it has certainly been better than PowerScribe. (I’m not singling out PowerScribe; that is just what I have been using for the past couple of years.) However, even with a new cell phone this year, Google’s transcription makes the same stupid mistakes I have seen for years. It’s not “learning,” at least by any definition of that word I have ever seen.
Meanwhile, the actual self-proclaimed AI featured in places I have recently worked has also shown me zero learning over time. Every chest X-ray gets flagged as “suspicious,” and the head CT/CTA tools call blatant abnormalities while missing subtle findings (or generating false positives) just as much in year X of use as they did on day one.
Perhaps I am being overly harsh. Maybe these things are, in fact, learning, but at a slower rate than hyped. Maybe the AI I am seeing on my PACS would be decidedly better if I worked with it for 10 years (and maybe Google’s voice recognition will finally perform better for me in 2030).
If that’s the case, it seems more likely to me that, by then, the vendors will long since have replaced whatever AI products they currently have deployed with new ones. Putting out a new product that works better isn’t any kind of future-proof “machine learning”; it’s just the competitive marketplace at work. If you really want to get cynical, you could call it “planned obsolescence.”
After all, what company would put out a product that indefinitely improves itself, never needing replacement? That would put the kibosh on future sales. At some point, I am certain they will need some flashy new term once everybody gets bored with “AI.” The alphabet soup will keep churning long after I am gone.