Answer: A Google news search for this returns 4,269 articles.
Question: What is Watson?
Last week the world was taken by storm as Watson, the supercomputing brainchild of IBM, easily defeated two of the greatest trivia masters on “Jeopardy!”. The nationally televised program prompted many observers to pen articles regarding the impact of this type of technology on the relationship between computers and humans.
Potential areas of transformative change cited included fields as disparate as healthcare and hedge fund management. One NPR story even specifically mentioned radiology as an arena where artificial intelligence may replace humans. So are we destined for the dystopian futures portended by such sci-fi classics as “The Matrix” and “Terminator”? Contrary to Ken Jennings’ final “Jeopardy!” response, I do not welcome our new computer overlords.
To compete on “Jeopardy!” Watson was programmed with vast quantities of knowledge on a range of topics and the ability to sort through it when presented with the clues of a “Jeopardy!” answer. It should come as no surprise, then, that Watson did so well on a game show which requires a huge fund of knowledge and the ability to access this knowledge quickly. Isn’t that what computers were built for?
But what makes Watson remarkable is its ability to do what all other computers before it were incapable of: comprehend natural language, including the puns, wordplay, and metaphors common in “Jeopardy!” clues. Computers will do what you tell them to do only if you speak their special language: computer programming.
However, computers cannot understand natural human language. If I ask my friend to suggest a “money” sushi restaurant, he will understand my query and suggest places he knows to be good based on his experience. Yes, you can do essentially the same thing with a computer using Yelp! or Urbanspoon, but you need those applications to perform that function for you, and the program will likely interpret the slang “money” in the context of cost, not quality. You cannot simply speak to your computer and have it understand you and, more importantly, the context of your question. Watson opens the door to this possibility.
So what impact will this type of technology have on the field of radiology? As radiologists we review numerous images and base our findings on our experience and expertise, which are in turn based on reading articles and textbooks (our knowledge base). If we program all of these knowledge bases into a computer, then wouldn’t the computer be as good or likely even better than we are?
With the ability to understand natural language, a primary care physician could ask the computer after an abdominal CT, “Does my patient have appendicitis?” and the computer could answer, “No, but there is a high likelihood that the patient has diverticulitis.” While this scenario is pretty scary, advances in machine intelligence make it distinctly possible in the near future.
However, the personal relationship between a doctor and his patient can never be replaced. It is important, for example, to diagnose cancer, but how is that information communicated? I do not believe a computer will ever be able to demonstrate compassion or rest a hand of comforting support on the shoulder of a patient who is hurting.
Radiologists must continue to foster these personal relationships, both in direct patient encounters and in our interactions with our referring providers. We must also be honest with ourselves. Currently, computers augment our imaging capabilities, but we know that one day computers will be faster and more accurate at making diagnoses than we are.
In order to distinguish ourselves from machines, we must get out from behind our PACS stations and engage our patients and referring providers in meaningful discussions. If not, we had better get used to working for our computer overlords.
Dr. Krishnaraj is a clinical fellow in the abdominal imaging and intervention division, department of imaging, at Massachusetts General Hospital/Harvard Medical School. He can be reached at akrishnaraj@partners.org.