In a recent interview, Rajesh Bhayana, M.D., shared insights from new research that compared the abilities of ChatGPT-3.5 and ChatGPT-4 to answer text-based questions akin to those found on a radiology board examination.
The recently released ChatGPT-4 (OpenAI) may offer more advanced reasoning, be less prone to hallucinations, and be more capable of passing a radiology board exam than ChatGPT-3.5 (OpenAI), according to newly published research.
In prospective studies published recently in Radiology, researchers assessed the performance of ChatGPT-3.5 and ChatGPT-4 in answering 150 text-based multiple-choice questions akin to those found on a radiology board examination.
The researchers found that the ChatGPT-4 model correctly answered more than 80 percent of the questions, compared with 69 percent for ChatGPT-3.5. ChatGPT-4 also demonstrated a greater than 20 percent improvement over ChatGPT-3.5 on questions that required higher-order thinking, including description of imaging findings, classifications, and application of concepts, according to the study authors.
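Neither study's full prompting setup is reproduced in this article, but for readers curious about how this kind of evaluation can be scripted, the minimal sketch below shows one way to pose a board-style multiple-choice question to a GPT model through the OpenAI chat completions API and compare the reply against an answer key. The sample question, answer key, and model name are illustrative assumptions and do not represent the authors' actual protocol.

# Illustrative sketch only; not the study authors' code or protocol.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# Hypothetical board-style question and answer key, for demonstration only.
question = (
    "A patient presents with right upper quadrant pain. Which modality is "
    "first-line for suspected acute cholecystitis?\n"
    "A. CT\nB. Ultrasound\nC. MRI\nD. PET"
)
answer_key = "B"

response = client.chat.completions.create(
    model="gpt-4",  # model name is an assumption for illustration
    messages=[
        {"role": "system", "content": "Answer with the single best option letter only."},
        {"role": "user", "content": question},
    ],
)

model_answer = response.choices[0].message.content.strip()
print("Correct" if model_answer.startswith(answer_key) else "Incorrect")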
In a recent interview, Rajesh Bhayana, MD, FRCPC, the lead author of the studies, said the technology behind ChatGPT is clearly showing significant improvement.
“The fact that ChatGPT-4 performed better than ChatGPT-3.5 and had less frequent incorrect answers and also performed better with higher-order reasoning suggests the frequency of hallucinations is in fact decreasing,” noted Dr. Bhayana, an abdominal radiologist and technology lead in the Department of Medical Imaging at the University of Toronto in Canada.
(Editor’s note: For related content, see “Can ChatGPT Have an Impact in Radiology?” and “Can ChatGPT Provide Appropriate Information on Mammography and Other Breast Cancer Screening Topics?”)
While Dr. Bhayana said there is significant potential with the use of ChatGPT in radiology, he cautioned that accuracy remains an issue and use of the technology still requires rigorous fact-checking.
“It was very impressive that these models, based on the way they work and based on the fact that they are general models, performed so well in a specialty like radiology where language is so critical,” maintained Dr. Bhayana. “(But) it still does get things wrong. When it does get those things wrong, it uses very confident language. If you’re a novice and you can’t separate fact from fiction, it can be tough to know what’s right and what’s wrong. Especially for education, especially for novices when you’re looking up that information and learning something for the first time, you can’t rely on it. If you do use it, you have to always fact check it.”
For more insights from Dr. Bhayana, watch the video below.