Evidence-based medicine is not a new concept. Hippocrates noted over 2000 years ago that while medicine is a mix of science and opinion, "the former begets knowledge, the latter ignorance."
So why are we discussing it so much now? Information overload. More than 50 million medical articles have been published to date. But just 10% of what is published has lasting scientific value, and 50% of today's medical knowledge will be out of date within 10 years.
Textbooks quickly become old, journals are too numerous, and so-called experts are sometimes wrong. Even if you subspecialize in radiology, it is difficult to keep up with the literature.
It is important to use the highest possible level of evidence for the decision you are making and to know when there is no evidence. But evidence on its own is not enough. Clinical decision making involves integrating sound scientific data with clinical judgment.
Finding good evidence first involves doing a literature search. You should then rank the scientific papers you read based on quality and level of evidence. A systematic review should provide strong evidence, while a conclusion based only on opinion is insufficient.
Studies assessing the impact of diagnostic testing on clinical decision making and patient prognosis should ideally be designed as randomized controlled trials. Observational cohort studies and case-control studies are viable alternatives. It is also important to check what outcome measures are used. Measures of quality in radiology may include patient safety, technical performance, and economic efficacy. But, ultimately, the best measure of quality is whether patients are diagnosed correctly.
The requirements for grading therapeutic studies in terms of evidence are generally well known and accepted. This is not the case for diagnostic tests, such as x-rays or MR scans. Whenever you order a test, you should know from the patient's history the probability that they have a disease (the pretest probability). Then you need to know the test's sensitivity and specificity, that is, its accuracy. If performing the test raises the patient's probability of having disease above a certain threshold, then the patient should be treated. If it lowers the probability sufficiently, disease can be ruled out without the need for further tests. If it does not change the probability, don't perform the test.
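The update from pretest to post-test probability described above follows directly from Bayes' theorem. As a minimal sketch (the function name and numbers are illustrative, not from the column):

```python
def post_test_probability(pretest, sensitivity, specificity, positive_result):
    """Update the probability of disease after a test result using Bayes' theorem."""
    if positive_result:
        p_result_if_diseased = sensitivity       # true positive rate
        p_result_if_healthy = 1 - specificity    # false positive rate
    else:
        p_result_if_diseased = 1 - sensitivity   # false negative rate
        p_result_if_healthy = specificity        # true negative rate
    numerator = pretest * p_result_if_diseased
    return numerator / (numerator + (1 - pretest) * p_result_if_healthy)

# With a 50% pretest probability and a test with sensitivity 0.65 and
# specificity 0.98, a positive result raises the probability to about 0.97:
print(round(post_test_probability(0.50, 0.65, 0.98, True), 2))   # 0.97
# A negative result lowers it only to about 0.26, so disease is not ruled out:
print(round(post_test_probability(0.50, 0.65, 0.98, False), 2))  # 0.26
```

Comparing the resulting probability against the treatment and rule-out thresholds tells you whether the test result can actually change management.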
Radiologists' lives would be simple if the distinction between a diseased population and a nondiseased population was clear. Unfortunately, it's not. Some healthy people will test positive for a disease, and some diseased people will test negative. Given the presence of disease, sensitivity reflects the proportion of people who will test positive. Specificity reflects the proportion testing negative when disease is absent.
The prevalence of disease matters as well. For instance, if 2000 people have a dental x-ray, and the prevalence of caries is 50%, then a test with a sensitivity of 0.65 will find cavities in 650 of the 1000 subjects who have diseased teeth. If the specificity is 0.98, then 980 of the 1000 people without cavities will have a clear x-ray. So there will be 350 false negatives, and 20 false positives. But if the disease prevalence falls to 5%, then the positive predictive value (true positives divided by all positives) will drop dramatically, and a lot of people with healthy teeth will be told they have cavities. On the other hand, confidence that a clear x-ray is a true negative should go up.
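The dental x-ray arithmetic above can be checked with a short calculation. This sketch (function name my own) reproduces the counts in the text and shows how the positive predictive value collapses when prevalence falls from 50% to 5%:

```python
def confusion_counts(n, prevalence, sensitivity, specificity):
    """Return (TP, FN, TN, FP) counts for a population of size n."""
    diseased = round(n * prevalence)
    healthy = n - diseased
    tp = round(diseased * sensitivity)   # diseased people who test positive
    fn = diseased - tp                   # diseased people missed by the test
    tn = round(healthy * specificity)    # healthy people with a clear result
    fp = healthy - tn                    # healthy people wrongly flagged
    return tp, fn, tn, fp

# Prevalence 50%, as in the text: 650 true positives, 350 false negatives,
# 980 true negatives, 20 false positives.
tp, fn, tn, fp = confusion_counts(2000, 0.50, 0.65, 0.98)
print(tp, fn, tn, fp)                     # 650 350 980 20
print(round(tp / (tp + fp), 2))           # PPV ~0.97

# Prevalence 5%: the same test now has a much lower PPV,
# while the NPV (confidence in a clear x-ray) rises.
tp, fn, tn, fp = confusion_counts(2000, 0.05, 0.65, 0.98)
print(round(tp / (tp + fp), 2))           # PPV ~0.63
print(round(tn / (tn + fn), 2))           # NPV ~0.98
```

At 5% prevalence, roughly a third of positive x-rays come from healthy teeth, which is exactly the problem the text describes.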
The best way to tell whether a test will change the probability of disease is to use the likelihood ratio. This is the probability of a test result in people with the disease, divided by the probability of the same test result in people without the disease. If the likelihood ratio is 1, the test doesn't discriminate at all. It is totally useless. The higher the positive likelihood ratio, the more a positive result raises the probability of disease. The lower the negative likelihood ratio, the more a negative result raises the chance that the patient is healthy.
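The two likelihood ratios follow directly from sensitivity and specificity. A minimal sketch, using the same illustrative test values as the dental example (function name my own):

```python
def likelihood_ratios(sensitivity, specificity):
    """Return (LR+, LR-) for a test with the given sensitivity and specificity."""
    lr_positive = sensitivity / (1 - specificity)   # P(+|disease) / P(+|healthy)
    lr_negative = (1 - sensitivity) / specificity   # P(-|disease) / P(-|healthy)
    return lr_positive, lr_negative

lr_pos, lr_neg = likelihood_ratios(0.65, 0.98)
print(round(lr_pos, 1))  # 32.5: a positive result strongly raises the odds of disease
print(round(lr_neg, 2))  # 0.36: a negative result lowers the odds only modestly
```

This test, being highly specific but only moderately sensitive, is far better at ruling disease in than at ruling it out, which a likelihood ratio of 1 in either direction could never do.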
There are two lessons from all of this. First, predictive values depend on the prevalence of disease. Second, if you want to rule out disease, use a test with high sensitivity, since it produces few false negatives and a negative result can be trusted; if you want to rule disease in, use a test with high specificity, since it produces few false positives and a positive result can be trusted.
PROF. ASPELIN is a professor in radiology at the Karolinska Institute in Stockholm. This column is based on a presentation made at the Asian Oceanian Congress of Radiology meeting held in Hong Kong in August.