A closer look at Predictive Values
When waging war and winning battles, successful generals consider timing a critical factor. Shall we move at dawn or wait until the enemy's supplies run lower? Few empires have matched Rome's consistent military success. For roughly one thousand years, the Romans expanded their empire, obliterated their enemies, and reigned supreme. Rome had focused leadership, unparalleled training, well-utilized resources, advanced science, an ever-improving infrastructure, and sacred chickens. Yes, sacred chickens. For generations, the sacred chickens' feeding behavior advised the Romans whether or not they should attack the enemy that day. A Roman general of the First Punic War discovered the ramifications of dismissing the chickens' advice and consequently lost his naval fleet to the enemy, and his reputation and career to his people. This famous event also helped solidify the veracity of this augury.
Without Predictive Values we risk false assumptions
Today, even with science and logic on our side, we use diagnostic tests in much the same way the Romans used ancient augury. We seek further information to decide a course of action. Are the gods favorable or not? Is an incurable virus present or not? It may seem ridiculous, but Roman generals took the sacred chicken test as seriously as a clinician takes a viral test. And without understanding predictive values, both the ancient Romans and the clinicians of today risk false assumptions that can lead to death.
No diagnostic test has achieved perfection
No diagnostic test has achieved perfection, in part due to the yin-yang relationship of sensitivity and specificity. (Click here for more information on sensitivity and specificity.) As we increase the sensitivity of a diagnostic test, we limit its specificity. Let's examine a common diagnostic test with sensitivity and specificity greater than 98% and a proven track record of over a decade. With that resume, a clinician might rely on it to be infallible, but data is only as good as the doctor using it.
Predictive values give us the probability that a test result represents the truth. While sensitivity and specificity play a large role in their calculation, the prevalence of the disease in question also impacts the result. As the prevalence of a disease falls, so does the positive predictive value. This increases the potential for false positive tests and incorrect death sentences for patients.
Since virtually no true prevalence data for small animals exist, assume a low viral prevalence of 2.5 percent. For feline retroviruses in the United States, several studies report a similar seroprevalence. Even with a high sensitivity and specificity of 98%, the positive predictive value barely exceeds 55%. Thus, when presented with a positive result, a clinician and pet owner should withhold a diagnosis until confirmatory tests are submitted. Skipping this step could result in misdiagnosis and cost the animal its life. Conversely, the negative predictive value, which governs how we interpret negative results, rises as prevalence falls. The probability that a negative result is true in this scenario exceeds 99%! Interpretation of test results relies on predictive values.
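The arithmetic behind those numbers is just Bayes' theorem applied to a two-by-two table. Here is a minimal sketch in Python; the function name and the 2.5% prevalence figure mirror the scenario above, and are illustrative rather than drawn from any particular test's documentation.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) for a binary diagnostic test via Bayes' theorem."""
    # Expected fractions of the tested population in each cell of the 2x2 table
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    tn = specificity * (1 - prevalence)        # true negatives
    fn = (1 - sensitivity) * prevalence        # false negatives
    ppv = tp / (tp + fp)  # probability a positive result is a true positive
    npv = tn / (tn + fn)  # probability a negative result is a true negative
    return ppv, npv

# The scenario from the text: 98% sensitivity and specificity, 2.5% prevalence
ppv, npv = predictive_values(0.98, 0.98, 0.025)
print(f"PPV: {ppv:.1%}")  # about 55.7% -- barely better than a coin flip
print(f"NPV: {npv:.1%}")  # about 99.9% -- a negative result is very reliable
```

Note how the false positives swamp the true positives at low prevalence: with only 2.5% of patients infected, the 2% of the healthy majority who test falsely positive nearly equal the true positives, dragging the PPV down to roughly 55% despite the test's excellent specificity.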
Don't go a fowl!
As clinicians, we must not rely on "how great we think a test is" or how long we have used it. Even when confirmatory testing seems wasteful, the process reduces our error; even if it saves only one additional life a year, it merits our consideration. So, don't go "a fowl" or count your viruses before they hatch – the blind trust of any result offends the gods of statistics.