I positively predict value with prevalence
It has been fifteen minutes, fifteen long minutes. You look at the two blue lines that have formed on your point-of-care test. One blue line would indicate that the control is working and that the result is negative. Two blue lines mean a positive test. You just tested positive for Zombie disease. But how is that possible? You have been so careful. After all, you ran the test just to see if the test kits were still working.
Before panic sets in, you run the biostatistics in your mind. The specificity of the test is 95%, and the sensitivity is also 95%. The region you have been investigating has a low Zombie disease prevalence of only 4%. This means the likelihood that a positive test genuinely indicates infection is just 44.2%, less than an even chance. You take a deep breath and know that you need to run a confirmatory test to show that yours is a false positive. This is much better than six months ago, when you were in a region with a ten-times-higher prevalence of 40%. There, a positive test result would mean a 92.7% likelihood of actual disease. Same test, two different interpretations of the outcome.
The dependence upon disease prevalence is both the value and the frustration of Predictive Values. Prevalence is the number of diseased individuals divided by the entire population at a point in time. It does not account for individual risk factors, such as having recently been bitten by a zombie. More frustrating than zombies is that in small animal veterinary medicine, we rarely have prevalence data to use.
Podcast: Free Audio File
If you prefer to listen to podcasts, feel free to play the audio version of this blog by clicking on the player above.
Podcast: Well, Cholera me Crazy
Length: 4 min 39 seconds
Written and read by the author
How to read Positive Predictive Value and Negative Predictive Value
In theory, if you had 100 positive tests but only 98 of those patients have the disease, then the test has a Positive Predictive Value of 98%. Looked at the other way around, if your patient has a positive result, then there is a 98% chance that this result is correct. If you have 100 negative tests but 10 of those patients have the disease, then the test has a Negative Predictive Value of 90%.
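The hypothetical counts above can be checked in a few lines of Python (the numbers are the illustrative ones from this paragraph, not real data):

```python
# Hypothetical counts: 100 positive tests, 98 of which are true positives;
# 100 negative tests, 90 of which are true negatives.
true_positives, total_positives = 98, 100
true_negatives, total_negatives = 90, 100

# PPV: of all positive results, how many are correct?
ppv = true_positives / total_positives  # 0.98 -> 98%
# NPV: of all negative results, how many are correct?
npv = true_negatives / total_negatives  # 0.90 -> 90%

print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")  # PPV = 98%, NPV = 90%
```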
The Easy Way and the Hard Way to Calculate Positive Predictive Value (PPV)
Predictive Values help answer real-world questions such as “does this test really mean I have zombie disease?” To do this, the data must include real-world prevalence data. Either the prevalence is incorporated into the original data set, or it is provided as a separate number.
When the data, which can be placed into a 2x2 table, includes the prevalence data, then calculating the PPV is rather straightforward. The numerator is the number of True Positive tests. The denominator is all positive tests, both True Positives and False Positives. When looking at the 2x2 table, we calculate sensitivity and specificity vertically and calculate predictive values horizontally.
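As a sketch, here is a 2x2 table built from the story's assumptions (1,000 animals, 4% prevalence, 95% sensitivity and specificity; the counts are illustrative, not from a real study):

```python
# Illustrative 2x2 table for 1,000 animals at 4% prevalence,
# with 95% sensitivity and 95% specificity:
#                 Diseased   Healthy
# Test positive      38         48
# Test negative       2        912
tp, fp = 38, 48
fn, tn = 2, 912

# Sensitivity and specificity are calculated vertically (down the columns).
sensitivity = tp / (tp + fn)  # 38 / 40  = 0.95
specificity = tn / (tn + fp)  # 912 / 960 = 0.95

# Predictive values are calculated horizontally (across the rows).
ppv = tp / (tp + fp)  # 38 / 86   ~ 0.442 -- the 44.2% from the story
npv = tn / (tn + fn)  # 912 / 914 ~ 0.998
```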
The Hard Way to Calculate Positive Predictive Value (PPV)
Sometimes the 2x2 table does not reflect real-world prevalence, as in case-control studies or randomized clinical trials. The basic formula remains the same, but prevalence must be brought in separately. The True Positive term becomes Sensitivity multiplied by Prevalence. The False Positive term becomes (1 minus Specificity) multiplied by (1 minus Prevalence). So PPV = (Sensitivity × Prevalence) ÷ [(Sensitivity × Prevalence) + (1 − Specificity) × (1 − Prevalence)]. The Negative Predictive Value has a similar derivation, swapping Specificity for Sensitivity and (1 − Prevalence) for Prevalence.
Myvetzone.com has this formula. It sounds simple but looks more involved than calculations where prevalence is already incorporated into the table.
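The prevalence-adjusted formulas can be sketched as small Python helpers (a sketch under the story's assumptions; the function names are mine, not from any particular library):

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive Predictive Value from sensitivity, specificity, and prevalence."""
    true_pos = sensitivity * prevalence            # Sensitivity x Prevalence
    false_pos = (1 - specificity) * (1 - prevalence)  # (1 - Spec) x (1 - Prev)
    return true_pos / (true_pos + false_pos)

def npv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Negative Predictive Value, derived the same way."""
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    return true_neg / (true_neg + false_neg)

# The two regions from the story: the same 95%/95% test, different prevalence.
print(round(ppv(0.95, 0.95, 0.04), 3))  # 0.442 -> less than an even chance
print(round(ppv(0.95, 0.95, 0.40), 3))  # 0.927 -> likely real disease
```

Running the same test at two prevalences reproduces the story's numbers, which is exactly the point: prevalence, not the test kit, is what changed.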
Summary of Predictive Value
Sensitivity and Specificity help us understand if a particular test is better suited to rule a disease in or to rule a disease out. The Predictive Values can help us understand how to interpret those test results in light of the prevalence of the disease in question.