

The Ancient Roman Calendar Consisted of Ten Months
If the words September, October, November, and December have ever triggered the numbers 7, 8, 9, and 10 in your head, they do so with good reason. The ancient Romans divided the year into ten months, with a month-less winter stretch at the end that allowed astronomers to adjust the calendar to Earth's orbit. As estimates of Earth's orbit improved, the legendary second king of Rome, Numa Pompilius, added January and February to the calendar around 700 BCE. Science consistently uses estimation and compensates for statistical error to better understand our world.

“A man's errors are his portals of discovery.”
James Joyce wrote, “A man's errors are his portals of discovery.” Statistical standard errors are portals into p-values, Confidence Intervals, and the tinted scientific lenses through which we must focus on our world. By understanding the definitions and relationships of standard error, standard deviation, confidence intervals, and p-values, clinicians can better plot their trajectory through studies and toward the truth.

Let's Consider the Perfect Bell Curve

Standard Error is a type of Standard Deviation
While Standard Deviation describes the variation within your study sample, Standard Error estimates how much your result would vary from study to study, giving a sense of how well the sample reflects the wider, shall we say global, population. For this reason, we often choose Standard Error to calculate p-values and Confidence Intervals, two values that assist us in evaluating data. For example, let’s say that a study showed that kissing an aardvark reduces your chances of catching the flu by 70%. Was this result real or just a product of random chance within the study? The p-value tells us how likely we would be to see data this extreme if random chance alone were at work. A p-value of 0.05 means that, if chance alone were at work, data this extreme would appear only 5% of the time; 0.05 is a common threshold for scientific studies. With Confidence Intervals, we can state with 95% confidence that a value lies between A and B. For example, we might state with 95% confidence that the true flu-reducing effect of aardvark kissing lies between 65% and 75%.
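
To make these relationships concrete, here is a minimal Python sketch. It assumes NumPy and SciPy are available, and the per-participant numbers for the imaginary aardvark study are invented purely for illustration. It computes the standard deviation, the standard error, a 95% confidence interval, and a p-value from the same data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant flu-risk reductions (%) from our imaginary
# aardvark-kissing study -- invented numbers, purely for illustration.
reductions = np.array([72, 65, 74, 68, 71, 69, 75, 66, 73, 67])

n = len(reductions)
mean = reductions.mean()          # best estimate of the effect
sd = reductions.std(ddof=1)       # spread WITHIN this study (standard deviation)
se = sd / np.sqrt(n)              # how much the mean itself might wobble
                                  # across repeated, "global" studies (standard error)

# 95% confidence interval for the true mean, built from the standard error
ci_low, ci_high = stats.t.interval(0.95, df=n - 1, loc=mean, scale=se)

# One-sample t-test: how surprising is this data if the true effect were zero?
t_stat, p_value = stats.ttest_1samp(reductions, popmean=0)

print(f"mean = {mean:.1f}%, SD = {sd:.1f}, SE = {se:.1f}")
print(f"95% CI: {ci_low:.1f}% to {ci_high:.1f}%")
print(f"p-value vs. no effect: {p_value:.4f}")
```

Note how the standard error shrinks as the sample grows, which is why larger studies produce tighter confidence intervals even when the standard deviation stays the same.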

We must look beyond simple p-values and take in all the data
When reading a study, aardvark or otherwise, the p-value tells you how much to trust that the finding is not mere chance, and the confidence interval tells you how far to trust it. With a p-value threshold of 5%, the decision is straightforward: if the p-value is above 5%, we call the result inconclusive; if it is below, we can accept that kissing aardvarks may be beneficial to our health. For confidence intervals, we must employ common sense. A confidence interval of 25% to 100% is vastly different from 69% to 71% when it comes to deciding whether to kiss an aardvark. In the first scenario, even if the effect is real, is a possible 25% reduction in risk worth kissing an anteater? If the interval sits within a percentage point of 70%, we might be more amenable to the situation. So, whether evaluating data on celestial orbits, aardvark kissing, or vaccine efficacy, we must look beyond simple p-values and take in all the data.
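
As a rough illustration of that common-sense check, here is a short Python sketch. The 5% threshold follows the convention discussed above, while the maximum interval width is an arbitrary, hypothetical cutoff chosen only for this example, as are the scenario numbers.

```python
def interpret_result(p_value, ci_low, ci_high, alpha=0.05, max_ci_width=10):
    """Toy decision helper mirroring the reasoning above.

    alpha        -- conventional p-value threshold (5%)
    max_ci_width -- widest confidence interval (in percentage points)
                    we are willing to act on; an arbitrary choice here.
    """
    if p_value > alpha:
        return "Inconclusive: the result could plausibly be random chance."

    width = ci_high - ci_low
    if width > max_ci_width:
        return (f"Statistically significant, but the interval "
                f"({ci_low}% to {ci_high}%) is too wide to act on confidently.")

    return (f"Statistically significant and precise: the true effect "
            f"likely lies between {ci_low}% and {ci_high}%.")

# The two aardvark scenarios from the text (hypothetical p-values):
print(interpret_result(p_value=0.03, ci_low=25, ci_high=100))  # wide interval
print(interpret_result(p_value=0.03, ci_low=69, ci_high=71))   # narrow interval
```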
References and Further Reading
- Altman, D. G., & Bland, J. M. (2005). Standard deviations and standard errors. BMJ, 331(7521), 903.