Column by Richard Sternberg
I recently read a news report about a scientific article, originally published in a reputable journal, claiming something to the effect that multivitamins were good for the brain. I shared this with my daughters via text, one of whom is a real biomedical scientist. She shot back almost immediately, “Dad, did you look at the original paper?” I hadn’t. She did. “Not only is the study flawed, but there is not adequate evidence to suggest a causative relationship. At best, and I am not sure, there is a mild correlation.”
In my columns I frequently present the results of studies, and the implication is that a change in one thing led to a change in another. Unfortunately, that cannot be assumed. Unless the study is very well designed, the second finding could just as easily be affecting the first. In any event, what such studies show is called a correlation. Correlation describes a relationship between variables: when one changes, the other also changes. A correlation is a statistical indicator of the relationship between variables; they change together, but the change isn’t necessarily due to a direct or indirect causal link. An unseen third variable could cause both of the others to change. Causation means that changes in one variable directly bring about changes in the other; there is a cause-and-effect relationship. In that case the two variables are correlated and there is also a causal link between them. Correlation does not imply causation, but causation requires correlation.
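The "unseen third variable" idea can be shown with a few lines of Python. The scenario and all the numbers here are invented purely for illustration: a hidden trait (say, general health consciousness) drives both vitamin use and test scores, vitamins themselves do nothing, and yet the two observed variables still correlate.

```python
import random

random.seed(42)

# Hypothetical illustration: a hidden confounder drives BOTH observed
# variables. Vitamins have no direct effect on scores in this simulation,
# yet the two measurements correlate anyway.
n = 10_000
health = [random.gauss(0, 1) for _ in range(n)]        # unseen third variable
vitamins = [h + random.gauss(0, 1) for h in health]    # caused by confounder
scores = [h + random.gauss(0, 1) for h in health]      # also caused by confounder

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(vitamins, scores)
print(f"correlation between vitamin use and scores: {r:.2f}")
```

The printed correlation comes out around 0.5, a relationship a careless report might call meaningful, even though by construction there is no causal link at all between the two measured variables.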
There are many types of scientific studies, each with its own methods and appropriate statistical tests to determine whether the hypothesis under investigation is, indeed, supported. Some studies are better than others. The type of study most likely to demonstrate causation is the randomized double-blind trial. The supposition is that if you do something or don’t do something, or give a certain medicine versus a placebo, a certain outcome will occur.
For this type of study to be valid, the subject must not know whether they are receiving the actual treatment, a placebo, or a sham procedure. Just as important, the person evaluating the outcome must not know which treatment the subject received. Only after all the information is collected does another researcher unblind the data, matching each subject to the treatment received and the outcome observed. Then, using statistical methods, the probability that the observed relationship arose by chance is calculated.
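That last step, calculating the probability that a difference arose by chance, can be sketched in Python. The trial numbers below are entirely made up for illustration; the calculation is a standard two-proportion z-test comparing a treated group with a placebo group.

```python
import math

# Hypothetical trial results, invented for illustration:
# 120 of 200 subjects improved on the drug, 95 of 200 on the placebo.
treated_ok, treated_n = 120, 200
placebo_ok, placebo_n = 95, 200

p1 = treated_ok / treated_n
p2 = placebo_ok / placebo_n

# Pooled proportion and standard error under the "no real difference" assumption
pooled = (treated_ok + placebo_ok) / (treated_n + placebo_n)
se = math.sqrt(pooled * (1 - pooled) * (1 / treated_n + 1 / placebo_n))
z = (p1 - p2) / se

# Two-sided p-value from the standard normal distribution
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"z = {z:.2f}, p = {p_value:.3f}")
```

A small p-value (conventionally below 0.05) means the gap between the groups would rarely appear by chance alone; it still says nothing about whether the study was properly blinded or the subjects properly randomized, which is exactly why the design matters as much as the arithmetic.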
There is an excellent book, intended for those with or without a mathematical background, titled “How to Lie with Statistics” by Darrell Huff. It is very inexpensive, and I strongly recommend that everyone read it. Quoting statistics that have no validity is one of the strongest advertising and marketing techniques available: something looks so right, but when you dig down, you find that the inference is completely invalid. I use what I learned from this book frequently, both to determine whether something is true and to tell when someone is trying to put something over on me.
So, when viewing a report that says something prevents cancer, for example, it is important to evaluate how the experiments or reviews led to that conclusion. Frequently what is being touted as the next greatest thing is nothing of the sort.