The title comes from the epidemiologist John Ioannidis's landmark article "Why Most Published Research Findings Are False" (*PLoS Medicine*, 2(8), e124, doi:10.1371/journal.pmed.0020124, 2005). Subsequent research has confirmed his conclusion, and many articles followed (see "The AAA Tranche of Subprime Science" by Gelman and Loken, 2014). The problem hit the popular press with the October 19th, 2013 cover of the *Economist* broadcasting **HOW SCIENCE GOES WRONG**.

Given the ramifications of this conclusion, it is remarkable that the problem has not received much wider attention, so the *healthymemory blog* is standing up to do its part. The reasons for science going wrong are technical, involving the misuse of statistical methodology, as well as economic and political. The *Economist* does a fairly good job of explaining the problem for the layperson. This blog post will provide some examples and try to offer some advice.

As Ioannidis is an epidemiologist, his critique centered on the medical literature, although the ramifications of his article extend far beyond epidemiology. Most importantly, the findings concern our medical care. Readers should already be somewhat aware of this from the frequent contradictory findings regarding what is good or bad for us. Let us begin with the example of medical screening. The 5-year survival rate is one type of information given to promote the benefits of screening. This rate is defined as the number of patients diagnosed with cancer who are still alive five years after the diagnosis, divided by the number of patients diagnosed with cancer. So this rate is defined by a cancer diagnosis, and it leads to the conclusion that screening is saving lives. But if lives are indeed being saved, should that not be seen in mortality rates? A mortality rate is not defined by a cancer diagnosis: the annual mortality rate is the number of people who die from cancer in one year divided by the total number of people in the group. If screening were indeed saving lives, it should be reflected in the mortality rate. Note that screening moves the date of diagnosis earlier, which by itself lengthens measured survival even if no one lives a day longer (this is known as lead-time bias). When regarded in this light, the 5-year survival rate is a bit like a self-licking ice cream cone. Some ways of presenting the benefits of treatment are much more impressive than others. To learn more, see the healthymemory blog posts "Interpreting Medical Statistics: Risk Reduction" and "Health Statistics."
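A toy calculation with entirely made-up numbers (not from any real study) can show how earlier diagnosis alone, the lead-time bias, inflates the 5-year survival rate without changing mortality at all:

```python
# Illustrative numbers only: suppose every patient in a group dies
# of cancer at age 70, no matter what.
death_age = 70

# Without screening, the cancer is found from symptoms at age 67:
survival_without_screening = death_age - 67            # 3 years
survives_5_years_without = survival_without_screening >= 5   # False -> 0%

# With screening, the same cancer is detected earlier, at age 60:
survival_with_screening = death_age - 60               # 10 years
survives_5_years_with = survival_with_screening >= 5   # True -> 100%

# The 5-year survival rate jumps from 0% to 100%, yet every patient
# still dies at age 70, so the annual mortality rate is unchanged.
print(survives_5_years_without, survives_5_years_with)
```

Nothing about the disease changed between the two scenarios; only the starting point of the survival clock moved.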

To take a specific example, consider the Prostate Specific Antigen (PSA) test given to screen for prostate cancer. At one time this was regarded as almost compulsory for males over a certain age. Now it is recommended only for males in a high-risk group and, even then, only after consulting with their physician. You might ask what the risks of screening are. Apart from the cost, discomfort, and inconvenience, there are the side effects of the treatments that screening can lead to. In the case of prostate surgery, these can include incontinence and/or impotence.

Research has also shown that many doctors do not understand how to communicate accurate medical statistics to their patients. A study reported by Gerd Gigerenzer and his colleagues, "Helping Doctors and Patients Make Sense of Health Statistics" (*Psychological Science in the Public Interest*, Vol. 8, No. 2, 2008), showed that few gynecologists understood what a positive mammogram means. First of all, the gynecologists were given the following accurate information about their women patients.

The probability that a woman has breast cancer is 1% (prevalence).

If a woman has breast cancer, the probability that she tests positive is 90% (sensitivity).

If a woman does not have breast cancer, the probability that she nevertheless tests positive is 9% (false-positive rate).

Then the doctors were told that a woman had tested positive and wanted to know whether she has breast cancer for sure, or what her chances are. The doctors were given the following options:

A. The probability that she has breast cancer is about 81%.

B. Out of 10 women with a positive mammogram, about 9 have breast cancer.

C. Out of 10 women with a positive mammogram, about 1 has breast cancer.

D. The probability that she has breast cancer is about 1%.

13% chose option A.

41% chose option B.

21% chose option C.

19% chose option D.

Option C is the correct response.

This follows from Bayes' formula for conditional probabilities. A good way of computing it is to use natural frequencies.

Consider 1,000 women.

10 are expected to have breast cancer and the remaining 990 to be free of breast cancer.

Of the 10 with breast cancer 9 should test positive and 1 negative. Of the remaining 990, 89 should test positive and 901 should test negative.

Then we divide the number who have breast cancer and test positive (9) by the total number testing positive (9 + 89 = 98), which gives about 9%, or roughly 1 woman in 10. The closest multiple-choice option is C.

So what does a prospective patient do when the majority of the medical literature is wrong? First of all, do not forget the option of doing nothing. Get multiple opinions regarding your problem, and do your own research. Take all of this into consideration along with your personal values and make a decision, remembering that doing nothing remains an option.

© Douglas Griffith and healthymemory.wordpress.com, 2014. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.