Posts Tagged ‘Ioannidis’

Understanding the Science of Elusive Health Risks

January 6, 2017

The title of this post is the subtitle of “Getting Risks Right,” a book by the American epidemiologist and cancer researcher Geoffrey C. Kabat, a senior epidemiologist at the Albert Einstein College of Medicine.  Understanding these health risks is an extremely difficult task, and Kabat makes a strong effort to help us with it.

The Preface asks the question “Why do things that are unlikely to harm us get the most attention?”  The simple answer is that science takes time and moves slowly, but people want quick answers.  The popular press publishes apparent answers that are a long way from being validated.

The first chapter is titled “The Illusion of Validity and the Power of ‘Negative Thinking’” and begins with the following quote from Francis Bacon:  “It is the peculiar and perpetual error of human understanding to be more moved and excited by affirmatives.  The root of all superstitions is that men observe when things hit but not when they miss; and commit to memory the one and forget to pass over the other.”

Chapter 2 describes the fundamentals of studies in the area of public health.  Ioannidis’s landmark article “Why Most Published Research Findings Are False” (PLoS Medicine, 2(8), e124, doi:10.1371/journal.pmed.0020124, 2005) has been cited in several previous healthy memory posts.  Epidemiologists and statisticians generally agree with its conclusion, but most people remain ignorant of the situation.  The only treatment in the popular press of which HM is aware is the Economist’s October 2013 cover story, “How Science Goes Wrong.”  Kabat discusses additional difficulties in conducting scientific research in the area of health.  Please read his book to understand the relevant issues.

However, health research has additional difficulties because here the science is embedded in a society that is highly attuned to the latest potential threat or breakthrough.  Kabat writes, “Findings from rudimentary studies often are reported as if they were likely to be true when, in fact, most research findings are false or exaggerated, and the more dramatic the result, the less likely it is to be true.”  Later he writes, “Reports of exaggerated findings can, in turn, give rise to ‘information cascades’—highly publicized campaigns that can sow needless alarm and lead to misguided regulation and policies.”  These difficulties are thoroughly aired in Chapter 3.

The final four chapters of the book discuss four areas of research.  Chapter 4 explores the question of whether exposure to radio frequency energy causes brain cancer.  The issue is whether the worldwide adoption of a novel technology within a short time span could be causing a fatal disease.  Kabat documents that the extensive research carried out over two decades provides no strong or consistent evidence to support this possibility.

Chapter 5 explores the main lines of the preoccupation with the hypothesis that “endocrine disrupting chemicals” in the environment threaten human health.  Although this certainly was a legitimate concern, Kabat documents how false ideas based on poor data got enormous attention.  He explains how to make sense of a bitter controversy that is currently raging in the scientific and regulatory communities in Europe and the United States.

Chapter 6 describes a little-known success story: the linking of a long-standing enigmatic disease in the Balkans to dietary exposure to a toxic herb that has been used in traditional cultures throughout history.  Research on aristolochic acid, contained in certain varieties of the herb Aristolochia, has led to new insights on the carcinogenic process as well as highlighting the threat posed by the woefully inadequate regulation of thousands of products marketed as “dietary supplements.”  More than half of Americans use these products, to the tune of $32 billion a year.  Unfortunately, naive consumers wrongly believe that the government requires manufacturers to report all adverse effects and that the FDA must approve supplements before they are sold.  Few consumers of supplements are aware of the implications of the Dietary Supplement Health and Education Act (DSHEA), passed by Congress in 1994 with strong support from the supplements industry and its political allies.  By defining herbal supplements and botanicals as “dietary supplements,” DSHEA excluded them from the more rigorous standards used in regulating prescription and even over-the-counter drugs.  By not making herbal supplements and botanicals subject to testing, US citizens are being put at risk.  This point is underscored by the following quote from Simon Singh and Edzard Ernst:  “Just because something is natural does not mean that it is good, and just because something is unnatural does not mean that it is bad.  Arsenic, cobra poison, nuclear radiation, earthquakes, and the Ebola virus can all be found in nature, whereas vaccines, spectacles, and artificial hips are all man-made.”  In this context HM would like to comment on the labeling of Genetically Modified Organisms (GMOs) as being bad.  To the contrary, they might be the only option for feeding a growing population.  They also offer the prospect of both better tasting and more affordable products.

Chapter 7 recounts another success story: the long-standing question of what causes cervical cancer led, over a period of thirty years, to the identification of a small number of highly specific carcinogenic subtypes of the human papillomavirus (HPV).  Persistent infection with one or more of these subtypes is necessary to cause the disease.  This knowledge has led to the development of vaccines that have the potential to virtually eliminate cervical cancer, as well as to fundamental new knowledge about how the virus evolved to cause cancer.

Kabat comes to the following conclusion: “the need for a more nuanced and realistic view of science, which acknowledges the enormous challenges, promotes skepticism toward widely circulated but questionable ideas, and at the same time pays attention to what science can achieve at its best.”

At this point please indulge HM in a personal story.  When he was working, he received a call from a representative of his insurance company.  This representative encouraged him to have an annual checkup that included the prostate specific antigen (PSA) test.  For decades this had been a standard recommendation for men of his age.  However, HM tries to keep up with the literature.  He had read that urologists, the individuals most knowledgeable about the benefits of this test, had changed this long-standing recommendation.  Now the test is recommended only for certain high-risk patients, and then only after consulting with a physician.  However, it took another year before the rest of the medical community followed the lead of the urologists.

© Douglas Griffith and healthymemory.wordpress.com, 2016. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.


Cyberchondria and the Worried Well

September 13, 2016

“Cyberchondria and the Worried Well” is chapter 7 of “The Cyber Effect,” an important book by Mary Aiken, Ph.D., a cyberpsychologist.  Reports estimate that up to $20 billion is spent annually in the U.S. on unnecessary medical visits.  Dr. Aiken asks how many of these wasted visits are driven by a cyber effect.  A majority of people in a large international survey said they used the Internet to look for medical information, and almost half admitted to making self-diagnoses following a web search.  A follow-up survey found that 83% of 13,373 respondents searched the Internet often for information and advice about health, medicine, or medical conditions.  People in “emerging economies” used online sources for this purpose the most frequently—China (94%), Thailand (93%), Saudi Arabia (91%), and India (90%) led the table of twelve countries.

Dr. Aiken writes that 20 years ago, when people experienced any physical condition that persisted to the point of interfering with their activities, they would visit a doctor’s office and consult a doctor.  In the digital age, people might analyze their own symptoms and play doctor at home.  She notes that about half of the medical information offered on the Internet has been found by experts to be inaccurate or disputed.  HM feels compelled to insert here the conclusion of Ioannidis’s 2005 paper, “Why Most Published Research Findings Are False,” which is still accepted by most statisticians and epidemiologists.  This implies that the online information is similar in quality to the information available in the research world.  And physicians are working with a questionable database, so the problem of inaccurate research information is real and not an artifact of the Internet.  [To learn more about Ioannidis see the following healthy memory blog posts: “Liberator of Knowledge from Tyranny of Profit,” “Thinking 2.0,” “Most Published Research Findings Are False,” and “The Problem with Scientific Journals, Especially Elite Ones.”]

There are also online support groups such as the website MDJunction.com.  These groups do provide a place where thousands meet every day to discuss their feelings, questions, and hopes with like-minded friends.  Although these places provide support, they might not be the best sources of information.  And MDJunction.com does have a fine-print disclaimer at the bottom of the page: “The information provided in MDJunction is not a replacement for medical diagnosis, treatment, or professional medical advice.”

The term “cyberchondria” was coined in a 2001 BBC News report, popularized in a 2003 article in “Neurology, Neurosurgery and Psychiatry,” and later supported by an important study by Ryen White and Eric Horvitz, two research scientists at Microsoft, who wanted to describe an emerging phenomenon engendered by new technology—a cyber effect.  In the field of cyberpsychology, cyberchondria is defined as “anxiety induced by escalation during health-related search online.”

The term “hypochondria” has become outdated due to the Fifth Edition (DSM-5) of the “Diagnostic and Statistical Manual of Mental Disorders.”  About 75% of what was previously called “hypochondria” is now subsumed under a new diagnostic concept called “somatic symptom disorder,” and the remaining 25% is considered “illness anxiety disorder.”  Together these conditions are found in 4 to 9% of the population.

Most doctors regard people with these disorders as nuisances who take up space and time that could be devoted to truly sick people who need care.  And when a doctor informs such patients that they do not have a diagnosable condition, they become frustrated and upset.

Conversion disorders are what were once called “hysterical conditions,” which went by such names as “hysterical blindness” and “hysterical paralysis”; these have been renamed “functional neurological symptom disorder.”  Factitious disorder, formerly called “Munchausen syndrome,” is a psychiatric condition in which patients deliberately produce or falsify symptoms or signs of illness for the principal purpose of assuming the sick role.

Iatrogenesis comes from Greek roots meaning “brought forth by the healer,” and refers to an illness caused by medical treatment itself.  It can take many forms, including an unfortunate drug effect or interaction, a surgical instrument malfunction, medical error, or pathogens in the treatment room.  A study in 2000 reported that it was the third most common cause of death in the United States, after heart disease and cancer.  So having an unnecessary surgery or medical treatment of any kind means taking a big gamble with your life.

The estimate was between 44,000 and 98,000 such deaths annually in the United States in 1999, when the Institute of Medicine issued its famous report “To Err Is Human.”  HM is proud to note that one of his colleagues, Marilyn Sue Bogner, was a pioneer in this area of research.  The first edition of her book “Human Error in Medicine” predated the IOM report.  In 2003 she published “Misadventures in Health Care: Inside Stories” (in the Human Error and Safety series).  Unfortunately, she has recently passed away.  And, unfortunately, matters seem to be getting worse.  In 2009 the estimate of deaths due to failures in hospital care rose to 180,000 annually.  In 2013 the estimates ran between 210,000 and 440,000 hospital patients in the United States dying each year as a result of a preventable mistake.  Dr. Aiken believes that part of this escalation is due to the prevalence of Internet medical searches.

So we have a difficult situation.  Cyberspace has erroneous information, but the underlying medical research also contains erroneous information, and doctors are constrained by these limitations.  We should be aware of these limitations and be cognizant that the diagnosis and recommended treatment might be wrong.  The best advice is to solicit multiple independent opinions and to always be aware that “do nothing” is also an option.  And it could be an option that will save your life.

© Douglas Griffith and healthymemory.wordpress.com, 2016. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Liberator of Knowledge from Tyranny of Profit

April 11, 2016

This post is motivated by an article by Michael S. Rosenwald in the April 6, 2016 edition of the Washington Post titled “Thief? Or Liberator of Knowledge from the Tyranny of Profit?”  The title of this healthy memory post should indicate my position on the title of the article.  The article is about Alexandra Elbakyan, a 27-year-old graduate student from Kazakhstan who is operating a searchable online database of nearly 50 million stolen scholarly journal articles.

The basis for the posts I publish on this blog comes from books I have purchased.  There are additional magazines and journals that I receive from the professional organizations to which I belong and to which I pay dues.  Sometimes I find an interesting article from a source to which I do not have free access, only to discover an unjustifiable fee to purchase the article.

It is a tad ironic that one of the purposes of these scientific organizations to which I belong is to disseminate scientific knowledge.  Yet they charge for the dissemination of this knowledge, and these publications constitute a significant part of their income.

At one time this publication process might have been justifiable, when it was based on paper.  However, in the digital age it no longer is.  There are annoyingly long publication delays in the print medium, whereas the dissemination of information should be fast in this new digital age.

One substantial delay is the review process in refereed journals, in which independent reviewers evaluate articles and provide input to the journal on whether the article should be published.  I’ve participated in this process both as an article submitter and as a reviewer.  Often the agreement among the reviewers is not high.  I’ve reviewed articles that I thought made a substantive contribution to the field, yet the articles were rejected on the basis of what I regarded to be minor issues.  I don’t believe that it is ever possible to write an article to which there are no objections.  The nature of research requires certain compromises, and if reviewers object to those compromises strongly enough, the article is rejected.

I am of the strong opinion that this review process is unnecessary.  Usually I can quickly tell whether an article is worth my time.  And I am curious as to what articles I am not seeing due to the unjustified rejection of good work.  I think the strongest advocates of article reviews are tenured faculty members who must make judgments as to whether junior faculty members should be granted tenure.  The review process allows them to count the number of articles published in refereed journals.  Otherwise, they would need to read and evaluate the articles written by these junior faculty members.

Actually, there is a much larger problem that was documented in the epidemiologist Ioannidis’s landmark article “Why Most Published Research Findings Are False” (PLoS Medicine, 2(8), e124, doi:10.1371/journal.pmed.0020124, 2005).  Subsequent research has confirmed his conclusion, and many articles followed (see “The AAA Tranche of Subprime Science” by Gelman and Loken, 2014).  The problem hit the popular press with the October 19th cover of the Economist broadcasting HOW SCIENCE GOES WRONG (see the healthy memory blog post “Most Published Research Findings Are False”).

I am amazed that this conclusion has received so little public attention.  It means that should your physician give you advice or recommend certain medications or procedures, he is most likely flying by the seat of his pants.  Even if the advice is based on published research, there is a better than even chance that the research is in error.

Moreover, it is this insidious paper publication process that underlies most of this problem.  Journals pride themselves on high rejection rates, yet many of the rejected articles might have been failures to replicate.  The problem is further exacerbated by researchers who do not even bother to submit negative findings to journals because they know that these articles are likely to be rejected.  This is known as the “file drawer” problem: important results never see the light of day and end up in the file drawer.

So it is clear to me that this conventional publication process needs to be made electronic, with all articles being available and all the data on which the articles are based also being made available.  Most of this research is funded by the government.  So it is especially infuriating that I cannot get articles or data for which I have already paid.

© Douglas Griffith and healthymemory.wordpress.com, 2015. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Thinking 2.0

March 9, 2016

This post was inspired by an article in the February 26, 2016 edition of the “New Scientist” written by Michael Brooks.  The title of the article is “A new kind of logic: How to upgrade the way we think.”  There are many healthymemory blog posts about the limitations of our cognitive processes.  First of all, our attentional capacity is quite limited and requires selection.  Our working memory capacity is around five or fewer items.  There are healthy memory blog posts on cognitive misers and cognitive spendthrifts.  Thought requires cognitive effort that we are often reluctant to spend, making us cognitive misers.  And there are limits to the amount of cognitive effort we can expend.  Cognitive effort spent unwisely can be costly.

Let me elaborate on the last statement with some personal anecdotes.  Ohio State was on the quarter system when I attended, and my initial goal was to begin college in the summer quarter right after graduation and to attend quarters consecutively so that I would graduate within three years.  Matters went fairly well until my second quarter, when I earned the only “D” in my life.  Although I did get one “A,” it was in a course for which I had already read the textbook in high school.  I recovered and continued to attend consecutive quarters, but only part time during the summer.  I was in the honors program and managed to graduate in 3.5 years with a Bachelor of Arts with Distinction in Psychology.  I tried going directly into graduate studies, but found that I had already expended my remaining cognitive capital.  So I entered the Army to give my mind a rest.

When I returned and began graduate school, I was a cognitive spendthrift who wanted to learn as much as I could in my field.  However, I found that I could not work long hours.  If I did, my brain turned to mush and I was on the verge of drooling.  So I found it profitable to end my cognitive spendthrift days and marshal my cognitive resources.  It worked, and I earned my doctorate in psychology from the University of Utah.

Michael Brooks argues that we are stuck in Thinking 1.0.   He mentions that our conventional economic models bear no resemblance to the real world.  We’ve had unpredicted financial crises because of incorrect rational economic models.  This point has been  made many times in the healthy memory blog.  Behavioral economics should address these shortcomings, but it is still in an early stage of development.

Ioannidis’s article has convinced statisticians and epidemiologists that more than half of scientific papers reach flawed conclusions, especially in medical science, neuroscience, and psychology.

Currently we do have big data, machine learning, neural nets, and, of course, the Jeopardy champion Watson.  Although these systems provide answers, they do not provide explanations as to how they arrived at the answers.  And there are statistical relations in which it is difficult to determine causality, that is, what causes what.

Michael Brooks argues that Thinking 2.0 is needed.  Quantum logic makes the distinction between cause and effect (one thing influencing another) and common cause (two things responding to the same cause).  The University of Pittsburgh opened the Center for Causal Discovery (www.ccd.pitt.edu) in 2014.
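
To make the common-cause case concrete, here is a minimal sketch in Python.  It is my own illustration of the general idea, not something from the article: a hidden variable drives two others, which end up strongly correlated even though neither causes the other.  This is exactly the situation that correlation alone cannot distinguish from direct causation.

import random

random.seed(0)
n = 10_000
xs, ys = [], []
for _ in range(n):
    common = random.gauss(0, 1)                 # the hidden common cause
    xs.append(common + random.gauss(0, 0.5))    # x responds to the common cause
    ys.append(common + random.gauss(0, 0.5))    # y responds to the same cause

mean_x, mean_y = sum(xs) / n, sum(ys) / n
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / n
var_x = sum((x - mean_x) ** 2 for x in xs) / n
var_y = sum((y - mean_y) ** 2 for y in ys) / n
print(cov / (var_x * var_y) ** 0.5)   # correlation of roughly 0.8, with no causal link between x and y

Intervening on x (setting it by hand) would leave y unchanged, which is the kind of distinction Pearl's causal calculus is designed to capture.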

Judea Pearl, a computer scientist and philosopher at UCLA (and the father of the tragically slain journalist Daniel Pearl), says, “You simply cannot grasp causal relationships with statistical language.”  Pearl has done some outstanding mathematics and has developed software that has made intractable AI problems tractable and has provided a means of distinguishing cause and effect.  Unlike neural nets, machine learning, and Watson, it provides the logic, 2.0 logic I believe, behind its conclusions or actions.

It is clear that Thinking 2.0 will require computers.  But let us hope that humans will understand and be able to develop narratives from their output.  If we just get answers from machine oracles, will we still be thinking in 2.0?

© Douglas Griffith and healthymemory.wordpress.com, 2016. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

Most Published Research Findings Are False

October 5, 2014

The title is part of the title of the epidemiologist Ioannidis’s landmark article “Why Most Published Research Findings Are False” (PLoS Medicine, 2(8), e124, doi:10.1371/journal.pmed.0020124, 2005).  Subsequent research has confirmed his conclusion, and many articles followed (see “The AAA Tranche of Subprime Science” by Gelman and Loken, 2014).  The problem hit the popular press with the October 19th cover of the Economist broadcasting HOW SCIENCE GOES WRONG.

Given the ramifications of this conclusion it is remarkable that this problem has not received much wider attention. So the healthymemory blog is standing up to do its part. The reasons for science going wrong are technical, dealing with the misuse of statistical methodology, as well as economic and political. The Economist does a fairly good job in explaining the problem for the layperson. This blog post will provide some examples and try to offer some advice.
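
To give a flavor of the statistical argument, Ioannidis frames it in terms of the positive predictive value (PPV) of a claimed finding: the chance that a “statistically significant” result is actually true, given the pre-study odds R that the hypothesis is true, the study’s power, and the significance threshold alpha.  Here is a minimal sketch in Python using his formula; the particular values of R and power are illustrative assumptions of my own, not taken from any specific field.

# PPV of a "significant" finding, following Ioannidis (2005):
#   PPV = power * R / (power * R + alpha)
# R = pre-study odds that a tested hypothesis is true; alpha = 0.05.
# The values of R and power below are illustrative assumptions only.

def ppv(R, power, alpha=0.05):
    return power * R / (power * R + alpha)

print(ppv(R=0.25, power=0.80))   # well-powered tests of plausible hypotheses: PPV about 0.80
print(ppv(R=0.05, power=0.20))   # underpowered tests of long-shot hypotheses: PPV about 0.17

With low power and long-shot hypotheses, which are common in exploratory research, most “positive” findings are false even before bias and multiple testing are taken into account.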

As Ioannidis is an epidemiologist, his critique centered on the medical literature, although the ramifications of his article extend far beyond epidemiology. Most importantly, the findings deal with our medical care. Readers should be somewhat aware of this from the frequent contradictory findings regarding what is good or bad for us. Let us begin with the example of medical screening. The 5-year survival rate is one type of information that is given to promote the benefits of screening. This rate is defined as the number of patients diagnosed with cancer still alive five years after the diagnosis divided by the number of patients diagnosed with cancer. So this rate is defined by a cancer diagnosis and leads to the conclusion that screening is saving lives. If lives are indeed being saved, should it not be seen in mortality rates? A mortality rate is not defined by a cancer diagnosis. The annual mortality rate is the number of people who die from cancer in one year divided by the total number of people in the group. It is not clear what is going on here, but if screening were indeed saving lives, then it should be reflected in the mortality rate. When regarded in this light, the 5-year survival rate is a bit like a self-licking ice cream cone. Some ways of presenting the benefits of treatment are much more impressive than others. To learn more about this see the healthymemory blog posts “Interpreting Medical Statistics: Risk Reduction” and “Health Statistics.”
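
A toy calculation, with numbers invented purely for illustration, shows how the 5-year survival rate can improve dramatically even when no one lives a day longer. Suppose screening detects a cancer three years earlier but does not change the date of death:

# Invented illustration: 1,000 people who die of the cancer at age 70.
# Without screening the cancer is diagnosed at age 67; screening moves
# the diagnosis to age 64 but does not change the outcome.

patients = 1000
death_age = 70

def five_year_survival(diagnosis_age):
    survivors = patients if death_age - diagnosis_age > 5 else 0
    return survivors / patients

print(five_year_survival(67))   # 0.0 -- without screening, no one survives 5 years past diagnosis
print(five_year_survival(64))   # 1.0 -- with screening, "5-year survival" jumps to 100%
# The annual mortality rate is identical in both cases: the same
# 1,000 people die of the cancer at age 70.

The survival statistic improves because the clock starts earlier, not because anyone is saved, which is why the mortality rate is the more honest yardstick.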

To take a specific example, consider the Prostate Specific Antigen (PSA) test given to screen for prostate cancer. At one time this was regarded as almost compulsory for males over a certain age. Now it is recommended only for males in a high-risk group and, even then, only after consulting with their physician. You might ask what the risks of screening are. Apart from the costs, discomfort, and inconvenience, there are the side effects. In the case of prostate surgery these could include incontinence and/or impotence.

Research has also shown that many doctors do not understand how to communicate accurate medical statistics to their patients. A study reported by Gerd Gigerenzer and his colleagues, “Helping Doctors and Patients Make Sense of Health Statistics” (Psychological Science in the Public Interest, Vol. 8, No. 2, 2008), showed that few gynecologists understood what a positive mammogram means. First of all, the gynecologists were given the following accurate information about their women patients.

The probability that a woman has breast cancer is 1% (prevalence).

If a woman has breast cancer, the probability that she will test positive is 90% (sensitivity).

If a woman does not have breast cancer, the probability that she nevertheless tests positive is 9% (false positive rate).

Then the doctors were told that a woman tested positive and that she wanted to know whether she has breast cancer for sure, or what her chances are. The doctors were given the following options to choose from.

A. The probability that she has breast cancer is about 81%.

B. Out of 10 women with a positive mammogram, about 9 have breast cancer.

C. Out of 10 women with a positive mammogram, about 1 has breast cancer.

D. The probability that she has breast cancer is about 1%.

41% chose option B.

13% chose option A.

21% chose option C.

19% chose option D.

Option C is the correct response.

This is based on Bayes’ formula for conditional probabilities. A good way of computing it is to use natural frequencies.

Consider 1,000 women.

10 are expected to have breast cancer and the remaining 990 to be free of breast cancer.

Of the 10 with breast cancer 9 should test positive and 1 negative. Of the remaining 990, 89 should test positive and 901 should test negative.

Then we divide the number who both have breast cancer and test positive (9) by the total number testing positive (9 + 89 = 98), which gives about 9%, or roughly 1 in 10. The closest multiple-choice option is C.
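
For readers who want to check the arithmetic, here is a minimal sketch in Python (my own, using the figures quoted above: 1% prevalence, 90% sensitivity, 9% false positive rate) that computes the answer both with Bayes’ formula and with natural frequencies.

prevalence = 0.01        # P(cancer)
sensitivity = 0.90       # P(positive | cancer)
false_positive = 0.09    # P(positive | no cancer)

# Bayes' formula: P(cancer | positive)
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
print(sensitivity * prevalence / p_positive)                  # about 0.09

# Natural frequencies: imagine 1,000 women
with_cancer = 1000 * prevalence                               # 10 women
true_positives = with_cancer * sensitivity                    # 9 women
false_positives = (1000 - with_cancer) * false_positive       # about 89 women
print(true_positives / (true_positives + false_positives))    # also about 0.09

Both routes give roughly 9%, so about 1 woman in 10 with a positive mammogram actually has breast cancer, which is option C.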

So what does a prospective patient do when the majority of the medical literature is wrong? First of all, do not forget the option of doing nothing. Get multiple opinions regarding your problem. And do your own research. Take all of this into consideration along with your personal values and make a decision, remembering that doing nothing remains an option.

© Douglas Griffith and healthymemory.wordpress.com, 2014. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.

The Problem with Scientific Journals, Especially Elite Ones

January 3, 2014

Examples of elite scientific journals are Science, Nature, and Cell, but this problem generalizes to practically all refereed journals. Unfortunately, a criterion many refereed journals regard as a mark of success is a high rejection rate for submitted papers. This problem was recently articulated by Randy Schekman, the 2013 Nobel Prize winner in Physiology or Medicine.1 One of his criticisms was the artificial restriction of the number of papers these journals publish, which results in a high rejection rate. The second criticism has to do with the published “impact factor” that purports to measure how important a journal is. The result of these pernicious factors is the conclusion John Ioannidis reached in 2005 in PLoS Medicine: that most published research findings are false.

The sine qua non of science is replication. But journals do not like to publish replications of research. Much worse, failures to replicate are also likely not to be published. Simply put, that is largely why the majority of published research findings are false. This problem is so severe that the cover of the October 19th to 25th Economist read HOW SCIENCE GOES WRONG. The feature article elaborated on the very brief synopsis that I have provided.

At one time, in the era of paper publishing, there was a serious cost that limited how much research could be published. However, that is no longer the case. There is no limit to how much research can be put online. There is still a cry for research to be refereed. I have participated in the review process both as a reviewer and as a receiver of reviews. I have not been impressed by the process. There is a large factor of arbitrariness, and often form is weighted more strongly than substance. Frankly, I do not need what I read to be refereed. I can quickly ascertain whether a particular paper is worthy of further attention.

I think the major force behind refereeing is the academics. When making tenure decisions, the number of refereed publications is a factor that is heavily weighted. Absent this metric, academics might actually need to read the papers of those they are considering for tenure.

Randy Schekman has started his own online journal. Expect many more in the future. Indeed, expect being able to download more research papers from authors’ websites.

This is certainly a welcome development for poor bloggers such as myself trying to access relevant research. There is also a push to make more data available to researchers. As most of this research is funded with taxpayers’ money, this is certainly appropriate, but I shall stop here before proceeding on another rant.

1. “What’s wrong with Science,” The Economist, December 14th 2013, p. 86.

© Douglas Griffith and healthymemory.wordpress.com, 2013. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.