Organizing Information for the Hardest Decisions

“Organizing Information for the Hardest Decisions” is a chapter in Levitin’s The Organized Mind: Thinking Straight in the Age of Information Overload. The primary focus of this chapter is medical decisions. The subtitle is “When Life Is on the Line,” but Levitin frames the decision-making in terms of probabilities and statistics. Given that we live in a world of uncertainty and constantly deal, whether we realize it or not, with probabilistic information, the advice in this chapter can be applied to the vast majority of decisions we need to make.

Levitin begins by discussing objective probabilities, probabilities regarding things we can count. For example, what is the probability of drawing the ace of spades from a pack of 52 playing cards? This can be computed by dividing the number of aces of spades in a legitimate pack of playing cards by the total number of cards in the deck: 1/52 = 0.019. The probability of drawing any ace from the deck requires that this probability be multiplied by 4, as there are four aces in a deck of cards, to yield 4/52 ≈ 0.077.
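These card probabilities are simple enough to check directly. Here is a minimal sketch of the arithmetic, rounding to three decimal places:

```python
# Probability of drawing the ace of spades from a standard 52-card deck:
# one favorable card out of 52 equally likely cards.
ace_of_spades = 1 / 52

# Probability of drawing any ace: four favorable cards out of 52.
any_ace = 4 / 52

print(round(ace_of_spades, 3))  # 0.019
print(round(any_ace, 3))        # 0.077
```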

Or consider the probability of rolling a six on a fair die. As there are six sides to a die, the probability would be 1/6 ≈ 0.167. To compute the probability of rolling two sixes in a row, we multiply this probability by itself to get 1/36 ≈ 0.028.
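The die arithmetic works the same way; because the two rolls are independent, their probabilities multiply:

```python
# Probability of one six on a fair die, and of two sixes in a row.
# Independent events multiply: (1/6) * (1/6) = 1/36.
one_six = 1 / 6
two_sixes = one_six * one_six

print(round(one_six, 3))    # 0.167
print(round(two_sixes, 3))  # 0.028
```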

To compute the likelihood of winning a pick-style lottery, you divide 1 by the number of possible numbers in the lottery: 1/1,000 = 0.001 for a pick three, 1/10,000 = 0.0001 for a pick four, 1/100,000 = 0.00001 for a pick five, and 1/1,000,000 = 0.000001 for a pick six.
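The pattern is one winning number out of 10^n possibilities, so the odds shrink by a factor of ten with each added digit:

```python
# Odds of winning an n-digit pick-style lottery: one winning number
# out of 10**n equally likely combinations.
for digits in range(3, 7):
    combinations = 10 ** digits
    print(f"pick {digits}: 1/{combinations:,} = {1 / combinations}")
```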

Remember that in these rollover lotteries, where the winnings can reach astronomical amounts, the prize will be shared among the winners. Moreover, very often the prize is paid out over time, which effectively reduces the value of the winnings. I remember one of these times when the prize had reached some astronomical amount and people were waiting in line for hours just to buy a ticket. When one woman was asked what she thought her chances of winning were, she answered, “about fifty/fifty.” Although she might represent an extreme case, few people can grasp such extremely low odds. If they could, they would not waste their money, to say nothing of their time and effort. Nevertheless, it keeps a fantasy alive for some. There is a term for this phenomenon: denominator neglect. People ignore the magnitude of the denominator when evaluating a risk or a bet.

There is an error most people commit when dealing with objective probabilities that is known as the gambler’s fallacy. This stems from a failure to appreciate how random random really is. For example, when people are asked to write down what they think a random sequence of 100 coin tosses would look like, rarely will anyone put down a run of seven heads or tails in a row, even though there is a greater than 50% chance that such a run will occur in 100 tosses. Statisticians have argued that there is no such thing as a hot hand in basketball or other sports because hot streaks are just what random chance would produce. The gambler’s fallacy is related to the notion that something is “due.” For example, if a fair coin is tossed five times and comes up heads five times, people will think that the sixth toss will be tails because tails is “due.” But each coin toss is independent, so the probability that the sixth toss will be tails is 50%, just as it was for the first toss. It is true that the probability of six straight heads is only (1/2)^6 ≈ 0.016, but that does not change the odds of any single toss.
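The claim that a run of seven identical outcomes is more likely than not in 100 tosses is easy to check by simulation. A minimal Monte Carlo sketch (the helper name `has_run` is mine, not Levitin's):

```python
import random

def has_run(tosses, length=7):
    """True if the sequence contains `length` identical outcomes in a row."""
    run = 1
    for prev, cur in zip(tosses, tosses[1:]):
        run = run + 1 if cur == prev else 1
        if run >= length:
            return True
    return False

random.seed(0)  # fixed seed so the estimate is reproducible
trials = 10_000
hits = sum(has_run([random.randint(0, 1) for _ in range(100)])
           for _ in range(trials))
print(hits / trials)  # comes out a bit over 0.5, matching the claim
```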

The preceding were objective, computable probabilities. Whenever possible or relevant, you should know or compute them. However, we must also deal with subjective probabilities. Subjective probabilities are estimates, or guesses, regarding the likelihood of particular events or outcomes. We need to deal with these subjective estimates all the time. For example, how likely is it to rain? How likely is it that I will get a job offer? What is the probability that my car will break down? What is the probability that I’ll miss my flight? I hope that when you make these estimates you are better calibrated than the lady who thinks she has a 50/50 chance of winning the lottery. And you need to combine these subjective probability estimates with respect to both favorable and unfavorable outcomes.

Levitin divides decisions into the following four categories:

  1. Decisions that you can make right now because the answer is obvious. (Here I would add that it is a good idea to do a mental check to see if you are overlooking any relevant information or risks. In retrospect you might find an obvious risk that you initially overlooked.)

  2. Decisions you can delegate to someone else (your spouse, perhaps?) who has more time or expertise than you do.

  3. Decisions for which you have all the relevant information but for which you need some time to process or digest that information. This is frequently what judges do in difficult cases. It’s not that they don’t have the information—it’s that they want to mull over the various angles and consider the larger picture. It’s good to attach a deadline to these.

  4. Decisions for which you need more information. At this point, either you enlist a helper to obtain that information or you make a note to yourself that you need to obtain it. It’s good to attach a deadline in either case, even if it’s arbitrary, so that you can cross this off your list.

Much medical decision-making, particularly for important medical decisions, falls into category 4: you need more information. Doctors can provide some of it, but doctors have their own biases and are usually poor at computing or expressing probabilities. Moreover, much of this information is wrong (see the healthymemory blog post “Most Published Research Findings are False”). If you read that post you should remember that many doctors cannot tell a woman who has tested positive on a mammogram the probability that she actually has cancer; even after a positive test, the probability is still only about 10%. The reason for this is that the base rate of cancer is quite low, so many mammograms result in false positives. If you have read that blog post you should also realize that the successes of cancer screening are reported via cancer survival rates; there has been no analogous improvement in mortality rates. When making decisions you should not overlook the option of doing nothing. Ignoring base rates is an all too common human fallacy, so determining accurate base rates is critical to many decisions.

Making decisions regarding conditional probabilities involves using Bayes’ theorem. Levitin provides a simple example that can be used as a template. That example follows.

Suppose that you take a blood test for a hypothetical disease, blurritis, and it comes back positive. However, the cure for blurritis is a medication called chlorohydroxelene that has a 5% chance of serious side effects, including a terrible, irreversible itching in just the part of your back that you can’t reach. Five percent doesn’t seem like a big chance, and you might be willing to take it to get rid of your blurry vision.

Here is the available information.

The base rate for blurritis is 1 in 10,000 or .0001.

Chlorohydroxelene use ends in an unwanted side effect 5% of the time, or .05.

What we need to know is the accuracy of the test with respect to two measures:

The percentage of the time the test falsely indicates the presence of the disease, called a false positive.

The percentage of time that it fails to indicate the presence of the disease when the disease is present, called a false negative.

Draw a table of two rows and two columns, a fourfold table.

The columns represent the test results, positive or negative.

The rows represent the presence of the disease, Yes or No.

There are test results for 10,000 people. Among the people who do have the disease, there is one positive test and there are no negative tests. So in the “Yes” row of the table there is a 1 in the left cell and a 0 in the right cell.

In the “No” row there are 200 positive tests and 9,799 negative tests.

So to determine the probability that you have the disease, you add up the total positive test results and find that there is only a 1/201, or about 0.5%, chance that you have the disease. So there is a 99.5% chance that you do not have the disease. Levitin provides an appendix in the book elaborating on the development and use of these fourfold tables. They are absolutely essential when conditional probabilities are involved, and there are always conditional probabilities involved in medical tests. No medical test is infallible, and it is important to have data regarding both false positives and false negatives.
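The whole computation can be laid out in a few lines. This sketch simply encodes the four cells of the fourfold table and takes the ratio of counts:

```python
# Fourfold-table counts for Levitin's blurritis example (per 10,000 tested).
true_pos = 1      # has the disease, tests positive
false_neg = 0     # has the disease, tests negative
false_pos = 200   # disease-free, tests positive
true_neg = 9_799  # disease-free, tests negative

# P(disease | positive test) = true positives / all positives
p_disease_given_pos = true_pos / (true_pos + false_pos)
print(round(100 * p_disease_given_pos, 1))  # 0.5 (percent)
```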

Biopsies provide a good example of the fallibility of medical tests. They involve subjective judgment in what is basically a “does it look funny?” test. The pathologist or histologist examines a sample under a microscope and notes any regions of the sample that, in her judgment, are not normal. She then counts the number of such regions and considers them as a proportion of the entire sample. The pathology report might say something like “5% of the sample had abnormal cells” or “carcinoma noted in 50% of the sample.” Pathologists often disagree about the analysis and even assign different grades of cancer to the same sample. So always get at least a second opinion on your biopsy.

These medical decisions are examples of making decisions on the basis of expected values and expected costs, a method that can be generally applied. Suppose you need to decide whether you should pay to park your car. Suppose that the parking lot charges $20 and the cost of a parking ticket is $50, but there is only a 25% chance that you’ll get a ticket. The expected value of paying for parking is a 100% chance of losing $20 (-$20). Not paying for parking has a 25% chance of losing $50, for an expected loss of $12.50 (-$12.50). So for today the smart money says do not pay for parking. (Excuse me for avoiding the ethical problem of disobeying the law and inconveniencing workers by parking illegally; this is only an illustrative example.)
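The parking comparison reduces to multiplying each outcome by its probability. A minimal sketch, using the figures from the example:

```python
# Expected-cost comparison for the (purely illustrative) parking example.
cost_of_lot = -20.0     # certain: you always pay the $20
cost_of_ticket = -50.0
p_ticket = 0.25         # assumed chance of being ticketed if you don't pay

expected_if_not_paying = p_ticket * cost_of_ticket  # -12.5
better_to_skip = expected_if_not_paying > cost_of_lot

print(expected_if_not_paying)  # -12.5
print(better_to_skip)          # True: not paying has the smaller expected loss
```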

What is current in the news poses a big problem, because often we lack information regarding the frequency of events and confuse the frequency and urgency of reporting with the actual frequency of occurrences. One of the best examples of this occurred after the 9/11 tragedy, when many people decided to drive rather than fly. As driving is more dangerous than travel by commercial aviation, this change in modes of transportation resulted in an increase in deaths. People are alarmed by crime and envision frequent shootouts between criminals and police, so they feel a need to arm and protect themselves. But check the actual frequency of crime in your neighborhood rather than basing your estimate on television programs and news reports. You are probably much safer than you think you are. In contrast to what we see on television, it is my understanding that the majority of police officers retire without ever having fired their weapons on duty. And guns are used in more suicides than in homicides, to say nothing of accidental shootings.

This post has probably been disturbing for many readers. Unfortunately, there is much missing information, much misinformation, and there are real difficulties in accurately computing the probabilities needed to make decisions. It is hoped that this post will inform you about what to worry about and what to ignore, what questions to ask, and how to combine probabilities to make decisions.
