Posts Tagged ‘O.J. Simpson’

The Prosecutor’s Fallacy

June 2, 2010

There are 175 accredited law schools in the United States. Only one of these schools requires a basic course in statistics or research methods.1 This is unfortunate, because this deficiency in education has had adverse effects on justice. Prosecutors have even had a statistical fallacy named after them, the Prosecutor's Fallacy. Here is the Prosecutor's Fallacy:

p(match|not guilty) is mistaken for p(not guilty|match).

Suppose that you have been charged with first degree murder, a capital offense. You undergo DNA testing. Your DNA is found to match a DNA sample taken from the scene of the crime. The expert witness testifies that only one person in 100,000 would match the DNA sample. You might conclude that it is all over for you. Many prosecutors would conclude that they had an open and shut case. But suppose that you lived in New York City and that the crime took place in New York. I believe that the population of metropolitan New York is around eight million. So within the metropolitan New York area there are about eighty individuals who could provide matching samples. So, given no other evidence against you, the probability that you are the murderer is only about 1/80 (0.0125).
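The arithmetic above can be sketched in a few lines. The match rate comes from the expert testimony in the example; the population figure is the post's own rough estimate for metropolitan New York.

```python
# Prosecutor's fallacy sketch: a 1-in-100,000 match rate sounds damning,
# but in a large population many people will match by chance alone.

match_rate = 1 / 100_000    # p(match) for a randomly chosen person
population = 8_000_000      # metropolitan New York (rough estimate from the text)

# Expected number of people in the area whose DNA matches the sample
expected_matches = population * match_rate

# Given only the DNA evidence, the defendant is just one of those matches
p_guilty_given_match = 1 / expected_matches

print(expected_matches)      # 80.0
print(p_guilty_given_match)  # 0.0125
```

Note that p(match|not guilty) here is 1/100,000, while p(guilty|match) is only 1/80; conflating the two is exactly the fallacy.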

DNA evidence can be more beneficial to the defense than to the prosecution. For example, if the DNA from the semen sample taken from a rape victim does not match that of the accused, it is fairly certain that the accused is not guilty. It is difficult to understand how judges, if they are truly interested in justice, could ever deny DNA tests to people convicted of rape before DNA testing had advanced to its present state.

A related fallacy in statistical reasoning can be found in the O.J. Simpson case. Simpson's lawyer, Alan Dershowitz, presented data showing that as many as four million women were battered by husbands and boyfriends in the United States in 1992. That same year, 913 women were killed by their husbands and 519 were killed by boyfriends. So out of these four million cases of abuse, there were only 1,432 homicides. From this Dershowitz concluded that there is less than 1 homicide per 2,500 cases of abuse. If you do the computations, you will find that this is still a conservative estimate. Although it is a conservative estimate, it is the wrong statistic. What we need to know is, of the battered women who were murdered, how many were killed by someone other than their husband or boyfriend. When this is considered, we find that 89% of these women were murdered by their husband or boyfriend and only 11% by someone else. This statistic casts a dramatically different light on the probability of Simpson's guilt. Yet the prosecution let this pass without offering the relevant statistic.
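A short calculation makes the contrast concrete. The figures below are the ones given in the text; the 89% figure is quoted from the text rather than derived, since deriving it would require base rates (e.g. how often murdered women are killed by strangers) that the post does not supply.

```python
# Dershowitz's statistic vs. the relevant one, using the text's figures.

battered = 4_000_000                    # women battered in 1992
killed_by_partner = 913 + 519           # husbands + boyfriends = 1,432

# Dershowitz's number: p(murdered by partner | battered).
# It is tiny -- fewer than 1 homicide per 2,500 cases of abuse.
p_dershowitz = killed_by_partner / battered
print(killed_by_partner)                # 1432
print(round(1 / p_dershowitz))          # 2793 cases of abuse per homicide

# The relevant number (as reported in the text): among battered women
# who WERE murdered, the fraction killed by their batterer is 0.89,
# not 0.0004 -- a very different picture of the evidence.
```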

Courts are frequently given the responsibility of determining whether violent people should be released back into the community. Psychiatrists are given a difficult task when they need to render an opinion as to whether a violent or potentially violent person should be released. The American Psychiatric Association provided this statement to the Supreme Court of the United States: “our best estimate is that two out of three predictions of long-term future violence are wrong.” Still, the Supreme Court has ruled that such testimony is legally admissible as evidence. Here is their reasoning: “mental health professionals are not always wrong…only most of the time.”

1Faigman, D. L. (1999). Legal Alchemy: The Use and Misuse of Science in the Law. New York: Freeman and Co.

Errors in Probabilistic Reasoning

January 19, 2010

Conditional probabilities are of the form: if such and such is the case, then the probability of so and so is y. Stanovich reports an article, described by Robyn Dawes, whose headline said that marijuana use led to the use of hard drugs.1 The headline implied that the survey measured the probability of a student using hard drugs given that the student had used marijuana. But the survey was about the inverse probability: the probability that a student had used marijuana given that he had used hard drugs. If you think about this for a minute, you will realize, if you haven’t already, that they are not the same. Many experiment with marijuana, but fortunately few move on to hard drugs.

Unfortunately, the inversion of conditional probabilities is not restricted to the above article; it is a common occurrence. Both patients and medical practitioners sometimes invert probabilities and think that the probability of a disease, given a particular symptom, is the same as the probability of the symptom, given the disease. Bear in mind that different diseases share many of the same symptoms, and keep this in mind when you are the patient.
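Natural frequencies make this inversion easy to see. All the numbers below are invented for the sketch (a rare disease whose symptom is common elsewhere); they are not from the text.

```python
# Hypothetical illustration of why p(disease | symptom) can be far
# smaller than p(symptom | disease). All figures are invented.

population = 100_000
p_disease = 0.001                 # 1 in 1,000 people have the disease
p_symptom_given_disease = 0.90    # the symptom is common among the sick
p_symptom_given_healthy = 0.05    # but it also occurs in healthy people

sick = population * p_disease                          # 100 people
sick_with_symptom = sick * p_symptom_given_disease     # 90 people
healthy_with_symptom = (population - sick) * p_symptom_given_healthy  # 4,995 people

# Of everyone showing the symptom, only a small fraction are sick:
p_disease_given_symptom = sick_with_symptom / (sick_with_symptom + healthy_with_symptom)
print(round(p_disease_given_symptom, 3))   # 0.018, far below 0.90
```

Even with a symptom that 90% of patients show, the probability of the disease given the symptom is under 2%, because so many more healthy people also show it.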

Perhaps the most blatant use, or misuse, of inverse probabilities occurred during the O. J. Simpson murder trial. One of Simpson’s defense attorneys, Dershowitz, I believe, presented the conditional probability that a husband who had assaulted his significant other would eventually murder her. That probability, fortunately, is quite low (0.0004). So this assertion buttressed the defense. The prosecution failed to point out the inverse probability: the probability that a husband who had murdered his significant other had previously assaulted her, which is very, very high (0.89). Presenting that figure to the jury would certainly have buttressed the prosecution’s case.2

Please don’t mistake me for taking a position on the Simpson murder case. I heard the jury foreman, who happened to be a woman, state that had this been a civil case, where the standard is the preponderance of the evidence, they would have convicted. However, in a criminal case, the standard is beyond a “reasonable doubt.” Now I would appreciate a definition of a “reasonable doubt.” A jury risks making two errors: letting a guilty person go free and convicting an innocent person. So what is the standard here? Is mistakenly convicting one person in four beyond a “reasonable doubt”? One person in ten? One person in twenty? One person in a hundred? One person in a thousand? I think a definition is needed here to bring the legal system into the 21st century.

1Stanovich, K. E. (2009). What Intelligence Tests Miss: The Psychology of Rational Thought. New Haven: Yale University Press.

2Gigerenzer, G. (2002). Calculated Risks: How to Know When Numbers Deceive You. New York: Simon & Schuster.

© Douglas Griffith and, 2009. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and with appropriate and specific direction to the original content.