
Reason

April 17, 2018

Steven Pinker has a chapter called Reason in his outstanding book, “Enlightenment Now.” Part of the problem with reason, or reasoning, is beliefs, as was expounded in a previous healthy memory blog post, “Beliefs: Necessary, but Dangerous.” The legal scholar Dan Kahan has argued that certain beliefs become symbols of cultural allegiance protected by identity-protective cognition. People affirm or deny these beliefs to express not what they know but who they are. Endorsing a belief that hasn’t passed muster with science and fact-checking isn’t so irrational, at least not by the criterion of the immediate effects on the believer. The effects on society and the planet are another matter. The atmosphere doesn’t care what people think about it, and if it in fact warms by 4 degrees Celsius, billions of people will suffer, no matter how many of them had been esteemed in their peer groups for holding a locally fashionable opinion on climate change along the way. Kahan concluded that we are all actors in a Tragedy of the Belief Commons: what’s rational for every individual to believe (based on esteem) can be irrational for the society as a whole to act upon (based on reality). Technology magnifies these differences, resulting in polarization in political and social domains.

A fundamental problem is that accurate knowledge can be effortful and time-consuming to obtain. Predictions are very difficult, as some have noted, especially when they are about the future. Psychologist Philip Tetlock has studied the accuracy of forecasters. He recruited hundreds of analysts, columnists, academics, and interested laypeople to compete in forecasting tournaments in which they were presented with possible events and asked to assess their likelihood. This research was conducted over 20 years, during which 28,000 predictions were made. So, how well did the experts do? On average, about as well as a chimpanzee throwing darts. In other words, no better than chance.

Tetlock and fellow psychologist Barbara Mellers held another competition between 2011 and 2015 in which they recruited several thousand contestants to take part in a forecasting tournament held by the Intelligence Advanced Research Projects Activity (IARPA). Again the average performance was at chance levels, but in both tournaments the researchers could pick out “superforecasters,” who performed not just better than chimps and pundits, but better than professional intelligence officers with access to classified information, better than prediction markets, and not too far from the theoretical maximum. Even these accurate predictions held up for only about a year; accuracy declines further into the future and falls to the level of chance around five years out.

The forecasters who did the worst were also the most confident; they were the ones with Big Ideas, be they left- or right-wing, optimistic or pessimistic. Here is the summary by Tetlock and Gardner:

“As ideologically diverse as they were, they were united by the fact that their thinking was so ideological. They sought to squeeze complex problems into the preferred cause-effect templates and treated what did not fit as irrelevant distractions. Allergic to wishy-washy answers, they kept pushing their analyses to the limit (and then some), using terms like “furthermore” and “moreover” while piling up reasons why they were right and others wrong. As a result they were unusually confident and likelier to declare things as “impossible” or “certain.” Committed to their conclusions, they were reluctant to change their minds even when their predictions clearly failed.”

Tetlock described the superforecasters as follows:

“pragmatic experts who drew on many analytical tools, with the choice of tool hinging on the particular problem they faced. These experts gathered as much information from as many sources as they could. When thinking, they often shifted mental gears, sprinkling their speech with transition markers such as “however,” “but,” “although,” and “on the other hand.” They talked about possibilities and probabilities, not certainties. And while no one likes to say, “I was wrong,” these experts more readily admitted it and changed their minds.”

The superforecasters displayed what psychologist Jonathan Baron calls “active open-mindedness” with opinions such as these:

People should take into consideration evidence that goes against their beliefs. [Agree]
It is more useful to pay attention to those who disagree with you than to pay attention to those who agree. [Agree]
Changing your mind is a sign of weakness. [Disagree]
Intuition is the best guide in making decisions. [Disagree]
It is important to persevere in your beliefs even when evidence is brought to bear against them. [Disagree]

The manner of the superforecasters’ reasoning is Bayesian. They tacitly use the rule from the Reverend Bayes on how to update one’s degree of credence in a proposition in light of evidence. It should be noted that Nate Silver (fivethirtyeight.com) is also a Bayesian.
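
As an illustration of that rule (my sketch, not from Pinker or Tetlock), here is a minimal Bayesian update in Python. The scenario and numbers are invented; the point is only the mechanics of revising a credence in light of evidence:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Invented scenario: prior credence of 0.10 that an event will occur, and new
# evidence that is three times likelier if the event is coming than if it is not.
posterior = bayes_update(0.10, p_evidence_if_true=0.60, p_evidence_if_false=0.20)
print(round(posterior, 3))  # 0.25 -- revised upward, but still far from certain
```

Starting from the base rate and moving only as far as the evidence warrants is exactly the habit the superforecasters displayed.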

Steven Pinker notes that psychologists have recently devised debiasing programs that fortify logical and critical thinking curricula. They encourage students to spot, name, and correct fallacies across a wide range of contexts. Some use computer games that provide students with practice, and with feedback that allows them to see the absurd consequences of their errors. Other curricula translate abstruse mathematical statements into concrete, imaginable scenarios. Tetlock has compiled the practices of successful forecasters into a set of guidelines for good judgment (for example, start with the base rate; seek out evidence and don’t overreact or underreact to it; don’t try to explain away your own errors but instead use them as a source of calibration). These and other programs are provably effective: students’ newfound wisdom outlasts the training session and transfers to new subjects.
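
Calibration here has a concrete meaning: Tetlock’s tournaments scored forecasters with the Brier score, the mean squared difference between stated probabilities and what actually happened (0 is perfect; unvarying 50/50 guesses on binary questions earn 0.25). A minimal sketch with invented forecasts:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical forecaster: four probability judgments and the actual outcomes.
forecasts = [0.9, 0.7, 0.2, 0.6]
outcomes = [1, 1, 0, 0]
print(brier_score(forecasts, outcomes))  # 0.125 -- beats the 0.25 of pure guessing
```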

Dr. Pinker concludes, “Despite these successes, and despite the fact that the ability to engage in unbiased, critical reasoning is a prerequisite to thinking about anything else, few educational institutions have set themselves the goal of enhancing rationality. (This includes my own university, where my suggestion during a curriculum review that all students should learn about cognitive biases fell deadborn from my lips.) Many psychologists have called on their field to “give debiasing away” as one of its greatest potential contributions to human welfare.”

It seems appropriate to end this post on reason with the Spinoza quote from the beginning of the book:

“Those who are governed by reason desire nothing for themselves which they do not also desire for the rest of humankind.”

Finally, Hope on the Prediction Front

October 15, 2015

A previous healthy memory blog post, “Would You Rather Be Popular or Accurate,” summarized Philip Tetlock’s book, Expert Political Judgment. Tetlock summarized several decades of research on experts’ political predictions. He found that their predictions were virtually indistinguishable from chance; in other words, these experts were not experts. However, he was able to classify these experts into two categories, which he labeled hedgehogs and foxes. Hedgehogs were characterized by Big Ideas; in other words, they were ideologues. The judgments of foxes were more nuanced, with qualifications and conditions. Even though the judgments of foxes were poor, they were still better than the judgments of hedgehogs. What is disturbing is that the hedgehogs get more air and print time, so we are wasting our time listening to these experts. Nevertheless, these experts make a good living at being wrong.

Tetlock summarized his new research in Superforecasting: The Art and Science of Prediction, co-authored with Dan Gardner. This research involved the recruitment of literally thousands of volunteers. These volunteers were given tasks such as predicting if and when North Korea would conduct a nuclear test, if and when peace would break out in Iraq, if and when Iran would agree to a nuclear ban, etc. The volunteers would research these topics and revise their predictions whenever they thought that new information warranted a revision. The volunteers reported their predictions using subjective ratings. Remember that these were volunteers working without pay; anyone could volunteer. I believe that token gift certificates were presented.

This research was sponsored by the Intelligence Advanced Research Projects Activity (IARPA). I imagine that some readers are asking two questions. One question might be: why did IARPA not use expert intelligence analysts? The second question might be: why conduct all this research; why not simply ask the experts how they do their analyses?

With respect to the first question, I would remind readers of the previous study, where presumed experts were found not to be experts. There is also no ready means of identifying who the experts are: most reports do not include subjective numerical estimates that are amenable to statistics, nor is there a system that tracks the accuracy of these reports. Moreover, given Tetlock’s previous research, where hedgehogs receive the attention and foxes are ignored, it might be that the wrong analysts are being promoted and receiving attention. The foxes might be laboring in obscurity.

With regard to the second question, the answer is that you could not rely on what they tell you. The vast majority of cognitive processing occurs below our level of awareness, and research has shown that at times what people report as the reason they did something is not consistent with the empirical evidence (see the healthy memory blog post, “Strangers to Ourselves”). To a certain extent it is as useful as asking someone how they ride a bicycle.

It was only a very small percentage of this group who could be classified as “superforecasters.” Moreover, identifying this group presented statistical challenges. The question was whether these high performers were more knowledgeable or merely lucky. After all, lottery winners are lucky individuals who are rewarded for doing something stupid.
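
To see why this is a genuine statistical challenge, consider a toy simulation (my illustration, not from the book). When thousands of people forecast at chance, the top scorers in any single period look impressive, but their edge does not persist; a split-half check of the kind sketched below, with invented parameters, is one standard way of separating skill from luck:

```python
import random

random.seed(42)
N_FORECASTERS, N_QUESTIONS = 2000, 100

def accuracy(answers):
    return sum(answers) / len(answers)

# Everyone guesses at pure chance: each answer is correct with probability 0.5.
records = [[random.random() < 0.5 for _ in range(N_QUESTIONS)]
           for _ in range(N_FORECASTERS)]

# Rank forecasters by accuracy on the first 50 questions and take the top 100...
top = sorted(records, key=lambda r: accuracy(r[:50]), reverse=True)[:100]

# ...then check whether their edge persists on the second 50 questions.
print(sum(accuracy(r[:50]) for r in top) / len(top))   # ~0.64: looks like skill
print(sum(accuracy(r[50:]) for r in top) / len(top))   # ~0.50: it was luck
```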

What was characteristic of these superforecasters? Well, first of all, I believe that all participants could be regarded as having growth mindsets (see the immediately preceding post). The superforecasters tended to use relatively precise subjective estimates, which they frequently revised. Moreover, these revisions were done in the spirit of Bayesian analysis (see the healthy memory blog post, “Organizing Information for the Hardest Decisions”), even if they didn’t explicitly use Bayes’ theorem. There are many more results and conclusions, but too many to summarize. If interested, I recommend reading the book.


Would You Rather Be Popular or Accurate?

July 16, 2014

That is, if you were a political pundit, would you rather be popular or accurate? To answer this question we need to review research done by Philip Tetlock, a professor of psychology and political science. In 1987 he started collecting predictions from a broad array of experts in academia and government on a wide variety of topics in domestic politics, economics, and international relations. He asked these experts to make predictions on a periodic basis about major events. This study spanned more than fifteen years and was published in his 2005 book, Expert Political Judgment. Regardless of their backgrounds, these experts did barely better than random chance, and even worse than rudimentary statistical methods, at predicting future political events. About 15 percent of the events they said had no chance of occurring happened, and about 25 percent of those they called absolutely sure things failed to occur. At this point you might have decided against a career as a pundit, but remember that many pundits manage to make a living, and some pundits make a very good living.
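
Those figures are statements about calibration: group a forecaster’s predictions by the probability stated, and the events should occur at roughly that rate within each group. A minimal sketch of such a check, with invented data chosen to mirror the numbers above:

```python
from collections import defaultdict

def calibration_table(forecasts, outcomes):
    """Group forecasts by stated probability and report the observed frequency."""
    buckets = defaultdict(list)
    for p, happened in zip(forecasts, outcomes):
        buckets[p].append(happened)
    return {p: sum(v) / len(v) for p, v in sorted(buckets.items())}

# Hypothetical pundit: 20 events called "no chance" (0.0), 20 called certain (1.0).
forecasts = [0.0] * 20 + [1.0] * 20
outcomes = [1] * 3 + [0] * 17 + [1] * 15 + [0] * 5
print(calibration_table(forecasts, outcomes))
# {0.0: 0.15, 1.0: 0.75} -- "impossible" events happened 15% of the time,
# and "sure things" failed to occur 25% of the time.
```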

Tetlock was able to classify his pundits into two classes that he called hedgehogs and foxes. The Greek poet Archilochus had written, “The fox knows many little things, but the hedgehog knows one big thing.” Hedgehogs believe in Big Ideas, in governing principles about the world that behave as though they were physical laws and underlie nearly every interaction in society. Hedgehogs tend to be specialized, stalwart, stubborn, order-seeking, confident, and ideological. These are all traits that make hedgehogs weaker forecasters.

On the other hand, foxes are scrappy creatures who believe in many little ideas and in taking a multitude of approaches towards a problem. Foxes are multidisciplinary, adaptable, self-critical, tolerant of complexity, cautious and empirical. These are all traits that make foxes better forecasters. “Better” is used in a relative context as the overall performance was quite poor.

So, would you rather be a fox or a hedgehog? Hedgehogs tend to be much more popular on TV talk shows, as they are outspoken and sure in their beliefs. They tend not to equivocate, even though the issues are complex and they are quite likely to be wrong. This is likely a contributing factor to the polarization of society. In his 1970 book, Future Shock, Alvin Toffler predicted that future technology would lead to the polarization of society. This is one of the mechanisms by which that polarization occurs.
