Posted on Jun 20, 2016 in Articles & Advice, Blog, Featured, Posts, Radio & Podcasts

By Jason Zweig | June 17, 2016 11:00 a.m. ET

Image credit: “Fortune Teller on Melrose Avenue,” Carol M. Highsmith (2012), Library of Congress


An Interview with Philip Tetlock at The Wall Street Journal’s CFO Network conference

To get business right, finance chiefs need to be good forecasters. Yet research has shown that amateurs can actually be better than experts at predicting the future.

Jason Zweig, The Wall Street Journal’s investing columnist, sat down with Philip Tetlock, a professor at the University of Pennsylvania’s Wharton School and the co-author of “Superforecasting: The Art and Science of Prediction,” to explore why that is and what companies can learn from it.

Edited excerpts follow.


Vague verbiage

MR. ZWEIG: Why is it so hard for experts to make forecasts about things in their own domain of expertise?

MR. TETLOCK: One reason is that experts sometimes know too much. I was talking once to John McLaughlin, former director of the CIA, about the end of the Cold War period, and he was remarking that the analysts who were slowest to recognize that East Germany was disintegrating were the people who had been on the case for 20 years.

It was the newbies coming in who got it pretty quickly. And there’s a lot of psychological evidence that attests to the power of preconceptions to grip us and make it hard for us to be timely belief updaters. So sometimes knowledge is actually an impediment. Another big factor is that there is a large amount of uncertainty in the world. So no matter how smart you are, it isn’t going to give you a lot of traction.

MR. ZWEIG: Because luck and randomness are such powerful forces?

MR. TETLOCK: Yeah. They aren’t totally dominant, but they’re powerful forces. And forecasters who don’t take them into account do so at their peril.

MR. ZWEIG: After the invasion of Iraq, the U.S. intelligence community did a lot of soul searching. And part of that was the project that you got involved in and that you write about in your book. Tell us what you were asked to do.

MR. TETLOCK: The U.S. intelligence community ran a competition. Five academic teams from major universities around the country competed to generate realistic probability estimates on events that the U.S. intelligence community cares about. And we were being compared or benchmarked against the predictions of professional intelligence analysts with access to classified information.

That’s a remarkable thing to do.

If I’m an intelligence analyst, I’m much safer saying, “Look, I think there’s a distinct possibility the Iranians may cheat on the nuclear deal.” Let’s say I think the true probability is 5% in the next year. If they do cheat and I’m on record with the 5% probability estimate, I’m in trouble. If I say “distinct possibility,” I’m covered no matter what. Vague verbiage gives you political safety.

The downside is that vague verbiage makes it impossible to learn to make more granular and well-calibrated probability estimates. So there’s this deep tension inside organizations between the desire for political safety and doing what needs to be done to extract as much predictive juice as you can out of your experts.

MR. ZWEIG: We know from your work that expert forecasts are nowhere near as reliable as they should be or as the experts think they are. Are the forecasts of CFOs or corporate forecasts in general more accurate?

MR. TETLOCK: You should expect forecasters to do better to the degree they’re working in a world where they get quick, clear feedback on their forecasts. “Distinct possibility” doesn’t count. You have to be making numerical probability estimates repeatedly over time on a wide range of outcomes. If you do that, you can learn to become one of the better-calibrated professionals.

Who are the best-calibrated professionals who have been studied in the research literature? Expert bridge players. Meteorologists. Expert poker players. Now, card games are special because you’ve got repeated play, a well-defined sampling universe, and quick feedback. So that’s an extremely learning-friendly environment. I don’t think the environment most CFOs confront is nearly as learning-friendly as that, but it isn’t totally learning-inhospitable either. So I think there is room for traction.
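Tetlock’s point, that “distinct possibility” can’t be scored but numbers can, is usually operationalized with the Brier score, the standard calibration metric in his forecasting tournaments. A minimal sketch, with entirely hypothetical forecast data:

```python
# Scoring repeated numerical forecasts with the Brier score: the mean
# squared difference between stated probabilities and 0/1 outcomes.
# 0.0 is perfect; always saying 50% earns 0.25. Data are hypothetical.

def brier_score(forecasts, outcomes):
    """Average of (probability - outcome)^2 over all forecasts."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A hypothetical analyst's probabilities for five yes/no questions,
# followed by what actually happened (1 = yes, 0 = no).
probs    = [0.05, 0.70, 0.30, 0.90, 0.20]
happened = [0,    1,    0,    1,    0]

print(round(brier_score(probs, happened), 4))  # 0.0465
```

Vague verbiage offers no such feedback loop: only a number can be compared against an outcome, which is why repeated scored forecasts are what make a learning-friendly environment.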

MR. ZWEIG: Your work suggests that you want people to update their forecasts and not be too slow to change their minds, right?

MR. TETLOCK: Yes. A key defining feature of the best forecasters is that they update often, and they typically update by relatively small increments.

Echo chambers

MR. ZWEIG: So how can companies bring some of the specific techniques that you trained the superforecasters in into their organizations? Specifically, how can organizations combat confirmation bias, the tendency to discount information that suggests your beliefs are wrong?

MR. TETLOCK: Some of our superforecasters actually developed programs that guaranteed they could break through the echo-chamber effect on the internet. They developed programs designed to expose them to contradictory information on each of the topics being asked about in the tournament. Selective exposure is a problem. We tend to read the things we agree with. We tend to hang out with people we agree with, and those circles can become echo chambers, which can produce unwarranted extremity in forecasts.

MR. ZWEIG: One other aspect of the work that you’ve done is to train forecasters to incorporate base rates. How does that work?

MR. TETLOCK: So superforecasters aren’t very romantic. If you were with a superforecaster at a wedding and you asked them, “How likely do you think it is this couple’s going to get divorced,” they wouldn’t be enraptured by the spirit of the occasion.

They would say, “Well, almost everybody looks pretty happy at their wedding. That’s not a very diagnostic bit of information. What’s the base rate of divorce for their sociodemographic category?” Somewhere between 25% and 50%, say. And they would base their initial estimate on that base rate, and then they would update that estimate in response to what’s called individuating information, the insider knowledge you have about the particular couple.

So if you knew, say, that the husband was a psychopathic philanderer, you would of course change your probability. And if you had other insider information that pointed in the other direction, you would update. But getting in the ballpark of plausibility is really crucial.
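The procedure Tetlock describes, anchor on the base rate, then adjust for individuating information, is essentially Bayesian updating. A minimal sketch in odds form; the base rate and likelihood ratios below are illustrative assumptions, not figures from the interview:

```python
# Base-rate anchoring plus incremental updating, in Bayesian odds form:
# posterior odds = prior odds * likelihood ratio. All numbers are
# illustrative assumptions, not data from the interview.

def bayes_update(prior, likelihood_ratio):
    """Convert a probability to odds, apply the likelihood ratio,
    and convert back to a probability."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

p = 0.35                      # assumed base rate for the couple's cohort
p = bayes_update(p, 4.0)      # evidence pointing toward divorce (LR > 1)
print(round(p, 3))            # 0.683
p = bayes_update(p, 0.5)      # contrary insider information (LR < 1)
print(round(p, 3))            # 0.519
```

Note how each piece of evidence moves the estimate by a modest increment from the base-rate anchor, matching the superforecaster habit of frequent, small updates rather than dramatic revisions.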


Source: The Wall Street Journal

See also:

Can You See the Future? Probably Better Than Professional Forecasters

5 Ways to See the Financial Future