When it comes to predicting the future, it seems only our failure to consistently get tomorrow right has been steadily predictable, though that may be about to change, at least a little bit. If you don’t think our society, and especially those “experts” whose job it is to steer us through the future, is doing a horrible job, just think back to the fall of the Soviet Union, which blindsided the American intelligence community, or 9/11, which did the same, or the financial crisis of 2008, or, much more recently, the Arab Spring, the rise of ISIL, or the war in Ukraine.
Technological predictions have generally been as bad or worse, and as proof just take a look at any movie or now-yellowed magazine from the 1960s and ’70s about the “2000s.” People have put a lot of work into imagining the future but, fortunately or unfortunately, we haven’t ended up living there.
In 2005 the psychologist Philip Tetlock set out to explain our widespread lack of success at playing Nostradamus. Tetlock was struck by the colossal intelligence failure the collapse of the Soviet Union had unveiled. Policy experts, people who had spent their entire careers studying the USSR and arguing for this or that approach to the Cold War, had almost universally failed to foresee a future in which the Soviet Union would unravel overnight, let alone one in which such massive social and geopolitical dissolution would occur relatively peacefully.
In his book Expert Political Judgment: How Good Is It? How Can We Know?, Tetlock made the case that a good deal of the predictions made by experts were no better than those of the proverbial dart-throwing chimp, that is, no better than chance across three possible outcomes, or, for the numerically minded among you, 33 percent. The most frightening revelation was that often the more knowledge a person had on a subject, the worse, rather than the better, their predictions became.
The reasons Tetlock discovered for the predictive failure of experts were largely psychological and all too human. Experts, like the rest of us, hate to be wrong and have a great deal of difficulty assimilating information that fails to square with their world view. They tend to downplay or misremember their past predictive failures, and they deal in squishy predictions without clearly defined probabilities: qualitative statements that are open to multiple interpretations. Most of all, there seems to be no real penalty for getting predictions wrong, even horribly wrong. Policy makers and pundits who predicted a quick and easy exit from Iraq, or Dow 36,000 on the eve of the crash, go right on working with a “the world is complicated” shrug of the shoulders.
What’s weird, I suppose, is that while getting the future horribly wrong has no effect on career prospects, getting it right, especially when the winds had been blowing in the opposite direction, seems to shoot a person into the pundit equivalent of stardom. Nouriel Roubini, or “Dr. Doom,” was laughed at by some of his fellow economists when he predicted the implosion of the banking sector well before Lehman Brothers went belly up. Afterwards he was inescapable, a common image being of the man, whose visage puts one in mind of Vlad the Impaler, or better Gene Simmons, surrounded by four or five ravishing supermodels.
I can’t help thinking there’s a bit of Old Testament-style thinking sneaking in here, as if the person who correctly predicted disaster were so much smarter, or so tuned into the zeitgeist, that the future is now somehow magically at their fingertips, when the reality might be that they simply had the courage to speak up against the “wisdom” of the crowd.
Getting the future right is not a matter of intelligence, but it probably is a matter of thinking style. At least that’s where Tetlock, to return to him, has gone with his research. He has launched the Good Judgment Project, which hopes to train people to be better predictors of the future. You too can make better predictions if you follow these five rules:
- Comparisons are important: use relevant comparisons as a starting point;
- Historical trends can help: look at history unless you have a strong reason to expect change;
- Average opinions: experts disagree, so find out what they think and pick a midpoint;
- Mathematical models: when model-based predictions are available, you should take them into account;
- Predictable biases exist and can be allowed for. Don’t let your hopes influence your forecasts, for example; don’t stubbornly cling to old forecasts in the face of news.
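Two of the rules above, averaging the opinions of disagreeing experts and correcting for bias by scoring yourself against what actually happened, can be sketched in a few lines of code. This is a minimal illustration with made-up numbers, not anything from the Good Judgment Project itself; the Brier score used here is, however, a standard way forecasting tournaments grade probability forecasts.

```python
def average_forecast(expert_probs):
    """Rule 3: experts disagree, so average their probabilities."""
    return sum(expert_probs) / len(expert_probs)

def brier_score(forecast, outcome):
    """Squared error between a probability forecast and reality
    (outcome is 1 if the event occurred, 0 if not). Lower is better."""
    return (forecast - outcome) ** 2

# Three hypothetical experts disagree about whether an event will occur.
experts = [0.2, 0.5, 0.8]
consensus = average_forecast(experts)  # 0.5

# Suppose the event happens (outcome = 1).
outcome = 1
print(brier_score(consensus, outcome))            # 0.25
print([brier_score(p, outcome) for p in experts]) # individual scores
```

Note that the averaged forecast’s Brier score (0.25) beats the average of the individual experts’ scores (about 0.31): because squared error is convex, the consensus forecast can never score worse than the experts do on average, which is one reason the “pick a midpoint” rule works.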
One thing that surprises me about these rules is that apparently policy makers and public intellectuals aren’t following them in the first place. Obviously we need to do our best to make predictions while minimizing cognitive biases, and we should certainly be able to do better than the chimps, but we should also be aware of our limitations. Some aspects of the future are inherently predictable; demographics works like this: barring some disaster, we have a pretty good idea what the human population will be at the end of this century and how it will be distributed. Or at least demographics should be predictable, though sometimes we just happen to miss 3 billion future people.
For the things on our immediate horizon we should be able to give pretty good probabilities, and those should be helpful to policy makers. It has a definite impact on behavior whether we think there is a 25 percent probability that Iran will detonate a nuclear weapon by 2017 or a 75 percent chance. But things get much more difficult the further out into the future you go, or when you’re dealing with the interaction of events, some of which you didn’t even anticipate: Donald Rumsfeld’s infamous “unknown unknowns.”
It’s the centenary of World War I, so that’s a relevant example. Would the war have happened had Gavrilo Princip’s assassination attempt on Franz Ferdinand failed? Maybe. But even had the war only been delayed, the timing would most likely have affected its outcome. Perhaps the Germans would have been better prepared and so swiftly defeated the Russians, French, and British that the US would never have become involved. It may be that German dominance of Europe was preordained, and therefore, in a sense, predictable, as it seems to be asserting itself even now, but it makes more than a world of difference, not just in Europe but beyond, whether that dominance came in the form of the Kaiser, or Hitler, or, thank God, Merkel and the EU.
History is so littered with accidents, chance encounters, and fateful deaths and births that it seems pretty unlikely we’ll ever get a tight grip on the future, which in all likelihood will be as subject to these contingencies as the past. The best we can probably do is hone our judgment, as Tetlock argues; plan for what seems inevitable given trend lines; secure ourselves as well as possible against the most catastrophic scenarios, based on their probability of destruction (Nick Bostrom’s view); and design institutions that are resilient and flexible in the face of a set of extremely varied possible scenarios (the position of Nassim Taleb).
None of this, however, gives us a graspable and easily understood view of how the future might hang together. Science fiction, for all its flaws, is perhaps alone in doing that. The writer and game developer Adrian Hon may have discovered a new, and has certainly produced an insightful, way of practicing the craft. To his recent book I’ll turn next time…
Hi, Rick
Have you heard of the Delphi method for forecasting?
http://en.wikipedia.org/wiki/Delphi_method
It seems somewhat like a structured technique to accomplish some of the rules you list above.
Hi James,
No, I hadn’t heard of it. The Good Judgment Project is pretty structured, though. They have a program where participants are taught how to be a “superforecaster,” with the cool thing being that anyone can apply for the training. I am thinking about it myself:
http://goodjudgmentproject.com/
Supposedly their results have been quite good, though it’s early days.
I am highly skeptical, though, of any predictions that are more than even a couple of years out. There are far too many potential black swans for that to be possible, and too many interconnections we cannot see. We just need to prepare for a flexible response to a variety of alternative futures.