[PREV - COGNITIVE_BIAS_BIASES] [TOP]
SUPERFORECASTING
November 30, 2015
"Superforecasting" (2014) by Tetlock and Gardner
Tetlock and company have been doing some really interesting work,
participating in and winning IARPA tournaments on forecasting
events. The "predictions" involved are both very precisely
defined, and targeted at a region where intelligent estimates are
possible but certainty is not, on timescales suitable for
tournament rounds that last only a few years. Players provide
estimates in terms of percentage of probability, and are scored
in a way that rewards being definite as well as being correct--
and interestingly they are allowed to update their estimates
continuously.
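
For concreteness: the scoring is (as I understand it)
essentially Brier scoring-- squared error on the probabilities
you give, averaged over the days a question is open, which is
what makes continuous updating matter. A rough python sketch
with made-up numbers (the real tournament rules add refinements
like multi-option questions):

  def brier(forecast_prob, outcome):
      # Squared-error score for a binary forecast.
      #   forecast_prob: probability given to the event (0.0 to 1.0)
      #   outcome:       1 if it happened, 0 if it didn't
      # Lower is better: 0.0 is perfect, 0.25 is a permanent
      # shrug of 50%.
      return (forecast_prob - outcome) ** 2

  # Being definite *and* right beats hedging; definite and
  # wrong is worst:
  print(brier(0.9, 1))   # ~0.01  confident and correct
  print(brier(0.5, 1))   #  0.25  maximally hedged
  print(brier(0.9, 0))   # ~0.81  confident and wrong

  # Continuous updating: score each day's standing forecast,
  # then average, so getting there early counts as well as the
  # final call.
  daily = [0.6, 0.6, 0.8, 0.95]   # hypothetical updates over time
  print(sum(brier(p, 1) for p in daily) / len(daily))   # ~0.091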
The thing that gets everyone's attention-- the headline of
nearly any commentary-- is that they've come up with some
remarkably simple "debiasing" training: with just about an
hour of work, they can improve your forecasting accuracy by 10%.
This is *really* big news. For example, Daniel Kahneman
(author of "Thinking Fast and Slow") was very pessimistic
about avoiding cognitive traps through training programs--
he's seen too many examples of trained cognitive scientists
getting taken when they really should've known better.
MEDIUM_SPEED
It's profoundly annoying that the book
"Superforecasting" doesn't focus more on this
debiasing training. The result is mentioned in the
introduction, and the debiasing material is
presented in an appendix (a list of "ten
commandments"), but outside of that there's no
discussion of it throughout the core of the book.

QUESTIONING_DEBIASING
The first thing I wanted to know is: how well
established is this result? Did they compare the
effect of reading these "ten commandments" to
reading any other forecasting advice, for example,
or for that matter to reading the Lord's Prayer?

   (Neustadt and May's "Thinking in Time" is mentioned
   *just* in a footnote-- as a source of info about the
   Cuban Missile Crisis. There's no evaluation of their
   decision-making advice.)

Tracing the references, I found a 2014 paper that
asserts that their debiasing training was compared
to no-training, but I infer that it hasn't yet been
compared to *other* training.

They have, however, done some work comparing
different ways of presenting the material--
written advice, training video, etc. As I remember
it, they all work about as well.

DEBIASING

I submitted a question about this at Tetlock's
Long Now talk. It didn't make the cut. Tetlock only
barely mentioned this debiasing training during his
talk-- when he mentioned it in passing during the
Q&A period, Brand was taken by surprise about it,
and very impressed.

What *should* have been one of the main focuses of
the talk had been sidelined.

As it happens, I have a theory about this (which
will come as little surprise to regular readers):
this key result is not *Tetlock's* result, it comes
from other associated researchers (Mellers, Ungar,
Stone, Luu), so he has a (possibly unconscious)
tendency to downplay it.
--------
[NEXT - SUPERFORECASTING_ISSUES]