Monday, 14 September 2009

There’s an 80% Chance That Your Analysis is Wrong, and You Know It

In an interview on the excellent EconTalk podcast, Nassim Taleb, the epistemologist and author of the best-selling books The Black Swan and Fooled by Randomness, gave a statistic that blew me away.

The results of 80% of epidemiological studies cannot be replicated.

In other words, when a research scientist studies the reasons for the spread or inhibition of a disease, using all the research tools at his disposal, and his results are peer-reviewed sufficiently to be published academically, there is still a four-out-of-five chance that predictions based on that theory will be wrong, or useless because circumstances have changed.

Taleb gave some innocent, and some less than innocent, reasons for this poor performance.

On the innocent side of things, he raised a couple of human thinking biases that I’ve talked about before: the narrative fallacy and hindsight bias. In plain language, this combination says that we’re suckers for stories: when we look at a set of facts in retrospect, we force-fit a story to them and assume that the story will hold in the future. Worryingly, as the amount of data and the available processing power increase, so does the chance of finding accidental, random associations that we mistake for genuine explanations of what is going on. In a classic example of this, there’s a data-backed study that appears to show that smoking lowers the risk of breast cancer.
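To make that data-dredging point concrete, here is a minimal sketch in Python (the sample sizes and threshold are illustrative numbers of my own choosing, not anything from Taleb): scan enough pure noise for patterns and, at the conventional 5% significance threshold, roughly one pair in twenty will look “significant” even though nothing real is going on.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_subjects, n_variables = 100, 50
    data = rng.normal(size=(n_subjects, n_variables))  # pure noise: no real associations

    false_hits = 0
    for i in range(n_variables):
        for j in range(i + 1, n_variables):
            r, p = stats.pearsonr(data[:, i], data[:, j])
            if p < 0.05:  # conventional significance threshold
                false_hits += 1

    n_pairs = n_variables * (n_variables - 1) // 2
    print(f"{false_hits} of {n_pairs} pairs look 'significant' by chance alone")

With 50 variables there are 1,225 pairs to test, so around 60 spurious “findings” appear by chance; the more variables you collect, the more such accidents you can harvest.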

On the less-than-innocent side of things, we can of course use data to fool others, and ourselves, into believing that our desired theory is true. Taleb is less kind, calling it the “deceptive use of data to give a theory an air of scientism that is not scientific”.

Even more worryingly, if peer-reviewed epidemiological studies are only 20% replicable, then I dread to think about the quality of the 99.99% of other, significantly inferior, analyses we use to make commercial, personal and other life decisions.

So what is Taleb’s solution, if we aren’t to be doomed to an 80% chance of being wrong about anything we choose to analyse? He advocates “skeptical empiricism”: not just accepting the story, which can give false confidence in conclusions and their predictive power, but understanding how much uncertainty comes with each conclusion, and how broad the range of possible outcomes really is.
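As a toy illustration of that skeptical-empiricist step, a bootstrap resample (a standard statistical technique, not something Taleb prescribes specifically) shows how much a single headline number can wobble; the skewed dataset below is invented for the example.

    import numpy as np

    rng = np.random.default_rng(1)
    sample = rng.lognormal(mean=0.0, sigma=1.0, size=40)  # one observed, skewed dataset

    # Resample with replacement many times and recompute the estimate,
    # to expose how much it varies from sample to sample.
    boot_means = np.array([
        rng.choice(sample, size=sample.size, replace=True).mean()
        for _ in range(10_000)
    ])
    low, high = np.percentile(boot_means, [2.5, 97.5])

    print(f"point estimate:          {sample.mean():.2f}")
    print(f"95% bootstrap interval: [{low:.2f}, {high:.2f}]")

The honest statement is the interval, not the single number; and even the interval assumes the future resembles the past, which is exactly where Taleb warns that fat-tailed surprises hide.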

At the risk of sounding pompous by disagreeing with, and building on, Taleb’s thoughts, I’d say there are three things we can do about this if we stop kidding ourselves and admit the truth of our own biases and inadequacies. First, I think we know when we’re actively seeking a pattern in a set of facts that suits our desired conclusion, or when a pattern we’ve spotted seems too fragile, over-complicated or hard to test; we just need to be honest about how biased we are. Second, we need to be honest about how little we know, and how far wrong we can be, so that we’re ready for outcomes well above or below our confidently predicted ranges. Third, we can design a test, pilot or experiment to find out how wrong or over-confident we were, as in the sketch below.
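The simplest such test is to hold back data that the story was never fitted to, and see whether the pattern survives. In this sketch the data and the over-flexible model are hypothetical stand-ins: a curve fitted to one half of pure noise “explains” that half a little, and falls apart on the other half.

    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.normal(size=200)
    y = rng.normal(size=200)  # y is genuinely unrelated to x

    # Fit an over-complicated "story" to the first half of the data only.
    train, test = slice(0, 100), slice(100, 200)
    coeffs = np.polyfit(x[train], y[train], deg=8)

    def r_squared(xs, ys):
        pred = np.polyval(coeffs, xs)
        return 1 - np.sum((ys - pred) ** 2) / np.sum((ys - ys.mean()) ** 2)

    print(f"in-sample fit:      R^2 = {r_squared(x[train], y[train]):.2f}")
    print(f"out-of-sample fit:  R^2 = {r_squared(x[test], y[test]):.2f}")
    # The flexible curve always fits the data it was tuned to a little,
    # and typically collapses (often to a negative R^2) on the fresh half.

A pattern that only exists in the data used to find it is a story, not a theory.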

Would you rather persuade yourself and other people that you’re right, or would you rather know the truth?

Some related links:
Background on Taleb:
http://en.wikipedia.org/wiki/Nassim_Nicholas_Taleb
Transcript and MP3 of EconTalk’s interview with Taleb:
http://www.econtalk.org/archives/_featuring/nassim_taleb/


Copyright Latitude 2009. All rights reserved.

Latitude Partners Ltd
19 Bulstrode Street, London W1U 2JN
www.latitude.co.uk
