Monday 14 September 2009

There’s an 80% Chance That Your Analysis is Wrong, and You Know It

In an interview on the excellent Econtalk podcast, Nassim Taleb, the epistemologist and author of the best-selling books The Black Swan and Fooled by Randomness, gave a statistic that blew me away.

The results of 80% of epidemiological studies cannot be replicated.

In other words, when a research scientist studies the reasons for the spread or inhibition of a disease, using all the research tools at his disposal, and his results pass peer review and are published academically, there is still a four-out-of-five chance that predictions based on that theory will be wrong, or useless because circumstances have changed.

Taleb gave some innocent, and some less than innocent, reasons for this poor performance.

On the innocent side of things, he raised a couple of human thinking biases that I’ve talked about before: narrative fallacy and hindsight bias. In normal language, this combination says that we’re suckers for stories: when we look at a set of facts in retrospect we force-fit a story to them, and we assume that the story will hold in the future. Worryingly, as the amount of data and the processing power available increase, there is a growing chance of finding accidental, random associations that we mistake for genuine explanations of what is going on. In a classic example of this, there is a data-backed study that appears to show that smoking lowers the risk of breast cancer.
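
To make that concrete, here is a minimal sketch of the multiple-comparisons problem behind that point. It is my own illustration in Python, assuming numpy and scipy are available; none of the names, sample sizes or thresholds come from Taleb or the studies he cites. It generates a thousand purely random "risk factors", tests each against an equally random outcome, and counts how many look statistically significant by chance alone.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 500      # observations per candidate factor (illustrative choice)
n_candidates = 1000   # candidate "risk factors", all pure noise by construction

outcome = rng.normal(size=n_subjects)
false_positives = 0
for _ in range(n_candidates):
    factor = rng.normal(size=n_subjects)      # unrelated to the outcome by design
    r, p = stats.pearsonr(factor, outcome)    # test for an association anyway
    if p < 0.05:                              # conventional significance threshold
        false_positives += 1

print(f"{false_positives} of {n_candidates} random factors look significant by chance")

Roughly one in twenty of these meaningless factors will clear the conventional 5% significance bar purely by chance, and every one of them arrives with a ready-made story attached; the more factors we are able to test, the more such stories we will find.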

On the less-than-innocent side of things, we can of course use data to fool others and ourselves that our desired theory is true. Taleb is less kind, calling it the “deceptive use of data to give a theory an air of scientism that is not scientific”.

Even more worryingly, if peer-reviewed epidemiological studies are only 20% replicable, then I dread to think about the quality of the 99.99% of other, significantly inferior, analyses we use to make commercial, personal and other life decisions.

So what is Taleb’s solution if we aren’t to be doomed to an 80% chance of being wrong about anything we choose to analyse? He advocates “skeptical empiricism”: not simply accepting the story, which can give false confidence in a conclusion and its predictive power, but understanding how much uncertainty comes with that conclusion and how broad the range of possible outcomes really is.

At the risk of sounding pompous by disagreeing with and building on Taleb’s thoughts, I’d say there are three things we can do about this, if we stop kidding ourselves and admit the truth of our own biases and inadequacies. First, I think we know it when we’re actively seeking a pattern in a set of facts that suits our desired conclusion, or when any pattern we spot seems too fragile, over-complicated or hard to test; we just need to be honest about how biased we are. Second, we need to be honest about how little we know and how far wrong we can be, so that we are ready for outcomes that fall well outside our confidently predicted ranges. Third, we can design a test, pilot or experiment to find out how wrong or over-confident we were.

Would you rather persuade yourself and other people that you’re right, or would you rather know the truth?

Some related links:
Background on Taleb:
http://en.wikipedia.org/wiki/Nassim_Nicholas_Taleb
Script and MP3 of Econtalk’s interview with Taleb:
http://www.econtalk.org/archives/_featuring/nassim_taleb/



Tuesday 1 September 2009

Put Some Emotion into Your Decision-Making and Analysis


I’m a firm believer that emotion plays a cornerstone role in any decision-making. What’s more, I also believe that strong emotion should be used to stimulate much better analysis about how to improve performance or solve a problem.

While my wife picks herself up from the chair she’s fallen off, making unflattering comparisons between problem solvers, analysts, consultants, coaches, philosophers, scientists and Mr Spock, I’ll give you some context and take some time to explain what I mean by the heresy above.

I was listening last week to a podcast featuring a German philosopher called Sabine Döring. Her area of interest is the philosophy of emotion and its role in decision-making. In her interview, she offered three insights that got me thinking:

1. Emotions are by definition directed at something, which makes them different from moods. For example: feeling sad is a mood; feeling aggressive towards your cheating former lover is an emotion. So I can’t be an emotional or unemotional person, but I can be emotional or unemotional about a particular concept, person or decision.

2. It is ultimately your emotions that determine what matters to you when making a decision. Even in the most mechanical, number-driven decision-making, we still choose and weight different factors based on such aspects as risk-aversion (worry), time-horizon (impatience) and reward (greed). And the vast bulk of decisions, being much less mechanical, require some major value judgments. In fact, if you don’t care about your decision-making criterion, then the whole thing is irrelevant and doesn’t require a decision at all.

3. Recent studies by her colleagues showed that people are generally more creative when happy (counter to the art-house dogma), and more rational and analytic when depressed.

So, contrary to the received wisdom that emotions cloud reason and need to be shoved to the back of our minds when we are trying to be rational, Ms Döring’s musings lead me to a list of insights that I hope can help us become better decision-makers:

1. The better we understand ourselves, the better decisions we can make. I’m not advocating self-indulgent soul-searching here, but I am proposing being alert to and honest about the emotion that motivates each decision (and, yes, greed counts if maximising reward is number one).

2. The more we care about something, the harder we will look for a solution or a way to make it work. There’s a downside to this, of course: we’re tempted to overlook things that run counter to our desired result. This is why one of my few personal rules is to be as emotional about finding the truth as I am about anything else.

3. Playing good cop/bad cop, or happy cop/depressed cop, with a decision will help you get first into a creative frame of mind, to search for ways of making something work, and then into a rational one, to test them. Some of the best management teams I know have permanent happy and depressed cops to create this productive balance.

So there you are: a rationale for more emotion. Hopefully, my photo above shows how emotion fits into my performance.

Copyright Latitude 2009. All rights reserved.

Latitude Partners Ltd
19 Bulstrode Street, London W1U 2JN
www.latitude.co.uk