Friday, 6 November 2009

So How Do I Improve Service Then?

In my last post (here), I showed that, all else being equal, better service equals higher growth and bigger profits.  One implicit, but counter-intuitive, element of this is that improving service regularly results in lower costs.

The obvious question that comes to mind for the practical person is, “So how do I improve service then?”

The temptation is to dive straight into benchmarking, systems and process improvement, and incentives.  However, there are three steps I’d advise taking first, which will save an awful lot of work and waste down the line.

Customer Service Performance - I’ll cover it one word at a time.

1.    Customers

Your customers will not be one homogeneous group.  You’ll have different types, with different levels of value to you and different performance requirements, who will rate your service accordingly.  One simple way to improve your service and profit performance is just to focus on your profitable, amenable customers.

To start, you need to make your definition of customer groups as relevant as possible.  Sometimes the obvious splits (small/medium/large, sector, etc) are valuable.  But additional thought about how you group your customers can make the exercise much more insightful.  For example, you could segment by who the decision-maker is: purchasing professionals usually care much more about total cost at specified service levels, whereas people in operating roles care most about service support at a competitive price.

If you look at the value of the different groups (their lifetime contribution less the cost of acquiring them – a very different and more relevant number than monthly profitability, which ignores acquisition cost and loyalty), you’ll see who values you most.  Chances are you’ll have groups with very different value, and within each group a range of service performance and profitability.  You’ll have a series of undemanding, high-value groups.  You’ll also have some groups of very unprofitable or hard-to-please customers.
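
To make the distinction concrete, here's a minimal sketch of that lifetime-value calculation.  The group names and every number below are purely hypothetical, chosen only to show the mechanics.

```python
# Lifetime value vs monthly profitability: all figures are illustrative
# assumptions, not real client data.

def lifetime_value(monthly_contribution, expected_tenure_months, acquisition_cost):
    """Contribution over the customer's expected lifetime, net of acquisition cost."""
    return monthly_contribution * expected_tenure_months - acquisition_cost

# Two groups that look identical on a monthly margin report...
demanding_giants = lifetime_value(1_000, expected_tenure_months=12, acquisition_cost=15_000)
loyal_regulars   = lifetime_value(1_000, expected_tenure_months=60, acquisition_cost=2_000)

print(demanding_giants)  # -3000: value-destroying once acquisition cost is counted
print(loyal_regulars)    # 58000: the group to focus on
```

Monthly profitability ranks the two groups identically; only the lifetime view, net of acquisition cost and adjusted for loyalty, separates them.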

Your first decision comes now.  Which groups do you want to focus on?  Can you turn around the profit and service performance of the nightmare group? Should you – is it worth it?  If you drop a group, does the cost of serving the rest go up or down? Many of our clients have turned around their entire service and profit performance by making some hard-headed decisions at this stage, ditching certain customer groups that were just too hard to acquire and serve profitably, and whose demands caused service problems across the entire edifice.

2.    Services

If you have multiple service or product lines, this is the same exercise as the one for customers above.  Again, the key insight here is to look at the value of a product or service over its lifetime including development costs, and not the regular monthly margin analysis that already appears in Board reports.

Again, you need to make some decisions on unprofitable service lines, and on products that diminish your service reputation.  However, service line decisions are generally less clear-cut than customer decisions.  Early-stage products commonly have poor service performance and poor profitability, and some apparently profitable products give you such a bad reputation that your word-of-mouth marketing turns negative, so you need to think through the full strategic impact before getting out the hatchet.

3.    Performance

Now that we’ve got a handle on how service level and profit differ by customer and product, and have decided who we want to serve with which services, we can ask the more operational question of “what do we mean by performance?”, and identify customers’ most important concerns.  The key customer service concern in each of our last five client engagements has been completely different: active account management, short waiting times, access to technical expertise, personalised offers, and up-time reliability.

Finding this concern is generally a matter of talking to customers and listening to what’s important to them – ask a broad question about “service” and they will respond by talking specifically about what matters to them.  One of our favourite ways of cross-checking this is to ask a series of customers about our client’s strengths and weaknesses, and to count up the mentions of each issue on both lists – the issues that matter most to customers generally find their way to the top of both.
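
That cross-check is just a tally of mentions.  A toy sketch, with entirely made-up interview notes, might look like this:

```python
from collections import Counter

# Hypothetical interview notes: the issues each customer mentioned as a
# strength, and as a weakness, of the client.
strength_mentions = [
    ["account management", "technical expertise"],
    ["account management", "price"],
    ["account management"],
]
weakness_mentions = [
    ["account management", "waiting times"],
    ["account management"],
    ["waiting times"],
]

strengths = Counter(issue for notes in strength_mentions for issue in notes)
weaknesses = Counter(issue for notes in weakness_mentions for issue in notes)

# The issue at the top of *both* lists is the one customers care most about.
print(strengths.most_common(1))   # [('account management', 3)]
print(weaknesses.most_common(1))  # [('account management', 2)]
```

The interesting signal is an issue that tops both lists: customers praise you when you get it right and complain when you don't, which marks it as the thing they actually care about.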

Unless you are highly diversified, there will generally be one big thing to get right that addresses all the major concerns, and if you do this to a level of excellence, then everything else follows.  The big thing for a famous airline service turnaround was making sure the planes took off on time; the big thing for one of our tech clients was 100% server availability.

This approach of focusing on the one big thing has an additional benefit of automatically deprioritising wasteful or unvalued activity, which is the flip-side of the same service improvement coin.

If you can find time to take these three steps before diving into operational improvements, then you save yourself a mountain of unnecessary work down the line.  To give you an idea of how valuable it can be, I’ll relate the experience of a client of ours.  After the customer assessment, they decided to drop a previous focus on very large customers whose size, complexity and purchasing structure made those customers expensive to acquire, expensive to serve, and very disloyal.  They dropped a product line whose gross profit looked very attractive, but whose customer acquisition cost made it value-destroying, and whose performance was damaging the company’s reputation.  They then focused their service performance turnaround on the key issue of account management, which enabled them to more effectively address other customer concerns about accessing technical expertise and designing bespoke solutions.  In one year, they have doubled total profit and are currently the industry’s service and profit leader.

With these high-level decisions made, the other steps of process improvement, systems support, incentives and devolved decision-making responsibility are much simpler and well-targeted.  I’ll save those for a future post.



Latitude Partners Ltd
19 Bulstrode Street, London W1U 2JN
www.latitude.co.uk

Thursday, 5 November 2009

Does better service really lead to bigger profits?

There's some strong received wisdom that better service is somehow financially a good thing.  But the piecemeal data that supports this view is unconvincing: it is often put together by vested interests, such as consultancies that charge fees to help improve customer service, or by idealists who just want it to be true.  It's easy to point to high service companies that generate outstanding results.  But this is a biased and self-selecting exercise, because the opposite is also true - there are lots of inconvenient examples of low service companies making outstanding returns.  Would you say that Ryanair has better service than BA?  No?  So how come it makes more money then?  Service clearly isn't the be-all-and-end-all.  Companies have different business models, different customer bases, different competitors, regulatory environments, sources of economic rent, etc, etc, all of which also affect their financial performance.

To get anywhere on this, we've got to start by comparing like with like.  Here (below) is the most unpolluted evidence I've seen.  This company rents out white vans, and does this through a series of local rental companies.  They're of similar sizes, they all do the same thing, for very similar customers, with the same fleet, and the same rate cards.  But if you look at the performance of the different rental companies, you see two striking things.  First, the ones with better service have better growth rates.  Makes sense.




Here's the more interesting chart.  The ones with better service also have better profitability.







So the ones providing better service, providing more value to their customers, at the same price as the poor service providers, are the ones that also make more money.  And in case you’re wondering about causality – when the worst performer addressed its service problems to climb up the chart, its costs went down as growth went up.

We can speculate all day about the reasons.  Here are a few common explanations:
  • Word-of-mouth marketing: happy customers recommend you for free, and reduce your sales or marketing costs.
  • Better customer retention: there's good evidence that better service leads to higher loyalty (repurchase) - Lexus is commonly perceived to be the highest service car provider in the US and has repurchase rates of 63% versus 30-40% for most other brands.  
  • Lower cost to serve: it may simply be easier to serve the same regular, repeat customers at lower cost - think about how efficient it is at your local coffee shop when you get to the front of the queue, you have the exact money ready, and they have your regular drink ready.
  • Better customers: higher service companies may be the ones that sell to better customers - who'd disagree that more agreeable, cooperative, organised customers make better service a lot easier?
I could carry on speculating about underlying reasons, but that’s an intellectual exercise.  What matters here is that, all else being equal, with better service you make more money.  So instead of speculating about why, it’s more useful to accept the link and ask: how do we improve service?

It's tempting to look for a process answer here, following visions of efficient, flawless, repeatable mechanisms.  I’m not saying process improvement doesn’t help, but the highest service companies in our charts were also the most informal, with rule-bending and exceptions happening all the time, and none of these had ever been through a process improvement exercise.
 
For an answer I can relate to, I'll bow to the wisdom of the most credible person I've heard talk about this subject, an impressive man called David Neeleman.  He was the founder of JetBlue Airways, which at the time I heard him speak was the lowest cost and the highest-rated service airline in the US for the second year running.  Even in the year of its infamous ice storm crisis where passengers were stuck on planes for up to 8 hours, it still came top in national service surveys.

Mr Neeleman's explanation of JetBlue's excellence was all about service attitude at the top and the bottom.  At the top, his own practice as CEO was to take one trip per week on a JetBlue flight, on which he served as cabin crew during the flight, helped with the bags at the airport, and was an obvious and visible role model of the importance of service.  At the bottom, JetBlue's recruiting practice focused on hiring courteous people who genuinely cared about service.  Candidates' attitudes to other people were observed closely: did they hold the door open for other people; were they pleasant to the receptionist?  JetBlue also measures in the middle, using Bain's highly regarded Net Promoter Score system; Bain has shown excellent evidence of NPS benefits, though Mr Neeleman didn't talk about this at all.
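
For reference, the Net Promoter Score itself is a simple calculation: the percentage of promoters (scoring 9-10 on a 0-10 "would you recommend us?" question) minus the percentage of detractors (scoring 0-6).  A sketch, with hypothetical survey responses:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Ten hypothetical responses: 4 promoters, 3 passives (7-8), 3 detractors
print(nps([10, 9, 9, 10, 8, 8, 7, 6, 5, 2]))  # 10.0
```

Note that passives (7-8) count in the denominator but cancel out of the numerator, so the score rewards delight and punishes disappointment while ignoring the merely satisfied.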

So, recruit a team that cares about service, supported by a leader who continually reiterates its importance and acts as a role model for service excellence, then let that group of people work out how to take it from there.  This is clearly only the tip of the iceberg on service improvement, and I'll expand on it with evidence from some of our clients’ successes on another day.

But I want to get back to my main point.  Using the best evidence I know about service, it warms my heart that everyone wins - value begets value – and that better service does lead to better rewards.

Relevant links:

About David Neeleman
http://en.wikipedia.org/wiki/David_Neeleman

About Net Promoter Score
http://www.netpromoter.com/netpromoter_community/index.jspa


Wednesday, 28 October 2009

Six Common Mistakes People Make When Analysing Markets


Market analysis is a difficult science.  So it's not surprising that most attempts that we review contain mistakes and gaps that put the whole analysis, and its consequences for the business, into question.  Even worse, it's our view that the strategy gurus who propose and push high level market analysis models inadvertently cause many more problems than they solve, and are probably the single biggest group of culprits causing shoddy market assessment.

Here are six common mistakes. 

1. The analyst looks only at a macro level

It's important to look at markets at a high level, at growth, trends, competitive environment, etc, and the macro view is what the textbooks prescribe.  But looking only at this level misses what for us is the most frequent area of critical insight: what customers and potential customers value and pay for now; and what they are going to value and pay for in the future.  There is one place that the revenue and profit pool of a market is going to come from, and that is customers' spend, budgeted or unbudgeted.  There is one source of revenue and profit for each individual supplier to that market, and that is winning some or all of that spend.

Unless you're a cartel, monopoly or lobby group, then you're in the business of providing value for value.  You need to know what results customers need to achieve, how they propose to achieve them, how they decide who will support them in doing so, and the value they place on that support.

The clue to where the customer places most value?  Where they intend to spend money.  That spend is either unidentified, consisting of a price they are willing to pay to solve a problem, or identified in the form of next year's budget.

This micro analysis has enormously more business value than the macro-level equivalent of quantifying emergent (unbudgeted) and established (budgeted) markets.  What would you rather know: the macro overview that the market has grown by 4% with a trend to cloud computing, or that your target customers are under pressure to reduce infrastructure support costs by an average $15m and will pay 70% of the savings to reliable outsourced suppliers with reputations for service responsiveness?

2. Where micro analysis is done, it is done cursorily, uncommercially, or just plain badly

Talking to customers about their plans, desired results, budgeted spend and supplier requirements is a golden opportunity to understand some critical facts: where customers place future value, the corresponding source and size of a pool of profit for you, and how to win that business.

Unfortunately, most analysts miss this golden opportunity by delegating the exercise to market researchers or untrained, poorly-briefed graduates.  These people are briefed to ask mindless, well-trodden questions about likes and dislikes, strengths and weaknesses; or they get interviewees to rank elements of the value proposition in terms of importance and performance.  If you ever listened in to one of these interviews, you'd be shocked by its tedium and superficiality.  The consequent information is sometimes useful.  But it's only useful if it supplements some much more important and tangible information: which elements of the budget are growing or shrinking, and by how much; who the budget holder is and how they make buying decisions; what causes people to stay with or switch from incumbent suppliers; what characteristics suppliers will need in the future to win business; and how the supplier can help them make or save more money.

Asking questions like these takes skill and commercial acumen, but their answers are worth more to you than every market research report you could ever commission. 

3. The analyst doesn't take enough care to define his market

"What market am I analysing?" is a much more important and difficult question than it looks at first sight.  Let's take an imaginary organic pet food manufacturer, based in Wales.  Does my market include every potential customer of pet food, even though my product costs three times as much as the non-organic market leader?  Or is it organic pet food, which is defined by the product and some kind of customer sentiment?  Or is it premium pet food, defined by some kind of price and quality level?  Do we include or ignore the supermarket customers that we will never access because we're too small to be stocked by the big chains? How much of the UK do we count as our market? Do we include continental Europe?  If so, how much of it?  Do we include dry food, even though we only do wet? Etc, etc.

The reason I've banged on with this definition example is that every definition I suggested gives you a completely different market, with a different size, growth, trends, competitive set, etc.  And the definition you use for one purpose, say understanding how your core customer group is growing, will likely be different for another equally valuable purpose, such as how big demand could be if you cut prices by 30%.

The damaging temptation is always to ignore these factors and define the market according to what data is available, which gives you a substantiated quantification of a (probably) irrelevant market.  In our experience, you are almost always better off taking care to define your market to be as relevant as possible to your situation, and accepting that you will need to use bottom-up assumptions to estimate the market's size, structure and growth.
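
A bottom-up estimate of that kind is just a chain of explicit assumptions multiplied together.  Taking the imaginary Welsh organic pet food maker from above, a sketch might look like this; every figure is a made-up assumption, there only to show the mechanics:

```python
# Bottom-up market sizing for the imaginary organic pet food maker.
# Every number is an illustrative assumption, not researched data.

uk_dog_and_cat_households  = 12_000_000  # assumption
share_buying_premium_food  = 0.15        # assumption: willing to pay a premium
reachable_channels_share   = 0.40        # assumption: excludes chains we can't access
annual_spend_per_household = 300         # assumption: GBP per year on premium food

market_size = (uk_dog_and_cat_households
               * share_buying_premium_food
               * reachable_channels_share
               * annual_spend_per_household)

print(f"£{market_size:,.0f}")  # £216,000,000
```

The point isn't the answer; it's that each assumption is visible, challengeable, and tied to your chosen market definition, which a top-down data-availability number never is.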

4. The analysis obsesses about the competitive environment, to the exclusion of all else

Michael Porter's five forces are a very useful checklist, core competence/strategic intent is a useful mindset, the disciplines of market leaders and the 7S framework can help create insight.  I'll keep my thoughts about The Art of War to myself.  These common strategic tools can have value - competitive intensity is usually the most dominant driver of margins - but they're not the whole toolbox.  They usually only cover the competitive side of the picture and miss such fundamental issues as whether demand is shrinking or growing, what customers actually value and plan to pay for, and what your company actually, distinctively offers.

This obsession with macro competitive positioning distracts the analyst or manager both from the demand side and from the micro analysis that solves the germane issue: what you actually need to do to make more money.

5. The analysis places far too much confidence in forecasts

Most analyses that we see use a single forecast for the future; no scenarios, no what-ifs, no ranges.  Even worse, that forecast is usually a projection of the recent past with little thought to what might drive any changes, or what the leading indicators are, or anything that might tell us what confidence we have in our estimates.

There is one thing that we can be sure about with all of our forecasts, and that is that they will be wrong.  If we took a moment to analyse the success of our historical attempts at forecasting individual markets, we would all be humbled by our enormous margin of error.  As humans, we regularly and grossly overestimate our ability to predict the future.  If our business relies on such forecasting performance, with no margin for the likely large error, then there is a high probability that it will be seriously compromised.

It's therefore wise to be realistic about our ability to forecast and act accordingly.  We have to accept that we cannot predict the future; we can only prepare for it.  So there is much more value in acknowledging our lack of prescience and developing a series of scenarios.  Of course we need budgets and targets, but the discipline of working through how we can survive the disaster scenario, or how we can generate sufficient capacity in the optimistic case, is of more tangible value than complacently forecasting and hoping that we're right.
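
In practice, replacing a single-point forecast with scenarios can be as simple as carrying a range through the plan.  A trivial sketch, with illustrative growth assumptions:

```python
# Scenario ranges instead of a single-point forecast.
# Growth rates and the revenue base are illustrative assumptions.

revenue_today = 50_000_000  # assumption: current annual revenue

scenarios = {
    "disaster":   -0.15,  # what do we cut to survive this?
    "base":        0.04,  # the budget case
    "optimistic":  0.12,  # do we have the capacity to serve this?
}

for name, growth in scenarios.items():
    projected = revenue_today * (1 + growth)
    print(f"{name}: {projected:,.0f}")
```

The arithmetic is deliberately trivial; the value comes from forcing the questions attached to each branch, rather than betting the plan on the middle number being right.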

6. There is a disconnect between market analysis and sales forecasts

I've lost track of the number of plans I've seen where the market grows at one rate, and the business grows at a completely different rate, almost always faster than the market.  There's rarely any justification for this implied share gain.  It's possible if the company has just entered a market, or if its product is suddenly better, or if it has a new channel, or if it has something else new and advantageous, or if its competitors have decided to lie down and let it win some of their pitches.  Our default position is to assume no share gain unless there's a very good reason to assume otherwise.  Anything else generates cognitive dissonance in our rational analytical minds.


So there you have it.  Six common mistakes, any one of which can cause a market analysis to be unhelpful, devalued or just plain misleading.  We see many, many more mistakes, but the list is too long to cover in this forum.

I hope by raising them that we have averted some problems and implied some solutions.  I don't have a catch-all unbreakable golden rule for analysing markets effectively.  But my best one is this: business transactions are about providing value and being rewarded for doing so, so to understand the market you need to look for where that value is, and follow the money.



Monday, 19 October 2009

Why Let Facts Spoil the Narrative?

I've just finished reading a very shaky due diligence report, in which one of the key questions to review was the impact of the recession on the company under investigation.  The report author covered the beneficial effects of recession on services (comparable to those of the target company) that people mainly use in their own homes.  The report talked about increases in satellite and cable TV subscriptions, growth in Dominos deliveries, and the trend to "staycations", and implied that, as a result, everything would be OK.

A question kept coming to my mind as I ploughed through this nonsense: "But the recession has been happening for about a year - why don't they just look at what's been happening to the company?".  The analyst could have looked at sales, customer churn, average customer value, new sign-ups.  And they could have looked at them before, during, and after the recession (now we're coming out of it for a short while).  They could have compared the company's performance to changes in disposable income, or employment, or interest rates, or consumer confidence.  They could have just looked at whether they rose or fell.  The data was available and staring them in the face; but they didn't look at any of the vast array of facts at their disposal, and instead indulged in this staycation narrative.  I'll let you guess whether the facts confirmed, contradicted, or made irrelevant, the report's conclusions.

I kept asking myself why any sane analyst would display such disregard for information.  After some reflection on examples of similar behaviour, here's my conclusion: given the choice between some compelling facts and a compelling narrative, people will often prefer the narrative.  From everyday observation, there are countless examples of people ignoring or skimming over facts that might get in the way of a good story.

This preference can be, literally, fatal; and if you'll indulge a longer-than-usual post, I'll illustrate it with a historical example.

A nineteenth-century physician called Ignaz Semmelweis analysed the high incidence of childbirth mortality from puerperal fever in one of the wards of Vienna General Hospital.  He noticed that puerperal fever incidence was high in wards where the same doctors also conducted post-mortems, and showed that if doctors washed their hands with chlorine solution after working with cadavers, then puerperal fever incidence declined dramatically.

His data is hard to challenge:



Unfortunately, Semmelweis's facts didn't fit the narrative of the day.  Prevailing theories of health related to the balance of the four humours of the body, and the role of "bad air" in the spread of disease.  In fact, his implication, that lack of cleanliness in the surgeon was a cause of the disease spreading, was considered insulting to the gentlemen who administered medicine and surgery.

Semmelweis was roundly criticised, and his observations and recommendations were dismissed by the mainstream, despite their obvious life-saving results.  It was only after Pasteur's work on germ theory became accepted 20 years later that the establishment embraced the findings of the by-then-dead Semmelweis.

So, coming back to my point.  There will always be an accepted or acceptable narrative to explain anything, be that the four humours of nineteenth-century medicine, or the various dubious adspeak marketing theories we hear today.  We can pretty much guarantee that by blindly following the narrative, we will be proven to be as gullible, closed-minded and wrong as those olden-day physicians.  Alternatively, we can ignore the narrative for a moment, and just have a quick look at the facts...

Copyright Latitude 2009. All rights reserved.


Related links.

On Ignaz Semmelweis
http://en.wikipedia.org/wiki/Ignaz_Semmelweis#Ideas_ran_contrary_to_established_medical_opinion

On truth, bias and disagreement
http://www.econtalk.org/archives/2009/03/klein_on_truth.html




Wednesday, 14 October 2009

So-What (SWOT) Analysis



One thing that makes my palms sweat when reviewing a business plan or strategy document is a SWOT analysis.  Reading one of those two-by-two tables makes me think that my generally very smart, commercial, rational clients have decided to illustrate what they learned at primary school, or have accidentally inserted a no-idea-is-stupid whiteboard printout from the start of a brainstorming session.

I'm not saying that SWOT doesn't have its place at the beginning of the strategy development process; it does, especially if you start with the "O".

O, for opportunity, forces you to take a moment to look around and speculate where the future pools of profit might be, which is especially useful for bringing out those areas that you're currently not doing anything about.

S(trengths) helps you realise where your sources of competitive advantage might lie.

W(eaknesses) forces you to be realistic about what may need improving, and traits that might put you at a disadvantage.

T(hreats) forces you to look at those things coming over the horizon that might sink you below the water line.

This forced lookaround for factors that may be important is, in my experience, the entire benefit of SWOT.  But it's only of value if you go on to test properly which ones are true and material.  Unfortunately, most plans that I see stop with the SWOT output, and bung the list unqualified into the document.  This is worse than useless; it's foolhardy, because it can set in train a series of actions that are based on barely-substantiated speculation.

From the long list of strengths, weaknesses, opportunities and threats that emerge in the SWOT analysis, how do you know which are actually true as opposed to speculation? Which are material and will affect the entire future of your business, and which are pretty much irrelevant?  Which ones should you deliberately not do something about, for example the weakness in high-end products that would kill your cost advantage if you addressed it?  How do you know which opportunities are the ones to put time and money into, and which are the ones to deprioritise?

If you recognise SWOT's limitations, and treat it as a start point, from which you do some testing with facts, then you can create something valuable from this motley list of brainstormed hypotheses.

Start with the opportunities and ask some standard commercial questions.  How big are they? How well positioned are we to exploit them versus everyone else?  How much does it cost to start exploiting each of them?  How sustainable is the profit stream that comes from each?  Which of them is the most valuable use of a dollar of investment or an hour of management time?  If the business case of any one of them stacks up, what do we do next to get there?

Do the same kind of reality check and so-what test with the strengths, weaknesses and threats.  And you will end up with a short list of credible opportunities and actions, which I promise will pay back the additional time a hundred-fold.

You'll also have fewer business plan readers with sweaty palms asking "so what?".




Sunday, 11 October 2009

You Can't Take Vision to the Bank

She sat, watching him in the manner of a scientist: assuming nothing, discarding emotion, seeking only to observe and to understand.

A description of Dagny Taggart, “Atlas Shrugged”, by Ayn Rand

Earlier this year I worked with two companies that couldn’t be more different.

Company one is one of the most respected names in the FTSE, and operates in a classic recession-proof sector.  Company two is an unknown business in an unfashionable declining sub-segment of the telecoms sector.

Company one’s management team is smart and sharp, and would be intimidating if they weren’t such pleasant people.  The Directors have a cadre of direct reports who, to my initial and ongoing bemusement, make sure that everything that reaches the Directors is high level, conceptual and visual.  When working with us, one tried to insist that our presentations contained less data and more pictures – pictures, for God’s sake!  But, the thing is, these people weren’t acting dysfunctionally – in every meeting with us, the company Directors dwelt on and debated the concepts and vision, and seemed to skip very quickly over all of our data and analysis.

Company two’s management team is one of the most uninspirational I’ve ever met.  The top two Directors could pass as the two main characters in Peep Show.  But these guys love their numbers.  Every question we asked them in our work with them was answered with numbers, supported by a flood of analysis.  The business is managed using a set of KPIs that would have a quantitative analyst in paroxysms of delight.  Everything is tested, everything is monitored.

Company one has had flat sales in a growing market, and so seen its share decline pretty much continually for the last ten years.  But it now has a striking vision of industry leadership for the future, which might work.  You never know.

Company two has grown revenue and profit more than 20% annually in the time since the management team came on board.  This isn’t from harvesting – new services launched in the last three years now make up about 25% of profit.   Company two’s vision – I’m quoting exactly here – “we’ll try a bunch of things and see what the numbers tell us”.

I think I’ve made my point.  Vision can be appealing; numbers count.



Friday, 9 October 2009

Burden of Proof


A long-established business I know is being mauled by its competitors. To get out of an apparently terminal decline, this year it threw millions into consulting fees. Following the consultants’ advice, it plans to throw hundreds of millions into systems and other investments. Management admits that the business case following the advice was built on hard-to-test, unresearched assumptions that could turn out to be very inaccurate. However, the Board made a decision to go ahead and agreed on an implementation plan.

Shortly after, my company was asked to look at the business case for expanding a currently small existing scheme that captures customer information for use in more effective marketing, range development and profitability analysis. The business case for this is a slam dunk: the existing smaller scheme is already highly profitable, investment is minuscule, roll-out can be tested with a low-cost pilot, payback is less than a year, the scheme adds hundreds of millions to shareholder value, and all the high growth competitors run similar schemes. Without the scheme, the company has no idea about customer profitability or marketing effectiveness, and is at a material disadvantage to competitors who already use the same kind of information from their own schemes to steal its loyal customers.

Management is going to reject the project. Why? Because they may be able to get "almost-as-good" additional customer information as a result of the plan they’ve already decided on (the one that costs hundreds of millions and is based on self-confessed shaky data). And, if that plan turns out well, they may be able to capture many of the same benefits from the scheme we tested. If the company captures these benefits, then the remaining incremental benefits of our scheme are just too small.

So management is going to make a terrible decision. This isn’t because of bad economics – I don’t disagree with the marginal benefit argument. They’re going to make a bad decision because of where they place the burden of proof. They’ve taken as read the shaky assumptions from the decision they’ve already made, and put the burden of proof on how another scheme can improve on that.

The lesson for us in all of this? We tend to place the burden of proof on the uncomfortable choice: the new, the unfamiliar, the thing that may cause us to change our ways. We won’t make good decisions unless we put the same burden of proof on all of our options, including what might happen if we don’t change our ways.

If we’re the frogs floating in the slowly-heating pan of water, when are we going to look at the risk of staying in the pan as hard as we look at the risk of jumping out?

Copyright Latitude 2009. All rights reserved.

Latitude Partners Ltd
19 Bulstrode Street, London W1U 2JN
www.latitude.co.uk

Monday, 14 September 2009

There’s an 80% Chance That Your Analysis is Wrong, and You Know It

In an interview on the excellent Econtalk podcast, Nassim Taleb, the epistemologist and author of the best-selling books The Black Swan and Fooled by Randomness, gave a statistic that blew me away.

The results of 80% of epidemiological studies cannot be replicated.

In other words, when a research scientist studies the reasons for the spread or inhibition of a disease, using all the research tools at his disposal, and is peer-reviewed sufficiently for his results to be published academically, then there is a four-out-of-five chance that predictions using that theory will be wrong, or useless because of changed circumstances.

Taleb gave some innocent, and some less than innocent, reasons for this poor performance.

On the innocent side of things, he raised a couple of human thinking biases that I’ve talked about before: narrative fallacy and hindsight bias. In normal language this combination says that we’re suckers for stories, and when we look at a set of facts in retrospect we force-fit a story to it and we assume that the story will hold in the future. Worryingly, as the amount of data and the processing power increase, there is an increasing chance of finding accidental and random associations that we think are genuine explanations of what is going on. In a classic example of this, there’s a data-backed study that shows that smoking lowers the risk of breast cancer.

On the less-than-innocent side of things, we can of course use data to fool others and ourselves that our desired theory is true. Taleb is less kind, calling it the “deceptive use of data to give a theory an air of scientism that is not scientific”.

Even more worryingly, if peer-reviewed epidemiological studies are only 20% replicable, then I dread to think about the quality of the 99.99% of other, significantly inferior, analyses we use to make commercial, personal and other life decisions.

So what is Taleb’s solution if we aren’t to be doomed to be 80% likely to be wrong about anything we choose to analyse? He advocates “skeptical empiricism”; i.e. not just accepting the story, which can give false confidence about conclusions and their predictability, but understanding how much uncertainty comes with the conclusion and the reality of the breadth of possible outcomes.

At the risk of sounding pompous by disagreeing and building on Taleb’s thoughts, I’d say there are three things we can do about this if we stop kidding ourselves and admit the truth of our own biases and inadequacies. First, I think we know it when we’re actively seeking a pattern in a set of facts that suits our desired conclusion; or when any pattern we spot seems too fragile, over-complicated or hard to test. We just need to be honest about how biased we are. Second, we also need to be honest about how little we know, and how far wrong we can be, so that we can be ready for scenarios that are much higher or lower than our confidently predicted ranges. Third, we can design a test or pilot or experiment to find out how wrong or over-confident we were.

Would you rather persuade yourself and other people that you’re right, or would you rather know the truth?

Some related links:
Background on Taleb:
http://en.wikipedia.org/wiki/Nassim_Nicholas_Taleb
Script and MP3 of Econtalk’s interview with Taleb:
http://www.econtalk.org/archives/_featuring/nassim_taleb/


Copyright Latitude 2009. All rights reserved.

Latitude Partners Ltd
19 Bulstrode Street, London W1U 2JN
www.latitude.co.uk

Tuesday, 1 September 2009

Put Some Emotion into Your Decision-Making and Analysis


I’m a firm believer that emotion plays a cornerstone role in any decision-making. What’s more, I also believe that strong emotion should be used to stimulate much better analysis about how to improve performance or solve a problem.

While my wife picks herself up from the chair she’s fallen off, making unflattering comparisons between problem solvers, analysts, consultants, coaches, philosophers, scientists and Mr Spock, I’ll give you some context and take some time to explain what I mean by the heresy above.

I was listening last week to a podcast featuring a German philosopher called Sabine Doring. Her area of interest is the philosophy of emotion, and its role in decision-making. In her interview, she provided three insights that got me thinking:

1. Emotions are by definition directed at something, which makes them different from moods. For example: feeling sad is a mood; feeling aggressive towards your cheating former lover is an emotion. So I can’t be an emotional or unemotional person, but I can be emotional or unemotional about a particular concept, person or decision.

2. It is ultimately your emotions that determine what matters to you when making a decision. In the most mechanical and number-driven decision-making, we still choose and give weight to different factors based on such aspects as risk-aversion (worry), time-horizon (impatience) and reward (greed). And the vast bulk of decisions, being much less mechanical, require some major value judgments. In fact, if you don’t care about your decision-making criterion, then the whole thing doesn’t matter, is irrelevant and doesn’t require a decision.

3. Recent studies by her colleagues showed that people are generally more creative when happy (counter to the art-house dogma), and more rational and analytic when depressed.

So, contrary to the truism that emotions cloud reason and need to be shoved to the backs of our minds when trying to be rational, Ms Doring’s musings lead me to a list of insights that I hope can help us become better decision-makers:

1. The better we understand ourselves, the better decisions we can make. I’m not advocating self-indulgent soul-searching here, but I am proposing being alert to and honest about the emotion that motivates each decision (and, yes, greed counts if maximising reward is number one).

2. The more we care about something, the harder we will look to find a solution or make it work. There’s a downside to this of course: we’re tempted to overlook things that run counter to our desired result. This is why one of my few personal rules is to be as emotional about finding the truth as I am about anything else.

3. Playing good cop/bad cop, or happy cop/depressed cop, about a decision will help you get first into the creative mode, searching for ways to make something work, and then into the rational mode, testing it. Some of the best management teams I know have permanent happy and depressed cops to create this productive balance.

So there you are: a rationale for more emotion. Hopefully, my photo above shows how emotion fits into my performance.

Copyright Latitude 2009. All rights reserved.

Latitude Partners Ltd
19 Bulstrode Street, London W1U 2JN
www.latitude.co.uk

Sunday, 30 August 2009

The Flip-side of Focus

I’m going to talk about focus.

Here’s a typical concluding paragraph from a business article or interview about improvement in any area of performance:

“It takes commitment from the top. Every member of the top team, from the CEO downwards, needs to be a champion and role model of [fill in the gap – customer service, efficiency, health and safety, etc.]. It needs to be recognised as a top priority, measured, monitored and rewarded. Every organisation/individual that has taken this approach has improved its [customer service, efficiency, health and safety, etc.] performance by a factor of …”

You can read and hear similar advice and claims from gurus and advisors of personal development, public policy, sports and fitness, hobbies, interests and a host of other areas where people seek performance improvement. In a nutshell, if you focus on something, you get better at it.

This is a fine approach with great merits, but brings up in my mind an awkward compromising question: “What about everything else; everything you’re not focusing on?” Is there a risk that by focusing so much attention on X, then Y will get worse, or at least not improve as much as it would if it got more attention?

One UK insurance company I know spent one year putting customer service above absolutely everything else, and grew that year to market leadership. But it saw its profits turn negative and created chaotic complexity as every front line person did what was necessary to delight every individual customer. The following year saw a focus on process simplification; the next, a heavy focus on waste reduction. Each year saw improvement in that year’s particular area of focus, but standstill or decline elsewhere. After three years, the company was several places down in the market league table, and was subsequently acquired by the new market leader.

Ben Franklin established a set of thirteen personal virtues in his 20s, which he famously, and successfully, practised for the rest of his life. He would focus on one at a time, making each habitual before moving onto the next. The first and foremost of these virtues was, of all things, temperance. The reason he chose such an uninspiring virtue to lead all the others was that he needed to practise temperance in order to have the presence of mind to put the right effort into his twelve other important virtues.

The lesson from all this? Of course we need to focus our attentions. Of course we need to choose where we improve or excel: we can only do one thing at a time and we can’t be all things to all men. But we need to have the thoughtfulness and wherewithal to take things in the round, think through where we choose to focus, and we need to pay attention to the full consequences. This includes an acknowledgement that we are likely to go backwards in areas where we aren’t paying attention.

With this in mind it’s unlikely that one area of focus for performance improvement will capture everything we need. There are good precedents for multiple areas of focus. Franklin had thirteen in a carefully selected order; Jesus had a Golden Rule of two parts, the Lord’s Prayer, and a gospel full of other rules and stories; everyone I speak to from Tesco gives a different combination of reasons for its success.

People from management science backgrounds might call this approach a balanced scorecard. Life-hackers might call this priority management. To me it’s just about looking past the claims and clichés, and paying proper attention to the complete picture.


Copyright Latitude 2009. All rights reserved.

Latitude Partners Ltd
19 Bulstrode Street, London W1U 2JN
www.latitude.co.uk

Thursday, 20 August 2009

Types of Blindness

There are various degrees and kinds of blindness, widow. There is the connubial blindness, ma'am, which perhaps you may have observed in the course of your own experience, and which is a kind of wilful and self-bandaging blindness. There is the blindness of party, ma'am, and public men, which is the blindness of a mad bull in the midst of a regiment of soldiers clothed in red. There is the blind confidence of youth, which is the blindness of young kittens, whose eyes have not yet opened on the world; and there is that physical blindness, ma'am, of which I am, contrairy to my own desire, a most illustrious example.
Stagg, the blind man in Barnaby Rudge, by Charles Dickens


I’ve no doubt you’ve followed the recent bickering among the political classes about the NHS in the UK, and the proud blindness being used by all parties in their selective and romantic championing of that institution’s cause. I’ve got no idea whether this kind of deliberate or programmed blindness works in power politics; but in the areas I do know: in science, in sport, and in the subject of this blog, business and enterprise, it is a road to ruin. I’ll look at Stagg’s three types of blindness.

Let’s start with Stagg’s first example, the connubial blindness that men and women have about their lovers. People making business cases have often already fallen in love with their ideas or products. There’s nothing wrong with that; it’s a passion that fires the imagination for options and possibilities. The flip side is that, too often, these same people exhibit this connubial blindness, and they want to see the best of every side of the situation. They make over-optimistic assumptions, under-estimate threats and assume the market will love the idea as much as they do. Too often the consequences are the same as those Dickens’ Nancy suffered at the hands of her beloved, murderous Bill Sikes.

A second kind of blindness, the blindness of party, comes when people start to believe the rhetoric espoused by their colleagues, like dyed-in-the-wool Tories, or socialists, or Republicans, or whatever. They are selective in their choice of facts, of sources of information, and of assumed consequences. People can be tribal, and there can be comfort in shared views, but this tribalism only increases the likelihood that those same people won’t challenge their own received wisdom, and will ignore some real opportunities and threats that don’t fit their group paradigm. A political example of this is communism; a business example is any bank you want to choose.

Stagg’s third example, the blind confidence of youth, is horribly apparent in new businesses and in businesses that have confused their own ability with a bull market. There’s nothing wrong with this confidence if it’s supported by some resilience in a company’s management and business model; if it can cope with a downside scenario. But this resilience is all too often absent. I’ve rarely seen such an over-confident company hit its too-lofty targets, and I’ve seen many felled by the first major blow or downturn that comes their way. Example: choose any one of 99% of companies from any boom – dotcoms in the early 2000s, property investors from the financial bubble, or almost every social networking business from the last four years.

The lesson in all this? We all need to open our eyes and prop them open: to where we might not want to see something bad about our service; to where we’re not challenging the party line; or where one bad month or lost customer would sink us. Better to see the truth than suffer Nancy’s fate.

Copyright Latitude 2009. All rights reserved.

Latitude Partners Ltd
19 Bulstrode Street, London W1U 2JN
www.latitude.co.uk

Thursday, 6 August 2009

The Sign of a Good Answer

In stage 16 of the 2006 Tour de France, the favourite, Floyd Landis, fell badly off the pace in the last 8 minutes and crawled to the finish with his chances in tatters. The next day, stage 17, he produced the most inspiring comeback many cycling fans have seen. He attacked early, broke away from the favourites and alone sustained a pace to which the peloton, with all its wind-saving advantages, couldn’t respond. He made up enough time to eventually be crowned Tour champion.

Explanations came flooding in for this stirring physical achievement. To many fans it was a story of the triumph of human spirit and athletic supremacy. But the accepted explanation in knowledgeable circles was that Landis, by being out by himself, could keep his body temperature under control, giving him a major physiological advantage. You see, it was an incredibly hot July day, and the riders in the peloton were suffering from sky high temperatures; meanwhile Landis spent the day pouring cold water over his head, supplied by the team car.

Seems reasonable?

A few days after Landis’ victory, his urine sample, taken after stage 17, tested positive for a banned performance-enhancing substance: synthetic testosterone.

OK, which explanation for the inspirational performance do you believe now?

You see, as we all know, it’s very easy to come up with a reason or theory for any phenomenon: why our competitor is doing twice as well as we are; why we’re not as profitable as we once were; why we’re struggling to get a foothold with a customer group; why growth has fallen off, or whatever is puzzling us.

When you review businesses and markets, as you unearth the facts and analyse the data, you come across a whole raft of potential explanations and solutions for the business’s condition and theories on what it should do next. The first set you come across is usually complex, nuanced and subtle, often received wisdom from proclaimed experts, and very tempting to believe. But if you have the gumption to ignore these temptations; if you carry on researching, challenging and turning over stones, I guarantee that you will eventually come up with a better answer that’s as plain as the nose on your face.

Scientists have a principle called Occam’s Razor, the common understanding of which is as follows: “Of several acceptable explanations for a phenomenon, the simplest is preferable, provided that it takes all circumstances into account.”

At my company, we describe it differently. We just carry on investigating, testing and searching for explanations until we uncover the one that fits our golden rule: when looked at in retrospect, it must be absolutely obvious.

Copyright Latitude 2009. All rights reserved.

Latitude Partners Ltd
19 Bulstrode Street, London W1U 2JN
www.latitude.co.uk

Saturday, 1 August 2009

Straightforward and Spartan: What Big Businesses Can Learn from their Smaller Brethren

King Leonidas: [turns back shouting] Spartans! What is your profession?
Spartans: HA-OOH! HA-OOH! HA-OOH!
King Leonidas: [turning to Daxos] You see, old friend? I brought more soldiers than you did.


There is an unchallenged and lazy myth that small companies generate more wealth in the economy every year than large ones. They don’t, though the real facts are much more interesting. Research shows that three per cent of all firms account for almost all private sector employment and revenue growth; and these “high impact” firms are proportionately evenly spread across all company sizes. But here’s the killer fact: 98.3% of firms in this wealth-creating sector have 20 employees or fewer, because there are simply many more smaller firms in the world.
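The arithmetic behind that killer fact is worth making explicit: if the 3% high-impact rate is the same in every size band, the rate cancels out, and the small firms’ share of high-impact firms simply equals their share of all firms. A minimal sketch, using made-up firm counts chosen so that 98.3% of all firms are small:

```python
# Hypothetical illustration: with a uniform high-impact rate across size
# bands, the share of high-impact firms that are small mirrors the share
# of all firms that are small. All counts below are invented.

firm_counts = {                    # firms per size band (hypothetical)
    "1-20 employees": 983_000,
    "21-500 employees": 15_000,
    "500+ employees": 2_000,
}

HIGH_IMPACT_RATE = 0.03            # 3% of firms in every band are "high impact"

high_impact = {band: n * HIGH_IMPACT_RATE for band, n in firm_counts.items()}
total_high_impact = sum(high_impact.values())

small_share = high_impact["1-20 employees"] / total_high_impact
print(f"Share of high-impact firms with 20 or fewer employees: {small_share:.1%}")
# With these invented counts the share is 98.3% - identical to the share
# of all firms that are small, because the uniform 3% rate cancels out.
```

Whatever rate you plug in, the result is the same, which is why the “evenly spread” finding and the “98.3% small” finding are two sides of one coin.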

But how can it be that smaller firms make up so much of this GDP engine, and generate just as much wealth as blue chips, which have established brands, sophisticated systems, massive buying power and the pick of the brightest talent? I’ve been fortunate to observe companies from across the size spectrum, and even more fortunate to work with some that were once in the small camp and are now large. Here are four distinctions that I see in the successful smaller ones.

First is brutality. My smaller clients have a mindset in which they will habitually challenge their propositions and expect to need to reinvent their services, even in the many instances where they are the dominant player in their particular niche. They are justifiably proud of their products, but they appreciate that today’s iPod is tomorrow’s Walkman. Cannibalisation is a fact of life, and better that they do it to themselves than their competitor does it for them.

Second is straightforwardness. Here’s an example. A client I started working with eight years ago was shortly to become one of the 3 percent club of high impact firms; it quickly became the global market leader in its niche. Back then, I could ask one of the directors a question and they would just answer it, warts-and-all, no justifications, no qualifications. Now it has joined the 97 percent of slow-growers. I re-engaged with the firm recently. Now, the most common first response to my questions is, “Can you tell me what you will do with that information?”; the second most common, “Can you tell me why you need to know that?” Following this pointless dance, we get to the same questions I was going to ask anyway, only we now have time for 25% fewer of them. And the answers are now gilded with such self-justification that finding the problems necessary to improve my client’s condition takes on a whole new level of complexity.

Third is decisiveness. My smaller clients engage only my company for advice and act on about 50% of it. They are far less exhaustive in their analysis before decisions, being happy to act on 80% confidence. In contrast, larger companies often use multiple advisers and internal teams in an often futile search for perfection, and act on 10% or less of the advice they seek.

Fourth is parsimony. If you’re used to corporate life, a smaller company’s office feels like a cell. If you’re a consultant used to the corporate jet-set, a smaller company’s fees feel like a budget airline. It’s probably not in everyone’s interest for me to highlight parsimony. But those who read these posts regularly know I’m keen on telling the truth as I see it, rather than selecting the facts that suit. Besides, Ben Franklin thought parsimony was the route to wealth, and who am I to argue with him?

So: brutality, straightforwardness, decisiveness, parsimony. Leonidas would be proud.

Copyright Latitude 2009. All rights reserved.

Latitude Partners Ltd
19 Bulstrode Street, London W1U 2JN
www.latitude.co.uk

Sunday, 26 July 2009

What to Look for in a Consultant


Here are some things to look for if you genuinely want value from a consultant, all of which you can test in a meeting or single reference.

First is personal credibility. I don’t necessarily mean brand here, which in a way is lent credibility. I mean the credibility of the person or people actually doing the work. To appreciate the importance of this distinction, you just need to look forward in time and ask: “What credibility will that person have in front of the Board or the bank or the MD or my colleagues when they start asking difficult questions?” Of course there are times when a brand does add credence, but I’d argue that what really counts is the quality of the individual making those recommendations, and consequently the recommendations themselves.

Second is subject matter expertise. By this I mean genuine expertise in the particular issue you are facing, be that business case development, due diligence, market entry, benchmarking, whatever. I specifically don’t mean industry expertise, which in my experience contributes either marginally or even negatively in a consultant’s value to a company.

At the risk of unpopularity, using industry expertise as a way of choosing a consultant is, in my view, misconceived. It is an easy way of screening for a buyer, and an easy way of selling for a consultant. Consultancies understand this dynamic and have industry experts at senior level only, in order to help the sales situation. They talk the language, have the examples, etc. But the people that do the work and generate the insights come from a pool of generalists and subject matter experts.

We hear all the time that clients are amazed how quickly we get to understand their industries. But they’re giving us way too much credit, and over-estimating the difficulty of understanding a sector well enough to apply our subject matter expertise. That expertise, in contrast, is the thing that takes years to develop. To illustrate, we have performed dozens of due diligences of technology companies, and we use exactly the same skills to due diligence leisure companies. Each one, in whatever sector, takes only three weeks, and we have never had any problems in sectors we have never worked in before. But we would have no idea where to start re-engineering a business process – a subject in which we have no expertise – at one of the exact same tech companies we just diligenced. And we couldn’t even dream of knowing how to sell a client’s products.

A third thing to look for in a consultant is his willingness and ability to challenge you, your views and your assumptions. It is easy for an adviser to back down, particularly someone who's junior, impressionable, easily intimidated or otherwise anxious to please. I’m embarrassed for my profession that 90% of consultants I've met fit into one of those categories, people responsible for the tedious cliché of the consultant taking your watch to tell you the time.

The Greeks had a concept of a noble friend, who would tell you the truth, even if it wasn't what you wanted to hear. A good consultant is a noble friend for hire.

A fourth thing to look for is absolute attention to your particular concern. The consultant will be so focused on your particular issue that you won't be able to see any approach or methodology he employs. Every conversation will be about your situation and helping you improve your own condition.

There are some good precedents for these characteristics in an adviser. William Wilberforce, in my opinion one of the greatest men who ever lived, had for an adviser the great John Newton (pictured above), composer of Amazing Grace. Alexander the Great had Aristotle. Washington had Lafayette. I guess they didn’t do too badly.

Copyright Latitude 2009. All rights reserved.

Latitude Partners Ltd
19 Bulstrode Street, London W1U 2JN
www.latitude.co.uk

Saturday, 18 July 2009

Don't Swallow Your Own Snake Oil


Business people need to be scientific in how they look at their companies and their markets, and in how they make decisions. If they don't, they may be lucky and thrive for a while, but they will ultimately and inevitably end up in ruin.

I want to be clear what I mean by the term “science” here. I don’t mean biology, chemistry, physics or any other examples from the school curriculum that restrict our thinking and, if I'm honest, put us off the subject. What I mean by science (in as unpretentious a way as possible) is a method and mindset of trying to find the truth of a situation or issue or problem; and caring first and foremost about finding the truth, irrespective of what that truth turns out to be.

It’s not about making an argument or proving a point. As soon as you start looking to defend a position or prove a point, then you're not a scientist, no matter what your qualifications or credentials. Mr Dawkins, looking to prove that God doesn't exist, isn’t a scientist. His antagonists, creationists trying to find evidence that He does exist, aren’t scientists either. Neither is anyone who selects information to justify themselves, rather than seeking information and testing the quality of their thinking to challenge themselves.

Therefore, you see true science exhibited more often in arenas where people need to get results, regardless of rationale or excuses, such as sport or gardening or medicine or the judge in the courtroom; and you see it less often where people need to be right, such as politics, interest groups, sales or the barrister in the courtroom.

Science, and the scientific method, is partly a thinking skill. It involves breaking down a problem into clear discrete component parts with an analytical knife; using crystal clear thinking to hang those parts together; making your assumptions and gaps in your knowledge explicit; using facts to test those assumptions and your draft conclusion; changing the conclusion according to what the facts say; and then challenging that new one in turn. You repeat this in a relentless process until you've got an answer with which you're satisfied. But the method isn’t something I want to go into any more here, because for most people the method isn’t the main issue.

The main issue is mindset. This mindset is about deliberately challenging your knowledge in the search for the truth of a situation and, crucially, being happy to change your view as the balance of facts dictates. It is not about collecting facts to make an argument or prove a point. This latter path is an aspect of rhetoric, which is a noble art, but it isn’t science. And unfortunately this point-proving seems to be a stronger instinct in the way our minds work than the discomfort of challenging our thinking and conclusions.

I'll give you an example of how easy it is to slip into the rhetor's mindset. I run training sessions for management consultants in the principles and practice of the consulting method, which is basically the scientific method. Everyone typically learns the scientific techniques to get to the heart of problems and crack difficult issues in a rigorous, objective and credible way. That is until I split the learners into teams and ask them, as an exercise, to give me the case for retaining or abolishing the monarchy. I give the teams names: "Royalists" and "Republicans". As soon as they're given those positions my students turn from objective scientists into aggressive rhetors, searching for evidence that backs up their position. One team searches for the massive cost to the taxpayer of the royal family, while the other searches equally hard for the vital tourism income they bring to the country. They can't help themselves in this one-sided self-justifying behaviour. And I see this same behaviour in myself and others every day.

Now let me come around to what all this means for business. First of all, of course there's a time for rhetoric and making an argument: whenever you're persuading someone in a sale, raising finance, or recruiting a super-star graduate. But if you want to know the truth about an issue, and make the best decision for yourself and your own business, you need the scientist’s mindset. You need to be humble about your pre-conceived notions, be open to challenge, and be prepared for the discomfort of receiving and doing the challenging. As soon as you stop doing that, and start building a fortress of facts to support your rhetoric, you're on the road to ruin, with self-justification, post-rationalisation and excuses all the way down.

So I'll leave you with a couple of questions to ask yourself about whatever issue you're facing. Are you being a scientist and trying to find the truth about whatever issue that concerns you, or are you selecting facts to make yourself feel better and prove a point? Are you trying to make the patient healthier, or are you trying to sell yourself some snake oil?

Copyright Latitude 2009. All rights reserved.

Latitude Partners Ltd
19 Bulstrode Street, London W1U 2JN
www.latitude.co.uk

Saturday, 11 July 2009

Can You Absolutely Know That It's True? An Unlikely Insight for Businesses from the Self-help Section

My wife is a wonderful but crazy woman who takes great interest in the modern spiritual gurus that appear on Oprah and whose books top the iTunes non-fiction list. Her attention has moved over the years from what I consider to be flimsy self-help one-size-fits-all nonsense to some thoughtful, insightful, challenging people.

One of these is an American woman called Byron Katie. This woman regularly counsels people who describe situations that are causing them some degree of personal pain. She responds to pretty much every person’s description with two opening questions: “Is that true?” and “Can you absolutely know that it's true?” - the second question being a polite equivalent of “Come on, be honest with yourself.”

The results of this simple line of questioning are frightening and fascinating. Almost immediately, when pushed to be honest with the second question, Ms Katie’s interlocutors realise they’ve been working with some very dodgy assumptions. These assumptions have been convenient, but have given them a misguided take on the situation and have resulted in some pretty damaging actions.

The consequent realisations of where they've been treating someone unfairly, punishing themselves, etc. often cause an outpouring of emotion that I find a bit much to bear, but my wife seems to like.

Now, let me get to my point, because it's not about relationship self-help.
As managers and professionals, we constantly work with assumptions that a simple challenge such as "Is this true?" can throw into doubt, and that a follow-up such as "Can you really know it's true?" can send crashing to the ground, revealing something very different. I've seen whole sales forces pitching "we're not the cheapest, but we're valued advisors" in price-sensitive markets where they were, in fact, the cheapest. I've seen shifts away from highly profitable services to loss-making ones based on superficial untested assumptions and use of the wrong measures. I know a CEO who absolutely insists that his holiday company serves the over-45s, and spends his marketing money accordingly, when his average customer age is 71.

Now I'm not saying that Byron Katie's formula is original. I think Socrates beat her to it by a couple of millennia. And I'm not saying you should apply it to your personal life unless you want to be known as the annoying guy. Ms Katie is currently on her third husband; and Socrates was so annoying that he was accused of corrupting the minds of the youth of Athens and forced to drink hemlock.

But I do say try the challenge on your own business or service or product. Challenge your own assumptions - work out what it is that really makes customers prefer you, or what really makes your product superior, or how much return you really make on that sponsorship or other favourite area of marketing spend. I'll bet you find something important that you didn't know, and I'll bet it affects your wallet.


Thursday, 9 July 2009

The Next Level of Analysis - It Makes All the Difference

Malcolm Gladwell’s “Outliers” is a treat. It gives fascinating insights into what really lies behind world-class performers, destroying the superficial romantic myth of poor kids defeating the odds with a combination of god-given talent and inspiration. I’d sum up Gladwell’s conclusion about what makes world-class performance with the phrase “practice makes perfect”, though I’m sure my performance coaching colleagues would justifiably emphasise the importance of creating environments and cultures that stack the odds in favour of practising the right things.

Gladwell’s conclusions were fascinating, but what caught my attention was his method – how he got to the insightful conclusions that gave the lie to some romantic, but ultimately incorrect and misleading, received wisdom. In a nutshell, all he did was this: he took the analysis to the next level. That’s it. Here’s an example from the book.

A US study analysed the improvement in reading performance of school children from different social classes. Though the social groups had similar aptitude for the youngest children studied, the gap between wealthier and poorer classes grew as the children got older. Policy makers had concluded that the education system was failing the poorer children.

However, analysis of reading performance before and after summer recess revealed an interesting insight – that the entire difference in improvement could be explained by what happened when children weren’t in school. During the main school holiday, children from the wealthier classes improved their reading ability, whereas the poorer children regressed. In fact, during the school year, the poorer classes actually improved marginally more than the richer children. So trying to raise the relative level of the poorer children through the traditional school system, with traditional school hours and a traditional school year, missed the crucial point - that they regressed outside class. A simple answer was a very popular, successful school that kept children at school for longer and kept them focused on their work.

The useful insight for me here wasn’t about schools or policy or social justice. It was that by getting under the skin of things, you get the insight that allows you to take the most effective actions.

Critics of Gladwell's book say he's superficial. But all these critics are saying is that he'd have got even more insight if he'd gone to the next level, so the importance of analysing to the next level still stands.

So what is the relevance of this to business strategy? It is the immense value of analysing to the next level, beyond the superficial received wisdom, and the risk of not doing so and missing the obvious actions. I’ve been analysing businesses and their markets for more than 15 years, with more than 200 companies, including my own. From that 200+ sample, I cannot think of a single example of a company that hasn’t understood its markets or the value of its services differently after making that next level of analysis. Some of these companies even acted on that insight and turned around their performance as a result.

So what does this mean for the provider and buyer of strategic advice? I was speaking at a Chairmen’s dinner recently about the benefits of performing an external review, and the host asked me a very similar question: do I, as a reviewer of businesses and markets, provide information or do I provide advice? I bungled an answer, but I should have said this:

"If you keep asking the right questions; if you get under the skin of the issue, not being satisfied or fobbed-off by superficial hearsay, then the answer and the advice comes out all by itself."



Saturday, 28 March 2009

Strategic Reviews — Typical Content and the Most Common Mistake Management Makes

“There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don't know. But there are also unknown unknowns. There are things we don't know we don't know.”
Donald Rumsfeld, Former US Secretary of Defense

“Just one more question.”
Lt Columbo, San Francisco Police Department

In our last post we covered how critical it is to learn the lesson of the great detective, and treat a strategic review as an investigation. In this post, we look at the basic areas to cover in a review, and where companies classically miss valuable insights by not adopting this inquisitive mindset.

If it’s going to be useful, a strategic review should cover a minimum of five areas:

1. Positioning at a macro level—this classically consists of some measure of market attractiveness and some measure of competitive positioning

2. Positioning at a micro level—this review covers the buying process, routes to market, purchase criteria, company performance against those criteria, performance versus competitors, customer purchase intentions, and switching

3. A review of financial performance by business unit or product or geography

4. An assessment of the risks and opportunities the company faces

5. Strategic decisions about where and how to compete as a result of the investigation

In addition, management might then want to look at some specifics or angles or hypotheses it wants to test, such as market appetite for a new product. Management may also want to put together projections and a business case to support the decision making and subsequent planning process.

The list above is over-simplified, but you get the gist.

The fatal mistake to make with strategic reviews is to stay at too high a level, be too assumptive and too generic. Without asking that next question, leaving all the difficult stones unturned, you’re left with a strategy built on limited and superficial knowledge, Mr Rumsfeld’s known knowns. You never get to the unknown unknowns where the insights lie.

When management adopts this high-level mindset, our five review areas above tend to play out as follows:

1. Management defines its markets too broadly and so is in no position to understand properly the size, growth or any other measure of attractiveness of its different businesses’ markets. Also, without a tightly-defined view of which markets it is competing in, management doesn’t understand who it is really competing against for its most important business

2. With a high-level mindset, positioning at the micro level is barely covered - it appears too low-level for strategic work. As a result, management misses critical information about customer budgets and spend intentions, revenue security, requirements for service improvements, and other tangible, useful facts

3. Financial performance by business unit/product/geography, etc ends up being confined to Board KPIs and familiar numbers, with the consequence that some very common and vitally important drivers of economic value and returns are missed completely in analysis

4. With excessively superficial and generic information from areas 1-3, the range of risks and opportunities becomes much too broad and irrelevant. Management is then faced with too many poorly-defined risks to know how to mitigate them, and has an excessively long list of nonspecific opportunities that it can only guess how to prioritise

5. With no new insights raised to challenge assumptions, the strategy ends up being very close to that inside the CEO’s mind prior to the review. Unless the CEO has incredible prescience, there is only a slim chance of this being the best strategy for the business

In the rest of this series, we cover how the investigative mindset gives us more insight and clues in areas 1-3. With this deeper and more accurate level of knowledge, management can make more valuable decisions and take more relevant actions.



For the full text of this series email steve@latitude.co.uk

Tuesday, 24 March 2009

Strategic Reviews — Overview: “A Little More Columbo and a Little Less Sun Tzu”


Every so often the management of a business has a prompt to ask itself some big fundamental questions: What business are we in? Where and how should we compete? What are the prospects for our business and how can we change them? Which parts of our portfolio should we keep, sell, close, grow, harvest, restructure? Where do we focus our limited capital and time?

One common exercise provides the basis to answer these essential questions—the strategic review.

It requires an analysis of the attractiveness of the company’s markets, its competitive positioning, drivers of profit and economic value, a review of new opportunities and an assessment of risks. Done well, this strategic review provides management with the information and the confidence to make decisions from the most high-level (such as which businesses should we be in) to the most everyday (such as how do we improve our customer service).

But here’s the thing. A strategic review is an investigation. And in this investigation, God is in the detail. The Columbo fans out there know this already — you don’t solve the case if you act like the local cops and just take a cursory look, missing the key fact that the murdered “burglar” who came in from the lawn has no grass on his shoes. It’s the same with strategic reviews: you don’t generate insight by going through the motions and staying high level; you get it by behaving like the great detective — asking the questions nobody else thought of and noticing the things nobody else noticed.

There is a time for strategy and vision and bold moves and inspiration and big picture; but that time is later, once you know exactly where you stand.

In this forthcoming series of posts, we will cover the basics of performing strategic reviews, highlighting along the way some of the disciplines we use to generate the insights that successful reviews should produce.



Thursday, 12 March 2009

Business turnarounds - troubleshooting performance problems (3/3)

In our last post we looked at our two most fruitful areas of analysis when an under-performing company has issues with sales (or gross profit) growth. In this post, we look at the two areas of investigation that we find most useful for companies with profitability problems.

Again, the most useful analyses are the ones that are rarely done. Board packs, management KPIs and performance measures often track return on sales and total profit by line and by customer group. These analyses ordinarily have value, but don't add to what the Board already knows, and so do not provide insights to a performance problem that the Board has to date been unable to address.

The analyses we find most useful are related to lifetime profits:

1. Analysis of profit contribution of assets over their lifetime
2. Analysis of profit contribution of customers over their lifetime.

Profitability analysis of asset usage

Capital constraint is a critical issue in turnarounds. However, profitability numbers in Board KPIs often either ignore asset usage or treat amortisation uniformly for each product line, not distinguishing asset-intensive versus asset-light customers.

Accounting for each customer group’s asset usage often highlights major cash sinks. It can overturn previously held understanding of customer profitability and often reveals where companies have historically focused time and resource on what turn out to be loss-making or value-destroying customers.
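To make the arithmetic concrete, here is a minimal sketch (in Python, with entirely hypothetical figures) of charging each customer group its actual asset depreciation rather than a uniform blended rate. The rents, costs, machine price and replacement cycles below are illustrative assumptions, not client data:

```python
# Hypothetical figures: annual contribution per machine by customer group,
# once each group's actual machine depreciation is charged against it.

def contribution_after_depreciation(rent, operating_cost, machine_cost, machine_life_years):
    """Annual contribution per machine with its real depreciation charged."""
    depreciation = machine_cost / machine_life_years
    return rent - operating_cost - depreciation

# High-end customers demand a new machine every year; lower-end customers
# keep the same machine for four years, so its cost is spread over four.
high_end = contribution_after_depreciation(
    rent=5000, operating_cost=1500, machine_cost=4000, machine_life_years=1)
low_end = contribution_after_depreciation(
    rent=3000, operating_cost=1200, machine_cost=4000, machine_life_years=4)

print(f"High-end contribution: £{high_end:,.0f}")  # negative: value-destroying
print(f"Low-end contribution:  £{low_end:,.0f}")   # positive: profitable
```

On these illustrative numbers, the high-rent group destroys value once its one-year machine life is properly charged, while the lower-rent group is comfortably profitable — the kind of reversal a uniform depreciation assumption hides.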

Example – gaming machine operator turnaround

Profitability had declined for five consecutive years in a highly capital intensive sector, resulting in low return on investment and ultimately covenant breach.

The business had focused on maintaining high machine rents, by targeting sales at high-end managed pubs and through rapid and continuous new product introductions. The business deprioritised lower-end free trade customers that required lower rates of introduction and paid correspondingly lower rents.

Using a simplistic assumption for machine depreciation, managed houses appeared profitable, free houses unprofitable.



Correct accounting for machine asset depreciation showed the historic focus on managed pubs to be value-destroying.



The business consequently renegotiated its managed pub contracts to reflect the accurate understanding of cost structure, grew profitability and has successfully refinanced.

Customer acquisition cost and lifetime value

Management accounts and monthly KPIs can hide the true cost of acquiring customers, and the payback over the customers’ lifetimes. This can be particularly true for larger deals or new services where the customer contributions appear large, but the time and cost taken to acquire these customers can make them loss-making over their lifetime. This is further exacerbated when accounting for a high time value of money in distressed situations: a large initial sales cost outlay and delayed incoming cash flows can generate large negative net present values and heavy cash requirements for some major prospective customers or ambitious new services.
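A minimal sketch of this lifetime arithmetic (Python, with hypothetical figures — the acquisition costs, contributions, retention period and distressed-situation discount rate below are all illustrative assumptions):

```python
def customer_lifetime_npv(acquisition_cost, annual_contribution, years, discount_rate):
    """Net present value of a customer: upfront acquisition outlay,
    then discounted annual contributions over the retention period."""
    pv_contributions = sum(
        annual_contribution / (1 + discount_rate) ** t
        for t in range(1, years + 1))
    return pv_contributions - acquisition_cost

# Hypothetical: a large corporate looks attractive on annual contribution,
# but a heavy acquisition cost plus a distressed-situation discount rate
# of 25% makes it loss-making over its lifetime; a cheap-to-win SME is not.
large_corporate = customer_lifetime_npv(
    acquisition_cost=100_000, annual_contribution=30_000, years=5, discount_rate=0.25)
sme = customer_lifetime_npv(
    acquisition_cost=2_000, annual_contribution=1_500, years=5, discount_rate=0.25)

print(f"Large corporate NPV: £{large_corporate:,.0f}")  # negative
print(f"SME NPV:             £{sme:,.0f}")              # positive
```

Note how the discount rate does the damage: the large corporate's £150,000 of nominal lifetime contribution comfortably exceeds its £100,000 acquisition cost, yet at 25% the deal is still value-destroying.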

Example – telecoms reseller turnaround

A corporate telecoms reseller had operated at low scale and with heavy losses for several years, and faced closure by its financing parent.

The business perceived greatest potential from large corporate customers and focused sales efforts on these accounts.

Analysis of customer lifetime value, and of the acquisition costs arising from a low hit rate, led to the conclusion that the business was incorrectly focused on loss-making large corporates and under-investing in sales to highly-profitable SMEs.





The business refocused onto SMEs and subsequently achieved trade exit in excess of £150m.


So, there we have it, four rarely-used but commonly insightful analyses to perform on struggling businesses, when the usual KPIs and Board packs have not given any productive clues to the causes of decline.

Given the recent trend for debt holders to delay taking control of breached or distressed companies, we believe that management and equity now have greater breathing space to diagnose financial issues. And we believe that a rigorous understanding of such issues is value-adding for everyone involved.



Business turnarounds - troubleshooting performance problems (2/3)

In our last post we pointed out that the most fruitful areas to investigate for unaddressed causes of performance problems are, by definition, those that aren't covered in standard KPIs or Board packs.

We find four analyses particularly insightful, depending on the area of underperformance. The first two are suitable for understanding issues of sales (and gross profit) growth/decline, and we cover them in this post. The third and fourth are more suitable for issues with profitability; we will cover these in our next post.

To diagnose issues with sales or gross profit growth/decline, the two most commonly insightful analyses are:

1. Reviewing market and competitive benchmarks, in order to understand whether the company's product mix matches market growth areas and if gross margins are in line with peers, or if, more likely, the company is working hard to hold back the tide by growing share in a low margin segment

2. Analysing the year-on-year sources of business, to understand the reliable base line of secure business, and whether the company's underperformance is a result of issues with customer acquisition, customer retention, or both.

Review of market and competitive benchmarks

A market and competitive review generates rapid and useful benchmarks of reasonable growth expectations for a company’s services and expectations for gross margin. Problems with revenue and gross margin can often be simply the result of a poor business mix, skewed towards the low growth, low margin market segments.

Plans that rely on exceeding market benchmark growth or margins are insufficiently conservative for a sound turnaround plan. Changing business mix towards higher growth, higher margin segments, achieved by reprioritising marketing and sales investment, is almost always a more pragmatic path to follow.

Given the generally strong positive relationship between gross margin and market growth, such a reprioritisation kills two birds with one stone.

Example – technology services turnaround

Sales had slowed below their historic rate over the previous 18 months, and gross margin shrinkage threatened an impending covenant breach.

Rapid analysis of market growth and competitor margins showed that the business had focused excessively on a low growth, low margin segment. Growing share in excess of the market in that segment had depressed margins even further. A return to sales and margin growth was possible by rebalancing the business mix towards higher growth segments.



The refocus took 8 weeks to implement and resulted in the business returning to full-year budget performance within 4 months. The business refinanced successfully, with all solvent banks retaining participation, and is now the most profitable player in its sector.


Analysis of new versus retained business

Understanding a company’s reliable base of recurring business is clearly important in the context of turnaround financing and planning. In addition, a review of new versus retained business over months or years illustrates whether the revenue problem is one of acquisition or retention, each of which has very different restorative actions.
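A minimal sketch of the split (Python, with hypothetical bookings data): classifying each year's revenue by whether the customer also bought the previous year separates a retention problem from an acquisition problem:

```python
# Hypothetical bookings by year and customer id. Split each year's revenue
# into retained (customer also bought the previous year) versus new.
bookings = {
    2006: {"a": 100, "b": 80, "c": 60, "d": 50},
    2007: {"a": 100, "b": 80, "c": 60, "e": 35},
    2008: {"a": 100, "b": 80, "c": 60, "f": 20},
}

split = {}
prev_customers = set()
for year in sorted(bookings):
    revenue = bookings[year]
    retained = sum(v for cust, v in revenue.items() if cust in prev_customers)
    new = sum(v for cust, v in revenue.items() if cust not in prev_customers)
    split[year] = (retained, new)
    prev_customers = set(revenue)

for year, (retained, new) in sorted(split.items()):
    print(f"{year}: retained £{retained}, new £{new}")
# Retained revenue is rock steady year on year; the decline is entirely in
# new business -- pointing at acquisition, not retention, as the problem.
```

In this illustrative data the retained base holds flat while new business shrinks each year, which is exactly the pattern found in the tour operator example that follows.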

Example – tour operator turnaround

Sales had declined for three consecutive years in a steadily growing niche, resulting in declining total profit, with the trend threatening covenant breach.

The business had cut unprofitable discounted lines but maintained traditional efficient distribution in an unsuccessful attempt to reverse profit decline.

Analysis of sources of yearly bookings showed a strong stable base of regular customers but a continual annual decline of new customers, resulting in steady decline of volumes.

Cumulative bookings from previous customers show consistent annual spend



Cumulative bookings from new customers show ongoing annual decline





Evidence of reliable, loyal bookings provided a solid baseline for a successful refinancing.

The business refocused on new customer acquisition through a successful new online channel, regional departures to reflect changing travel patterns and the launch of lower-cost introductory products to capture new customers.

Of course, there are limitless other analyses that can be used to address the underlying causes of sales underperformance, but the two in this post are the ones we find most useful and under-used.

In our next post, we will cover our two most fruitful approaches to understanding profitability issues.

