Wednesday 28 October 2009

Six Common Mistakes People Make When Analysing Markets


Market analysis is a difficult science.  So it's not surprising that most attempts that we review contain mistakes and gaps that put the whole analysis, and its consequences for the business, into question.  Even worse, it's our view that the strategy gurus who propose and push high level market analysis models inadvertently cause many more problems than they solve, and are probably the single biggest group of culprits causing shoddy market assessment.

Here are six common mistakes. 

1. The analyst looks only at a macro level

It's important to look at markets at a high level, at growth, trends, competitive environment, etc, and the macro view is what the textbooks prescribe.  But looking only at this level misses what for us is the most frequent area of critical insight: what customers and potential customers value and pay for now; and what they are going to value and pay for in the future.  There is one place that the revenue and profit pool of a market is going to come from, and that is customers' spend, budgeted or unbudgeted.  There is one source of revenue and profit for each individual supplier to that market, and that is winning some or all of that spend.

Unless you're a cartel, monopoly or lobby group, then you're in the business of providing value for value.  You need to know what results customers need to achieve, how they propose to achieve them, how they decide who will support them in doing so, and the value they place on that support.

The clue to where the customer places most value? Where they intend to spend money.  That spend is either unidentified, in the form of a price they are willing to pay to solve a problem; or identified, in the form of next year's budget.

This micro analysis has enormously more business value than the macro-level equivalent of quantifying emergent (unbudgeted) and established (budgeted) markets.  What would you rather know: the macro overview that the market has grown by 4% with a trend to cloud computing, or that your target customers are under pressure to reduce infrastructure support costs by an average $15m and will pay 70% of the savings to reliable outsourced suppliers with reputations for service responsiveness? 
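To make the contrast concrete, here is an illustrative back-of-envelope sizing of the profit pool the micro view reveals.  Every figure is invented for illustration; the point is that the micro inputs, not the 4% headline, tell you what the business is worth going after.

```python
# Illustrative micro-level sizing -- all figures are hypothetical.
# Target customers are under pressure to cut infrastructure support
# costs, and will pay a share of the savings to a trusted supplier.

target_customers = 40            # accounts we can realistically reach
avg_saving_per_customer = 15e6   # average cost reduction sought ($)
share_paid_to_supplier = 0.70    # portion of savings customers will pay

addressable_pool = (target_customers
                    * avg_saving_per_customer
                    * share_paid_to_supplier)
print(f"Addressable revenue pool: ${addressable_pool / 1e6:.0f}m")
# prints: Addressable revenue pool: $420m
```

No "market grew 4%" statistic gets you anywhere near that number, or tells you what you must do to win it.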

2. Where micro analysis is done, it is done cursorily, uncommercially, or just plain badly

Talking to customers about their plans, desired results, budgeted spend and supplier requirements is a golden opportunity to understand some critical facts: where customers place future value, the corresponding source and size of a pool of profit for you, and how to win that business.

Unfortunately, most analysts miss this golden opportunity by delegating the exercise to market researchers or untrained, poorly-briefed graduates.  These people are briefed to ask mindless, well-trodden questions about likes and dislikes, strengths and weaknesses; or they get interviewees to rank elements of the value proposition in terms of importance and performance. If you ever listened in to one of these interviews, you'd be shocked by its tedium and superficiality.  The resulting information is sometimes useful.  But it's only useful if it supplements some much more important and tangible information: which elements of the budget are growing or shrinking and by how much; who the budget holder is and how they make buying decisions; what causes people to stay with or switch from incumbent suppliers; what characteristics suppliers need to have in the future to win business; how the supplier can help them make or save more money.

Asking questions like these takes skill and commercial acumen, but their answers are worth more to you than every market research report you could ever commission. 

3. The analyst doesn't take enough care to define his market

"What market am I analysing?" is a much more important and difficult question than it looks at first sight.  Let's take an imaginary organic pet food manufacturer, based in Wales.  Does my market include every potential customer of pet food, even though my product costs three times as much as the non-organic market leader?  Or is it organic pet food, which is defined by the product and some kind of customer sentiment?  Or is it premium pet food, defined by some kind of price and quality level?  Do we include or ignore the supermarket customers that we will never access because we're too small to be stocked by the big chains? How much of the UK do we count as our market? Do we include continental Europe?  If so, how much of it?  Do we include dry food, even though we only do wet? Etc, etc.

The reason I've banged on with this definition example is that every definition I suggested gives you a completely different market, with different size, growth, trends, competitive set, etc.  And the definition you use for one circumstance, say understanding how your core customer group is growing, will likely be different for another equally valuable circumstance, such as how big demand could be if you cut prices by 30%.

The damaging temptation is always to ignore these factors and define the market according to what data is available, which gives you a substantiated quantification of a (probably) irrelevant market.  In our experience, you are almost always better off taking care to define your market to be as relevant as possible to your situation, and accepting that you will need to use bottom-up assumptions to estimate the market size, structure and growth. 
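A minimal sketch of what bottom-up sizing can look like for the imaginary Welsh pet food maker, with every input an explicit assumption you can challenge and refine (all numbers invented for illustration):

```python
# Bottom-up sizing of the relevant market for the imaginary organic
# pet food maker -- every input is a stated, testable assumption.

households_with_pets = 4_000_000    # in regions we can actually supply
share_buying_premium = 0.08         # willing to pay 3x mainstream prices
annual_spend_per_household = 300.0  # on wet pet food, per year

relevant_market = (households_with_pets
                   * share_buying_premium
                   * annual_spend_per_household)
print(f"Relevant market: {relevant_market / 1e6:.0f}m per year")
# prints: Relevant market: 96m per year
```

The virtue of this approach is that each assumption is visible and debatable, rather than buried inside someone else's definition of "the market".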

4. The analysis obsesses about the competitive environment, to the exclusion of all else

Michael Porter's five forces are a very useful checklist, core competence/strategic intent is a useful mindset, the disciplines of market leaders and the 7S framework can help create insight.  I'll keep my thoughts about The Art of War to myself.  These common strategic tools can have value - competitive intensity is usually the most dominant driver of margins - but they're not the whole toolbox.  They usually only cover the competitive side of the picture and miss such fundamental issues as whether demand is shrinking or growing, what customers actually value and plan to pay for, and what your company actually, distinctively offers.

This obsession with macro competitive positioning distracts the analyst or manager from both the demand side, and from the micro analysis that solves the germane issue: what you actually need to do to make more money.

5. The analysis places far too much confidence in forecasts

Most analyses that we see use a single forecast for the future; no scenarios, no what-ifs, no ranges.  Even worse, that forecast is usually a projection of the recent past with little thought to what might drive any changes, or what the leading indicators are, or anything that might tell us what confidence we have in our estimates.

There is one thing that we can be sure about with all of our forecasts, and that is that they will be wrong.  If we took a moment to analyse the success of our historical attempts at forecasting individual markets, we would all be humbled by our enormous margin of error.  As humans, we regularly and grossly overestimate our ability to predict the future.  If our business relies on such forecasting performance, with no margin for the likely large error, then there is a high probability that it will be seriously compromised.

It's therefore wise to be realistic about our ability to forecast and act accordingly.  We have to accept that we cannot predict the future, we can only prepare for it.  So there is much more value in acknowledging our lack of prescience, and developing a series of scenarios.  Of course we need budgets and targets, but the discipline of working through how we can survive the disaster scenario, or how we can generate sufficient capacity in the optimistic case, is of more tangible value than complacently forecasting and hoping that we're right.
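One simple way to replace a single-point forecast is to state an explicit growth assumption for each scenario and see what it implies.  The rates below are invented for illustration; the value is in the questions the spread forces you to ask.

```python
# Three explicit scenarios instead of one point forecast.
# Growth rates are assumptions to be debated, not predictions.

current_revenue = 100.0  # $m
scenarios = {"disaster": -0.15, "base": 0.03, "optimistic": 0.12}

for name, growth in scenarios.items():
    year3 = current_revenue * (1 + growth) ** 3  # compound over 3 years
    print(f"{name:>10}: year-3 revenue ${year3:.0f}m")
# prints:
#   disaster: year-3 revenue $61m
#       base: year-3 revenue $109m
# optimistic: year-3 revenue $140m
```

The question becomes "can we survive the $61m case and serve the $140m case?", rather than "is 3% the right number?".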

6. There is a disconnect between market analysis and sales forecasts

I've lost track of the number of plans I've seen where the market grows at one rate, and the business grows at a completely different rate, almost always faster than the market.  There's rarely any justification for this implied share gain.  It's possible if the company has just entered a market, or if its product is suddenly better, or if it has a new channel, or if it has something else new and advantageous, or if its competitors have decided to lie down and let it win some of their pitches.  Our default position is to assume no share gain unless there's a very good reason to assume otherwise.  Anything else generates cognitive dissonance in our rational analytical minds.
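The implied share gain is easy to make explicit: if the market grows at one rate and the plan grows revenue at another, the ratio of the two compounds into a market-share assumption.  A quick sketch, with hypothetical figures:

```python
# Surface the market-share gain a plan implicitly assumes.
# All figures are hypothetical.

market_growth = 0.04   # market CAGR
plan_growth = 0.15     # revenue CAGR in the business plan
starting_share = 0.10  # current market share
years = 5

# Share evolves by the ratio of business growth to market growth.
implied_share = (starting_share
                 * ((1 + plan_growth) / (1 + market_growth)) ** years)
print(f"Implied share in year {years}: {implied_share:.1%}")
# prints: Implied share in year 5: 16.5%
```

A plan that quietly assumes share moving from 10% to 16.5% should have to say where those six-plus points are coming from, and from whom.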


So there you have it.  Six common mistakes, any one of which can cause a market analysis to be unhelpful, devalued or just plain misleading.  We see many, many more mistakes, but the list is too long to cover in this forum.

I hope by raising them that we have averted some problems and implied some solutions.  I don't have a catch-all unbreakable golden rule for analysing markets effectively.  But my best one is this: business transactions are about providing value and being rewarded for doing so, so to understand the market you need to look for where that value is, and follow the money.


Latitude Partners Ltd
19 Bulstrode Street, London W1U 2JN
www.latitude.co.uk

Monday 19 October 2009

Why Let Facts Spoil the Narrative?

I've just finished reading a very shaky due diligence report, in which one of the key questions to review was the impact of the recession on the company under investigation.  The report author covered the beneficial effects of recession on services (comparable to those of the target company) that people mainly use in their own homes.  The report talked about increases in satellite and cable TV subscriptions, growth in Domino's deliveries, and the trend to "staycations", and implied that, as a result, everything would be OK.

A question kept coming to my mind as I ploughed through this nonsense.  This question was: "But the recession has been happening for about a year - why don't they just look at what's been happening to the company?".  The analyst could have looked at sales, customer churn, average customer value, new sign-ups.  And they could have looked at them before, during, and after recession (now we're coming out of it for a short while).  They could have compared the company's performance to changes in disposable income, or employment, or interest rates, or consumer confidence.  They could have just looked at whether they rose or fell.  The data was available and staring them in the face; but they didn't look at any of the vast array of facts at their disposal, and instead indulged in this staycation narrative.  I'll let you guess whether the facts confirmed, contradicted, or made irrelevant, the report's conclusions.

I kept asking myself why any sane analyst would display such disregard for information.  After some reflection on examples of similar behaviour, here's my conclusion: given the choice between some compelling facts and a compelling narrative, people will often prefer the narrative.  From everyday observation, examples of people ignoring or skimming over facts that might get in the way of a good story are legion.

This preference can be, literally, fatal; and if you'll indulge a longer-than-usual post, I'll illustrate it with a historical example.

A nineteenth century physician called Ignaz Semmelweis analysed the high incidence of maternal mortality from puerperal fever at one of the wards of Vienna General Hospital.  He noticed that puerperal fever incidence was high in wards where the same doctors also conducted post-mortems, and showed that if doctors washed their hands with chlorine solution after working with cadavers, then puerperal fever incidence declined dramatically.

His data is hard to challenge:

[Chart not reproduced: puerperal fever mortality rates before and after the introduction of chlorine handwashing]
Unfortunately, Semmelweis's facts didn't fit the narrative of the day.  Prevailing theories of health related to the balance of the four humours of the body, and the role of "bad air" in the spread of disease.  In fact, his implication, that lack of cleanliness in the surgeon was a cause of the disease spreading, was considered insulting to the gentlemen who administered medicine and surgery.

Semmelweis was roundly criticised, and his observation and recommendations were dismissed by the mainstream, despite their obvious life-saving results.  It was only after Pasteur's work on germ theory became accepted 20 years later that the establishment embraced the findings of the by-then-dead Semmelweis.

So coming back to my point.  There will always be an accepted or acceptable narrative to explain anything, be that the four humours of nineteenth century medicine, or the various dubious adspeak marketing theories we hear today.  We can pretty much guarantee that by blindly following the narrative, we will be proven as gullible, closed-minded and wrong as those olden-day physicians.  Alternatively, we can ignore the narrative for a moment, and just have a quick look at the facts...

Copyright Latitude 2009. All rights reserved.


Related links.

On Ignaz Semmelweis
http://en.wikipedia.org/wiki/Ignaz_Semmelweis#Ideas_ran_contrary_to_established_medical_opinion

On truth, bias and disagreement
http://www.econtalk.org/archives/2009/03/klein_on_truth.html




Wednesday 14 October 2009

So-What (SWOT) Analysis



One thing that makes my palms sweat when reviewing a business plan or strategy document is a SWOT analysis.  Reading one of those two-by-two tables makes me think that my generally very smart, commercial, rational clients have decided to illustrate what they learned at primary school, or have accidentally inserted a no-idea-is-stupid whiteboard printout from the start of a brainstorming session.

I'm not saying that SWOT doesn't have its place at the beginning of the strategy development process; it does, especially if you start with the "O".

O, for opportunity, forces you to take a moment to look around and speculate where the future pools of profit might be, which is especially useful for bringing out those areas that you're currently not doing anything about.

S(trengths) helps you realise where your sources of competitive advantage might lie.

W(eaknesses) forces you to be realistic about what may need improving, and traits that might put you at a disadvantage.

T(hreats) forces you to look at those things coming over the horizon that might sink you below the water line.

This forced lookaround for factors that may be important is, in my experience, the entire benefit of SWOT.  But it's only of value if you go on to test properly which ones are true and material.  Unfortunately, most plans that I see stop with the SWOT output, and bung the list unqualified into the document.  This is worse than useless; it's foolhardy, because it can set in train a series of actions that are based on barely-substantiated speculation.

From the long list of strengths, weaknesses, opportunities and threats that emerge in the SWOT analysis, how do you know which are actually true as opposed to speculation? Which are material and will affect the entire future of your business, and which are pretty much irrelevant?  Which ones should you deliberately not do something about, for example the weakness in high-end products that would kill your cost advantage if you addressed it?  How do you know which opportunities are the ones to put time and money into, and which are the ones to deprioritise?

If you recognise SWOT's limitations, and treat it as a start point, from which you do some testing with facts, then you can create something valuable from this motley list of brainstormed hypotheses.

Start with the opportunities and ask some standard commercial questions.  How big are they? How well positioned are we to exploit them versus everyone else?  How much does it cost to start exploiting each of them?  How sustainable is the profit stream that comes from each?  Which of them is the most valuable use of a dollar of investment or an hour of management time?  If the business case of any one of them stacks up, what do we do next to get there?
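A toy sketch of turning those questions into a comparable ranking.  The opportunity names and figures are invented for illustration; the point is simply to put every brainstormed idea through the same "value per dollar" test.

```python
# Rank brainstormed opportunities by profit per dollar invested.
# All names and figures are invented to illustrate the so-what test.

opportunities = {
    # name: (expected profit $m, investment required $m)
    "enter adjacent segment": (12.0, 4.0),
    "launch premium line":    (6.0, 1.0),
    "new sales channel":      (9.0, 6.0),
}

ranked = sorted(opportunities.items(),
                key=lambda kv: kv[1][0] / kv[1][1],  # profit per $ invested
                reverse=True)

for name, (profit, cost) in ranked:
    print(f"{name}: ${profit:.0f}m / ${cost:.0f}m = {profit / cost:.1f}x")
```

Crude as it is, a table like this forces the conversation from "here are twelve opportunities" to "here is the one we fund first, and why".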

Do the same kind of reality check and so-what test with the strengths, weaknesses and threats.  And you will end up with a short list of credible opportunities and actions, which I promise will pay back the additional time a hundred-fold.

You'll also have fewer business plan readers with sweaty palms asking "so what?".


Copyright Latitude 2009. All rights reserved.


Sunday 11 October 2009

You Can't Take Vision to the Bank

She sat, watching him in the manner of a scientist: assuming nothing, discarding emotion, seeking only to observe and to understand.

A description of Dagny Taggart, “Atlas Shrugged”, by Ayn Rand

Earlier this year I worked with two companies that couldn’t be more different.

Company one is one of the most respected names in the FTSE, and operates in a classic recession-proof sector.  Company two is an unknown business in an unfashionable declining sub-segment of the telecoms sector.

Company one’s management team is smart and sharp, and would be intimidating if they weren’t such pleasant people.  The Directors have a cadre of direct reports who, to my initial and ongoing bemusement, make sure that everything that reaches the Directors is high level, conceptual and visual.  When working with us, one tried to insist that our presentations contained less data and more pictures – pictures for God’s sake!  But, the thing is, these people weren’t acting dysfunctionally – in every meeting with us, the company Directors dwelt on and debated the concepts and vision, and seemed to skip very quickly over all of our data and analysis.

Company two’s management team is one of the most uninspirational I’ve ever met.  The top two Directors could pass as the two main characters in Peep Show.  But these guys love their numbers.  Every question we asked them in our work with them was answered with numbers, supported by a flood of analysis.  The business is managed using a set of KPIs that would have a quantitative analyst in paroxysms of delight.  Everything is tested, everything is monitored.

Company one has had flat sales in a growing market, and so has seen its share decline pretty much continually for the last ten years.  But it now has a striking vision of industry leadership for the future, which might work.  You never know.

Company two has grown revenue and profit more than 20% annually in the time since the management team came on board.  This isn’t from harvesting – new services launched in the last three years now make up about 25% of profit.   Company two’s vision – I’m quoting exactly here – “we’ll try a bunch of things and see what the numbers tell us”.

I think I’ve made my point.  Vision can be appealing; numbers count.



Friday 9 October 2009

Burden of Proof


A long-established business I know is being mauled by its competitors. To get out of an apparently terminal decline, this year it threw millions into consulting fees. Following the consultants’ advice, it plans to throw hundreds of millions into systems and other investments. Management admits that the business case following the advice was built on hard-to-test, unresearched assumptions that could turn out to be very inaccurate. However, the Board made a decision to go ahead and agreed on an implementation plan.

Shortly after, my company was asked to look at the business case for expanding a currently small existing scheme that captures customer information for use in more effective marketing, range development and profitability analysis. The business case for this is a slam dunk: the existing smaller scheme is already highly profitable, investment is minuscule, roll-out can be tested with a low-cost pilot, payback is less than a year, the scheme adds hundreds of millions to shareholder value, and all the high-growth competitors run similar schemes. Without the scheme, the company has no idea about customer profitability or marketing effectiveness, and is at a material disadvantage to competitors who already use the same kind of information from their own schemes to steal its loyal customers.

Management is going to reject the project. Why? Because they may be able to get "almost-as-good" additional customer information as a result of the plan they’ve already decided on (the one that costs hundreds of millions and is based on self-confessed shaky data). And, if that plan turns out well, they may be able to capture many of the same benefits from the scheme we tested. If the company captures these benefits, then the remaining incremental benefits of our scheme are just too small.

So management is going to make a terrible decision. This isn’t because of bad economics – I don’t disagree with the marginal benefit argument. They’re going to make a bad decision because of where they place the burden of proof. They’ve taken as read the shaky assumptions from the decision they’ve already made, and put the burden of proof on how another scheme can improve on that.

The lesson for us in all of this? We tend to place the burden of proof on the uncomfortable choice: the new, the unfamiliar, the thing that may cause us to change our ways. We won’t make good decisions unless we put the same burden of proof on all of our options, including what might happen if we don’t change our ways.

If we’re the frogs floating in the slowly-heating pan of water, when are we going to look at the risk of staying in the pan as hard as we look at the risk of jumping out?

