What is a Chief Growth Officer?

Marketing Week reports that a number of companies have appointed Chief Growth Officers, e.g. Coty, Colgate-Palmolive, and Coca-Cola.  So what is a Chief Growth Officer?  Well, there are (at least) three options.

  1. It can be a new title for Chief Marketing Officer (CMO).  Maybe it’s a better title, maybe not – I suspect it all depends on the person.  Here, the role is to enhance the company’s marketing capability, to make the marketing department better, wiser, less wasteful, more effective.  And to make the marketing function be seen as capable of contributing to growth and being accountable for growth.  This is a tremendously important role, a never-ending one, where success depends substantially on bringing scientific evidence into the minds (and hearts) of the marketing team.  I wrote about this role previously.
  2. It can be the same role as CMO, but with the additional responsibility of the sales team.  Is this a better model?  I don’t know; I suspect it depends a lot on the implementation.  The idea of marketing and sales reporting to one boss looks attractive – it might help them work together for the good of the brand(s).  Then again, this may simply be too big a job for one person.
  3. The Chief Growth Officer could be a role distinct from CMO.  Marketing capability is at the heart of the competitive performance of many corporations, as they work in increasingly competitive markets.  Mental and physical availability underpin the value of such companies, so the CMO’s role is vital.  But even with excellent marketing, company growth will be stymied if the company isn’t playing in the growing categories, the growing markets/countries, and the growing distribution channels.  The job of the Chief Growth Officer can be to make the company better at making these investment decisions.  In this case, the CGO and CMO work side by side: the CMO builds a better marketing capability, while the CGO works to make the organisation better at deciding where to apply this capability (and resources).

All of the companies listed above are sponsors of the Ehrenberg-Bass Institute, a tribute to how they take marketing and business growth seriously.


Some inconvenient truths about brand image perceptions

A cautionary note….

Marketers spend quite a lot of money tracking perceptions of the brand.  There is some use in gathering this information at least once in a while: if you know how consumers see your brand, you can use this knowledge to craft your advertising (and other things, like packaging) to look like you, so it works more for you and is less likely to mistakenly work for competitors.  But this is not how image tracking is usually used.  Instead, marketers look at small changes in particular brand associations – e.g. we are up a bit on “community minded” but down a bit on “a brand I can trust” – and try to infer some significance.  What do such shifts mean?

Decades of research have documented how attitudinal perceptions (evaluative, i.e. good or bad) strongly reflect the past buying of the respondents in the survey – in effect, simply our market share (if our survey sample is a good one).  Of course attitudes also affect buying, but this effect turns out to be weaker than we used to think, while the effect of buying on brand attitudes is very strong.  So our brand trackers show attitudes improving, but mostly after we gain market share.

Some descriptive perceptions are reasonably straightforward to understand.  If only a third of the population know that we sell men’s as well as women’s shoes then this is going to restrict our men’s shoe sales.

Yet even with these less attitudinal, more descriptive associations, it’s not as clear as we might think.  For example, a supermarket chain might worry about its association with “low price”, on the assumption that being perceived as having “low prices” drives sales – but how much?  It’s not unreasonable to assume that perceptions of “low prices” affect shoppers’ overall attitudes (i.e. a multi-attribute attitudinal model, where improvement on this feature nudges the overall attitude – by how much?).  Alternatively, the perception works in a probabilistic manner: when shoppers happen to think of low prices, or desire low prices, the particular supermarket chain now has more chance of popping into memory as a suitable choice.  But how often and how much this affects behaviour isn’t known (isn’t documented over different conditions).

The truth is that we have practically no knowledge of how, where, or when particular perceptions affect behaviour, or by how much – what is a tiny change worth?  Anyone who claims to know is either lying (trying to fool you) or fooling themselves.

Spider graphs, perceptual maps – none of them tell us how much any perception is worth.

Some analysts use regression-type analyses to determine which perceptions are “drivers” of other perceptions, or of sales movements.  Sadly this is more pseudo-science than science – fitting models of weak correlations to a single set of time-series data, something well known to produce useless predictions (see Armstrong 2011; Dawes et al 2018).  Sales (i.e. behaviour) strongly affect perceptions, so correlations between the two are largely, if not totally, due to behaviour causing the perception.  This powerful causal relationship makes it impossible to quantify how particular perceptions drive other perceptions or sales.  All you get is a bunch of over-fitted models describing spurious relationships.  It’s impossible to tell which model might be useful, not without doing many differentiated ‘replications’, the basic work of science (statistical gymnastics is no shortcut).
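A toy simulation makes the over-fitting danger concrete (entirely made-up data, not any real tracker, and all variable names are mine): regress a short, pure-noise “sales” series on ten pure-noise “perception” series.  The in-sample fit looks respectable even though there is, by construction, nothing to find – and the fitted model is useless on the next batch of data.

```python
# Toy illustration of over-fitting weak correlations in short time series.
# All data here is random noise: there is no real relationship to discover.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_predictors = 24, 24, 10  # e.g. two years of monthly tracking

# "Perception" predictors and "sales" are independent noise.
X = rng.normal(size=(n_train + n_test, n_predictors))
sales = rng.normal(size=n_train + n_test)

# Ordinary least squares on the training window (with an intercept).
Xt = np.column_stack([np.ones(n_train), X[:n_train]])
beta, *_ = np.linalg.lstsq(Xt, sales[:n_train], rcond=None)

def r2(y, yhat):
    """Coefficient of determination."""
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

in_sample = r2(sales[:n_train], Xt @ beta)

# The same model applied to the next window of (equally random) data.
Xh = np.column_stack([np.ones(n_test), X[n_train:]])
out_sample = r2(sales[n_train:], Xh @ beta)

print(f"in-sample R^2 = {in_sample:.2f}, out-of-sample R^2 = {out_sample:.2f}")
```

Typically the in-sample R² here lands well above zero (with 10 predictors and 24 observations its expected value is about 0.4) while the out-of-sample R² hovers near zero or below – pure noise dressed up as “drivers”.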

But we also don’t know how much these shifts in market research response are merely that – shifts in a particular (non-sales) behaviour, i.e. response to survey questions.  For example, for years in Australia Mars used the slogan “a Mars a day helps you work, rest, and play” for their market-leading Mars bar.  So any survey that asks “which chocolate bar helps you work?” will record many responses for Mars bar.  And the more recently Mars have advertised using this slogan, the higher the response will be.  The market really does react to advertising, especially if it is done well – clearly branded, placed in broad-reach media.  So perceptual shifts may be useful in evaluating advertising (see footnote).  But how can we interpret a 3% shift in respondents picking Mars bar for “helps you work”?  How much of this is them just parroting back the advertising versus actually believing that Mars bars help you work?  And even if they did believe it, how would this affect their behaviour?  We simply don’t know.

Meanwhile, we do know that people can learn things and yet never bring these beliefs into play in purchasing situations.

Another related problem is that people learn things about brands largely for identification, not to help them evaluate, or even recall.  For example, lots of us know that Amazon’s book reader is called Kindle.  That we do is good for Amazon, but who has thought about the meaning of the word?  It was actually chosen because Amazon liked the “start a fire” connotation – that’s why the Kindle Fire has the name it has – yet I suspect you never even noticed the connection.  In the same way, no one wonders why McDonald’s has a Scottish name.

My point is that movements in market research surveys are precisely that, and we don’t know what they really tell us about how memories in brand buying situations have changed, let alone how this would affect behaviour/sales.

We have to be humble and realistic about our collective lack of knowledge.

All we have is the qualitative notion that it’s probably better for a supermarket to improve things like the proportion of people who associate it with “low prices”.  So we watch such metrics to check if they start dramatically trending downwards.  Though the reality is that this virtually never happens unless our sales collapse (when all perceptions track downwards) or our prices really fall behind (in both cases it’s unlikely we need market research to alert us!).

Footnote: the Ehrenberg-Bass Institute has done research on how advertising affects image surveys.  We show that it does. And that without adjusting scores for changes in behaviour (because of sampling variation and real things going on in the market) the effect of particular messages can be missed or misinterpreted.

Anyone interested in this cautionary note on the interpretation of brand image associations and attitudes can read more in the chapter on “Meaningful Marketing Metrics” in the textbook “Marketing: theory, evidence, practice” 2nd edition, Oxford University Press 2013.

These patterns in image data have been document over decades, and many many brands, categories, countries eg:
Barwise, T. P. & Ehrenberg, A. 1985. ‘Consumer Beliefs and Brand Usage.’ Journal of the Market Research Society, 27:2, 81-93.

Bird, M., Channon, C. & Ehrenberg, A. 1970. ‘Brand image and brand usage.’ Journal of Marketing Research, 7:3, 307-14.

Romaniuk, J. & Gaillard, E. 2007. ‘The relationship between unique brand associations, brand usage and brand performance: Analysis across eight categories.’ Journal of Marketing Management, 23:3, 267-84.

Romaniuk, J., Bogomolova, S. & Dall’Olmo Riley, F. 2012. ‘Brand image and brand usage: Is a forty-year-old empirical generalization still useful?’ Journal of Advertising Research, 52:2, 243-51.

Mistaking statistical modelling for science

Marketing isn’t the only discipline to have been seduced by the idea that modelling can somehow bypass the hard work of developing empirical laws.  Few seem to realise how heroic an assumption it is that teasing out a few weak correlations can quantify precisely how much [something of interest, e.g. sales] will change in the future when [various other things, e.g. media choices] are altered.

Added to this is the ‘Big Data fallacy’ that adding together bunches of weak correlations will lead to more and more accurate predictions – “once we have enough data, a clever programmer, and a powerful enough computer, we’ll be able to predict everything we want”.  It’s as if chaos theory taught us nothing at all.

The basic work of science is making empirical observations, looking for patterns, and then – once you have found one – looking to see where it holds and where it doesn’t.  This requires lots of replications/extensions over different conditions (e.g. countries, product categories, and so on).  This is how scientific laws are developed, laws that give us the ability to make predictions.  These replications/extensions also tell us which conditions don’t affect the law, and maybe some that do.  This leads to deep understanding of how the world works.  Experiments can then be used to tease out the causal directions and magnitudes – what really affects the pattern, and by how much.  Again, these experiments need to be done carefully, across a range of conditions that might matter.

Yes, this doesn’t sound very glamorous; it takes much time and effort (1% inspiration, 99% perspiration).  Sometimes we get lucky, but generally many, many studies are required – by independent teams, using creatively different approaches – so we can be sure that the empirical phenomenon really does generalise, that it isn’t a fragile result (or a mistake) that only exists in one team’s laboratory.

Unsurprisingly the idea that a computer model could bypass much of this hard work is seductively attractive.

Terribly complicated, yet naive, modelling seems to be everywhere.  In population health, statistical correlations deliver precise estimates that if people eat particular foods (or consume certain amounts of fat, sugar, or alcohol, or spend too long sitting around) then their risk of dying early will be such and such.  There is nothing wrong with this, so long as we recognise the weakness of the method.  Unfortunately these correlations often get handed over to engineers who, with a spreadsheet and a few heroic assumptions about causality, produce model predictions that if the government taxed this, or regulated that, then x million lives would be saved, and $x billion saved in hospital bills.  These predictions need to be treated with a high degree of skepticism.  We need tests before legislation is changed and money spent.

In climate science, a rather new, and until recently very small, discipline, modellers now seem to dominate.  In the 1970s a short period of cooling led to worry about global cooling, but then temperatures turned to rising again, and climate scientists became seriously concerned about the role of rising CO2 levels.  They rushed to develop models, and in the early 1990s they gave their predictions of how much CO2 emissions would lift global temperatures, along with accompanying predictions of oceans rising, ice retreating, polar bears disappearing, and so on.  25 years later they are confronted by the overwhelming predictive failures of these models; that is, the models substantially over-predicted the warming that was supposed to occur, given that CO2 levels have risen (the IPCC, even though they are ‘marking their own homework’, admit this in their assessment).  The modellers are now starting the work of figuring out why.  Meanwhile the forecasting scientists who criticised the climate scientists’ forecasting methods, and predicted this result, have been vindicated.

Models that show wonderful fit to historic data routinely fail in their predictions*.  That’s why we revere scientific laws (and the theories built on them) because they have made predictions that have come to pass, over and over.


* See also Dawes, J. G. 2004. ‘Price changes and defection levels in a subscription-type market: can an estimation model really predict defection levels?’ The Journal of Services Marketing, 18:1, 35-44.

Answering critics

Our critics have been few, and rather kind (nothing of substance has been raised).

Now and then a marketing guru issues a thinly disguised advertisement for their consulting services that tries to have a go at the laws and strategy conclusions in How Brands Grow.  They usually say something like:

“Our data confirms that larger market share brands have much higher market penetration BUT our whizz-bang proprietary metric also correlates with market share, and this proves that it drives sales growth, profits, share price, and whether or not you will be promoted to CMO”.

Often some obscure statistical analysis is vaguely mentioned, along with colourful charts, and buzzwords like:
algorithm
machine learning
emotional resonance
neuroscience

And sexy sounding (but meaningless) metrics along the lines of:
brand love
growth keys
brand velocity
true commitment
loyalty intensity

All of this should raise warning bells amongst all but the most gullible.

Let me explain the common mistakes….

Ehrenberg-Bass say brands grow only by acquiring new customers.
These critics somehow missed the word “double” in Double Jeopardy.  Larger brands have higher penetration, and all their loyalty metrics are a bit higher too, including any attitudinal metrics like satisfaction, trust, bonding… you name it.

Brands with more sales in any time period are bought by more people in that time period.  So if you want to grow you must increase this penetration level.  In subscription markets (like home loans, insurance, some medicines), where each buyer has a repertoire of around 1, penetration growth comes entirely from recruiting new customers to the brand.  In repertoire markets penetration growth comes from recruitment and from increasing the buying frequency of the many extremely light customers who don’t buy you every period.

The “double” in Double Jeopardy tells us that some of the sales growth also comes from existing customers becoming a little more frequent, a little more brand loyal.  Also their attitudes towards the brand will improve a bit, as attitudes follow behaviour.

Improved mental and physical availability across the whole market are the main real world causes of the changes in these metrics.  The brand has become easier to buy for many of the buyers in the market, it is more regularly in their eyesight to be chosen, and more regularly present in their subconscious, ready to be recalled at the moment of choice.

Why does it matter anyway? Can’t we just build loyalty AND penetration?
Yes, that’s what Double Jeopardy says will happen if you grow.

Loyalty and penetration metrics are intrinsically linked.  They reflect the buying propensities of people in the market – propensities that follow the NBD-Dirichlet distribution and Ehrenberg’s law of buying frequencies.  Growth comes from nudging everyone’s propensity up just a little bit.  Because the vast majority of buyers in the market are very light buyers of your brand, this nudge in propensities shows up largely among this group – a lot go from buying you zero times in the period to buying you once, so your penetration metric moves upwards (as do all other metrics, including attitudes).

For a typical brand, hitting even modest sales/share targets requires doubling or tripling quarterly penetration, while lifting the average purchase rate by only a fraction of one purchase occasion.  That tells us that we need to reach seriously beyond ‘loyalists’, indeed beyond current customers, if we are to grow.
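A minimal NBD-style sketch of this arithmetic (my own illustrative parameters, not Ehrenberg-Bass data): give every buyer a gamma-distributed Poisson buying propensity, nudge everyone’s propensity up by the same 25%, and watch where the growth shows up.

```python
# Toy NBD-style market: heterogeneous Poisson buying propensities.
# Parameters are illustrative only -- most simulated buyers are very light.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
lam = rng.gamma(shape=0.3, scale=1.0, size=n)  # mean ~0.3 purchases/period

def metrics(rates):
    """Simulate one period and return (penetration, frequency, sales/head)."""
    buys = rng.poisson(rates)
    penetration = (buys > 0).mean()      # share who bought at least once
    frequency = buys[buys > 0].mean()    # average purchases among buyers
    return penetration, frequency, buys.mean()

pen0, freq0, sales0 = metrics(lam)         # before growth
pen1, freq1, sales1 = metrics(lam * 1.25)  # nudge everyone up 25%

print(f"penetration: {pen0:.3f} -> {pen1:.3f}")
print(f"frequency  : {freq0:.2f} -> {freq1:.2f}")
print(f"sales/head : {sales0:.3f} -> {sales1:.3f}")
```

Even though every buyer got exactly the same proportional nudge, penetration rises proportionally more than purchase frequency does – the Double Jeopardy pattern, driven by the mass of near-zero buyers tipping over into buying once.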

When budgets are limited (i.e. always) it’s tempting to think small and go for low reach, but this isn’t a recipe for growth, or even maintenance.

A focus on penetration ignores emotional decision making.
This is odd logic.  A focus on mental and physical availability explicitly recognises that consumers are quick, emotional decision makers, who make fast, largely unthinking decisions to buy, but who, if asked, will rationalise their decision afterwards.

Ehrenberg-Bass say there is no loyalty.
Really?!  On page 92 of “How Brands Grow” we write:
“Brand loyalty – a natural part of buying behaviour.  Brand loyalty is part of every market”.

On page 38 of our textbook  “Marketing: theory, evidence, practice” we write:
“Loyalty is everywhere.  We observe loyal behaviour in all categories” followed by extensive discussion of this natural behaviour.

In FMCG categories, buyers are regularly and measurably loyal – but to a repertoire of brands, not to a single brand.  And they are more loyal to the brands they see a bit more regularly, and buy a bit more regularly.

All brands enjoy loyalty, bigger brands enjoy a little bit more.

Ehrenberg-Bass analysis was only cross-sectional.
Actually, we published our first longitudinal analysis way back in 2003 (McDonald & Ehrenberg), titled “What happens when brands lose or gain share?”.  This showed, unsurprisingly, that brands that grew or lost share mainly experienced large changes in their penetration.  The report also analysed which rival brands these customers were lost to or gained from.

In 2012 Charles Graham undertook probably the largest longitudinal analysis ever of buying behaviour, examining more than six years of changes in individual-level buying that accompanied brand growth and decline.  This highlighted the sales importance of extremely light buyers.

In 2014 we published a landmark article in the Journal of Business Research showing that sales and profit growth/decline was largely due to over or under performance in customer acquisition, not performance in retaining customers.  Far earlier we had explained that US car manufacturers did not experience a collapse in their customer retention when Japanese brands arrived, they each suffered a collapse in their customer acquisition rates.

But if we can change attitudes then surely that will unlock growth?

It’s rare that it’s a perceptual problem holding a brand back.  Few buyers reject any particular brand (and even most of these can be converted without changing their minds first).  The big impediment to growth is usually that most buyers seldom notice or think of our brand, and that the brand’s physical presence is less than ideal.

For more on “Marketing’s Attitude Problem” see chapter 2 of “Marketing: theory, evidence, practice” (Oxford University Press, 2013).

Attitudes can predict (some) behaviour change.  Light buyers with strong brand attitude were more likely to increase their buying next year.  And heavy buyers with weak brand attitude were more likely to decrease their buying next year.

This is a mistaken interpretation, something that has tripped up a few researchers.  I’ll explain….

First, understand that a snapshot of buying behaviour (even a year) misclassifies quite a few people: some of the lights are normally heavier but happened to be light that particular year; some of the heavies were only heavy that year (a kids’ party, friends visiting, someone dropped a bottle) and next year revert closer to their normal behaviour.  Note: in many product categories just a couple of purchases is enough to move someone into, or out of, the heavy-buyer group.

Second, we must remember that attitudes tend to reflect any buyer’s longer-term norm.

So someone who is oddly heavy in buying this year will tend to be less attitudinally loyal to the brand than ‘regular’ heavies.  Someone who is oddly light this year will tend to be more attitudinally loyal to the brand.  Next year, odds are, these people’s buying moves closer to their norm and their expressed attitude.  It looks like the attitude caused a shift in behaviour, but it’s an illusion.

This statistical ‘regression to the mean’ is not real longer-term change in behaviour of the kind marketers try to create.  Nor does this show that attitudes cause behaviour – their real influence is very weak, while the effect of behaviour on attitudes is much stronger.
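This illusion is easy to reproduce in a simulation (hypothetical numbers, with names of my own choosing): here each person’s “attitude” is simply their stable long-run buying rate, with zero causal effect on behaviour – yet light buyers with strong attitudes duly increase their buying next year, and heavy buyers with weak attitudes decrease theirs, through regression to the mean alone.

```python
# Regression-to-the-mean demo: attitudes mirror stable long-run propensities
# and have no causal effect, yet appear to "predict" behaviour change.
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
rate = rng.gamma(shape=1.0, scale=2.0, size=n)  # stable long-run propensity
attitude = rate                                  # attitude reflects the norm
year1 = rng.poisson(rate)                        # observed buying, year 1
year2 = rng.poisson(rate)                        # same propensities, year 2

light = year1 <= 1                               # light buyers this year
strong = attitude > np.median(attitude)          # "strong brand attitude"

change = year2 - year1
light_strong = change[light & strong].mean()     # lights with strong attitude
light_weak = change[light & ~strong].mean()      # lights with weak attitude
heavy_weak = change[~light & ~strong].mean()     # heavies with weak attitude

print(f"light + strong attitude: {light_strong:+.2f}")
print(f"light + weak attitude  : {light_weak:+.2f}")
print(f"heavy + weak attitude  : {heavy_weak:+.2f}")
```

The “strong attitude” lights grow and the “weak attitude” heavies decline purely because this year’s buying was an unusually low or high draw around an unchanged norm – no attitude caused anything.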

Ehrenberg-Bass analysis is very linear reductionist, whereas we take a quadratic holistic approach.
Really not sure what these critics are talking about, nor perhaps do they.  This is pseudo-science.

I have a super large, super special data set.
Please put the data in the public domain, or at least show the world some easy-to-understand tables of data.  If you want us to consider your claims seriously then please don’t hide behind obscure statistics and jargon.

I have data that shows Ehrenberg-Bass are wrong, but can’t show it.
MRDA – Mandy Rice-Davies applies: “Well, they would say that, wouldn’t they?”