A cautionary note…
Marketers spend quite a lot of money tracking perceptions of their brand. There is some use in gathering this information at least once in a while, because if you know how consumers see your brand you can use this knowledge to craft your advertising (and other things like packaging) to look like you, so it works harder for you and is less likely to mistakenly work for competitors. But this is not how image tracking is usually used. Instead marketers look at small changes in particular brand associations (e.g. we are up a bit on “community minded” but down a bit on “a brand I can trust”) and try to infer some significance. What do such shifts mean?
Decades of research have documented how attitudinal perceptions (evaluative, i.e. good or bad) strongly reflect the past buying of the respondents in the survey – so, simply, our market share (if our survey sample is a good one). Of course attitudes also affect buying, but this effect turns out to be weaker than we used to think it was, while the effect of buying on brand attitudes is very strong. So our brand trackers show attitudes improving, but mostly after we gain market share.
Some descriptive perceptions are reasonably straightforward to understand. If only a third of the population know that we sell men’s as well as women’s shoes then this is going to restrict our men’s shoe sales.
Yet even with these less attitudinal, more descriptive associations, things are not as clear as we might think. For example, a supermarket chain might worry about its association with “low price”, because it assumes that being perceived as having “low prices” drives sales – but how much? It’s not an unreasonable assumption that perceptions of “low prices” probably affect shoppers’ overall attitudes (i.e. a multi-attribute attitudinal model, where improvement on this feature nudges the overall attitude upwards – but by how much?). Alternatively, the perception works in a probabilistic manner: when shoppers happen to think of low prices, or desire low prices, the particular supermarket chain now has more chance of popping into memory as a suitable choice. But how often and how much this affects behaviour isn’t known (it isn’t documented over different conditions).
The truth is that we have practically no knowledge of how much, where, or when particular perceptions affect behaviour – what is a tiny change worth? Anyone who claims to know is either lying (trying to fool you) or fooling themselves.
Spider graphs, perceptual maps – none of them tell us how much any perception is worth.
Some analysts use regression-type analyses to determine which perceptions are “drivers” of other perceptions, or of sales movements. Sadly this is more pseudo-science than science – fitting models of weak correlations to a single set of time-series data, something well known to produce useless predictions (see Armstrong 2011, Dawes et al 2018). Sales (i.e. behaviour) strongly affect perceptions, so correlations between the two are largely, if not totally, due to behaviour causing the perception. This powerful causal relationship makes it impossible to quantify how particular perceptions drive other perceptions or sales. All you get is a bunch of over-fitted models describing spurious relationships. It’s impossible to tell which model might be useful, not without doing many differentiated ‘replications’, the basic work of science (statistical gymnastics is no shortcut).
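The spurious-correlation danger can be made concrete with a small simulation. This sketch uses pure Python and entirely invented noise data (the number of tracking periods and candidate perception measures are arbitrary assumptions): a random “sales” series and fifty random “perception” series, all mutually independent, will still usually yield at least one “driver” with an impressively strong correlation.

```python
# A minimal sketch of why driver analysis on one short time series is
# dangerous: with enough candidate perception measures, some will
# correlate with sales by pure chance. All data here is random noise.
import random
import statistics

random.seed(42)
PERIODS = 24            # two years of monthly tracking (hypothetical)
N_PERCEPTIONS = 50      # candidate "image drivers" in the survey

def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Sales and every "perception" series are independent random noise.
sales = [random.gauss(100, 5) for _ in range(PERIODS)]
perceptions = [[random.gauss(30, 3) for _ in range(PERIODS)]
               for _ in range(N_PERCEPTIONS)]

best_r = max(abs(correlation(p, sales)) for p in perceptions)
print(f"strongest 'driver' correlation found by chance: r = {best_r:.2f}")
```

A correlation that size, reported with a colourful chart, looks like a discovery; here it is guaranteed to be meaningless, because nothing in the data is related to anything else.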
But we also don’t know how much these shifts in market research responses are merely that – shifts in a particular (non-sales) behaviour, i.e. responses to survey questions. For example, for years in Australia Mars used the slogan “a Mars a day helps you work, rest and play” for their market-leading Mars bar. So any survey that asks “which chocolate bar helps you work?” will record many responses for Mars bar. And the more recently Mars have advertised using this slogan, the higher the response will be. The market really does react to advertising, especially if it is done well – clearly branded, placed in broad-reach media. So perceptual shifts may be useful in evaluating advertising (see footnote). But how can we interpret a 3% shift in respondents picking Mars bar for “helps you work”? How much of this is them just parroting back the advertising versus actually believing that Mars bars help you work? And even if they did believe it, how would this affect their behaviour? We simply don’t know.
Meanwhile, we do know that people can learn things and yet never bring these beliefs into play in purchasing situations.
Another related problem is that people learn things about brands largely for identification, not to help them evaluate, or even recall. For example, lots of us know that Amazon’s book reader is called Kindle. That we do is good for Amazon, but who has thought about the meaning of the word? It was actually chosen because Amazon liked the “start a fire” connotation – that’s why the Kindle Fire has the name it has. I suspect you never even noticed the connection, in the same way that no one wonders why McDonald’s has a Scottish name.
My point is that movements in market research surveys are precisely that, and we don’t know what they really tell us about how memories in brand buying situations have changed, let alone how this would affect behaviour/sales.
We have to be humble and realistic about our collective lack of knowledge.
All we have is the qualitative notion that it’s probably better for a supermarket to improve things like the proportion of people who associate it with “low prices”. So we watch such metrics to check if they start dramatically trending downwards. Though the reality is that this virtually never happens unless our sales collapse (when all perceptions track downwards) or our prices really fall behind (and in both cases it’s unlikely we need market research to alert us!).
Footnote: the Ehrenberg-Bass Institute has done research on how advertising affects image surveys. We show that it does, and that without adjusting scores for changes in behaviour (due to sampling variation and real things going on in the market) the effect of particular messages can be missed or misinterpreted.
Anyone interested in this cautionary note on the interpretation of brand image associations and attitudes can read more in the chapter on “Meaningful Marketing Metrics” in the textbook “Marketing: theory, evidence, practice” 2nd edition, Oxford University Press 2013.
These patterns in image data have been documented over decades, across many brands, categories, and countries, e.g.:
Barwise, T. P. & Ehrenberg, A. 1985. ‘Consumer Beliefs and Brand Usage.’ Journal of the Market Research Society, 27:2, 81-93.
Bird, M., Channon, C. & Ehrenberg, A. 1970. ‘Brand image and brand usage.’ Journal of Marketing Research, 7:3, 307-14.
Romaniuk, J. & Gaillard, E. 2007. ‘The relationship between unique brand associations, brand usage and brand performance: Analysis across eight categories.’ Journal of Marketing Management, 23:3, 267-84.
Romaniuk, J., Bogomolova, S. & Dall’Olmo Riley, F. 2012. ‘Brand image and brand usage: Is a forty-year-old empirical generalization still useful?’ Journal of Advertising Research, 52:2, 243-51.
Marketing isn’t the only discipline to have been seduced by the idea that modelling can somehow bypass the hard work of developing empirical laws. Few seem to realise how heroic the assumption is that teasing out a few weak correlations can quantify precisely how much [something of interest eg sales] will change in the future when [various other things, eg media choices] are altered.
Added to this is the ‘Big Data fallacy’ that adding together bunches of weak correlations will lead to more and more accurate predictions – “once we have enough data, a clever programmer, and a powerful enough computer, we’ll be able to predict everything we want”. It’s as if chaos theory taught us nothing at all.
The basic work of science is making empirical observations, looking for patterns, and then…. once you have found one, looking to see where it holds and where it doesn’t. This requires lots of replications/extensions over different conditions (eg countries, product categories, and so on). This is how scientific laws are developed, that give us the ability to make predictions. These replications/extensions also tell us what conditions don’t affect the law, and maybe some that do. This leads to deep understanding of how the world works. Experiments can be used to tease out the causal directions and magnitude, what really affects the pattern and how much. Again these experiments need to be done carefully, across a range of conditions that might matter.
Yes, this doesn’t sound very glamorous, it takes much time and effort (1% inspiration, 99% perspiration). Sometimes we get lucky, but generally many many studies are required. By independent teams, using creatively different approaches – so we can be sure that the empirical phenomenon really does generalise, that it isn’t a fragile result (or a mistake) that only exists in one team’s laboratory.
Unsurprisingly the idea that a computer model could bypass much of this hard work is seductively attractive.
Terribly complicated, yet naive, modelling seems to be everywhere. In population health, statistical correlations deliver precise estimates that if people eat particular foods (or particular amounts of fat/sugar/alcohol, or spend time sitting around) then their risk of dying early will be such and such. There is nothing wrong with this, so long as we recognise the weakness of the method. Unfortunately these correlations often get handed over to engineers who, with a spreadsheet and a few heroic assumptions about causality, produce model predictions that if the government taxed this, or regulated that, then x million lives would be saved, and $x billion saved in hospital bills. These predictions need to be treated with a high degree of skepticism. We need tests before legislation is changed and money spent.
In climate science, a rather new, and until recently very small discipline, modellers now seem to dominate. In the 1970s a short period of cooling led to worry about global cooling, but then temperatures turned around to rising again, and climate scientists started to become seriously concerned about the role of rising CO2 levels. They rushed to develop models and in the early 1990s they gave their predictions for CO2 emissions to lift global temperature, along with accompanying predictions of oceans rising, ice retreating, polar bears disappearing and so on. 25 years later they are confronted by the overwhelming predictive failures of these models, that is, the models substantially over-predicted the warming that was supposed to occur (given that the CO2 levels have risen – the IPCC, even though they are ‘marking their own homework’, admit this in their assessment). The modellers are now starting the work to figure out why. Meanwhile the forecasting scientists who criticised the climate scientists’ forecasting methods, and predicted this result, have been vindicated.
Models that show wonderful fit to historic data routinely fail in their predictions*. That’s why we revere scientific laws (and the theories built on them) because they have made predictions that have come to pass, over and over.
* See also Dawes, J. G. 2004. ‘Price changes and defection levels in a subscription-type market: can an estimation model really predict defection levels?’ The Journal of Services Marketing, 18:1, 35-44.
Our critics have been few, and rather kind (nothing of substance has been raised).
Now and then a marketing guru issues a thinly disguised advertisement for their consulting services that tries to have a go at the laws and strategy conclusions in How Brands Grow. They usually say something like:
“Our data confirms that larger market share brands have much higher market penetration BUT our whizz-bang proprietary metric also correlates with market share, and this proves that it drives sales growth, profits, share price, and whether or not you will be promoted to CMO”.
Often some obscure statistical analysis is vaguely mentioned, along with colourful charts, and buzzwords like:
And sexy sounding (but meaningless) metrics along the lines of:
All of this should raise warning bells amongst all but the most gullible.
Let me explain the common mistakes….
Ehrenberg-Bass say brands grow only by acquiring new customers.
These critics somehow missed the word “double” in Double Jeopardy. Larger brands have higher penetration, and all their loyalty metrics are a bit higher too, including any attitudinal metrics like satisfaction, trust, bonding… you name it.
Brands with more sales in any time period, are bought by more people in that time period. So if you want to grow you must increase this penetration level. In subscription markets (like home loans, insurance, some medicines) where each buyer has a repertoire of around 1, then penetration growth comes entirely from recruiting new customers to the brand. In repertoire markets penetration growth comes from recruitment and increasing the buying frequency of the many extremely light customers who don’t buy you every period.
The “double” in Double Jeopardy tells us that some of the sales growth also comes from existing customers becoming a little more frequent, a little more brand loyal. Also their attitudes towards the brand will improve a bit, as attitudes follow behaviour.
Improved mental and physical availability across the whole market are the main real world causes of the changes in these metrics. The brand has become easier to buy for many of the buyers in the market, it is more regularly in their eyesight to be chosen, and more regularly present in their subconscious, ready to be recalled at the moment of choice.
Why does it matter anyway? Can’t we just build loyalty AND penetration?
Yes, that’s what Double Jeopardy says will happen if you grow.
Loyalty and penetration metrics are intrinsically linked. They reflect the buying propensities of people in the market – propensities that follow the NBD-Dirichlet distribution and Ehrenberg’s law of buying frequencies. Growth comes from nudging everyone’s propensity up just a little bit. Because the vast majority of buyers in the market are very light buyers of your brand this nudge in propensities is seen largely among this group – a lot go from buying you zero times in the period to buying you once, so your penetration metric moves upwards (as do all other metrics, including attitudes).
For a typical brand, hitting even modest sales/share targets requires doubling or tripling quarterly penetration, while only lifting the average purchase rate by a fraction of one purchase occasion. That tells us that we need to seriously reach out beyond ‘loyalists’, indeed beyond current customers, if we are to grow.
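The logic above can be sketched with a toy NBD-style simulation. All numbers here are invented for illustration (population size, gamma shape, average purchase rate); the point is only the qualitative pattern: buyers have stable long-run propensities, purchases in a period are chance draws around those propensities, and nudging every propensity up a little shows up in both penetration and purchase frequency, with most of the movement among the mass of very light buyers.

```python
# A hedged NBD-style sketch (gamma-distributed propensities, Poisson
# purchases) of growth as a small nudge to everyone's propensity.
# Parameters are hypothetical, chosen to give a typically skewed market.
import math
import random

random.seed(7)

def poisson(lam):
    """Knuth's method for a Poisson draw with mean lam."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

N = 100_000
SHAPE, MEAN_RATE = 0.3, 0.5          # heavily skewed: most people buy rarely
rates = [random.gammavariate(SHAPE, MEAN_RATE / SHAPE) for _ in range(N)]

def summarise(scale):
    """Penetration and purchases-per-buyer for one period."""
    buys = [poisson(r * scale) for r in rates]
    buyers = [b for b in buys if b > 0]
    penetration = len(buyers) / N
    freq = sum(buyers) / len(buyers)
    return penetration, freq

pen0, freq0 = summarise(1.0)   # before growth
pen1, freq1 = summarise(1.1)   # everyone's propensity nudged up 10%
print(f"penetration: {pen0:.3f} -> {pen1:.3f}")
print(f"purchase frequency per buyer: {freq0:.2f} -> {freq1:.2f}")
```

Both metrics rise together, as Double Jeopardy predicts: many near-zero buyers tip over into buying once, lifting penetration, while existing buyers buy a touch more often.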
When budgets are limited (i.e. always) it’s tempting to think small and go for low reach, but this isn’t a recipe for growth, or even maintenance.
A focus on penetration ignores emotional decision making.
This is odd logic. A focus on mental and physical availability explicitly realises that consumers are quick emotional decision makers, who make fast largely unthinking decisions to buy, but who if asked will then rationalise their decision afterwards.
Ehrenberg-Bass say there is no loyalty.
Really?! On page 92 of “How Brands Grow” we write:
“Brand loyalty – a natural part of buying behaviour. Brand loyalty is part of every market”.
On page 38 of our textbook “Marketing: theory, evidence, practice” we write:
“Loyalty is everywhere. We observe loyal behaviour in all categories” followed by extensive discussion of this natural behaviour.
In FMCG categories, buyers are regularly and measurably loyal – but to a repertoire of brands, not to a single brand. And they are more loyal to the brands they see a bit more regularly, and buy a bit more regularly.
All brands enjoy loyalty, bigger brands enjoy a little bit more.
Ehrenberg-Bass analysis was only cross-sectional.
Actually, we published our first longitudinal analysis way back in 2003 (McDonald & Ehrenberg), titled “What happens when brands lose or gain share?”. This showed, unsurprisingly, that brands that grew or lost share mainly experienced large changes in their penetration. The report also analysed which rival brands these customers were lost to or gained from.
In 2012 Charles Graham undertook probably the largest longitudinal analysis ever of buying behaviour, examining more than six years of changes in individual-level buying that accompanied brand growth and decline. This highlighted the sales importance of extremely light buyers.
In 2014 we published a landmark article in the Journal of Business Research showing that sales and profit growth/decline was largely due to over or under performance in customer acquisition, not performance in retaining customers. Far earlier we had explained that US car manufacturers did not experience a collapse in their customer retention when Japanese brands arrived, they each suffered a collapse in their customer acquisition rates.
But if we can change attitudes then surely that will unlock growth?
It’s rare that it’s a perceptual problem holding a brand back. Few buyers reject any particular brand (and even most of these can be converted without changing their minds first). The big impediment to growth is usually that most buyers seldom notice or think of our brand, and that the brand’s physical presence is less than ideal.
For more on “Marketing’s Attitude Problem” see chapter 2 of “Marketing: theory, evidence, practice” (Oxford University Press, 2013).
Attitudes can predict (some) behaviour change. Light buyers with strong brand attitude were more likely to increase their buying next year. And heavy buyers with weak brand attitude were more likely to decrease their buying next year.
The real discovery here is that a snapshot of buying behaviour (even a year) misclassifies quite a few people. Some of the lights are normally heavier but were light that particular year. Some of the heavies were just heavy that year (kids’ party, friends visited, someone dropped a bottle) and next year revert closer to their normal behaviour. Note: in many product categories just a couple of purchases is enough to move someone into, or out of, the heavy buyer group.
Attitudes tend to reflect any buyer’s longer-term norm. So someone who is oddly heavy in buying this year will tend to be less attitudinally loyal to the brand than ‘regular’ heavies. Someone who is oddly light this year will tend to be more attitudinally loyal to the brand. Next year, odds are, their buying moves closer to their norm and their expressed attitude.
This statistical ‘regression to the mean’ is not real longer-term change in behaviour of the kind marketers try to create. Nor does this show that attitudes cause behaviour – their real influence is very weak, while the effect of behaviour on attitudes is much stronger.
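A small simulation makes this regression-to-the-mean effect visible. Everything here is hypothetical (the rate distribution and the heavy/light cut-offs are invented): each person has a stable long-run purchase rate that never changes, yet classifying people on one year’s purchases makes the “heavies” appear to decline and the “lights” appear to grow the next year.

```python
# Regression to the mean with NO real behaviour change: stable long-run
# rates, two independent years of Poisson purchase draws, buyers
# classified heavy/light on year 1 alone. Parameters are illustrative.
import math
import random

random.seed(1)

def poisson(lam):
    """Knuth's method for a Poisson draw with mean lam."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

N = 50_000
norms = [random.gammavariate(1.0, 2.0) for _ in range(N)]  # stable norms
year1 = [poisson(r) for r in norms]
year2 = [poisson(r) for r in norms]

def mean(xs):
    return sum(xs) / len(xs)

# Classify on year 1, then follow the same people into year 2.
heavy = [(y1, y2) for y1, y2 in zip(year1, year2) if y1 >= 5]
light = [(y1, y2) for y1, y2 in zip(year1, year2) if y1 <= 1]
heavy_y1, heavy_y2 = mean([h[0] for h in heavy]), mean([h[1] for h in heavy])
light_y1, light_y2 = mean([l[0] for l in light]), mean([l[1] for l in light])
print(f"'heavy' group: {heavy_y1:.2f} purchases in year 1, {heavy_y2:.2f} in year 2")
print(f"'light' group: {light_y1:.2f} purchases in year 1, {light_y2:.2f} in year 2")
```

Both groups drift back toward their norms even though no individual’s underlying propensity changed at all, which is exactly why a single-period classification plus a follow-up survey can look like attitudes “predicting” behaviour change.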
Ehrenberg-Bass analysis is very linear reductionist, whereas we take a quadratic holistic approach.
Really not sure what these critics are talking about, nor perhaps do they. This is pseudo-science.
I have a super large, super special data set.
Please put the data in the public domain, or at least show the world some easy-to-understand tables of data. If you want us to consider your claims seriously then please don’t hide behind obscure statistics and jargon.
I have data that shows Ehrenberg-Bass are wrong, but can’t show it.
Kevin Roberts has resigned (as head coach of Publicis Groupe, executive chairman of Saatchi & Saatchi/Fallon, and member of the Publicis Groupe management board) due to the storm of indignation after he made some mildly politically incorrect comments – he essentially said he thought sexism was worse in other sectors than it was in the advertising industry.
As a feminist I don’t think this does our cause any good, more important issues (e.g. how few female members of parliament we have, child marriage, and FGM) are being drowned out by issues that are easy to sell to the spoilt and trendy. I agree with Joanna Williams’ analysis of the affair.
As a marketing Professor I’m dismayed how, in comparison, the marketing community saw little problem with Kevin Roberts’ decades of Brandlove nonsense; indeed many snapped up his silly book, built their brand metric systems on his ideas, and so on. It seems that marketers find it much easier to identify the politically incorrect than the scientifically incorrect.
Similarly, fellow members of the Publicis management board had no problem with Kevin selling nonsense to advertisers (if it makes money…?). Nor did they have any problem with someone without a marketing degree being “head coach” for their staff, most of whom are young and also without formal marketing training.
No wonder marketers aren’t taken seriously.
It’s tempting to convert brand value into temporary profits or sales.
For publicly traded companies the goal is sometimes to fool the financial markets into thinking the company is healthier than it is. Some managers even say they have to do what they do in order to maintain the share price. The ethics, and even legality, of such practices are questionable. Of course, converting brand value into temporary sales or profits really lowers the value of a company (and so will eventually lower market capitalisation).
Here are the sorts of tricks managers use to hit immediate financial targets.
1. Cut advertising spend. For many packaged goods companies, advertising spend is equal to profits; or put another way, if they didn’t advertise they could post double their normal profits for the year. So this gives management quite a lot of room to fill profit shortfalls simply by reducing advertising spend. Of course this will depress sales, and therefore profit contribution, but the net effect will be a jump in profits. Next year sales will be even lower, however, and will require more advertising to fix, or greater cuts to advertising to hide the reduction in profit contribution. A trick to hide reductions in advertising spend is to claim that marketing mix modelling or digital initiatives are delivering much greater efficiency, so that the company can afford to cut advertising. As a marketing professor I can say that there are very good reasons not to buy this argument – financial analysts should beware of it.
2. More price promotion. The problem with cutting advertising is that sales revenue tends to also decline, probably not by much at first, but eventually the losses start to climb, and brands can risk being de-listed by retailers. So another trick is to replace some of the advertising cut with price promotions. These help fix (hide) sales declines.
3. Call discounts “marketing expenditure”. The problem with price promotions is that while they boost volume sales they decimate profitability and can even lower sales revenue. This can be fixed by registering sales as full-price sales, not the discounted price they were really sold at, and instead booking the discount as a marketing expense (e.g. “trade marketing incentive”). This increased marketing cost can be useful to assuage investment analysts who are worried that the company is inflating sales and profits by cutting the marketing budget (using the first two tricks above).
Few companies fully disclose and break down their marketing expenditure. This makes it easier for them to use such tricks to fool shareholders and potential investors, and to ensure that management hit their performance targets.
It’s not just consumer goods companies that use such tricks. One automotive company told me that it’s common practice for manufacturers to ring their car dealers if they are going to miss a sales target and offer them cash payments if they can sell x number of cars/trucks in the few remaining days in the quarter/year. “Yes, no trouble” says each dealer, and then books a number of sales as they ‘sell’ these cars from one of the dealer’s registered companies to another of the dealer’s own companies. Then a “demo model” sign is placed on the cars along with a new discounted price (in effect paid for by the manufacturer). The car manufacturer hits their sales target, and books full price sales, they just register the money they paid the dealer as a marketing expense (possibly paid for out of the advertising budget). Of course, hitting sales targets next period will be even more difficult because there are just as many unsold cars sitting on the dealers’ lots.
In online advertising there is currently much controversy about charging advertisers for ads that could never be seen by consumers. In November 2014 Google, to their credit, issued a report showing that only around half of the ads served by their servers were ever able to be viewed (e.g. many viewers did not scroll down far enough for the ad to appear on their computer or smartphone screen). Even more extraordinary, this figure of half the ad ‘impressions’ being (potentially) viewable was based on a very generous definition of “viewable”: at least 50% of the ad’s pixels onscreen for one second or more. Unsurprisingly, leading advertisers are calling for higher standards of viewability. In June 2015 Unilever’s Chief Marketing Officer Keith Weed said that only 100% viewability is acceptable – that for an online ad to count as an impression, 100% of the ad needs to be onscreen, not merely served by the web server to the web browser or app.
Of course Keith’s right, we don’t want to pay for vapourware. But we don’t have to: even if new standards of viewability are not agreed upon, we can now easily calculate a figure closer to reality simply by halving the impression score. Or, put another way, by doubling the CPM (cost per thousand impressions).
That gets us much closer to the truth, but not quite there. Another consideration for online video and display ads is that some of the impressions that are served, and paid for by advertisers, are not reaching humans. Audience impression figures are inflated by views from other computers (‘spiders’ and ‘robots’) rather than actual humans. Some of this fake traffic is even fraudulent, where firms collect money for delivering referrals to web sites, inflating their ratings. Fake clicks and video views can also be generated, often by virus software running on the computers of unsuspecting consumers. Major providers of online advertising space such as Facebook and Google have anti-fraud teams devoted to detecting this activity, but it remains a problem. It’s difficult to know how large a problem it is, as many of the people reporting statistics have an interest in over-stating the problem (e.g. firms who sell anti-fraud solutions) or under-stating it (e.g. firms who sell online advertising space). In November 2014 Kraft in the US reported that it rejects more than three quarters of digital ad impressions, deeming them “fraudulent, unsafe or non-viewable or unknown”.
That figure sounds about right given the Google research (others report similar numbers) and the fact that some impressions are non-human.
So about one in four online ad impressions is an actual opportunity to see (OTS) for a real human viewer. However, we can’t assume each ad impression is always actually seen, and therefore able to affect memory; we have to discount for the perceptual filters and inattention of these human beings. This is true for any media: an OTS is an opportunity for our ads to be seen, not a guarantee. Just how much this varies by media, and by situation, is something we are researching now at the Ehrenberg-Bass Institute. It will be some time before we have the solid empirical evidence needed to accurately compare the impact on brains of an OTS in different media. But until then we can still make meaningful comparisons between media at least in terms of OTS, and a good guide for digital seems to be to quadruple the cost of each impression in order to compare it to another medium.
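The adjustment above is simple arithmetic, sketched here with an illustrative quoted price (the $2.00 CPM is invented; the two 50% discount rates follow the viewability and non-human-traffic figures discussed above):

```python
# Back-of-envelope adjustment of a quoted digital CPM to a cost per
# 1,000 real human opportunities-to-see. The quoted CPM is hypothetical;
# the discount rates are the rough figures cited in the text.
quoted_cpm = 2.00      # dollars per 1,000 served impressions (invented)
viewable_rate = 0.5    # ~half of served ads could ever be seen
human_rate = 0.5       # rough share of viewable impressions reaching humans

effective_cpm = quoted_cpm / (viewable_rate * human_rate)
print(f"cost per 1,000 real opportunities-to-see: ${effective_cpm:.2f}")
# With both discounts at 50%, the effective cost is 4x the quoted CPM.
```

With these rough rates the quoted $2.00 CPM becomes $8.00 per thousand real opportunities-to-see, which is the figure that should be lined up against the CPMs of other media.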
Professor Byron Sharp
University of South Australia
PS Google Ad Networks have announced that they are about to change bidding for CPM (per thousand impressions) to vCPM, which means you only pay for viewable ad impressions. Unfortunately viewable still merely means “when 50 percent of your ad shows on screen for one second or longer for display ads, and two seconds or longer for video ads”. So an important step in the right direction, now we have something closer to what would count as an OTS (or impression) in other media.
PPS Google suggest that you’ll need to double your old CPM bids now they are using vCPM. This is in line with the research cited at the start of this article.