Watch the TEDx video on YouTube.
The purpose of advertising is largely to encourage consumers to buy your brand. Controversy still reigns over how this occurs (e.g. attitude shift vs salience), but it is uncontroversial that exposure to a brand’s advertising should increase the propensity (likelihood) to buy that brand – that’s what an advertiser hopes to achieve for their spend.
So this is the behavioural effect of advertising. This is what underpins its sales effect, and so this is what should be measured in order to judge the advertising.
Yet this is hardly ever what is measured.
A little bit of this behavioural nudge shows up as a change in this week’s sales figures. But only a tiny bit, because most of the consumers exposed to the advertising didn’t buy from the category this week. What on earth do this week’s sales figures tell us about the total long-term effect of this bit of advertising? Not much, because we don’t know how much.
Also, this week’s sales figures are a mish-mash of all sorts of other effects, from in-store promotions to competitor advertising. Even if, in a pristine fantasy world, they were affected by your advertising alone, what would they actually show about our advertising? Is it a measure of the sales power of the ad? Of the quality of the media placement? Of the appropriateness of the spend (obviously the sales effect depends heavily on how many people the advertising reaches)?
Many marketers understand that sales figures are a messy, noisy indicator of advertising’s sales power. Largely they are put off by the fact that sales figures show little or no reaction to their new advertising campaign. So they employ proxy measures, like advertising awareness or perception shifts. But these are noisy, messy measures too. Again, even in a fantasy world where they were affected only by your advertising, it still isn’t clear whether they are measuring the quality of the advertising, the media placement, or the appropriateness of the spend. And proxy measures are just that – they are not measures of the behavioural change in buying propensities.
I hope I’ve convinced you that marketers, and market researchers, have largely been barking up the wrong tree for decades. The reason we know so little about the sales effects of advertising – and hence what is good advertising – is that we have been measuring the wrong effects.
Behaviours are what we need to measure. But aggregate-level sales receipts, like weekly/monthly sales figures, are a lousy measure of the sales power of our ads. The solution is true single-source data capturing individuals’ repeat-buying over time as well as their exposure to advertising over time. And fortunately, single-source data is becoming increasingly available.
The problem with the term “frequency”, in media scheduling of advertising exposures (OTS), is that it can refer to more than one thing.
The media agency can report: “last year we achieved 98% reach of your target market with an average frequency of 24”. That sounds as if your advertising was reaching practically everybody, every fortnight – great.
But it means nothing like that. In reality it probably means reaching some people (e.g. heavy TV viewers) many times – more than 100 times – while an awful lot of people received only one or two OTS (that’s opportunities to see) in the entire year.
Sounds scary. But the real point I want to make is that even when we get a report on the typical frequency (e.g. “half of the target consumers received between 4 and 8 exposures”) this can mean around once every two months, or 4-8 in January followed by 11 months of silence. Actually the latter is more likely given many advertising campaigns.
So “frequency” can mean….
a) frequency in the sense of coverage over time, so that consumers don’t forget about us, and so when they make a category purchase the gap since the last time they saw one of our ads isn’t too long
b) frequency in the sense of repeatedly seeing our advertising several times close together so that they can understand and learn the advertisement.
The two sorts of “frequency” are very different from each other.
PS The (b) type of frequency is based on some old, discredited, ideas about learning and advertising.
Mark Ritson has made two bold claims, that BP has negative brand equity, and that the company will fail.
It’s good to see such predictions. In this case it’s a bit of a battle of the branding gurus….
Who will be right? Time will tell. But my sympathies are with YouGov’s prediction, because brand equity is much more than brand attitude – which makes Mark wrong on both counts: BP doesn’t have negative brand equity.
Meanwhile BusinessWeek notes that BP’s share price has jumped 30% this month (July). That’s before they plugged the well. Today they report:
“The share-price gains have restored BP to its position as Europe’s second-biggest oil company by market capitalization after Royal Dutch Shell Group Plc, overtaking Paris-based Total SA. BP had overtaken Shell at the start of the year as shares climbed to a year-high of 655 pence on the day of the Gulf accident.”
So Byron, you’ve posted a couple of comments (one and two) recently to Robert Passikoff’s blog in which you debunk his claims regarding the predictive validity of Brand Keys. What do you have against Brand Keys?
Goodness, nothing. I’ve made these sorts of comments about a number of proprietary brand equity services. Robert’s working hard and doing a great job at getting publicity for his company, and that’s how he attracted my attention. I suppose mine is perhaps unwanted attention, but if you make public claims then you have to expect scrutiny. I’m sure Robert doesn’t take my comments personally.
But you don’t like these brand tracking services?
There is an industry that provides special scores on brands, based on surveying customers. These services mostly claim to be measures of things like brand loyalty or brand equity. They usually have exotic names like commitment model, brand esteem, brand voltage, brand asset valuator. They offer to diagnose whether the brand is sick or not, and maybe to pin-point what is wrong, and suggest what to do about it – though most of the claims made for these services are simply about telling how weak or strong your brand is. Essentially they claim to be able to predict whether the brand is about to gain or lose market share.
I think any claims made for these proprietary products should be subject to independent examination. It’s the job of academics to do this testing.
Some of the claims are so extraordinary, and so important, that they deserve to be checked out. If they turn out to be true, that would be fabulous.
And do these proprietary brand health surveys, these metrics, work?
Well, that’s just the thing. No-one knows. In their sales pitches there are claims of ‘validation’ studies that ‘prove’ they work, but when I look at these studies I find they prove no such thing. Bigger brands have more buyers, who are more likely to say something (nice) about the brand in a survey – and that’s what appears to drive these metrics (that, and sampling and other errors).
But some of these services do claim to be validated by academic studies.
Don’t get me started on this… it dismays me when I see academics cosying up to the providers of these services and offering paid endorsements, or when academics themselves develop proprietary research approaches that they won’t allow others to test.
I don’t see any replicated tests by different teams of independent academics who aren’t being paid for their endorsement.
So this sort of market research is pointless, and we should just look at our sales figures?
Sales figures can be distorted by the stocking levels of the distribution system, but most marketers are well served by market research that accurately tracks sales and market share – and can break down the market share into penetration (numbers of customers) and behavioural loyalty metrics. This sort of market research data is very valuable.
You said something nice about market research.
I say lots of nice things about market research, and market research consultancies. I only sound grumpy when I hear people making bold empirical claims that haven’t been subjected to independent open tests. I don’t like ‘black box’ methodologies, and I especially don’t like ones where they haven’t had (or won’t let) people independently check out their claims.
Are you offering to do this?
Absolutely. I keep making the offer to the people who sell these services. I point out that they have a lot to gain from an independent test. They often agree, but so far, sadly, an excuse always seems to pop up later as to why they can’t send the data, or even a full description of previous analyses.
Presumably the data is commercially valuable or confidential.
Yes, but they could send old data. They could disguise it a bit. They don’t necessarily have to reveal what’s inside the ‘black box’. For example, if they say that their black box predicts when a brand is going to change its sales trajectory, then they should at least make some public predictions and then we can all wait and see how accurate they are.
I guess they have everything to lose and little to gain – especially if their ‘black box’ brand loyalty measure is already selling well to marketers.
That sounds like the same reason that psychics and astrologers tend to avoid independent tests. But I would hope that the market research industry operates to a higher level of ethics, and with a greater respect for science.
Well, I suppose the solution is for the clients of these services to demand independent testing?
Yes, there is nothing stopping marketers from doing this. When they market their own products (like pharmaceuticals) they have to have their benefit claims backed by independent science. They should demand the same from the people who are selling them ‘black box’ market research.
Oxford University Press will be publishing my book early in 2010. It’s available for pre-order in a number of countries – here is a list of online outlets where you can order it.
Science has revolutionized every discipline it has touched, now it is marketing’s turn!! All marketers need to move beyond the psycho-babble and read this book… or be left hopelessly behind.
Chief Marketing Officer,
The Coca-Cola Company
Myths continue to abound that US car brands have suffered a collapse in loyalty. Marketers believe this because they don’t know about the law-like patterns governing loyalty metrics. Put simply, loyalty metrics don’t vary massively between brands, and the variation that does occur depends on market share. Detroit has lost share, but it would have had to lose almost all of its market share for its repeat-rates to plummet. I published an article on this earlier this year, with empirical evidence. Detroit’s real problem is a lack of customer acquisition.
I’ve written before about how silly loyalty ladders are. I’ve been asked: aren’t they harmless, just showing the heterogeneity within any brand’s customer base or market (from non-buyers to highly loyals)?
Here is what is wrong with loyalty (conversion) ladders:
The ratios of non-buyers to light buyers to medium to heavy buyers are perfectly predictable (by the NBD-Dirichlet). So they are set. If a brand gains in share/sales, the ratios all move in a predictable way.
All loyalty ladders do is show these ratios – but they imply that you can change the ratios through particular strategies. This is wrong: the ratios will only change if the brand’s market share increases or decreases.
– Loyalty ladders imply that you should target particular levels of the ladder. This is wrong.
– Loyalty ladders imply that some brands are stronger or weaker – when really they are reporting brand size.
– Loyalty ladders are a waste of money spent on market research and reporting. Most of the tiny changes and differences they report are sampling (and other) error.
– Loyalty ladders imply that awareness is a “one-off battle” – that once someone is aware they will always notice, recognise, and recall your brand. This is nonsense.
– Loyalty ladders imply that 100% loyals are a brand’s most valuable customers, whereas far more volume comes from heavy category buyers who buy a number of brands.
– AND REALLY IMPORTANTLY…..Loyalty ladders distract marketers from the real issue which is how to grow penetration (reach all sorts of category buyers).
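The “set ratios” point can be illustrated with a toy NBD simulation (the NBD, a gamma-mixed Poisson, is the purchase-frequency part of the Dirichlet). Everything here is a sketch: the mean purchase rates, the shape parameter, and the class cut-offs are invented for illustration, not fitted to any real category.

```python
import math
import random

random.seed(7)

def buyer_classes(mean_purchases, shape=0.4, n=50000):
    """Toy NBD (gamma-mixed Poisson) simulation of one brand's annual
    purchases. Returns the proportion of consumers in each buying class."""
    counts = {"non": 0, "light (1-2)": 0, "medium (3-5)": 0, "heavy (6+)": 0}
    for _ in range(n):
        # each consumer has a steady long-run rate, gamma-distributed...
        rate = random.gammavariate(shape, mean_purchases / shape)
        # ...and makes Poisson purchases around that rate (Knuth's method)
        threshold, k, p = math.exp(-rate), 0, 1.0
        while True:
            p *= random.random()
            if p <= threshold:
                break
            k += 1
        if k == 0:
            counts["non"] += 1
        elif k <= 2:
            counts["light (1-2)"] += 1
        elif k <= 5:
            counts["medium (3-5)"] += 1
        else:
            counts["heavy (6+)"] += 1
    return {cls: c / n for cls, c in counts.items()}

# the whole ladder shifts together when the brand's sales rate grows
print("current:      ", buyer_classes(mean_purchases=0.8))
print("after growth: ", buyer_classes(mean_purchases=1.2))
```

When the brand’s mean sales rate rises, every rung moves in the predictable direction at once – fewer non-buyers, more light, medium and heavy buyers – there is no lever that shifts one rung independently of the others.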
Yesterday, Ad Age reported a study showing that US packaged goods brands typically lost more than half of their loyal users last year. Oh no! The sky is falling…next year we’ll have no loyal customers left at all!
“I think what’s surprising is the magnitude of some of the effects,” said Eric Anderson, associate professor of marketing at the Kellogg School of Management at Northwestern, who reviewed the study.
Hmm, yes, surprising. Let’s put our brains into gear here: are we to accept that all these brands, which are essentially stable in market share, lost half of their most loyal customers? There may be a recession on, but this is still nonsense.
The truth is that the analysts misunderstood their own results, because of ignorance of the law-like patterns in brand buying.
The brands haven’t lost most of their loyal customers; the results are simply due to normal random fluctuations in buying, i.e. sampling (in time) variation – something any analyst should be aware of. Nothing real or unusual is going on here.
Read on if you’d like to know why…
The marketing consultants who did the study used their loyalty program ‘panel’ data. They classified a consumer as a ‘brand loyalist’ if the brand represented 70% or more of their 2007 repertoire. If that consumer did not also devote 70%+ of their category buying to that brand in 2008, they were classed as lost (typically about one third were ‘lost’ completely, while the other 20% still bought the brand, just at less than 70% of their repertoire in 2008).
But from one time period to another the brand’s weight in a consumer’s repertoire fluctuates. And this normal fluctuation is what this study mistook for customer defection. These loyals aren’t gone; they’ll be back again next year or the next.
Effectively, their analysis excluded most heavy category buyers, because these households have larger repertoires, so it’s much more difficult for one brand to represent 70% of their buying. Most buyers are light category buyers, and these light buyers are more likely to appear 70%+ loyal. In other words, their analysis is largely a report on lots of buyers who bought the brand once out of 1 category purchase, or twice out of 2, or three times out of 4 – purchases in the loyalty program stores.
Now, all buyers are subject to random fluctuations in their on-going, steady, purchase patterns. Sometimes you buy 3 times a year, sometimes 4. Even if you buy two brands equally it’s seldom ABAB, it’s patterns like ABBABBBAABABAAB. This stochastic variation is normal and follows predictable patterns. This variation means that lots of people who were classed as “loyals” in 2007 fall out of this classification in 2008 – when nothing real has changed in their buying behaviour, and nothing has happened to the brand’s market share.
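A toy simulation makes the point. Give every household a fixed, unchanging buying propensity – a steady category purchase rate and a steady probability of choosing the focal brand – and the 70% classification rule alone manufactures massive apparent ‘defection’. The rates below are invented for illustration, not taken from the Catalina data.

```python
import math
import random

random.seed(42)

def poisson(lam):
    """Knuth's simple Poisson sampler (fine for small rates)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def share_of_requirements(category_rate, brand_prob):
    """One year of buying: Poisson category purchases, each going to the
    focal brand with a fixed probability. Returns the brand's share."""
    n = poisson(category_rate)
    brand = sum(1 for _ in range(n) if random.random() < brand_prob)
    return brand / n if n else 0.0

# 10,000 perfectly stationary households: nothing about their propensities
# changes between year 1 and year 2
loyal_y1 = loyal_both = 0
for _ in range(10000):
    y1 = share_of_requirements(category_rate=4.0, brand_prob=0.5)
    y2 = share_of_requirements(category_rate=4.0, brand_prob=0.5)
    if y1 >= 0.7:
        loyal_y1 += 1
        if y2 >= 0.7:
            loyal_both += 1

print(f"70%+ 'loyalists' in year 1: {loyal_y1}")
print(f"apparent 'defection' rate: {1 - loyal_both / loyal_y1:.0%}")
```

Well over half of the year-1 ‘loyalists’ fail the 70% test in year 2, even though not a single household changed its behaviour – exactly the sort of result the study reported as mass defection.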
PS The study was by Catalina Marketing and the CMO Council. The CMO Council should have known better. Catalina Marketing sell targeted marketing services based on using this loyalty program data – which is a bit odd because this fluctuation seriously undermines the capacity to target consumers based on their loyalty level.
PPS I’ll leave the last word on the Ad Age article to Professor Gerald Goodhardt (co-discoverer of the Dirichlet model):
“After ‘Some brands lost more than a third…… while others held on to more than 60%……’ I stopped reading!”
Ad Age today reports:
Despite the pounding global business is taking, the $2 trillion value of the top 100 brands has held steady, according to Millward Brown’s annual BrandZ report. “Consumers are blaming companies and leaders for the current troubles, not the brands,” said Joanna Seddon, exec VP at Millward Brown, the WPP-owned research company.
Wow, wouldn’t we marketers like to believe that, our assets are still fine, aren’t we good. But to believe this we have to close our eyes and pretend we are in wonderland.
An asset class that has remained immune to the global recession that has wiped trillions of dollars off the value of companies (the same companies that are made up of these brand assets). Hmm. So will WPP stand behind their valuations and be prepared to buy any of these brands at their recession-proof price?! Ah, no, Sir Martin Sorrell isn’t stupid.
This to me is the 13th stroke of the clock (the one that makes you wonder about all that came before). If anyone previously had any faith in the financial quackery that produces BrandZ valuations, then this should bring you back to reality. Perhaps I shouldn’t be so mean as to single out Millward Brown’s BrandZ when there are plenty of other equally fanciful brand equity valuators; it’s just the sort of financial silliness that was practiced by so many (mind you, including some crooks) prior to the bubble bursting. But what annoys me is that it sheds a poor light on marketers – it makes us look arrogant and stupid. We don’t know enough about marketing, yet we think we can take on finance as well.
In 2002 I published, with Malcolm Wright and Gerald Goodhardt, an empirical discovery: repeat-purchase markets are polarized into those that show repertoire patterns of loyalty and those that show subscription patterns, with no markets showing ‘in between’ patterns.
We also found that the Dirichlet model of repeat-purchase fitted both sorts of markets, predicting brands’ loyalty metrics rather well. This was a surprise. It highlights what an achievement this scientific theory is.
Here is the paper for download.
There is also a related test of the boundaries of repertoire markets:
There is a history of discussion amongst marketers about the relative merits and meaning of different awareness measures. Then in 1995 an article was published that appeared to lay all this debate to rest:
Gilles Laurent and colleagues appeared to show that different brand awareness measures were systematically related, simply reflecting different levels of difficulty for respondents (i.e. brand-prompted being easier than unprompted). So the different measures all tapped one construct, and a score on one measure could be used to accurately predict a score on another. We thought that was an incredibly important and practical finding. However, all was not as it seemed.
Nearly a decade later we replicated this research, and extended it to ad awareness. We achieved the same empirical results, but in doing so we were able to see more clearly what the previous research had, and had not, found. The measures tend to vary together, brand to brand, because some brands are much larger and more salient than others, so all their awareness metrics are higher too. However, we also examined the relationships between the awareness metrics for each brand over time. Contrary to Laurent’s conclusion, we found empirically that it isn’t possible to use their model to predict a brand’s score on one metric from its score on another.
So while all these brand awareness measures share something in common they do not perfectly tap one underlying construct. That’s as important a finding as Laurent’s might have been (if it had turned out to be true). Different awareness measures measure (somewhat) different things, even if they are all loosely related to the brand’s overall salience (and market share).
Romaniuk, Jenni, Byron Sharp, Samantha Paech, and Carl Driesener (2004) “Brand and advertising awareness: A replication and extension of a known empirical generalisation” Australasian Marketing Journal, 12 (3), 70-80.
A few years ago Emma Macdonald and I published this work showing the power of familiarity. When we did it, in the late 1990s, there wasn’t a great deal of interest in heuristics, snap judgements, and gut feeling. But today psychologists and behavioural economists are gaining a great deal of attention for their work showing how reluctant consumers are to undertake a lot of cognitive effort when buying.
I’ve often said it is wrong to call much buying “consumer decision making”; it’s more buying (doing) than decision making (thinking).
Macdonald, Emma and Byron Sharp (2000) “Brand Awareness Effects on Consumer Decision Making for a Common, Repeat Purchase Product: A Replication” Journal of Business Research, 48 (Number 1, April), 5-15.
Many market research houses now market a “loyalty ladder” or “loyalty pyramid” product. These dissect a brand’s customer base into 4-6 groups, starting with something like “no awareness” at the bottom and ending with something like “passionate loyals” at the top. This classification is usually based on behaviour (or claimed behaviour), such as the share of category purchases devoted to the brand in question. Some add attitudinal statements into the customer classification. Others, like The Conversion Model, claim to be entirely attitudinal.
All these do is reflect brand size.
Jenni Romaniuk and I developed the concept we called Brand Salience as “the propensity of the brand to be noticed or come to mind in buying situations”.
So how do we think this construct should be measured?
Salience is cue dependent: it is based on the memories associated with the brand, and so different cues have different tendencies to elicit the brand. To measure salience we need to get a handle on these cues. Traditional awareness measures (top of mind etc.) share the common failing that they use only a single cue – the name of the product category. This single cue can’t tell us about the propensity of the brand to come to mind in real-world buying, where a substantial range of cues can trigger noticing/recall of the brand.
Fortunately, we don’t need to measure consumers’ reactions to the full, vast range of cues. In the same way that we don’t need to sample everyone in China to know the Chinese view on a particular topic, just a smaller representative sample, we only need a sample of cues – a sample of brand associations.
We set out the characteristics for choosing cues in Report 41 for corporate members of the Ehrenberg-Bass Institute. We also explain how to measure these brand associations. To test whether the selection and number of cues is adequate to measure Salience, we check that the distribution of survey responses follows the same distribution as repeat-buying of brands (NBD-Dirichlet). This is because we expect Brand Salience to have the same structure as individuals’ brand buying repertoires. So this statistical distribution can be used to shape the set of brand attributes.
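The basic aggregation across a sample of cues is simple, and can be sketched as below. To be clear, the cue wordings, brand names and responses here are all invented for illustration; the actual criteria for choosing cues are in Report 41, and this sketch is just one plausible way to score salience as “how often, across respondents and cues, does the brand come to mind”.

```python
# Hypothetical survey data: for each respondent, which brands came to mind
# for each of a sample of category-entry cues (all names/data invented).
responses = [
    {"quick snack": {"BrandA"}, "for the kids": {"BrandA", "BrandB"}, "on the go": set()},
    {"quick snack": {"BrandB"}, "for the kids": {"BrandB"}, "on the go": {"BrandA"}},
    {"quick snack": {"BrandA"}, "for the kids": {"BrandA"}, "on the go": {"BrandA"}},
]

def salience_scores(responses):
    """Average, over all respondent-cue pairs, how often each brand is
    elicited. A brand linked to more cues in more heads scores higher."""
    brands = {b for r in responses for linked in r.values() for b in linked}
    n_cells = sum(len(r) for r in responses)  # respondent-cue pairs
    return {b: sum(b in linked for r in responses for linked in r.values()) / n_cells
            for b in sorted(brands)}

print(salience_scores(responses))  # BrandA is elicited by 6 of 9 cells, BrandB by 3
```

Note the contrast with a traditional awareness question: a single category-name cue would give each brand only one chance per respondent to surface, whereas sampling several cues lets the breadth of a brand’s memory associations show through.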
So most brand perception tracking surveys can be adapted to measure Brand Salience. The biggest fault we find with existing brand tracking surveys is that they have an emphasis on evaluation, so contain many attributes that don’t measure memory but rather measure attitude (which means they measure past usage). It also means they have a great deal of redundancy. All of this can be fixed.
In addition to the group of attributes used to measure Brand Salience, we encourage adding some descriptive attributes to track the brand’s distinctive assets (e.g. tone, colours, logos, slogans, characters). These can’t be used in the Salience measure because they skew substantially to particular brands (e.g. American or red for Coca-Cola) and so would bias the estimate. But there is value in measuring these perceptions, because they allow communication to be branded (and therefore build salience), and these cues are used by consumers in noticing brands.
Corporate members who are interested in measuring Brand Salience, or mining their tracking data to produce Salience metrics should contact Jenni.