The decline of science in marketing

My colleague Dr Jenni Romaniuk has just met with the famous US professor David Schmittlein. She sought his guidance on the US academic scene, who we (the Ehrenberg-Bass Institute) might collaborate with, and so on. David generously gave his time and frank views.

Sadly she wrote to me:

“He was quite pessimistic about the top US journals and the meaningfulness of the research that is published there. He sees the marketing research orthodoxy polarising into two camps – the applied economists and the cognitive scientists. Neither of which look to the ‘real world’ for guidance on modelling.”

So someone on the inside of the US system shares our assessment. Actually it’s not an uncommon view, particularly among older marketing academics. What is happening to our discipline?


Brand Keys (and other brand equity monitors) can’t predict a brand’s future

There are a number of market research products that claim to predict a brand’s future. Some even make the outrageous claim that they can predict a company’s stock price, which makes you wonder why these people are still doing the hard work of selling surveys: why aren’t they stock market billionaires by now?

Brand Keys is one such market research agency. I asked them for evidence for their predictive claims and they were nice enough to point to documentation in their book (and many subsequent conference presentations). But when I looked at the public evidence (it wasn’t hard, I just used Google) I found that the changes in brand rank in their Customer Loyalty Index occurred after real marketplace changes, not before, as they had implied.

Below is the email I sent to Brand Keys outlining the evidence; I received no reply. I don’t mean to single out Brand Keys. Their rivals in the brand equity business are no better – I have seen no evidence that such surveys can predict a brand’s future. There is also no good reason to think they should, or could.

Dear Robert

Thank you for sending the slide, I also bought your book and have read it, including the Starbucks case study. Unfortunately the evidence does not support the assertion that Brand Keys is able to predict changes in trends ahead of time.

The book and slide give a selective group of different metrics which are supposed to tell a story of Brand Keys predicting, at the start of 2007, Dunkin Donuts awaking from its slumber and Starbucks ending its growth run. It would be impressive if there were evidence of Brand Keys predicting a change in trend for either brand ahead of time, but the evidence says otherwise.

Dunkin Donuts began its resurgence in 2003 (reported by BusinessWeek), long before the 2007 you predicted. By August 2004 it had posted an annual 6.9% increase in same-store sales and opened 423 new stores, and hence a 14% increase in overall sales. Back then Starbucks posted a 10% increase in same-store sales, but that was their last year of rises in same-store growth, i.e. things started going sour for them in 2004 (when you rated them as fantastic).

Perhaps your 2007 prediction of decline referred to Starbucks’ overall sales revenue – but in 2007 (the year they slipped on your ranking) they posted a 22% increase in sales revenue.

Perhaps you meant to predict a change in Starbucks’ share price – but it started declining in 2006, i.e. before you predicted any change in trajectory. Perhaps you meant same-store sales – but, as I said, that growth trend ended after 2004, and actually went negative in 2008 (after practically no change in the Brand Keys score).

Perhaps you meant profits – but these dropped only in 2008, and rose again the next year.

Perhaps you meant market share – but Starbucks has led Dunkin Donuts throughout all this period (and still does). Yes, Dunkin Donuts has been growing for a long time now, opening stores where it had none. Yes, Starbucks opened too many stores, especially overseas (it eventually happens to most companies on an expansion drive). Yes, Starbucks got hit by the housing crunch (with big exposure to California and Florida). But in mid 2009 Starbucks posted a turnaround in same-store sales growth, achieving record quarterly earnings for the last 3 months of 2009 – note that this was before the Brand Keys ranking for Starbucks rose from 3rd to 2nd.

So what predictive claim are you making? The facts suggest a rear-view mirror on a host of performance metrics. Please do tell me if I’ve missed some important facts.


Measuring Advertising Sales Effects

The purpose of advertising is largely to encourage consumers to buy your brand. Controversy still reigns over how this occurs, e.g. attitude shift vs salience, but it is uncontroversial that exposure to a brand’s advertising should increase the propensity (likelihood) to buy that brand – that’s what an advertiser hopes to achieve for their spend.

So this is the behavioural effect of advertising. This is what underpins its sales effect. So this is what should be measured in order to judge the advertising.

Yet this is hardly ever what is measured.

A little bit of this behavioural nudge shows up in a change in this week’s sales figures. But only a tiny bit, because most of the consumers exposed to the advertising didn’t buy from the category this week. What on earth do this week’s sales figures tell us about the total long-term effect of this bit of advertising? Not much, because we don’t know how much of that effect they capture.

Also, this week’s sales figures are a mishmash of all sorts of other effects, from in-store promotions to competitor advertising. Even if, in a pristine fantasy world, they were affected by your advertising alone, what would they actually show about that advertising? Is it a measure of the sales power of the ad? Of the quality of the media placement? Of the appropriateness of the spend (obviously the sales effect depends heavily on how many people the advertising reaches)?

Many marketers understand that sales figures are a messy, noisy indicator of advertising’s sales power. Largely they are put off by the fact that sales figures show little or no reaction to their new advertising campaign. So they employ proxy measures, like advertising awareness or perception shifts. But these are noisy, messy measures too. Again, even in a fantasy world where they were affected only by your advertising, it still isn’t clear whether they measure the quality of the advertising, the media placement, or whether the spend was appropriate. And proxy measures are just that: they are not measures of the behavioural change in buying propensities.

I hope I’ve convinced you that marketers, and market researchers, have largely been barking up the wrong tree for decades.  The reason we know so little about the sales effects of advertising – and hence what is good advertising – is that we have been measuring the wrong effects.

Behaviours are what we need to measure. But aggregate-level sales receipts, like weekly/monthly sales figures, are a lousy measure of the sales power of our ads. The solution is true single-source data capturing individuals’ repeat-buying over time as well as their exposure to advertising over time. And fortunately, single-source data is becoming increasingly available.
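To make the shape of a single-source analysis concrete, here is a minimal Python sketch with invented panel records (every record and number is hypothetical): it compares the buying rate among panellists who had an opportunity to see the ad with the rate among those who did not.

```python
# Toy single-source panel: for each panellist, whether they were exposed
# to the brand's advertising, and whether they bought the brand afterwards.
# All records are invented for illustration.
panel = [
    {"exposed": True,  "bought": True},
    {"exposed": True,  "bought": False},
    {"exposed": True,  "bought": True},
    {"exposed": True,  "bought": False},
    {"exposed": False, "bought": False},
    {"exposed": False, "bought": True},
    {"exposed": False, "bought": False},
    {"exposed": False, "bought": False},
]

def buying_rate(records, exposed):
    """Share of the exposed (or unexposed) group who bought the brand."""
    group = [r for r in records if r["exposed"] == exposed]
    return sum(r["bought"] for r in group) / len(group)

exposed_rate = buying_rate(panel, True)      # 2 of 4 bought
unexposed_rate = buying_rate(panel, False)   # 1 of 4 bought
print(f"exposed {exposed_rate:.2f} vs unexposed {unexposed_rate:.2f}, "
      f"uplift {exposed_rate / unexposed_rate:.1f}x")
```

In real single-source work the comparison is done within the same individuals over time, with controls for the fact that heavier category buyers also tend to be heavier media consumers; this sketch only shows the basic comparison of buying propensities that aggregate sales figures cannot give you.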

Do brand loyalty, commitment, engagement metrics work?

So Byron, you’ve posted a couple of comments (one and two) recently to Robert Passikoff’s blog where you debunk his claims regarding the predictive validity of Brand Keys. What do you have against Brand Keys?

Goodness, nothing. I’ve made these sorts of comments about a number of proprietary brand equity services. Robert is working hard and doing a great job of getting publicity for his company, and that’s how he attracted my attention. I suppose perhaps I’m unwanted attention, but if you make public claims then you have to expect scrutiny. I’m sure Robert doesn’t take my comments personally.

But you don’t like these brand tracking services?

There is an industry that provides special scores on brands, based on surveying customers. These services mostly claim to be measures of things like brand loyalty or brand equity. They usually have exotic names like commitment model, brand esteem, brand voltage, brand asset valuator. They offer to diagnose whether the brand is sick or not, perhaps to pinpoint what is wrong, and to suggest what to do about it – though most of the claims made for these services are simply about telling how weak or strong your brand is. Essentially they claim to be able to predict whether the brand is about to gain or lose market share.

I think any claims made for these proprietary products should be subject to independent examination.  It’s the job of academics to do this testing.

Some of the claims are so extraordinary, and so important, that they deserve to be checked out. If they turn out to be true, that would be fabulous.

And do these proprietary brand health surveys, these metrics, work?

Well that’s just the thing.  No-one knows.  In their sales pitches there are claims of ‘validation’ studies that ‘prove’ they work but when I look at these studies I find they prove no such thing.  Bigger brands have more buyers who are more likely to say something (nice) about the brand in a survey – and that’s what appears to drive these metrics (that and sampling and other errors).

But some of these services do claim to be validated by academic studies.

Don’t get me started on this… what dismays me is when I see academics cosying up to the providers of these services and offering paid endorsements, or where academics themselves develop proprietary research approaches that they won’t allow others to test.

I don’t see any replicated tests by different teams of independent academics who aren’t being paid for their endorsement.

So this sort of market research is pointless; we should just look at our sales figures?

Sales figures can be distorted by the stocking levels of the distribution system, but most marketers are well served by market research that accurately tracks sales and market share – and can break down the market share into penetration (numbers of customers) and behavioural loyalty metrics.  This sort of market research data is very valuable.

You said something nice about market research.

I say lots of nice things about market research, and market research consultancies. I only sound grumpy when I hear people making bold empirical claims that haven’t been subjected to independent open tests. I don’t like ‘black box’ methodologies, and I especially don’t like ones whose providers haven’t had (or won’t let) anyone independently check their claims.

Are you offering to do this?

Absolutely. I keep making the offer to the people who sell these services. I point out that they have a lot to gain from an independent test. They often agree, but so far, sadly, an excuse always seems to pop up later as to why they can’t send the data or even a full description of previous analyses.

Presumably the data is commercially valuable or confidential.

Yes, but they could send old data. They could disguise it a bit. They don’t necessarily have to reveal what’s inside the ‘black box’. For example, if they say that their black box predicts when a brand is going to change its sales trajectory, then they should at least make some public predictions and then we can all wait and see how accurate they are.

I guess they have everything to lose and little to gain – especially if their ‘black box’ brand loyalty measure is already selling well to marketers.

That sounds like the same reason that psychics and astrologers tend to avoid independent tests. But I would hope that the market research industry operates to a higher level of ethics, and with a greater respect for science.

Well I suppose the solution is for the clients of these services to demand independent testing?

Yes, there is nothing stopping marketers from doing this.  When they market their own products (like pharmaceuticals) they have to have their benefit claims backed by independent science.  They should demand the same from the people who are selling them ‘black box’ market research.

How Brands Grow book now available for pre-order

Oxford University Press will be publishing my book early in 2010.  It’s available for pre-order in a number of countries – here is a list of online outlets where you can order it.

Science has revolutionized every discipline it has touched, now it is marketing’s turn!!  All marketers need to move beyond the psycho-babble and read this book… or be left hopelessly behind.
Joseph Tripodi,
Chief Marketing Officer,
The Coca-Cola Company

How Brands Grow by Dr Byron Sharp

Consumption Situations – some perspective

It’s important to know when consumers consume your brand. Do they use it largely as a morning snack or for sharing with friends?

However, some marketers overestimate the degree to which their brand is confined to a particular situation, used for a particular purpose. Worse, they market in such a way as to make it a ‘self-fulfilling prophecy’, hemming the brand into one situation by advertising nothing else.

In the same way that product categories can be too narrowly defined based on product features (e.g. the chocolate vs vanilla ice-cream categories), categories based on consumption situation can lull marketers into a false sense of limited competition, e.g. nonsense like “Kit-Kat doesn’t compete with Snickers because Kit-Kat is for taking a break whereas Snickers is to satisfy a hunger craving”.

The reality is that few brands are exclusively bought for specific consumption situations, and which brands are bought for which situation varies between consumers and over time.

Yes, the same person in the same situation can choose different brands on different buying occasions.

What’s wrong with loyalty ladders?

I’ve written before about how silly loyalty ladders are. I’ve been asked: aren’t they harmless, just showing the heterogeneity within any brand’s customer base or market (from non-buyers to highly loyal buyers)?

Here is what is wrong with loyalty (conversion) ladders:

– The ratios of non-buyers to light, medium and heavy buyers are perfectly predictable (by the NBD-Dirichlet). So they are set. If a brand gains in share/sales, the ratios all move in a predictable way.

– All loyalty ladders do is show these ratios – but they imply that you can change the ratios through particular strategies. This is wrong; the ratios will only change if the brand’s market share rises or falls.

– Loyalty ladders imply that you should target particular levels of the ladder.  This is wrong.

– Loyalty ladders imply that some brands are stronger or weaker – when really they are reporting brand size.

– Loyalty ladders are a waste of money spent on market research and reporting.  Most of the tiny changes and differences they report are sampling (and other) error.

– Loyalty ladders imply that awareness is a “once-off battle”, that once someone is aware they will always notice, recognise and recall your brand – this is nonsense.

– Loyalty ladders imply that 100% loyals are a brand’s most valuable customers, whereas far more volume comes from heavy category buyers who buy a number of brands.

– AND REALLY IMPORTANTLY: loyalty ladders distract marketers from the real issue, which is how to grow penetration (reaching all sorts of category buyers).
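The “predictable ratios” point at the top of this list can be illustrated with a minimal Python sketch using the NBD, the purchase-frequency model inside the Dirichlet. The brand figures are invented for illustration; the point is that once the mean purchase rate and shape parameter are fixed, the split into non-, light and heavier buyers falls straight out of just those two numbers:

```python
from math import exp, lgamma, log

def nbd_pmf(x, m, k):
    """Probability that a consumer makes x purchases in the period,
    under the Negative Binomial Distribution with mean purchase
    rate m and shape parameter k (computed via log-gamma for stability)."""
    return exp(lgamma(k + x) - lgamma(k) - lgamma(x + 1)
               + k * (log(k) - log(k + m))
               + x * (log(m) - log(k + m)))

# Hypothetical brand: mean 0.8 purchases per quarter, shape 0.3.
m, k = 0.8, 0.3
non_buyers = nbd_pmf(0, m, k)                       # bought zero times
light = sum(nbd_pmf(x, m, k) for x in (1, 2))       # bought once or twice
heavier = 1.0 - non_buyers - light                  # bought three or more
print(f"non-buyers {non_buyers:.2f}, light (1-2) {light:.2f}, "
      f"heavier (3+) {heavier:.2f}")
```

Change the mean rate m (i.e. grow or shrink the brand) and all three groups shift together in the predictable way: there is no lever that moves one rung of the ladder on its own.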

How to join as a Corporate Sponsor of the Ehrenberg-Bass Institute

Readers of this blog have reminded me that I’ve never mentioned how to join as a corporate sponsor.

Sponsors pay an annual contribution which is pooled into a serious R&D budget. For this sponsorship you gain immediate access to all the Institute’s reports and in-house briefings (plus direct access to the researchers). We normally provide one live in-house briefing per year, but more can be arranged.

This web page has more details.

Here is a list of the Ehrenberg-Bass Institute’s sponsors from around the world.

Loyalty Program Misleading Effects

The Journal of Marketing last year (2007) published an article titled “The Long-Term Impact of Loyalty Programs on Consumer Purchase Behavior and Loyalty” by Yuping Liu. It purports to show the impact of a loyalty program on the buying rates and loyalty of those who join the program. The key finding is that very large changes are observed for the lighter and moderate buyers in the loyalty program while the heaviest buyers exhibited no change.

However, this finding, and the consequently very large sales effects that the program seemed to generate, are actually artifacts of the analysis method.

Review of “How Customers Think” by Gerald Zaltman: This book talks a lot about insight but doesn’t deliver much.

Disappointing.  If you have read some bestsellers touching on recent findings in neuroscience (e.g. Antonio Damasio) and memory (e.g. Daniel Schacter) then what’s left of this book for you is largely an advertisement for Zaltman’s commercial and patented (!!) market research technique called ‘Zaltman’s metaphor elicitation’.

Yes, there are good reasons to doubt focus groups…

Does advertising only work via driving intentions and preference? No!

Apart from a very small amount of direct response advertising, advertising works (to generate sales) through memories. This is an uncontroversial statement, yet it’s common for marketers and academics to forget the essential role of memory and instead think advertising works largely through persuasive arguments, rational or emotional, that shift brand evaluations.

The dominant way that advertising works is by refreshing, and occasionally building, memory structures that improve the chance of the brand being recalled and/or noticed in buying situations, and hence bought. Memory structures such as what the brand does, what it looks like, where it’s available, when it’s consumed, where it’s consumed, by whom, with whom, and so on. Associations with cues that can bring the brand to mind.

Some advertising creates a purchase intention, gaining a reaction like “I should buy that” or “that’s interesting, I must check that out”. It’s commonly assumed that such advertising must be more sales-effective, but this does not follow.

A problem with ad awareness norms to assess advertising quality

It is now common for market research agencies to promise their clients norms against which they can compare their advertising campaign.  For example, they might report…

“The new campaign for Fabulo achieved 37% ad awareness; this compares well to the average of 31% for new campaigns after 3 weeks”.

This sounds like good practice, but the norm is meaningless.

Better yet, the research agency might compare against campaigns in a particular product category, or adjust for a particular GRP/TARP weight. But this still isn’t good enough: GRPs (Gross Rating Points) tell us nothing about the reach and frequency of the campaign.

Worse still, the metric confounds media strategy effects and advertisement quality effects. What is really needed is measurement immediately after the ad goes into the market, of just those consumers who had a potential exposure (an opportunity to see, or OTS). This can measure the ability of the advertisement to cut through and impact on memory structures, i.e. assess the quality of the advertisement live in-market. Only then, when you know whether the ad itself is working well or not, can you later use ad awareness metrics to evaluate the media strategy.
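The GRP point is simple arithmetic: GRPs are reach multiplied by average frequency, so the same GRP weight can hide very different exposure patterns. A small sketch with invented media plans makes this plain:

```python
# GRPs = reach (%) x average frequency among those reached.
# Two hypothetical media plans with identical GRP weight:
plans = {
    "broad-reach plan":  {"reach_pct": 80, "avg_freq": 2.5},
    "narrow-blast plan": {"reach_pct": 25, "avg_freq": 8.0},
}
for name, p in plans.items():
    grps = p["reach_pct"] * p["avg_freq"]
    print(f"{name}: {grps:.0f} GRPs "
          f"({p['reach_pct']}% reach x {p['avg_freq']} average frequency)")
# Both plans deliver the same GRP weight, yet one reaches 80% of
# consumers and the other only 25% -- which is why an awareness norm
# adjusted only for GRPs still says nothing about reach and frequency.
```

This is why a norm adjusted for GRP weight alone cannot separate a strong ad shown to few people from a weak ad shown to many.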

The concept of brand awareness has been hijacked by poor measures

When marketers first came up with the very worthy concept of brand awareness they were thinking, obviously, about the number of consumers who know the brand. Intuitively you would measure this by showing the brand to consumers and asking them if they are familiar with it, but last century this was expensive: phone surveys were cost-effective, but the brand couldn’t be shown (and printing pictures in mail surveys was expensive).

So the measures of brand awareness rapidly became verbal/written product category prompts, e.g. “what brands of fabric conditioner are you aware of?” The problem with this type of measure is that it doesn’t really fit the concept. It doesn’t so much measure awareness as association of the brand with the product category cue. It also assumes that consumers can remember and say or write the brand name.

Snake (oil) and loyalty ladders

Many market research houses now market a “loyalty ladder” or “loyalty pyramid” product. These dissect a brand’s customer base into 4-6 groups, starting with something like “no awareness” at the bottom and ending with something like “passionate loyals” at the top. This classification is usually based on behaviour (or claimed behaviour), such as share of category purchases devoted to the brand in question. Some add attitudinal statements into the customer classification. Others, like The Conversion Model, claim to be entirely attitudinal.

All these do is reflect…