What causes the Double Jeopardy law?

I was recently asked for a causal explanation of marketing’s Double Jeopardy pattern.

This is discussed in How Brands Grow (e.g. table 3.3 and surrounding text). Also see page 113 of my textbook. Though the most complete explanation is in the forthcoming “How Brands Grow part 2”.

It’s worth noting that causal explanations turn out to be ‘in the eye of the beholder’… e.g. what caused that window to break?
… the speed and mass of the ball resulting in sufficient force to break the molecular bonds in the glass of that window
… Jonny playing baseball on the front lawn when his Mum told him not to
… the wind, the pitch, the sun in Jonny’s eyes
… the Smiths skimping and not installing double glazing, ignoring their builder’s advice

All are better or worse explanations, depending on your point of view.

It’s the same for Double Jeopardy.

One explanation is simply that it’s a scientific law: it describes a bit of the universe, and that’s it… it’s simply how the world is. We don’t tend to ask why there is an equal and opposite reaction for every action (Newton’s third law); there just is.

The statistical explanation of Double Jeopardy is that it is a selection effect, which arises because brand share depends largely on mental and physical availability rather than on the differentiated appeal of different brands. For marketers this is pretty important, pretty insightful: we wouldn’t get Double Jeopardy if brands were highly differentiated, each appealing to a different segment of the market. Since we do see Double Jeopardy all over the place, real-world differentiation must be pretty mild. Mental and physical availability must be a much bigger story than differentiation. That’s a very important insight.
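To see how a pure selection effect produces this pattern, here is a minimal simulation sketch (all brand shares and parameters are hypothetical): every consumer chooses from the same brands with the same probabilities, i.e. zero differentiation, yet the small brands end up with both fewer buyers and lower buying rates among those buyers.

```python
import random

random.seed(42)

N, T = 10_000, 12  # consumers, purchase occasions each (hypothetical)
shares = {"A": 0.50, "B": 0.30, "C": 0.15, "D": 0.05}  # hypothetical brand shares

brands = list(shares)
weights = list(shares.values())

# Every consumer draws from the same choice probabilities: no differentiation at all.
counts = {b: [] for b in brands}
for _ in range(N):
    purchases = random.choices(brands, weights=weights, k=T)
    for b in brands:
        counts[b].append(purchases.count(b))

for b in brands:
    buyers = [c for c in counts[b] if c > 0]
    penetration = len(buyers) / N
    rate = sum(buyers) / len(buyers)  # average purchases per buyer
    print(f"{b}: share={shares[b]:.0%}  penetration={penetration:.0%}  buying rate={rate:.1f}")
```

Smaller brands come out lower on both penetration and buying rate, the Double Jeopardy pattern, even though no brand has any differentiated appeal; the loyalty deficit is a statistical consequence of simply having fewer buyers.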


Behaviours can be useful predictors of other behaviours

MCDONALD, Heath, CORKINDALE, David & SHARP, Byron 2003. Behavioral versus demographic predictors of early adoption: a critical analysis and comparative test. Journal of Marketing Theory and Practice, 11, 84-95.

click here to download


Abstract

Predicting which consumers will be amongst the first to adopt an innovative product is a difficult task but is valuable in allowing effective and efficient use of marketing resources. This paper examines the accuracy of predictions made about likely first adopters based on the most widely accepted theory and compares them to predictions made by examining the relevant past behavior of consumers. A survey of over 1000 consumers examined adoption of an innovative technology: compact fluorescent lightglobes. The results show that variables which were derived from a utility and awareness perspective were a more accurate and managerially useful predictor than the demographic variables derived from the widely accepted theory based on the work of Rogers. It is suggested that these alternative variables could be utilized more readily by marketing managers in many circumstances.

Consideration sets for Banking and Insurance purchases

Dawes, J., Mundt, K. & Sharp, Byron. 2009. Consideration sets for financial services brands. Journal of Financial Services Marketing, vol. 14, pp. 190-202.

ABSTRACT This study examines the extent of consumer information search and consideration of financial services brands. It uses data from two surveys of purchasing behavior. This study finds a surprisingly low level of consumer consideration, either by personal enquiry or via the internet. The most common consideration set comprised only one brand, and this was the case for both high-value and low-value services. The managerial implication is that services marketers should make brand salience a top priority, with the competitiveness of their offer not being the primary driver of sales. If a financial services brand is salient to a consumer, there is a very high chance they will purchase that brand, without extensive comparison of the merits of alternatives.

Journal of Financial Services Marketing (2009) 14, 190–202. doi:10.1057/fsm.2009.19 Keywords: consideration sets; evaluation; financial services; loyalty; brand switching

Download PDF.


Two types of repeat purchase market, with different loyalty patterns

Sharp, Byron. & Wright, Malcolm (1999) ‘There are Two Types of Repeat Purchase Markets’, paper presented to the 28th European Marketing Academy Conference, Berlin, Germany, 11-14 May.

Abstract

In this paper we report on a pattern in aggregate buying behaviour. We have observed two distinct types of repeat purchase markets with very different patterns of customer loyalty. These differences have profound implications for marketing theory and practice.

The first, and best known, are markets with relatively few solely loyal buyers and with buyers allocating their category requirements across several brands; we call these repertoire markets. Examples of repertoire markets include fast moving consumer goods, store choice, medical prescriptions, and television channel selection.

The second are markets with many solely loyal buyers, and with buyers allocating their category requirements almost entirely to one brand; we call these subscription markets. Examples of subscription markets include insurance policies, long distance phone calls, and banking services.

The distinction between these two types of markets is not a theoretical taxonomy, but is instead a dramatic empirical difference. For example, the proportion of solely loyal buyers enjoyed by a brand over a year seldom exceeds 20% in a repertoire market, but seldom falls below 70% in a subscription market. There is virtually no middle ground between these extremes.
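The gap between the two market types can be sketched with a toy simulation (the shares, occasion counts and switching rate below are all invented for illustration): repertoire buying is modelled as repeated independent brand choices, subscription buying as holding one provider all year with an occasional switch.

```python
import random

random.seed(0)

N, T = 5_000, 10
shares = [0.4, 0.3, 0.2, 0.1]  # hypothetical brand shares
brands = range(len(shares))

def solely_loyal(histories):
    """For each brand: the share of its buyers who bought no other brand."""
    out = []
    for b in brands:
        buyers = [h for h in histories if b in h]
        out.append(sum(1 for h in buyers if h == {b}) / len(buyers))
    return out

# Repertoire market: every purchase occasion is an independent brand choice.
repertoire = [set(random.choices(brands, weights=shares, k=T)) for _ in range(N)]

# Subscription market: one provider held all year, with a 10% chance of adding another.
subscription = []
for _ in range(N):
    held = {random.choices(brands, weights=shares)[0]}
    if random.random() < 0.10:
        held.add(random.choices(brands, weights=shares)[0])
    subscription.append(held)

print("repertoire solely loyal:  ", [f"{x:.0%}" for x in solely_loyal(repertoire)])
print("subscription solely loyal:", [f"{x:.0%}" for x in solely_loyal(subscription)])
```

Even with these made-up numbers the two regimes land on opposite sides of the divide the paper reports: solely loyal buyers are rare in the repertoire market and dominant in the subscription market, with no middle ground.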

Download full paper as PDF.


https://www.dropbox.com/s/7ufcbx1tfpzcfzg/6007.pdf

Emotional Branding Pays Off illusion

Behavioural loyalty is strongly correlated with propensity to agree to ‘brand love’ survey questions but… most lovers still buy other brands, and most of a brand’s buyers don’t love it.

John Rossiter & Steve Bellman (2012) “Emotional Branding Pays Off – how brands meet share of requirements through bonding, commitment and love”, Journal of Advertising Research, Vol.52, No.3, pages 291-296.

Rossiter and Bellman (2012) purport to show how consumers’ attachment of “strong usage relevant emotions” to a brand affects behavioural loyalty. All they actually show is that if you buy a brand more then you are more likely to agree (on a market research survey) to positive statements about that brand. We’ve known for 50 or so years that people do this – that stated attitudes reflect past behaviour. Or more succinctly: attitudes reflect loyalty.

Specifically, Rossiter & Bellman showed that people who ticked “I regard it as ‘my’ brand” tended to report that this brand made up more of their category buying than did buyers who didn’t regard it as their brand. What an amazing discovery!

“I regard it as ‘my’ brand” was, by far, the most common of the ’emotional attachments’ they measured – with about 20% of the buyer bases of particular brands of beer, instant coffee, gasoline, and laundry detergent ticking this box. It was also most associated with higher share of requirements (behavioural loyalty). I’m not surprised because it is most like a direct measure of behavioural loyalty. If I mostly buy this brand of coffee then I’m much more likely to tick “I regard it as ‘my’ brand”. If I buy another brand(s) more then I’m hardly likely to tick that I regard this one as my special brand.

So reasonably we’d call this question (“I regard it as ‘my’ brand”) a measure of reported behavioural loyalty, and so it would have to be highly associated with any other measure of reported behavioural loyalty. But Rossiter & Bellman in classic sleight-of-hand call this question a measure of “bonding”, which they say is a measure of an emotion (not a self-report of behaviour)! Naughty naughty.

On safer ground, their measure of “brand love” was whether brand buyers agreed “I would say that I feel deep affection for this brand, like ‘love’, and would be really upset if I couldn’t have it”. Interestingly, hardly any of any brand’s buyers ticked this box. Just 4% of the average beer brand’s (male) buyers, 4% of the average laundry detergent’s (female) buyers, 8% of the average instant coffee brand’s (female) buyers, and a mere 0.5% of the average gasoline brand’s (male) buyers. Restricting the samples to the gender that represents the main weight of buyers reduced the proportion of light and lower-involvement category buyers, which should have increased the incidence of brand love; yet it was still about as low as possible. Rossiter & Bellman wrote that these results “reveal the difficulty of attaining strong attachment-like emotions”. Hmmm, well yes, and these results also reveal how successful brands largely do without brand love.

With so very few of any brand’s buyers agreeing that they feel deep affection for the brand, we would expect the few who did to be quite different from the average: the heaviest, most loyal buyers in the buyer base. And these lovers did report higher behavioural loyalty, though it was far from absolute (100% share of category buying). In fact, ‘lovers’ only reported buying the brand about half the time (50% share of requirements). Behavioural loyalty is strongly correlated with propensity to agree to ‘brand love’ questions but… most lovers still buy other brands, and most of a brand’s buyers don’t love it.

Rossiter & Bellman interpret their results differently. Their article title says emotional branding pays off, even though the article does nothing to investigate marketing practices. They act as if they are unaware of the research going back decades that shows, over and over, that usage affects propensity to react to attitudinal-type survey questions (see Romaniuk & Sharp 2000). Instead, this single cross-sectional survey is supposed to show that if marketers (somehow) run advertising that presents attachment emotions, then consumers will link these to the brand, and then change their behaviour to buy that brand more often than they buy rival brands. Rossiter and Bellman’s results show nothing of the sort; their clearly written article turns out to be highly misleading. Yet I fear that this will not stop many unscholarly academics citing the article, and many believers in this discredited theory citing it as evidence to support their blind faith. Beware of such nonsense.

What’s wrong in the house of academia, and a suggestion how to fix it

Presented to the Australia & NZ Marketing Academy Conference December 2012.

Marketing has a small ‘crisis literature’ where academics themselves bemoan the lack of real-world importance of academic research into marketing. For example, “Is Marketing Academia Losing Its Way”, Journal of Marketing, 2009.

Back in 1989 John Rossiter documented the growing gap between marketing scientists and consumer researchers even though they were supposed to be studying similar things. He warned the consumer researchers in particular that they were in danger of retreating into an Ivory Tower detached from the empirical findings regarding mass buying behaviour. Yet the trend continued unabated.

Colleagues and I have had articles rejected from good journals when they chiefly documented a substantive finding about the world. My ‘favourite’ was when the editor of Marketing Science wrote to Jenni Romaniuk and myself about our work documenting the 60/20 law (i.e. it’s not 80/20). Effectively he said, “great stuff, I’m going to use this in my teaching, but we can’t publish it in Marketing Science because the journal tries to feature leading-edge analysis, whereas what you did was simple and transparent”. An open admission that Marketing Science is really a journal that values engineering above science.

Yet around the same time the Nobel Prize for Physics was awarded to two Russian-born scientists who found a way of producing graphene, a single-atom-thick layer of carbon; this potentially extremely useful material had once been thought unlikely to exist in the real world. They isolated graphene with a simple technique using common household sticky tape: placing bulk graphite between two sheets of Scotch Tape, they repeatedly pulled the tape apart, removing layers of atoms until they achieved graphene. A colleague remarked that it showed you could still win a Nobel Prize “for mucking around in the lab”. In physics there is still respect for substantive discoveries.

The defence, or excuse, from marketing academia is that we have been placing our emphasis on rigour over relevance. But recent shocking findings in marketing academia, other social sciences, and even medical research have exposed the myth of improved quality. There have been some high-profile examples of scientists disgraced for falsifying results (including in marketing), while 10% of psychologists admit to falsifying data (though they presumably evaded discovery), and most admitted to sometimes engaging in dubious practices such as selectively reporting the studies “that worked” (and hiding those that did not support their hypotheses). Relatively higher rates of dubious practice were found among neuroscientists and cognitive & social psychologists. What do you think the rates would be in marketing?

A recent analysis (Wilhite and Fong 2012) of the dubious practice of journals encouraging (or bullying) authors to cite other articles from the same journal reported that the Journal of Retailing, the Journal of Business Research, and Marketing Science were stand-outs at the very top of the suspect list – and that’s not a list of only marketing journals. Indeed, marketing journals stood out from other disciplines for using coercive citation to try to manipulate their citation impact scores.

In medical research standards are undoubtedly higher. Yet when pharmaceutical companies seek to replicate findings reported in medical journals, in most cases it can’t be done – even though they try hard; after all, they are hoping to make money from the discovery. Many of the findings for cancer drugs are highly specific to particular circumstances (e.g. patients with particular genetic profiles), but the researchers didn’t explore these conditions; they just got lucky with their so-far unrepeatable finding.

In marketing, Hubbard & Armstrong (1994) documented that academics hardly ever try to replicate findings. We simply assume they are true (or perhaps not worth bothering with). That is perhaps not surprising, because when replications are done they are usually unable to repeat the original result. The same sort of scandal has hit a number of famous psychology experiments. “The conduct of subtle experiments has much in common with the direction of a theatre performance,” says Daniel Kahneman, the Nobel-prizewinning psychologist at Princeton University. Trivial details such as the day of the week or the colour of a room could affect the results, and these subtleties never make it into the methods sections of research articles. Hmm, what’s the difference between a result that is so sensitive to many trivial, unknown and unpredictable details and no result at all? Why should we care about a finding that only occurs in highly particular circumstances?

This is a bigger problem than fraud and dubious research practice. We need to stop publishing one-off flukes and explore the generalizability of findings: where and when does a result hold? How does it vary across product categories, brand size, brand age, different types of consumers, at different times, and so on?

Even large and varied data sets are being wasted in marketing when results are presented as an average across many different conditions, e.g. “marketing orientation is associated with higher financial performance, r = 0.28”. This tells us little about the real world; the average may not actually apply in any of the major conditions.
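A small sketch of how a pooled average can mislead (all numbers here are invented): two market conditions in which the two variables are completely unrelated, yet pooling them manufactures a respectable-looking correlation of roughly the size quoted above.

```python
import random
import statistics

random.seed(3)

def corr(xs, ys):
    """Pearson correlation (population form)."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys) * len(xs))

# Two conditions with different baseline levels; within each condition the
# variables are drawn independently, so the true within-condition r is zero.
conditions = [(0.0, 0.0), (2.0, 1.0)]  # (mean of X, mean of Y) per condition
xs, ys, within = [], [], []
for mx, my in conditions:
    gx = [random.gauss(mx, 1) for _ in range(500)]
    gy = [random.gauss(my, 1) for _ in range(500)]
    within.append(corr(gx, gy))
    xs += gx
    ys += gy

print("within-condition r:", [f"{r:+.2f}" for r in within])
print(f"pooled r: {corr(xs, ys):+.2f}")
```

The pooled correlation is comfortably positive while both within-condition correlations hover near zero: the “average” relationship applies in neither condition, which is exactly the waste described above.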

We need to explore generalizability or otherwise our ‘discoveries’ tell us very little about the marketing world that we are supposed to be studying.

And we need to stop prematurely building shaky prescriptive theoretical edifices upon these doubtful, poorly documented findings.

If we don’t carefully and thoughtfully (call it ‘theory driven’ if you wish) examine a finding and how it varies (or not) across conditions, then we are stuck with findings that were probably one-off events, with no way of telling. Currently we have to treat our findings either as applying to one historic data set covering one particular set of conditions that may never be seen again, OR as a result that generalises to all product categories, all countries, all seasons. Both views are preposterous; something in between is far more likely. But there is a lot of land “in between”, and it needs to be explored.

The dubious research practices discussed above come partly from ‘confirmation bias’: (marketing) scientists want to find evidence to support their hypotheses, and they want “a positive result”, otherwise they lack the motivation to publish, or the belief that they will be accepted by any decent journal. Armstrong, Brodie and Parsons (2001) suggested researchers adopt multiple competing hypotheses as a way of overcoming this bias. A worthy suggestion, but those implementing it tend simply to have their favoured hypothesis and its opposite, and they still obviously want to see their favoured hypothesis supported. So I would like to make a different suggestion. Let’s use research questions with the words “when”, “where” and “under what conditions”. Rather than black-and-white “does X cause Y” type hypotheses, let’s ask “when does X cause Y?”, “does X cause Y in highly advertised categories?”, “is X more a cause of Y in developing economies?”. This is the basic work of science: documenting patterns in the real world. When do things vary, and when do they not?

If researchers use “when”, “where” and “under what conditions” research questions they aren’t trying to prove a proposition, so they don’t have to worry about failure, and so they should be less likely to tweak data and cherry-pick findings. Also, very importantly, researchers will be documenting something useful about the world, because they will be exploring generalizability.

PS The Nobel Prizes for Physics are awarded in line with Alfred Nobel’s criterion “to those who, during the preceding year, shall have conferred the greatest benefit on mankind”, which explains the worthy emphasis on substantive findings. Alexander Fleming’s accidental discovery of penicillin is another example of the Nobel prize committee valuing important discovery over displays of academic prowess.

REFERENCES

ARMSTRONG, J. S., BRODIE, R. J. & PARSONS, A. G. “Hypotheses in marketing science: Literature review and publication audit.” Marketing Letters 12, 2 (2001): 171-187.

HUBBARD, R. & ARMSTRONG, J. S. “Replications and Extensions in Marketing: Rarely Published but Quite Contrary.” International Journal of Research in Marketing 11, (1994): 233-248.

REIBSTEIN, D. J., DAY, G. & WIND, J. “Guest editorial: is marketing academia losing its way?” Journal of Marketing 73, 4 (2009): 1-3.

ROSSITER, J. R. “Consumer Research and Marketing Science.” Advances in Consumer Research 16, (1989): 407-413.

WILHITE, A. W. & FONG, E. A. “Coercive citation in academic publishing.” Science 335, (Feb 2012): 542-543.

Why stores stock many items that hardly sell

One line take-out: Each of us has a very different opinion on what the store should stock.  To win us all stores need a wide range.

The top-selling 1,000 items in a supermarket generate about half of its sales revenue, which means it’s vital that store managers make these items easy to see and buy – but that’s another story.

What I’d like to highlight today is that the other 30,000 or so items they stock sell very little volume.  This is what is sometimes called “the long tail”.

Stores try hard to weed out items that don’t sell.  So the typical store item does sell, but rarely. Stores are full of stock that barely moves while a tiny percentage of the items fly off the shelf.

This can lead marketing consultants to advise retailers to pare back their range to concentrate on the items that deliver most of their revenue and profits. Yet this range (and cost) cutting strategy often fails. Unfortunately, it’s been encouraged by recent research (some of it flawed) on consumer confusion, mistakenly suggesting that smaller ranges will increase sales.

It’s true that stores look cluttered and complicated. The average household only buys a few hundred different items from a supermarket in a year; that is, they buy some items over and over. So each buyer is looking for a few hundred things out of the 30-50,000 on offer in the store. That makes shopping sound like a horribly complicated task.

So why on earth would consumers be attracted to stores that stock so many items – most of which they don’t buy?  One notion is that consumers like the IDEA of choice, that they are attracted to variety but once they actually arrive in-store they fall back on their habitual nature and existing loyalties.

There may be a little truth in this explanation, but the real reason is that consumers are very heterogeneous in the items they buy. Remember that all those items in the store do sell; each item has its buyers. So, given that each of us is buying only a tiny proportion of the items in store, the odds that my shopping basket will share anything in common with that of the person in front of me in the queue (or anyone else for that matter) are very low. As I often point out, if you look at what’s in the shopping trolleys of fellow shoppers you see that “other people buy weird stuff”, or at least that they buy different items from you.

The few items in common in any two trolleys are, of course, most likely to be those items that sell in large volumes.  These will appear in many more people’s trolleys.  Even so most of the items in our trolley will not be from the ‘top 1000’ and so hardly anyone else will buy them.
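Both points above can be sketched in a small simulation (the assortment size, basket size and popularity curve are all hypothetical; the Zipf-like exponent is picked so the top 1,000 items carry roughly half the volume, as in the post): adjacent trolleys share only a handful of items, and what they do share skews heavily to the top sellers.

```python
import random
from itertools import accumulate

random.seed(1)

N_ITEMS, BASKET, SHOPPERS = 30_000, 30, 2_000
# Zipf-like popularity: the item at rank r sells in proportion to 1/r**0.8
weights = [1 / r**0.8 for r in range(1, N_ITEMS + 1)]
cum = list(accumulate(weights))

top_volume = sum(weights[:1000]) / sum(weights)
print(f"volume from the top 1,000 items: {top_volume:.0%}")

baskets = [
    set(random.choices(range(N_ITEMS), cum_weights=cum, k=BASKET))
    for _ in range(SHOPPERS)
]

# Compare disjoint pairs of shoppers ("the person in front of me in the queue").
overlaps = [baskets[i] & baskets[i + 1] for i in range(0, SHOPPERS - 1, 2)]
avg_overlap = sum(len(o) for o in overlaps) / len(overlaps)
print(f"average items two trolleys share: {avg_overlap:.1f} of ~{BASKET}")

# When trolleys do overlap, the shared items come from the head of the range.
shared = [item for o in overlaps for item in o]
top_share = sum(1 for item in shared if item < 1000) / len(shared)
print(f"shared items that are top-1,000 sellers: {top_share:.0%}")
```

Under these assumed numbers, most of each trolley is drawn from the long tail that almost nobody else buys, while the few shared items are overwhelmingly top sellers, which is the pattern described above.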

The Double Jeopardy Law tells us that an item with low market share will be repeat-bought less often than its rivals, but not dramatically less often; the main reason it sells so little is that few people ever buy it. Which means that many of the many low-selling items in a supermarket are, in effect, being stocked for just a few consumers. Some may even be stocked for a single household. But for these few buyers these items are important: they buy them (maybe not that often, but that’s true of most things we buy), they know them, they are in their heads and their pantries – but not many other people’s.

Because we buy these items we like stores that stock them.  We each enter a store looking for “our stuff”. If the store doesn’t stock the things we buy we can sometimes find ourselves inconvenienced.  We want to see, and be able to find, the items of interest to us.  That makes a store attractive to us.  Fortunately for store managers consumers are extraordinarily good at filtering out all the brands and SKUs that aren’t in their personal repertoire and finding their brands.  Successful stores make this even easier for consumers.

So my point is don’t make the mistake of thinking that a store can do without 90%+ of its range.  Stores compete for shoppers, and shoppers vary enormously in what they look for, in what mental structures are in their head, in what they see.  Each of us has a very different opinion on what the store should stock.  To win us all stores need a wide range.