Mistaking statistical modelling for science

Marketing isn’t the only discipline to have been seduced by the idea that modelling can somehow bypass the hard work of developing empirical laws.  Few seem to realise how heroic the assumption is that teasing out a few weak correlations can quantify precisely how much [something of interest, e.g. sales] will change in the future when [various other things, e.g. media choices] are altered.

Added to this is the ‘Big Data fallacy’ that adding together bunches of weak correlations will lead to more and more accurate predictions – “once we have enough data, a clever programmer, and a powerful enough computer, we’ll be able to predict everything we want”.  It’s as if chaos theory taught us nothing at all.

The basic work of science is making empirical observations, looking for patterns, and then… once you have found one, looking to see where it holds and where it doesn’t.  This requires lots of replications/extensions over different conditions (e.g. countries, product categories, and so on).  This is how scientific laws are developed, laws that give us the ability to make predictions.  These replications/extensions also tell us which conditions don’t affect the law, and maybe some that do.  This leads to deep understanding of how the world works.  Experiments can then be used to tease out the causal directions and magnitudes: what really affects the pattern, and by how much.  Again, these experiments need to be done carefully, across a range of conditions that might matter.

Yes, this doesn’t sound very glamorous; it takes much time and effort (1% inspiration, 99% perspiration).  Sometimes we get lucky, but generally many, many studies are required.  By independent teams, using creatively different approaches – so we can be sure that the empirical phenomenon really does generalise, that it isn’t a fragile result (or a mistake) that only exists in one team’s laboratory.

Unsurprisingly the idea that a computer model could bypass much of this hard work is seductively attractive.

Terribly complicated, yet naive, modelling seems to be everywhere.  In population health, statistical correlations deliver precise-sounding estimates that if people eat particular foods (or particular amounts of fat, sugar or alcohol, or spend too long sitting around) then their risk of dying early will be such and such.  There is nothing wrong with this, so long as we recognise the weakness of the method.  Unfortunately these correlations often get handed over to engineers who, with a spreadsheet and a few heroic assumptions about causality, produce model predictions that if the government taxed this, or regulated that, then x million lives would be saved, and $x billion saved in hospital bills.  These predictions need to be treated with a high degree of skepticism.  We need tests before legislation is changed and money is spent.

In climate science, a rather new and until recently very small discipline, modellers now seem to dominate.  In the 1970s a short period of cooling led to worry about global cooling, but then temperatures turned around and started rising again, and climate scientists became seriously concerned about the role of rising CO2 levels.  They rushed to develop models, and in the early 1990s they gave their predictions of how much rising CO2 emissions would lift global temperature, along with accompanying predictions of oceans rising, ice retreating, polar bears disappearing and so on.  25 years later they are confronted by the overwhelming predictive failures of these models: the models substantially over-predicted the warming that was supposed to occur given that CO2 levels have risen (the IPCC, even though they are ‘marking their own homework’, admit this in their assessment).  The modellers are now starting the work to figure out why.  Meanwhile the forecasting scientists who criticised the climate scientists’ forecasting methods, and predicted this result, have been vindicated.

Models that show wonderful fit to historic data routinely fail in their predictions*.  That’s why we revere scientific laws (and the theories built on them): they have made predictions that have come to pass, over and over.
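To make the point concrete, here is a minimal sketch, using toy data I have invented (it has nothing to do with the study in the footnote below), of how a flexible model can fit a sales history almost perfectly and still fall apart the moment it is asked to predict:

```python
# Toy illustration (invented data): a flexible model chases noise in the
# historical period, so a superb in-sample fit says little about its forecasts.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(20.0)
sales = 100 + 2 * t + rng.normal(0, 5, t.size)  # true process: gentle trend plus noise

history, future = slice(0, 12), slice(12, 20)

# An 8th-degree polynomial fits the 12 historical periods almost perfectly...
coeffs = np.polyfit(t[history], sales[history], deg=8)
fit_error = np.abs(np.polyval(coeffs, t[history]) - sales[history]).mean()
# ...but extrapolates wildly over the next 8 periods.
forecast_error = np.abs(np.polyval(coeffs, t[future]) - sales[future]).mean()

print(f"mean error on the fitted history: {fit_error:.1f}")
print(f"mean error on the next 8 periods: {forecast_error:.1f}")
```

The in-sample error comes out tiny while the forecast error explodes, simply because the polynomial has fitted the noise rather than the underlying trend.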

 

* See also Dawes, J. G. (2004), ‘Price changes and defection levels in a subscription-type market: can an estimation model really predict defection levels?’, The Journal of Services Marketing, 18 (1), 35-44.


The heavy buyer fallacy

It seems obvious: a brand’s current heaviest buyers generate more sales and profit (per customer), so they should be the primary target for marketing.

This is a commonly held misconception. The rise of direct marketing and CRM gave this fallacy a big plug; after all, it can be hard to justify sending expensive letters to light customers.

But if our aim is to grow sales then our efforts should be directed at those most likely to increase their buying as a result of our attention. It takes only a moment of thought to realise that customers who already buy our brand frequently are going to be difficult to nudge even higher.

If, instead, our aim is to prevent sales losses then heavier customers would seem more promising – after all, they represent a lot of sales we might lose. But then again, they are more loyal: other brands make up less of their repertoire, their habit of buying our brand is more ingrained, and our brand has rather good mental and physical availability for them. In short, they aren’t at great risk of defecting or downgrading.

So the idea that heavy buyers of your brand (“golden households” or “super consumers”) are your best target is flawed. Dangerously simplistic.

Apple’s mythical price premium

I’ve written previously questioning the marketing orthodoxy to aim for a price premium, and specifically on the myth of Apple’s price premium.

Here is another nice quote from Steve Jobs, interviewed on stage alongside Tim Cook (the current CEO). He was asked if Apple’s goal was to win back dominant share of the PC market.

“I’ll tell you what our goal is…to make the best personal computers in the world and products we are proud to sell and would recommend to our family and friends. And we want to do that [raises voice] at the lowest prices we can, but I have to tell you there is some stuff out there in our industry that we wouldn’t be proud to ship, that we wouldn’t be proud to recommend to our family and friends…and we just can’t do it, we can’t ship junk”.

The role of government is to provide great infrastructure/environment for ALL business

Job losses at icon brand companies make big headlines. Politicians are addicted to giving taxpayers’ money to large businesses, including trying to lure them into their electorate. We all love the idea of having the next Apple or Google in our electorate, but look at the US economy and its poor performance over the past decade, despite these two emerging as giants over that period.

It’s hard to name icon companies from Singapore or the Netherlands or Austria, yet these are some of the richest, most productive economies in the world. It’s a reminder to government that any modern economy is diverse and complex; even a Google is a tiny player within it. What matters much more than having a few of these star companies is that thousands of less-than-household-name companies can do business easily. An efficient retail sector, for example, is one of the stark differences between highly productive and less productive economies. I know this isn’t sexy, but it seems to be the truth.

This means that government should concentrate on infrastructure, good and simple laws, less red tape, flexible workforces, and access to training and re-training.

I note that Australian government investment in Roseworthy Agricultural College and the Waite Institute, and setting the levy that funded the Grape and Wine R&D Corp, has delivered an astonishing return – creating Australia’s modern export wine industry (and better wine for Australians). But all the attempts to save the industry in times of a high Australian dollar and so on (e.g. the 1980s grape vine pull) did little good; they just put money in the pockets of a few lucky businesspeople.

I suspect there are many similar examples in many other industries.

True Brand Loyalty – it doesn’t matter

This is a footnote from “How Brands Grow”.

There is a very long (too long!) history of marketing writers debating what ‘true loyalty’ is. This is a perfect example of what the twentieth century’s most famous philosopher of science, Sir Karl Popper, called essentialism: seeking to define the essence of an abstract theoretical concept (Esslemont & Wright, 1994). We can forever debate issues (like, What is true love? What is marketing?), but as these questions are simply about the definitions we decide to use there is no logical way of ever resolving them. To suggest that one approach captures the true meaning of brand loyalty, while another (by implication) does not, is bad philosophy, and bad science. Contrary to popular belief the purpose of science is not to say what things are but rather to say things about things: how they behave, how they relate to other things. Physicists can tell you a great deal about the properties and behaviour of things like gravity and mass while there is no rigid definition of what these things ‘truly’ are. Hence, this book is about what we can say about real world loyalty-type behaviour, both verbal behaviour (expressed attitudes) and overt behaviour (buying).

“Never let yourself be goaded into taking seriously problems about words and their meanings. What must be taken seriously are questions of fact, and assertions about facts; theories and hypotheses; the problems they solve; and the problems they raise” (Popper, 1976).

New marketing practice: evidence or fashion?

A new study of 10 years of medical research in one of the very top journals shows that reversals are not uncommon.  A reversal is where later evidence shows that a new medical practice is no better than, or even worse than, the older practice (or doing nothing).

40% of the studies that examined a current practice found it shouldn’t have been adopted.

The problem is partly that tests of new practice tend to be biased towards being positive. So later, better studies are going to find that a good number of the earlier findings were wrong.

Also, medicine tends to adopt invasive practices, perhaps due to patient pressure, and doctor desire, to “do something rather than nothing”. Though there are also reversals of current practices that refused to take up something new (e.g. vaccinating, taking aspirin) because of some (often theory-based) fear that turns out to be unfounded.

This shows that the advance of (medical) science, and evidence-based practice, is not a straight line. Reversals are common.

Now in marketing practice the tiniest whiff of evidence that something might be useful is enough to send lemmings running for the cliff! Fear of missing out? I’d put the widespread adoption of banner and search advertising, marketing mix modelling, ROI calculations, and equity monitors in this camp.  We praise doing something new, even if it is harmful.

Then there are things like loyalty programs which were adopted without any evidence at all, just theory.

When evidence finally does emerge that a practice is flawed there can be great reluctance to accept it – especially amongst those who make money from it.  For example, how many market research agencies have changed their practices in presenting segmentation data after we showed comprehensively that brands do not differ from their competitors in the types of customer they attract?  Likewise, marketers are still launching new loyalty programs with the aim of extracting lots more business out of existing customers.

And not enough marketers worry when “emperor’s new clothes” type questions highlight the astonishing lack of credible evidence and testing of techniques like mix modelling or brand equity-based predictions.  People even say things like “but if I stop doing this, what will I do instead?”, as if doing something useless is better than doing nothing.  Marketing needs to grow up, because the medical example shows how easy it is to get something wrong even when you are as careful and circumspect as doctors are.

Consideration sets for Banking and Insurance purchases

Dawes, J., Mundt, K. & Sharp, B. (2009), ‘Consideration sets for financial services brands’, Journal of Financial Services Marketing, vol. 14, pp. 190-202.

ABSTRACT This study examines the extent of consumer information search and consideration of financial services brands. It uses data from two surveys of purchasing behavior. This study finds a surprisingly low level of consumer consideration, either by personal enquiry or via the internet. The most common consideration set comprised only one brand, and this was the case for both high-value and low-value services. The managerial implication is that services marketers should make brand salience a top priority, with the competitiveness of their offer not being the primary driver of sales. If a financial services brand is salient to a consumer, there is a very high chance they will purchase that brand, without extensive comparison of the merits of alternatives.

Journal of Financial Services Marketing (2009) 14, 190–202. doi:10.1057/fsm.2009.19 Keywords: consideration sets; evaluation; financial services; loyalty; brand switching

Download PDF.

 

Emotional Branding Pays Off illusion

Behavioural loyalty is strongly correlated with propensity to agree to ‘brand love’ survey questions but… most lovers still buy other brands, and most of a brand’s buyers don’t love it.

John Rossiter & Steve Bellman (2012) “Emotional Branding Pays Off – how brands meet share of requirements through bonding, commitment and love”, Journal of Advertising Research, Vol.52, No.3, pages 291-296.

Rossiter and Bellman (2012) purport to show how consumers’ attachment of “strong usage relevant emotions” to a brand affects behavioural loyalty. All they actually show is that if you buy a brand more then you are more likely to agree (on a market research survey) to positive statements about that brand. We’ve known for 50 or so years that people do this – that stated attitudes reflect past behaviour. Or more succinctly: attitudes reflect loyalty.

Specifically, Rossiter & Bellman showed that people who ticked “I regard it as ‘my’ brand” tended to report that this brand made up more of their category buying than did buyers who didn’t tick it. What an amazing discovery!

“I regard it as ‘my’ brand” was, by far, the most common of the ’emotional attachments’ they measured – with about 20% of the buyer bases of particular brands of beer, instant coffee, gasoline, and laundry detergent ticking this box. It was also most associated with higher share of requirements (behavioural loyalty). I’m not surprised because it is most like a direct measure of behavioural loyalty. If I mostly buy this brand of coffee then I’m much more likely to tick “I regard it as ‘my’ brand”. If I buy another brand(s) more then I’m hardly likely to tick that I regard this one as my special brand.

So reasonably we’d call this question (“I regard it as ‘my’ brand”) a measure of reported behavioural loyalty, and so it would have to be highly associated with any other measure of reported behavioural loyalty. But Rossiter & Bellman in classic sleight-of-hand call this question a measure of “bonding”, which they say is a measure of an emotion (not a self-report of behaviour)! Naughty naughty.
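The circularity is easy to demonstrate. Here is a minimal simulation in which all the numbers are invented for illustration (they are not Rossiter & Bellman’s data): if the probability of ticking the ‘my brand’ box simply rises with how much of the category a buyer already gives the brand, the tickers will automatically show higher share of requirements, with no emotion involved anywhere:

```python
# Minimal simulation: ticking "I regard it as 'my' brand" is driven purely by
# existing behavioural loyalty, yet the ticked group 'shows' higher loyalty.
import random
from statistics import fmean

random.seed(1)
buyers = []
for _ in range(10_000):
    sor = random.betavariate(1, 3)        # share of requirements, skewed towards light loyalty
    ticked = random.random() < 0.6 * sor  # heavier buyers of the brand are more likely to tick
    buyers.append((sor, ticked))

tickers = [sor for sor, ticked in buyers if ticked]
others = [sor for sor, ticked in buyers if not ticked]
print(f"ticked the box: {len(tickers) / len(buyers):.0%} of buyers, mean SoR {fmean(tickers):.0%}")
print(f"didn't tick:    mean SoR {fmean(others):.0%}")
```

In this toy setup the tickers come out roughly twice as loyal as the non-tickers purely by construction, which is exactly the pattern being offered as evidence of emotional bonding.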

On safer ground, their measure of “brand love” was whether brand buyers agreed “I would say that I feel deep affection for this brand, like ‘love’, and would be really upset if I couldn’t have it”. Interestingly, hardly any of any brand’s buyers ticked this box. Just 4% of the average beer brand’s (male) buyers, just 4% of the average laundry detergent’s (female) buyers, 8% of the average instant coffee brand’s (female) buyers, and a mere 0.5% of the average gasoline brand’s (male) buyers. Restricting the samples to the gender that represents the main weight of buyers reduced the proportion of light and lower-involvement category buyers. This should have increased the incidence of brand love, yet it was still about as low as is possible. Rossiter & Bellman wrote that these results “reveal the difficulty of attaining strong attachment-like emotions”. Hmmm, well yes, and these results also reveal how successful brands largely do without brand love.

With so very few of any brand’s buyers agreeing that they feel deep affection for the brand, we would expect the few that did to be quite different from the average. We’d expect them to be the heaviest, most loyal buyers in the buyer base. And these lovers did report higher behavioural loyalty, though it was far from absolute (100% share of category buying). In fact, ‘lovers’ only reported buying the brand about half the time (50% SoR). Behavioural loyalty is strongly correlated with propensity to agree to ‘brand love’ questions but… most lovers still buy other brands, and most of a brand’s buyers don’t love it.

Rossiter & Bellman interpret their results differently. Their article title says emotional branding pays off, even though the article does nothing to investigate marketing practices. They act as if they are unaware of the research going back decades showing, over and over, that usage affects propensity to respond to attitudinal-type survey questions (see Romaniuk & Sharp 2000). Instead, this single cross-sectional survey is supposed to show that if marketers (somehow) run advertising that presents attachment emotions, then consumers will link these to the brand, and then change their behaviour to buy that brand more often than they buy rival brands. Rossiter and Bellman’s results show nothing of the sort; their clearly written article turns out to be highly misleading. Yet I fear that this will not stop many unscholarly academics citing the article, and many believers in this discredited theory citing it as evidence to support their blind faith. Beware of such nonsense.

Refreshing brand memories after a gap

I haven’t seen a Life Savers ad in ages, most probably because they haven’t been advertising – a lot of brands go “off air” for long periods.

Yet when I saw the ad (see below) the jingle bounced back into my consciousness – thankfully they are still using the slogan.

Advertising exposures that follow a long gap can be particularly powerful memory refreshers.

Unfortunately this is another factor that tempts marketers to go off-air for periods, when the real lesson is don’t bunch your exposures together (burst).

Think how much better an effect Life Savers would get if, instead of bursts followed by long gaps, they just kept on advertising at very low levels. If they did, most of us wouldn’t see a Life Savers ad very often, but we would see them regularly, if infrequently, and each time they would have a tremendous refreshing effect.

With the burst-and-silence pattern we seldom see Life Savers advertising, but when we do we see it several times close together, where the 2nd, 3rd and 4th exposures don’t have anywhere near the refreshing effect of the first. That’s wasted advertising money that could have been used to reduce the long silence between bursts.
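A back-of-envelope sketch makes the waste visible. The response values below are assumptions I have picked for illustration, not measured effects:

```python
# Back-of-envelope sketch with assumed (not measured) response values: the first
# exposure after a gap is worth 1.0 'refresh', and each further exposure within
# the same burst is worth only 40% of the one before it.

def refresh_value(exposures_per_flight: int, flights_per_year: int) -> float:
    per_flight = sum(1.0 * 0.4 ** i for i in range(exposures_per_flight))
    return per_flight * flights_per_year

# The same 12 exposures a year, scheduled two ways:
burst = refresh_value(exposures_per_flight=4, flights_per_year=3)   # three bursts of four
drip = refresh_value(exposures_per_flight=1, flights_per_year=12)   # one exposure a month
print(f"bursty schedule:     {burst:.1f} refresh-units")
print(f"continuous schedule: {drip:.1f} refresh-units")
```

On those assumptions, twelve exposures dripped out one a month deliver more than double the refresh value of the same twelve exposures packed into three bursts.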

Get a hole lot more out of your advertising, don’t burst, don’t go off-air. Spend less, for longer.

[Image: Life Savers advertisement]

Stores compete for shopping trips

One of the fallacies of retailing is that stores compete in terms of selling items.  Of course they need to sell items to make money but they do that by attracting customers (or rather, shopping trips).

The more attractive a store is, i.e. the greater the share of shopping trips it wins, the more it sells.  And this is the real retail battle.

The more shoppers a store attracts, the more brands will compete to buy its shelf space. In a way a store is like a TV station: it needs to attract viewers so that advertisers will pay a lot for the little bit of advertising space it has to sell.  Stores work to attract shoppers so that they can take a bigger slice of brand owners’ sales to consumers.

Store owners can easily lose sight of this.  They do strange things like trying to trap consumers in store, making it harder for them to find the things they buy regularly, in the vain hope that they will spend more if they are trapped there for longer.  This is not a good way to earn repeat shopping trips.

Category management systems can send stores astray.  Each category manager wants only to increase sales of their category, and loses sight of the bigger picture, which is for the store to win a greater share of all the shoppers.

It’s the brand marketers (the store’s suppliers) who want to sell specific items.  If they want the store to stock their items, and to give them better display space, then they need to show that doing so will make the store more attractive to shoppers: that their brand will help the store win shopping trips from other retailers.  When you think about it this way, you realise that price specials are just one very small part of making a store attract more shoppers.

Why stores stock many items that hardly sell

One-line take-out: each of us has a very different opinion on what the store should stock.  To win us all, stores need a wide range.

The top selling 1000 items in a supermarket generate about half of its sales revenue. Which means that it’s vital that store managers make these items easy to see and buy – but that’s another story.

What I’d like to highlight today is that the other 30,000 or so items they stock sell very little volume.  This is what is sometimes called “the long tail”.

Stores try hard to weed out items that don’t sell.  So the typical store item does sell, but rarely. Stores are full of stock that barely moves while a tiny percentage of the items fly off the shelf.

This can lead marketing consultants to advise retailers to pare back their range to concentrate on the items that deliver most of their revenue and profits. Yet this range (and cost) cutting strategy often fails.  Unfortunately, it’s been encouraged by recent research (some of it flawed) on consumer confusion – research that mistakenly suggested that smaller ranges will increase sales.

It’s true that stores look cluttered and complicated.  The average household only buys a few hundred different items from a supermarket in a year. That is, they do a lot of repeat buying of some items over and over.  So each buyer is looking for a few things out of the 30-50,000 on offer in the store.  That makes shopping sound like a horribly complicated task.

So why on earth would consumers be attracted to stores that stock so many items – most of which they don’t buy?  One notion is that consumers like the IDEA of choice, that they are attracted to variety but once they actually arrive in-store they fall back on their habitual nature and existing loyalties.

There may be a little truth in this explanation, but the real reason is that consumers are very heterogeneous in the items they buy.  Remember that all those items in the store do sell; each item has its buyers.  So, given that each of us is buying only a tiny proportion of the items in store, the odds that my shopping basket will share anything in common with that of the person in front of me in the queue (or anyone else for that matter) are very low.  As I often point out, if you look at what’s in the shopping trolleys of fellow shoppers you see that “other people buy weird stuff”, or at least that they buy different items from you.

The few items in common in any two trolleys are, of course, most likely to be those items that sell in large volumes.  These will appear in many more people’s trolleys.  Even so, most of the items in our trolley will not be from the ‘top 1000’, and so hardly anyone else will buy them.
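A toy simulation shows the mechanism. The popularity curve and the number of purchases per household below are my own invented assumptions, not real scanner data:

```python
# Toy simulation with an invented Zipf-like popularity curve (not real scanner
# data): two households buy from the same 30,000-item store, yet their
# repertoires barely overlap, and what they do share comes from the top sellers.
import random

random.seed(2)
N_ITEMS, PURCHASES = 30_000, 300  # items stocked; purchases a household makes in a year

# A few items account for most purchases; item 0 is the biggest seller.
weights = [1 / (rank + 1) for rank in range(N_ITEMS)]

def repertoire() -> set:
    return set(random.choices(range(N_ITEMS), weights=weights, k=PURCHASES))

a, b = repertoire(), repertoire()
shared = a & b
top_sellers = sum(1 for item in shared if item < 1000)
print(f"distinct items bought: {len(a)} and {len(b)}")
print(f"items the two households share: {len(shared)}")
print(f"share of those common items that are 'top 1000' sellers: {top_sellers / max(len(shared), 1):.0%}")
```

Under these assumptions most of each household’s repertoire is unique to it, and nearly all of the items the two households do share come from the best-selling end of the range.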

The Double Jeopardy Law tells us that an item with low market share will be repeat-bought less often than its rivals, but not dramatically less often; the main reason it sells so little is that few people ever buy it.  Which means that many of the many low-selling items in a supermarket are, in effect, being stocked for just a few consumers.  Some may even be stocked for a single household.  But for these few buyers these items are important: they buy them (maybe not that often, but that’s true of most things we buy), they know them, they are in their heads and their pantries – but not many other people’s.

Because we buy these items we like stores that stock them.  We each enter a store looking for “our stuff”. If the store doesn’t stock the things we buy we can sometimes find ourselves inconvenienced.  We want to see, and be able to find, the items of interest to us.  That makes a store attractive to us.  Fortunately for store managers consumers are extraordinarily good at filtering out all the brands and SKUs that aren’t in their personal repertoire and finding their brands.  Successful stores make this even easier for consumers.

So my point is don’t make the mistake of thinking that a store can do without 90%+ of its range.  Stores compete for shoppers, and shoppers vary enormously in what they look for, in what mental structures are in their head, in what they see.  Each of us has a very different opinion on what the store should stock.  To win us all stores need a wide range.

The danger of chasing market share and trying to harm competitors

It might seem odd for the author of “How Brands Grow” to warn against aiming to grow market share, but here I’m offering a reminder that growth should be an outcome of a strategy to grow profits – profits should not be sacrificed for growth, and especially not for the goal of harming competitors.

We all know that market share growth can deliver increased profits.  But we also know that it can decimate profits.

It’s fine to aim for share gains, so long as the strategy is carefully developed so that the share gains really do deliver profits.  Research shows that companies that focus on profits are more profitable, while companies that aim to win market share from competitors are LESS profitable, and more likely to go broke.

Here is a short essay by Wharton and Ehrenberg-Bass Institute Professor Scott Armstrong which does a pretty good job of summarising his extensive research on this topic.

The Dangers of a Competitor Orientation

Question: Do profits improve when firms attempt to gain market share?

If you believe in the common wisdom of students, managers, and professors of marketing, the answer would be “yes.” However, the evidence tells a different story.

Fred Collopy and I summarized prior research consisting of nearly 30 previously published empirical studies. Twenty-three different laboratory experiments were conducted with 43 groups spaced over many years and countries. In addition, we analyzed 54 years of field data for 20 companies to compare companies that used market share as an objective versus those that focused only on profits. Our research extended over a decade. The results from all approaches showed that market-share objectives harmed profits and put the survival of firms at risk (Armstrong and Collopy 1996).

The paper was difficult to publish. Reviewers disagreed with our findings and seemed intent on blocking publication. They kept finding what they thought to be serious problems with the research. When we would respond to their criticisms with additional experiments, they became incensed. In all, it took about five years to get through the review process. In the end, an editor over-ruled the reviewers.

In a follow-up paper, Kesten Green and I described new evidence from 12 studies that were conducted since the 1996 publication. The new evidence provided further support for the conclusion that competitor-oriented objectives are harmful. In fact, there has been no empirical evidence to date to challenge this conclusion.

While our research has received much attention (e.g., 167 citations for the 1996 paper), it seems to have had little effect on what is learned in business schools.

In teaching the introductory marketing class to Wharton MBAs, I would present the results of this research. This proved to be upsetting to many students as it conflicted with their beliefs and with what they said they were learning in other courses. After one session in which I described this research, an MBA class representative came to me with the “friendly advice” that the students did not appreciate hearing about my research; they would prefer to know what is going on in the real world.

To illustrate the dangers of a competitor-orientation, I also used an experiential exercise known as the “dollar auction” (Shubik 1971). In this exercise, the top two bidders pay, but only the top bidder wins the dollar. Typically the bidding would start at a penny, then move up at an increasing rate. I always made money on the dollar auction. But in 1982, I had my most successful session when I received over $20 for my dollar. I have kept in touch with the 2nd highest bidder, Ravi Kumar, over the years. On a recent trip to India, Ravi reminded me of the name of the winning bidder – Raj Rajaratnam, a hedge-fund manager who was found guilty of insider trading in May 2011, and who is suspected of funding suicide bombers in Sri Lanka (New York Times May 12 stories starting on the front page). Apparently I failed to convince Mr. Rajaratnam that a competitor orientation is harmful to oneself as well as to others.

Professor J Scott Armstrong

References

Armstrong, J.S. and K.C. Green (2007), “Competitor-oriented Objectives: The Myth of Market Share,” International Journal of Business, 12, 117-136.

Armstrong, J.S. and F. Collopy (1996), “Competitor Orientation: Effects of Objectives and Information on Managerial Decisions and Profitability,” Journal of Marketing Research, 33, 188-199.

Shubik, M. (1971), “The Dollar Auction game: A paradox in noncooperative behavior and escalation,” Journal of Conflict Resolution, 15, 109-111.

Can you use facebook to stimulate your fans to talk about you?

Since Advertising Age covered the Ehrenberg-Bass Institute’s analysis of facebook’s ‘talking about’ metric, there has been a flurry of internet coverage.

The findings got reduced to a sound bite of “only 1% of facebook fans engage with brands”, which could easily be misinterpreted. Dr Karen Nelson-Field’s result is actually that around 0.4% (i.e. less than one percent) of the fans of a brand actually interact with it on facebook in a typical week.

The interaction is what facebook report as ‘Talking about’, and includes activity such as liking, commenting on or sharing a Brand Page post (or other content on a page, like photos, videos or albums), posting to a Page’s Wall, answering a posted question, liking or sharing a check-in deal, RSVPing to an event, mentioning a Page in a post, or photo-tagging a Brand Page… all the activity that facebook measure.

Now, 0.4% in a week doesn’t sound so bad. It sounds like it might cumulate to something near 25% in a year, but that would be a heroic assumption. In these sorts of social phenomena we usually see highly skewed distributions: a small percentage of fans do most of the talking every week. So this probably cumulates to something much less than 10% in a year. Karen is investigating.
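A quick sketch shows why the skew matters so much. The split of fans into a small ‘heavy talker’ group and a silent majority is my own assumption for illustration, chosen only so that the weekly average still comes out at 0.4%:

```python
# Rough sketch with assumed numbers (not Karen Nelson-Field's data): why a 0.4%
# weekly 'talking about' rate need not cumulate to ~20-25% of fans over a year.
WEEKS = 52
WEEKLY_RATE = 0.004  # about 0.4% of a brand's fans interact in a typical week

# Naive view: a fresh, independent 0.4% of fans talk each week.
independent_reach = 1 - (1 - WEEKLY_RATE) ** WEEKS

# Skewed view: assume 5% of fans are 'heavy talkers' with an 8% weekly chance of
# talking and the rest never do, which gives the same 0.4% weekly average.
heavy_share, heavy_weekly = 0.05, 0.08
skewed_reach = heavy_share * (1 - (1 - heavy_weekly) ** WEEKS)

print(f"if every fan were equally likely: {independent_reach:.1%} talk at least once a year")
print(f"if talking is concentrated:       {skewed_reach:.1%} talk at least once a year")
```

Both scenarios produce the same 0.4% in any given week, but the skewed one reaches only around a twentieth of fans over the year rather than roughly a fifth.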

Even facebook’s own fans don’t talk much about facebook (on facebook)

One of the questions asked of Dr Karen Nelson-Field’s analysis of facebook fans’ engagement with their brands on facebook is whether the result is simply due to slack social marketing by the brands in question.

Given that Karen analysed the 200 brands with the most facebook fans it seems a bit of a stretch to say that these brands “don’t understand facebook”.

Some have speculated that brands that understand passionate loyalty probably do much better.  But Karen’s analysis included brands such as Old Spice, Harley-Davidson, Ferrari, and Tiffany & Co.

Finally, Karen’s analysis included facebook’s own facebook fans.  In a typical week only 0.28% ‘talk about’ facebook on facebook.  Maybe facebook itself doesn’t care much about fan engagement; after all, they are clever marketers.