Mistaking statistical modelling for science

Marketing isn’t the only discipline to have been seduced by the idea that modelling can somehow bypass the hard work of developing empirical laws.  Few seem to realise how heroic the assumption is that teasing out a few weak correlations can quantify precisely how much [something of interest, e.g. sales] will change in the future when [various other things, e.g. media choices] are altered.

Added to this is the ‘Big Data fallacy’ that adding together bunches of weak correlations will lead to more and more accurate predictions – “once we have enough data, a clever programmer, and a powerful enough computer, we’ll be able to predict everything we want”.  It’s as if chaos theory taught us nothing at all.

The basic work of science is making empirical observations, looking for patterns, and then, once you have found one, looking to see where it holds and where it doesn’t.  This requires many replications/extensions over different conditions (e.g. countries, product categories, and so on).  This is how scientific laws are developed, laws that give us the ability to make predictions.  These replications/extensions also tell us which conditions don’t affect the law, and perhaps some that do.  This leads to deep understanding of how the world works.  Experiments can then be used to tease out the causal directions and magnitudes: what really affects the pattern, and by how much.  Again, these experiments need to be done carefully, across a range of conditions that might matter.

Yes, this doesn’t sound very glamorous, and it takes much time and effort (1% inspiration, 99% perspiration).  Sometimes we get lucky, but generally many, many studies are required – by independent teams, using creatively different approaches – so we can be sure that the empirical phenomenon really does generalise, that it isn’t a fragile result (or a mistake) that only exists in one team’s laboratory.

Unsurprisingly the idea that a computer model could bypass much of this hard work is seductively attractive.

Terribly complicated, yet naive, modelling seems to be everywhere.  In population health, statistical correlations deliver precise-sounding estimates that if people eat particular foods (or particular amounts of fat/sugar/alcohol, or spend too long sitting around) then their risk of dying early will be such and such.  There is nothing wrong with this, so long as we recognise the weakness of the method.  Unfortunately these correlations often get handed over to engineers who, with a spreadsheet and a few heroic assumptions about causality, produce model predictions that if the government taxed this, or regulated that, then x million lives would be saved, and $x billion saved in hospital bills.  These predictions need to be treated with a high degree of scepticism.  We need tests before legislation is changed and money spent.

In climate science, a rather new, and until recently very small, discipline, modellers now seem to dominate.  In the 1970s a short period of cooling led to worry about global cooling, but then temperatures turned around and rose again, and climate scientists became seriously concerned about the role of rising CO2 levels.  They rushed to develop models, and in the early 1990s they published their predictions of how much CO2 emissions would lift global temperature, along with accompanying predictions of oceans rising, ice retreating, polar bears disappearing and so on.  Twenty-five years later they are confronted by the overwhelming predictive failures of these models: the models substantially over-predicted the warming that was supposed to occur, given that CO2 levels have risen (the IPCC, even though they are ‘marking their own homework’, admit this in their assessment).  The modellers are now starting the work of figuring out why.  Meanwhile, the forecasting scientists who criticised the climate scientists’ forecasting methods, and predicted this result, have been vindicated.

Models that show wonderful fit to historic data routinely fail in their predictions*.  That’s why we revere scientific laws (and the theories built on them): they have made predictions that have come to pass, over and over.


* See also Dawes, J. G. (2004), ‘Price changes and defection levels in a subscription-type market: can an estimation model really predict defection levels?’, The Journal of Services Marketing, 18:1, 35-44.
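The failure mode described above can be sketched in a few lines (my illustration, with made-up data, not taken from the cited paper): a flexible model fits history better than a simple one, yet predicts the future worse.

```python
# A minimal sketch of why good historical fit is no guarantee of
# predictive accuracy: a flexible model hugs past noise, then
# extrapolates badly.
import numpy as np

rng = np.random.default_rng(0)

# "History": a noisy linear trend observed over 20 periods.
x_past = np.arange(20, dtype=float)
y_past = 2.0 + 0.5 * x_past + rng.normal(0, 2.0, size=20)

# "Future": 10 more periods from the same underlying process.
x_future = np.arange(20, 30, dtype=float)
y_future = 2.0 + 0.5 * x_future + rng.normal(0, 2.0, size=10)

def mse(model, x, y):
    """Mean squared error of a fitted polynomial on data (x, y)."""
    return float(np.mean((np.polyval(model, x) - y) ** 2))

simple = np.polyfit(x_past, y_past, 1)    # straight line
flexible = np.polyfit(x_past, y_past, 9)  # 9th-degree polynomial

# The flexible model fits history better...
print(mse(simple, x_past, y_past), mse(flexible, x_past, y_past))
# ...but extrapolates to the future far worse.
print(mse(simple, x_future, y_future), mse(flexible, x_future, y_future))
```

The polynomial degree and noise level here are arbitrary; the pattern — in-sample fit improving while out-of-sample error explodes — is the general one.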

The heavy buyer fallacy

It seems obvious: a brand’s currently heaviest buyers generate more sales and profits (per customer), so they should be the primary target for marketing.

This is a commonly held misconception. The rise of direct marketing and CRM gave this fallacy a big plug; after all, it can be hard to justify sending expensive letters to light customers.

But if our aim is to grow sales then our efforts should be directed at those most likely to increase their buying as a result of our attention. It takes only a moment of thought to realise that customers who already buy our brand frequently are going to be difficult to nudge even higher.
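There is also simple statistics at work here. A small simulation (my sketch, assuming a standard NBD-style purchase model — gamma-distributed buying rates with Poisson purchasing — not taken from the text) shows that this period’s “heavy” buyers buy less next period on average, through regression to the mean alone, with no change in anyone’s underlying behaviour:

```python
# Regression to the mean among "heavy" buyers under an NBD-style model.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Each buyer has a fixed long-run buying rate; each period's purchases
# are Poisson-distributed around that rate.
rates = rng.gamma(shape=1.2, scale=2.0, size=n)
period1 = rng.poisson(rates)
period2 = rng.poisson(rates)  # same rates, fresh random draws

# Select the top decile of buyers based on period-1 purchasing.
heavy = period1 >= np.percentile(period1, 90)

p1_mean = float(period1[heavy].mean())
p2_mean = float(period2[heavy].mean())
print(round(p1_mean, 2), round(p2_mean, 2))
# The "heavy" group's period-2 average falls back toward the overall
# mean: part of their period-1 heaviness was just luck.
```

So some of the apparent decline (or the apparent success of a campaign aimed at lifting heavy buyers back up) is a statistical artefact, not a behavioural change.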

If, instead, our aim is to prevent sales losses then heavier customers would seem more promising – after all, they represent a lot of sales we might lose. But then again, they are more loyal: other brands make up less of their repertoire, their habit of buying our brand is more ingrained, and our brand has rather good mental and physical availability for them. In short, they aren’t at particularly great risk of defecting, nor of downgrading.

So the idea that heavy buyers of your brand (“golden households” or “super consumers”) are your best target is flawed. Dangerously simplistic.

Apple’s mythical price premium

I’ve written previously questioning the marketing orthodoxy of aiming for a price premium, and specifically about the myth of Apple’s price premium.

Here is another nice quote from Steve Jobs, interviewed on stage alongside Tim Cook (the current CEO). He was asked if Apple’s goal was to win back dominant share of the PC market:

“I’ll tell you what our goal is…to make the best personal computers in the world and products we are proud to sell and would recommend to our family and friends. And we want to do that [raises voice] at the lowest prices we can, but I have to tell you there is some stuff out there in our industry that we wouldn’t be proud to ship, that we wouldn’t be proud to recommend to our family and friends…and we just can’t do it, we can’t ship junk”.

The role of government is to provide great infrastructure/environment for ALL business

Job losses at icon-brand companies make big headlines. Politicians are addicted to giving taxpayers’ money to large businesses, including trying to lure them into their electorate. We all love the idea of having the next Apple or Google in our electorate, but look at the US economy and its poor performance over the past decade, despite these two emerging as giants over that period.

It’s hard to name icon companies from Singapore or the Netherlands or Austria, yet these are some of the richest, most productive economies in the world. It’s a reminder to government that any modern economy is diverse and complex; even a Google is a tiny player within it. What matters much more than having a few of these star companies is that thousands of less-than-household-name companies can do business easily. For example, an efficient retail sector is one stark difference between highly productive and less productive economies. I know this isn’t sexy, but it seems to be the truth.

This means that government should concentrate on infrastructure, good and simple laws, less red tape, flexible workforces, and access to training/re-training.

I note that Australian government investment in Roseworthy Agricultural College and the Waite Institute, and the setting of the levy that funded the Grape and Wine R&D Corp, has delivered an astonishing return – creating Australia’s modern export wine industry (and better wine for Australians). But all attempts to save the industry in times of a high Australian dollar and so on (e.g. the 1980s grape vine pull) did little good, just putting money in the pockets of a few lucky businesspeople.

I suspect there are many similar examples in many other industries.

True Brand Loyalty – it doesn’t matter

This is a footnote from “How Brands Grow”.

There is a very long (too long!) history of marketing writers debating what ‘true loyalty’ is. This is a perfect example of what the twentieth century’s most famous philosopher of science, Sir Karl Popper, called essentialism: seeking to define the essence of an abstract theoretical concept (Esslemont & Wright, 1994). We can forever debate issues (like, What is true love? What is marketing?), but as these questions are simply about the definitions we decide to use there is no logical way of ever resolving them. To suggest that one approach captures the true meaning of brand loyalty, while another (by implication) does not, is bad philosophy, and bad science. Contrary to popular belief the purpose of science is not to say what things are but rather to say things about things: how they behave, how they relate to other things. Physicists can tell you a great deal about the properties and behaviour of things like gravity and mass while there is no rigid definition of what these things ‘truly’ are. Hence, this book is about what we can say about real world loyalty-type behaviour, both verbal behaviour (expressed attitudes) and overt behaviour (buying).

“Never let yourself be goaded into taking seriously problems about words and their meanings. What must be taken seriously are questions of fact, and assertions about facts; theories and hypotheses; the problems they solve; and the problems they raise” (Popper, 1976).

New marketing practice, evidence or fashion?

A new study of 10 years of medical research in one of the very top journals shows that reversals are not uncommon.  A reversal is where later evidence shows that a new medical practice is no better than, or is worse than, the older practice (or doing nothing).

40% of the studies that examined a current practice found it shouldn’t have been adopted.

The problem is partly that tests of new practice tend to be biased towards being positive. So later, better studies are going to find that a good number of the earlier findings were wrong.
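The mechanism can be sketched in a few lines (my illustration, not from the study cited above): if only small trials that happen to show a clearly positive effect get written up, the published literature overstates a treatment whose true effect is zero — until a later, larger trial triggers a reversal.

```python
# A toy simulation of publication bias producing a "reversal".
import numpy as np

rng = np.random.default_rng(1)
true_effect = 0.0            # the treatment actually does nothing
n_trials, n_patients = 500, 30

# Each small trial estimates the effect with sampling noise
# (standard error shrinks with the square root of the sample size).
estimates = true_effect + rng.normal(0, 1.0, size=n_trials) / np.sqrt(n_patients)

# "Publication": only clearly positive results are written up
# (roughly a one-sided p < 0.05 cut-off).
threshold = 1.96 / np.sqrt(n_patients)
published = estimates[estimates > threshold]

# The published literature shows a solid positive effect...
print(round(float(published.mean()), 3))

# ...but a later, much larger trial recovers the truth.
big_trial = true_effect + float(rng.normal(0, 1.0 / np.sqrt(10_000)))
print(round(big_trial, 3))   # close to zero
```

The sample sizes and cut-off here are arbitrary; the point is that selecting on positive results guarantees the published average exceeds the truth.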

Also, medicine tends to adopt invasive practices, perhaps due to patient pressure, and doctor desire, to “do something rather than nothing”. Though there are also reversals of current practice that refused to take up something new (e.g. vaccinating, or taking aspirin) because of some (often theory-based) fear that turns out to be unfounded.

This shows that the advance of (medical) science, and evidence-based practice, is not a straight line. Reversals are common.

Now in marketing practice the tiniest whiff of evidence that something might be useful is enough to send lemmings running for the cliff! Fear of missing out? I’d put the widespread adoption of banner and search advertising, marketing mix modelling, ROI calculations, and equity monitors in this camp.  We praise doing something new, even if it is harmful.

Then there are things like loyalty programs which were adopted without any evidence at all, just theory.

When evidence finally does emerge that a practice is flawed there can be great reluctance to accept it – especially amongst those who make money from it.  For example, how many market research agencies have changed the way they present segmentation data since we showed comprehensively that brands do not differ from their competitors in the types of customer they attract?  And marketers are still launching new loyalty programs with the aim of extracting lots more business out of existing customers.

And not enough marketers worry when “emperor’s new clothes” type questions highlight the astonishing lack of credible evidence and testing behind techniques like mix modelling or brand-equity-based predictions.  People even say things like “but if I stop doing this, what will I do instead?”, as if doing something useless is better than doing nothing.  Marketing needs to grow up, because the medical example shows how easy it is to get something wrong even when you are as careful and circumspect as doctors are.