I think econometric modelling is overused in marketing. And routinely produces misleading results. Let me explain…
Warning: people who make their living from such modelling may not like what I have to say.
Astrology, like econometric 4Ps modelling, has many fans, many of whom are very intelligent, capable people. Proponents of astrology and marketing mix modelling each use the same arguments to justify their practice (eg that it is popular, rigorous, highly technical etc). And the documented predictive track record of each is similar, ie sparse and not flattering.
Econometric style modelling has a great future in marketing, but it is overused and not subjected to serious enough criticism and scrutiny. I’m talking about statistical ‘best fit’ modelling of marketing mix effects, ie attempts to quantify the sales effect of different aspects of the marketing mix. This is now a common service offered by all the big market research companies as well as specialist consultancies.
How your marketing mix affects sales is an important question, so I don’t blame marketers for trying something that offers a solution. Unfortunately it is very much a second-best solution. Instead, or at least in addition, they should be doing experiments, and relying more on scientific models that generalise across a known range of conditions and so can be used to predict and explain.
Here are some of the problems with econometric marketing mix modelling:
1) Firstly, such statistical modelling works on variation in the dependent variable (eg sales or share) and the independent variables (advertising spend or SoV or exposures, pricing, media strategy, timing, point of sale, sales team emphasis etc). But very often there is little in the way of variation, especially in the dependent variable. Big established brands show little sales reaction to changes in advertising, while even smaller, growing brands simply keep to their trajectory. This makes it practically impossible to correctly statistically model the impact of advertising on sales.
And some important sales drivers, like distribution, change only occasionally (new stores/channels) and often in different ways each time. This makes distribution effects difficult to incorporate into the model.
In contrast, interventions like price promotions routinely produce swings up and down in sales. So these can be modelled, and indeed they show up prominently… but perhaps too prominently ?
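To see why low variation is fatal, here is a minimal sketch in Python (all numbers are invented for illustration): the standard error of a regression slope shrinks as the spread of the independent variable grows, so a brand whose ad spend barely moves week to week yields an effect estimate too noisy to be worth anything.

```python
import math
import random

random.seed(42)

def ols_slope_and_se(x, y):
    """Ordinary least squares slope and its standard error."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    intercept = my - slope * mx
    resid_var = sum((yi - (intercept + slope * xi)) ** 2
                    for xi, yi in zip(x, y)) / (n - 2)
    return slope, math.sqrt(resid_var / sxx)

def simulate(ad_spread, true_effect=0.5, n=52, noise=10.0):
    """A year of weekly sales = base + true_effect * ad spend + noise."""
    x = [100 + random.uniform(-ad_spread, ad_spread) for _ in range(n)]
    y = [500 + true_effect * xi + random.gauss(0, noise) for xi in x]
    return ols_slope_and_se(x, y)

# Big established brand: ad spend barely varies week to week.
slope_flat, se_flat = simulate(ad_spread=2)
# Hypothetical brand whose spend swings widely.
slope_varied, se_varied = simulate(ad_spread=50)

print(f"near-constant spend:  effect = {slope_flat:+.2f} +/- {se_flat:.2f}")
print(f"widely varying spend: effect = {slope_varied:+.2f} +/- {se_varied:.2f}")
```

The same amount of data, the same true effect – but in the near-constant case the uncertainty swamps the effect being estimated.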
2) Secondly, this modelling of variation assumes that correlation implies causality. And we all know that is very often not the case. Both sales and advertising spend go up at Christmas, but it doesn’t mean the advertising is causing (some ? all ? any ?) of the sales increase. And sales increases can cause advertising increases, as well as the other way round. Inferring causality from statistical association can be a dodgy business, but it is the business of econometric modelling.
And it is especially easy to get the strength or direction of causal assumptions wrong when you are dealing with small correlations – and generally marketing mix models are little more than a description of a huge host of small, weak associations.
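A toy simulation makes the Christmas point concrete. In the sketch below (hypothetical numbers throughout), advertising has zero causal effect on sales by construction – both series simply track the season – yet the two come out strongly correlated:

```python
import math
import random

random.seed(0)

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / math.sqrt(sum((x - ma) ** 2 for x in a) *
                           sum((y - mb) ** 2 for y in b))

# Two years of weeks; seasonal demand peaks around week 52 (Christmas).
weeks = range(104)
season = [math.cos(2 * math.pi * (w - 52) / 52) for w in weeks]

# The ad budget tracks the season; sales respond to the season ONLY --
# advertising has no causal effect here by construction.
ads = [100 + 40 * s + random.gauss(0, 5) for s in season]
sales = [1000 + 200 * s + random.gauss(0, 20) for s in season]

print(f"correlation(ads, sales) = {pearson(ads, sales):.2f}")
```

A model fed these two series would cheerfully report a large ‘advertising effect’ – unless the modeller knew to control for the common cause.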
3) Thirdly there is the subjective art of deciding what to include in the model and what to leave out. Typically advertising spend and price are included as potential drivers of sales, but what about competitors’ advertising and prices ? And what about point of sale ? And numbers of merchandisers and sales people ? And trade promotions ? And publicity levels ? And product placements ? And allowance for media strategy ? And creativity and branding execution ? And new variant/product/brand launches – ours and competitors’ ? And seasonality ? And what’s going on in other categories – complementary and competitive ?
The problem is that if some things are left out then the resulting statistical model may be wrong or misleading. But if too many variables are put in, the model will be horribly complex, and complex models don’t tend to work – see the 4th problem below. How can the modeller know if they got it right ?
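The cost of leaving something out can be simulated too. In this illustrative sketch (invented numbers), sales truly depend on both our price and the competitor’s price; omit the competitor and the estimated price effect comes out badly biased, with nothing in the output to warn the modeller:

```python
import random

random.seed(7)

def ols(X, y):
    """Least squares via the normal equations (Gaussian elimination).
    X is a list of rows; the first column should be the intercept (1.0)."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for col in range(k):                       # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * k
    for i in reversed(range(k)):               # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j]
                              for j in range(i + 1, k))) / A[i][i]
    return coef

n = 300
price = [100 + random.uniform(-20, 20) for _ in range(n)]
# The competitor roughly matches our price, so the two move together.
comp = [p + random.gauss(0, 5) for p in price]
# True process: our price hurts sales, the competitor's price helps.
sales = [500 - 2.0 * p + 1.5 * c + random.gauss(0, 5)
         for p, c in zip(price, comp)]

full = ols([[1.0, p, c] for p, c in zip(price, comp)], sales)
omitted = ols([[1.0, p] for p in price], sales)

print(f"price effect, competitor included: {full[1]:+.2f} (true -2.00)")
print(f"price effect, competitor omitted:  {omitted[1]:+.2f}")
```

The omitted-variable model still fits the data respectably – the bias is invisible unless you already know the true process, which of course the modeller never does.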
We know that modern markets are complex. And that a brand’s sales depend on a huge range of influences, many of them interacting. Much of the time these complex forces (some of which are marketer controlled, most not) result in an equilibrium where brands maintain much the same level of sales this year as last and next. It says much about our knowledge of these forces and their dynamic interaction that there are so few known, guaranteed ways to increase sales – and even if we know what might increase sales, we can’t accurately predict by how much. Awkward things get in the way, like unpredictable competitor reactions and unanticipated interactive effects.
Yet marketing mix models claim to be able to explain sales, ie to show how this complex interaction of forces works. This seems a naively heroic goal.
4) Fourthly, the modeller has to make many decisions about how the various ‘sales drivers’ might interact with one another. For instance, might a price promotion increase the sales effects of accompanying advertising, and/or the other way round ? Might radio advertising work better if it is accompanied by TV ? Or might the effects be independent ?
And then there are decisions about dynamics. If one variable changes, will it cause a change in another ? If we drop price, what will competitors do ?
Modellers might use ‘theory’ to guide these decisions (rather than use their own intuition or guesses). However, currently there is so little in the way of empirically grounded marketing theory that they are really flying blind.
Typically, then, those modellers with more time and larger budgets will compare a variety of models. But against what criteria ?
5) Finally, the modeller has to choose a model. It’s rare that they ever present more than one to the client (or send more than one to the journal). Typically they use a ‘best fit’ criterion, ie which model fits the existing (ie historical) data best. Unfortunately, and seldom mentioned to modelling clients, this means picking between a number of different models with very similar fit – different models, each with markedly different managerial implications, but all fitting the data about equally well. There is nothing to say that the chosen one (ie the best fit) is actually the right one (if any are).
The best fit model simply fits the historic data a tiny bit better than the others; this means it models everything in the data (including all the error) a tiny bit better. But this says absolutely nothing about its ability to model a different set of data, like a future set.
And yet this is of course what marketers want – to predict the future. Not to describe the past, but to describe a different circumstance, ie one where there are changes (to their marketing mix, their competitors, and a host of other things that happen as time marches on – like buyers ageing… we all get older). Marketers want to know what will happen if they do X, not what happened to happen last time.
What I’m pointing out is that the chosen ‘best fit’ model is not chosen against the criterion that matters. Not because the modellers aren’t clever, but simply because it’s an impossible task.
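This is easy to demonstrate. The illustrative sketch below (all numbers invented) fits a simple and a complex model to ‘historical’ sales that are really just flat demand plus noise. The complex model wins on fit to history – and loses badly on the future, because the extra flexibility was spent modelling the error:

```python
import random

random.seed(1)

def fit_poly(x, y, degree):
    """Least-squares polynomial fit via the normal equations."""
    k = degree + 1
    A = [[sum(xi ** (i + j) for xi in x) for j in range(k)] for i in range(k)]
    b = [sum(xi ** i * yi for xi, yi in zip(x, y)) for i in range(k)]
    for col in range(k):                       # Gaussian elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * k
    for i in reversed(range(k)):               # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j]
                              for j in range(i + 1, k))) / A[i][i]
    return coef

def mse(coef, x, y):
    """Mean squared error of a fitted polynomial on a data set."""
    preds = [sum(c * xi ** i for i, c in enumerate(coef)) for xi in x]
    return sum((p - yi) ** 2 for p, yi in zip(preds, y)) / len(y)

# Invented 'sales': flat underlying demand plus random noise.
# The first 24 months are the history the models are fitted to;
# the final 12 months stand in for the future.
x = [m / 12 for m in range(36)]
y = [100 + random.gauss(0, 5) for _ in x]
hist_x, hist_y, fut_x, fut_y = x[:24], y[:24], x[24:], y[24:]

results = {}
for degree in (1, 6):
    coef = fit_poly(hist_x, hist_y, degree)
    results[degree] = (mse(coef, hist_x, hist_y), mse(coef, fut_x, fut_y))
    print(f"degree {degree}: error on history {results[degree][0]:8.1f}, "
          f"error on the future {results[degree][1]:8.1f}")
```

A ‘best fit to history’ criterion would pick the complex model every time. Scoring on held-out future data – the criterion that actually matters – picks the simple one.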
In stark contrast, scientists look for simple models that hold under a wide range of conditions. This is the criterion used to choose a scientific model (not ‘best fit’ to one now-historic data set). Consequently econometric style techniques don’t get anywhere near as much use in science as marketers are led to believe. And when such techniques are used, it is generally with multiple sets of data covering as many different conditions as possible. And scientists make deliberate interventions and experimentally observe what happens. This is a completely different approach to modelling the real world, and one that marketers (and marketing academics) should make more use of.
PS It isn’t hard to integrate a scientific approach into your marketing modelling. Key ingredients: