Marketing isn’t the only discipline to have been seduced by the idea that modelling can somehow bypass the hard work of developing empirical laws. Few seem to realise how heroic the assumption is that teasing out a few weak correlations can quantify precisely how much [something of interest eg sales] will change in the future when [various other things, eg media choices] are altered.
Added to this is the ‘Big Data fallacy’ that adding together bunches of weak correlations will lead to more and more accurate predictions – “once we have enough data, a clever programmer, and a powerful enough computer, we’ll be able to predict everything we want”. It’s as if chaos theory taught us nothing at all.
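To see why piling up weak correlations does not rescue prediction, here is a minimal, purely illustrative Python sketch (synthetic data, arbitrary numbers, not from any real study): fifty genuinely weak predictors produce a flattering in-sample fit, yet the same fitted model predicts fresh data far worse.

```python
import numpy as np

rng = np.random.default_rng(0)

# Many "weak signals", relatively few observations -- the Big Data temptation.
n_train, n_test, n_predictors = 60, 60, 50

# Each predictor has only a tiny true relationship with the outcome;
# almost all of the outcome is noise.
X_train = rng.normal(size=(n_train, n_predictors))
X_test = rng.normal(size=(n_test, n_predictors))
true_coef = rng.normal(scale=0.05, size=n_predictors)  # genuinely weak effects
y_train = X_train @ true_coef + rng.normal(size=n_train)
y_test = X_test @ true_coef + rng.normal(size=n_test)

# Ordinary least squares on all fifty weak predictors at once.
beta, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

r2_in = r_squared(y_train, X_train @ beta)   # flattering in-sample fit
r2_out = r_squared(y_test, X_test @ beta)    # honest out-of-sample fit
print(f"in-sample R^2: {r2_in:.2f}, out-of-sample R^2: {r2_out:.2f}")
```

With fifty predictors and sixty observations the in-sample R-squared is high largely by arithmetic accident (it approaches p/n even for pure noise), while the out-of-sample R-squared collapses. Adding more weak correlations makes the first number better and the second, if anything, worse.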
The basic work of science is making empirical observations, looking for patterns, and then, once you have found one, looking to see where it holds and where it doesn’t. This requires lots of replications/extensions over different conditions (eg countries, product categories, and so on). This is how scientific laws are developed – laws that give us the ability to make predictions. These replications/extensions also tell us which conditions don’t affect the law, and perhaps some that do. This leads to deep understanding of how the world works. Experiments can then be used to tease out the causal directions and magnitudes – what really affects the pattern, and by how much. Again, these experiments need to be done carefully, across a range of conditions that might matter.
Yes, this doesn’t sound very glamorous; it takes much time and effort (1% inspiration, 99% perspiration). Sometimes we get lucky, but generally many, many studies are required – by independent teams, using creatively different approaches – so we can be sure that the empirical phenomenon really does generalise, that it isn’t a fragile result (or a mistake) that only exists in one team’s laboratory.
Unsurprisingly, the idea that a computer model could bypass much of this hard work is seductively attractive.
Terribly complicated, yet naive, modelling seems to be everywhere. In population health, statistical correlations deliver precise-sounding estimates that if people eat particular foods (or particular amounts of fat, sugar, or alcohol, or spend too long sitting around) then their risk of dying early will be such-and-such. There is nothing wrong with this, so long as we recognise the weakness of the method. Unfortunately these correlations often get handed over to engineers who, with a spreadsheet and a few heroic assumptions about causality, produce model predictions that if the government taxed this, or regulated that, then x million lives would be saved, and $x billion saved in hospital bills. These predictions need to be treated with a high degree of scepticism. We need tests before legislation is changed and money is spent.
In climate science – a rather new, and until recently very small, discipline – modellers now seem to dominate. In the 1970s a short period of cooling led to worry about global cooling, but then temperatures turned to rising again, and climate scientists became seriously concerned about the role of rising CO2 levels. They rushed to develop models, and in the early 1990s they published their predictions of how much rising CO2 emissions would lift global temperatures, along with accompanying predictions of oceans rising, ice retreating, polar bears disappearing, and so on. Twenty-five years later they are confronted by the overwhelming predictive failures of these models: the models substantially over-predicted the warming that was supposed to occur, given that CO2 levels have indeed risen (the IPCC, even though they are ‘marking their own homework’, admit this in their assessments). The modellers are now starting the work of figuring out why. Meanwhile the forecasting scientists who criticised the climate scientists’ forecasting methods, and predicted this result, have been vindicated.
Models that show wonderful fit to historic data routinely fail in their predictions*. That’s why we revere scientific laws (and the theories built on them): they have made predictions that have come to pass, over and over.
* See also Dawes, J. G. (2004), ‘Price changes and defection levels in a subscription-type market: can an estimation model really predict defection levels?’, The Journal of Services Marketing, 18:1, 35-44.
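The historic-fit trap is easy to demonstrate. In this purely illustrative Python sketch (synthetic ‘sales’ data with arbitrary numbers, not drawn from any of the studies above), a very flexible model fits the history almost perfectly yet forecasts far worse than a humble straight line.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Historic" data: a simple linear trend plus noise (think yearly sales).
years = np.arange(10, dtype=float)
sales = 100 + 2 * years + rng.normal(scale=3, size=years.size)

# A flexible model (degree-9 polynomial) fits the ten historic points
# essentially perfectly; a straight line leaves visible residuals.
flexible = np.polynomial.Polynomial.fit(years, sales, deg=9)
simple = np.polynomial.Polynomial.fit(years, sales, deg=1)

in_err_flexible = np.max(np.abs(flexible(years) - sales))
in_err_simple = np.max(np.abs(simple(years) - sales))

# Now "predict" three years ahead and compare to the true underlying trend.
future = np.array([10.0, 11.0, 12.0])
truth = 100 + 2 * future
out_err_flexible = np.max(np.abs(flexible(future) - truth))
out_err_simple = np.max(np.abs(simple(future) - truth))

print(f"historic fit error - flexible: {in_err_flexible:.2g}, simple: {in_err_simple:.2g}")
print(f"forecast error     - flexible: {out_err_flexible:.2g}, simple: {out_err_simple:.2g}")
```

The flexible model’s historic error is near zero (it is flexible enough to thread through every noisy point), but its forecasts swing wildly off the trend, while the simple model’s modest historic misfit buys it a far smaller forecast error. Fit to the past is cheap; prediction is the test.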