Conflicts in the marketing system

I do sometimes hear ad agency people say “we don’t care about creative awards, we are totally dedicated to each client’s business objectives”, especially when in front of clients.  It makes me wonder whether they are lying (that’s bad), whether they are deluding themselves (which may be even worse), or whether they are admitting that they simply aren’t good enough to win creative awards (and that’s not good either).

I think it is important to be grown-up, honest and up-front about conflicts of interest.  For example, Martin Sorrell wants to sell marketers stuff; his empire (like his competitors’) will sell whatever marketers will buy that it can deliver profitably.  This matters far more to the agency than whether or not it is the best way to build their clients’ brands.

Creatives want to win awards.  And if this doesn’t sell a single extra unit of your product they aren’t really worried.

Media agencies want to do what they know, what’s easy, and they have to sell the media they have committed to buy.

Market research agencies want to sell standardised products, ideally that use automated data collection and analysis, or low-level people.  They can’t make big profits from stuff that requires in-depth analysis by expensive people.  They do far more R&D into reducing data collection costs than into better research.

Retailers want to win share from other retailers.  They don’t care if this means selling another box of your product or not.

So partners, yes.  But there are conflicts in the system.  This is fine; so long as everyone understands the conflicts, they can be managed – it’s possible for everyone to win.  But pretending these don’t exist is dangerous.

Professor Byron Sharp

July 2014.

Brand Equity twaddle

I occasionally send some friends interesting (both good and bad) articles from marketing academia.  This is an interesting reply.  I won’t name the academic paper.

Dear Byron,

Thank you for sending this paper. I think the correct response, using the scientific vernacular, is ‘utter twaddle’.

The framework below is very neat. It’s very sequential. But it’s also very wrong.

When marketing academics observe what really happens in the real world, they can make powerful discoveries that help further the discourse around how people behave and make choices. But when marketing academics start with a hunch (disguised as a testable hypothesis) and then find data to back it up, they are, at best, worthless, and at worst, damaging.

I wouldn’t waste my time critiquing each component in this model. What I will do is give you an example of a very real ‘real world’ observation about how people behave, despite what one might think they have in their heads regarding Brand Associations and so called Brand Equity.

I am lucky to live in a very nice suburb of southwest London called St. Margarets. It’s what one might call leafy and affluent. Its residents are, on the whole, fortunate to be significantly better off than the average UK population in socio-economic terms. Lots of doctors and lawyers and bankers and media types.

 The overwhelming majority of my St. Margaretian friends and acquaintances are well-educated and, again on the whole, politically liberal. Generally left of centre, having evolved from the armchair socialism of their more zealous, youthful days. I should put an important caveat in place here; I was never an armchair socialist, nor indeed a socialist of any kind really. Anyway, I digress.

There is a nice sense of community in St. Margarets and I have made many good friends here over the years. And in addition to these friends, there are plenty of others with whom I can enjoyably engage in pleasant and cordial passing conversations. As you can imagine, it’s fertile ground for many dinner parties and for gatherings in local hostelries.

Once the wine has started flowing, and the initial greetings and polite exchanges (such as how the kids are getting on) have been completed, conversations inevitably move on to the more ‘serious’ topics du jour. House prices, gossip about who Sally was seen with last week, standards of schooling, what Charles said to his accountant, how moral standards are becoming polarised between the haves and the have nots, what Carol was caught doing with Bob down by the river. You know the sort of thing, I’m sure.

Commerce will often have its place in this cauldron of righteousness as well. I distinctly recall more than a few conversations about business ethics. And a number of these have centred around a well-known retailer which has had the temerity to open one of its smaller store formats slap bang in the middle of St. Margarets. Right next door to the railway station. Outrage abounds.

“Tesco Express … they’re crucifying all our little local traders,” opines Gareth. “They bully farmers into bulk deals with derisory margins … Tesco is ruining our agriculture,” shrieks Camilla. “The way they treat their shop workers … it’s slave labour … they should be taken to the International Court of Human Rights,” booms Barry.

I listen with interest. Sometimes, I must admit, the odd fair point can be heard from time to time amongst the remonstrations and general distaste for having such a purportedly disreputable behemoth impose itself on our little suburban ‘village’ (as the Estate Agents like to describe it). But the over-riding theme is one of deep-seated antipathy. A theme with which, I must say for the record, I disagree. I think Tesco is a great business and great for our economy.

The dinner parties end, the hostelries close, and we all go home to our beds. Watered, fed and safe in the knowledge that the world would be a better place … if only ‘they’ just listened to our wisdom.

When I travel home from work during the week, I frequently do so by train. Most of my friends and acquaintances do the same. We come in to St. Margarets station, wearied by the day’s travails, ready to put our feet up and watch the telly. We trudge up the station stairs to the street. As I start to walk down the street, I remember that Cathy called me to remind me to pick up a pint of milk and some chicken breasts for dinner. Ooh, and I can pick up a half decent bottle of wine too … why not!

I turn in to a shop which is already teeming with St. Margaretian commuters.

Before I can even reach down to pick up the chicken breasts I’m tapped on the shoulder. I turn around to see a smiling friend; it’s Camilla. “How lovely to see you”, she says, “(mwah mwah) feels like I saw you just two days ago at Barry’s for dinner.” We both laugh. “Oh, look, speak of the devil, Barry’s over there with Gareth at the check-out.”

“Anyway, see you soon I hope, Camilla,” I say, “We’re going round to the Greensmith’s next Saturday, probably see you there.”

As I leave Tesco, which is slap bang in the middle of St. Margarets, right next to the railway station (to where thousands of well-heeled St. Margaretians return every evening), I give a little wave to Sally, Charles, Carol and Bob. They have arrived back on the next train. They’re just popping in to Tesco to pick up some things before they go home.

As I open my front door, a question comes to mind: can the need to get a pint of milk, as easily as possible, really trump the most heartfelt attitudes expressed around a dinner table in St. Margarets only a day or so earlier? It would appear so.

‘There’s nowt as queer as folk,’ as the old Yorkshire saying goes.

People may claim to hold firm perspectives about brands. The truth is that there is a world of difference between what someone consciously says and what they actually decide (primarily subconsciously) to do.

So, yes, that paper is truly dreadful.



On 4 May 2013, at 01:25, Byron Sharp wrote:

A poster-child for everything that is wrong with brand equity research.  If you can’t be bothered reading the article just look at the struggle they had to come up with any findings or implications.

Behaviours can be useful predictors of other behaviours

MCDONALD, Heath, CORKINDALE, David & SHARP, Byron 2003. Behavioral versus demographic predictors of early adoption: a critical analysis and comparative test. Journal of Marketing Theory and Practice, 11, 84-95.




Predicting which consumers will be amongst the first to adopt an innovative product is a difficult task but is valuable in allowing effective and efficient use of marketing resources. This paper examines the accuracy of predictions made about likely first adopters based on the most widely accepted theory and compares them to predictions made by examining the relevant past behavior of consumers. A survey of over 1000 consumers examined adoption of an innovative technology: compact fluorescent lightglobes. The results show that variables which were derived from a utility and awareness perspective were a more accurate and managerially useful predictor than the demographic variables derived from the widely accepted theory based on the work of Rogers. It is suggested that these alternative variables could be utilized more readily by marketing managers in many circumstances.

Making marketing science easier to read & understand – suggested format for articles

I sometimes read academic articles in very different disciplines, like medicine and biology. Their formats differ from those we use in marketing. Often their articles are much shorter, yet just as detailed when it comes to describing the research, how it was done and what the results were.

Why are marketing journal articles so long? And so obscure?

Do they need to be?

Now that publishing has gone online we don’t need to be subject to the same constraints as in the past. I wonder if the best format would be for articles to be about 800 words long, a clear exposition of what was done and what was found and what it might mean. After the article there could be a moderated/refereed Question & Answer section. This would be enormously useful, and take the pressure off authors to write perfectly, fully anticipating the needs of all readers on the first go.

Extensive details, like the whole questionnaire or data coding frame, could be made available via links.

What’s wrong in the house of academia, and a suggestion how to fix it

Presented to the Australia & NZ Marketing Academy Conference December 2012.

Marketing has a small ‘crisis literature’ where academics themselves bemoan the lack of real-world importance of academic research into marketing. For example, “Is Marketing Academia Losing Its Way”, Journal of Marketing, 2009.

Back in 1989 John Rossiter documented the growing gap between marketing scientists and consumer researchers even though they were supposed to be studying similar things. He warned the consumer researchers in particular that they were in danger of retreating into an Ivory Tower detached from the empirical findings regarding mass buying behaviour. Yet the trend continued unabated.

I myself, and colleagues, have had articles rejected from good journals when they chiefly documented a substantive finding about the world. My ‘favourite’ was when the editor of Marketing Science wrote to Jenni Romaniuk and myself about our work documenting the 60/20 law (i.e. it’s not 80/20).  Effectively he said, “great stuff, I’m going to use this in my teaching, but we can’t publish it in Marketing Science because the journal tries to feature leading edge analysis whereas what you did was simple and transparent”. An open admission that Marketing Science is really a journal about engineering above science.

Yet around the same time the Nobel Prize for Physics was awarded to two Russian scientists who found a way of producing graphene, a single-atom-thick layer of carbon; this potentially extremely useful material had once been thought unlikely to exist in the real world. They isolated graphene with a simple technique using common household sticky tape: placing bulk graphite between two sheets of Scotch Tape, they simply repeatedly pulled the tape apart, removing layers of atoms until they achieved graphene. A colleague remarked that it showed you could still win a Nobel Prize “for mucking around in the lab”. In physics there is still respect for substantive discoveries.

The defence or excuse from marketing academia is that we have been placing our emphasis on rigour over relevance. But recent shocking findings in marketing academia, other social sciences, and even medical research have exposed the myth of improved quality. There have been some high-profile examples of scientists disgraced for falsifying results (including in marketing), while 10% of psychologists admit to falsifying data (though they presumably evaded discovery), and most admitted to sometimes engaging in dubious practices like selectively reporting the studies “that worked” (and hiding those that did not support their hypotheses). Relatively higher rates of dubious practice were found among neuroscientists and cognitive & social psychologists. What do you think the rates would be in marketing?

A recent analysis (Wilhite and Fong 2012) of the dubious practice of journals encouraging (or bullying) authors to cite other articles from the same journal reported that the Journal of Retailing, Journal of Business Research, and Marketing Science were stand-outs at the very top of the suspect list – and that’s not a list of only marketing journals. Indeed marketing journals stood out from other disciplines as engaging in coercive citation to try to manipulate their citation impact scores.

In medical research standards are undoubtedly higher. Yet when pharmaceutical companies seek to replicate findings reported in medical journals, in most cases it can’t be done – even though they try hard; after all, they are hoping to make money from the discovery. Many of the findings for cancer drugs are highly specific to particular circumstances (e.g. patients with particular genetic profiles) but the researchers didn’t explore these conditions; they just got lucky with their so far unrepeatable finding.

In marketing Hubbard & Armstrong (1994) documented that academics hardly ever try to replicate findings. We simply assume they are true (or perhaps not worth bothering with). Not surprising perhaps, when replications are done they usually fail to repeat the original result. The same sort of scandal has hit a number of famous psychology experiments. “The conduct of subtle experiments has much in common with the direction of a theatre performance,” says Daniel Kahneman, a Nobel-prize-winning psychologist at Princeton University. Trivial details such as the day of the week or the colour of a room could affect the results, and these subtleties never make it into the methods sections of research articles. Hmm, what’s the difference between a result that is so sensitive to many trivial, unknown and unpredictable details and no result at all? Why should we care about a finding that only occurs in highly particular circumstances?

This is a bigger problem than fraud and dubious research practice. We need to stop publishing one-off flukes and explore the generalizability of findings – where and when does a result hold? How does it vary across product categories, brand size, brand age, different types of consumers, at different times, and so on?

Even large and varied data sets are being wasted in marketing when results are presented as an average across many different conditions, e.g. “marketing orientation is associated with higher financial performance, r = 0.28”. This tells us little about the real world; the average may not actually apply in any of the major conditions.
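The point is easy to demonstrate with a few lines of code. This is a minimal sketch with made-up numbers, not real marketing data: two hypothetical product categories with opposite relationships between two variables pool into an average correlation that describes neither condition.

```python
def pearson(xs, ys):
    """Plain Pearson correlation, no external libraries needed."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical data: in category A the two variables rise together,
# in category B the relationship is exactly reversed.
cat_a_x, cat_a_y = [1, 2, 3, 4], [2, 4, 6, 8]   # r = +1.0
cat_b_x, cat_b_y = [1, 2, 3, 4], [8, 6, 4, 2]   # r = -1.0

# Pooling the two conditions gives r = 0.0, an 'average' relationship
# that holds in neither category.
pooled = pearson(cat_a_x + cat_b_x, cat_a_y + cat_b_y)
print(pearson(cat_a_x, cat_a_y), pearson(cat_b_x, cat_b_y), pooled)
```

A reported average of, say, r = 0.28 could similarly conceal strong positive effects in some conditions and none (or negative ones) in others, which is exactly why the condition-by-condition exploration argued for here matters.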

We need to explore generalizability or otherwise our ‘discoveries’ tell us very little about the marketing world that we are supposed to be studying.

And we need to stop prematurely building shaky prescriptive theoretical edifices upon these doubtful, poorly documented findings.

If we don’t carefully and thoughtfully (call it ‘theory driven’ if you wish) examine a finding and how it varies (or not) across conditions, then we are stuck with findings that probably were one-off events – with no way of telling. Currently we have to treat our findings either as applying to one historic data set covering one particular set of conditions that may never be seen again, OR as a result that generalises to all product categories, all countries, all seasons. Both views are preposterous; something in between is far more likely, but there is a lot of land “in between” and it needs to be explored.

The dubious research practices discussed above come partly from ‘confirmation bias’, the fact that (marketing) scientists want to find evidence to support their hypotheses – and they want “a positive result”, otherwise they lack the motivation to publish, or the belief that they will be accepted by any decent journal. Brodie and Armstrong (2001) suggested researchers adopt multiple competing hypotheses as a way of overcoming this bias. A worthy suggestion, but those implementing it tend to simply have their favoured hypothesis and its opposite – and they still obviously want to see their favoured hypothesis supported. So I would like to make a different suggestion. Let’s use research questions with the words “when”, “where” and “under what conditions”. Rather than black and white “does X cause Y” type hypotheses, let’s ask “when does X cause Y?”, “does X cause Y in highly advertised categories?”, “is X more a cause of Y in developing economies?”. This is the basic work of science, documenting patterns in the real world. When do things vary, and when do they not?

If researchers use “when”, “where” and “under what conditions” research questions they aren’t trying to prove a proposition, so they don’t have to worry about failure, and so they should hopefully be less likely to tweak data and cherry-pick findings. Also, very importantly, researchers will be documenting something useful about the world because they will be exploring generalizability.

PS The Nobel Prizes for Physics are awarded in line with Alfred Nobel’s criteria “to those who, during the preceding year, shall have conferred the greatest benefit on mankind”, which explains the worthy emphasis on substantive findings. Alexander Fleming’s accidental discovery of penicillin is another example of the Nobel prize committee valuing important discovery over display of academic prowess.


ARMSTRONG, J. S., BRODIE, R. J. & PARSONS, A. G. “Hypotheses in marketing science: Literature review and publication audit.” Marketing Letters 12, 2 (2001): 171-187.

HUBBARD, R. & ARMSTRONG, J. S. “Replications and Extensions in Marketing: Rarely Published but Quite Contrary.” International Journal of Research in Marketing 11, (1994): 233-248.

REIBSTEIN, D. J., DAY, G. & WIND, J. “Guest editorial: is marketing academia losing its way?” Journal of Marketing 73, 4 (2009): 1-3.

ROSSITER, J. R. “Consumer Research and Marketing Science.” Advances in Consumer Research 16, (1989): 407-413.

WILHITE, A.W & FONG, E.A. “Coercive citation in academic publishing”. Science 335, (Feb 2012): 542-543.

The flawed Stengel Study of Business Growth

Here I describe the ‘Stengel Study of Business Growth’ using quotes from “Grow: How Ideals Power Growth and Profit at the World’s Greatest Companies” by Jim Stengel, published by Crown Business 2011. Along the way I point out the fatal flaws in the research design.

The ‘Stengel Study of Business Growth’ started in 2007 when Procter & Gamble’s CEO A.G. Lafley endorsed Jim’s idea to “commission a study to identify and learn from businesses that were growing even faster than we were, in whatever industry” (p. 24).

Initially the P&G team studied “the fastest growing brands over the previous five years” (p.24) identified in collaboration with market research agency Millward Brown Optimor using their BrandZ database. The team “assembled five-year financial trends on twenty-five businesses that had grown faster than P&G over that period. The teams then dug behind the numbers with additional research, including interviewing business executives, agency leaders, brand experts, and academics at Harvard, Duke and Columbia”. (p.25)

“We went in looking for superior financial growth, and only after that for whatever the top-ranked businesses were doing differently from the competition” (p. 26). Professor Philip Rosenzweig explains this classic sampling mistake as being like trying to learn about blood pressure by only looking at a small group of patients who all have high blood pressure.

Another very important mistake, one that we have learnt about as various strategy researchers have made it over the years, is to look for causes of success by interviewing managers and ‘experts’ for their opinions on firms that have been doing well. Known as “the Halo Effect”, people tend to say that firms they know are performing well possess all sorts of desirable characteristics in terms of culture, leadership, values and more. No one describes a known winner as having “unfocused strategy”, or “weak leadership”, or “lack of customer focus”, or “lack of ideals”, or whatever else the researchers decide to ask opinions about. As Philip Rosenzweig shows clearly in his book, “many things we commonly claim drive business performance are simply attributions based on past performance”.
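A toy simulation makes the missing-control-group problem concrete. Everything here is hypothetical and purely illustrative: a trait (“clear strategy”, say) held by 70% of all firms, winners and losers alike, and entirely unconnected to performance, will still show up in roughly 70% of the winners you interview – which looks like a discovery only if you never check the base rate.

```python
import random

random.seed(0)

# 10,000 hypothetical firms. The trait is assigned to 70% of them at
# random, completely independently of the (random) growth figure.
firms = [{"clear_strategy": random.random() < 0.7,
          "growth": random.gauss(0.0, 1.0)}
         for _ in range(10_000)]

# A 'Stengel-style' study samples only the 50 fastest growers.
winners = sorted(firms, key=lambda f: f["growth"], reverse=True)[:50]

rate_in_winners = sum(f["clear_strategy"] for f in winners) / len(winners)
rate_overall = sum(f["clear_strategy"] for f in firms) / len(firms)

# Both rates sit near 0.7: interviewing winners alone reveals nothing,
# because the trait is just as common among the firms never studied.
print(round(rate_in_winners, 2), round(rate_overall, 2))
```

Without the matched sample of non-winners, the high rate among winners is indistinguishable from a genuine cause of success.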

“Successful companies will almost always be described in terms of clear strategy, good organization, strong corporate culture, and customer focus” (p. 87). Rosenzweig dramatically shows how, when successful companies falter, experts abruptly change their assessment. Suddenly the previously described “strong culture” is now described as “rigid”, their previously declared “promising new initiatives” are now described as “straying”, their “careful planning” now in hindsight turns out to be “slow bureaucracy”, and so on. In reality large businesses change very slowly, but opinions about them change quickly and are largely based on current financial performance (which is itself largely due to environmental and competitor effects).

The Halo Effect is particularly strong for subjective, nebulous concepts such as ‘values’ and ‘ideals’. The ‘Stengel Study’ made no attempt to supplement their judgements with ‘hard’ objective measures.

In the Stengel study they ‘discovered’ that their chosen high-growth firms were ‘ideals driven’. The central finding therefore was that “businesses driven by a higher ideal, a higher purpose, outperform their competition by a wide margin”. Yet there is no mention of any systematic investigation of competitors; perhaps many of these lesser performers were also ‘ideals driven’? What we can expect is that, because of the Halo Effect, less successful performers would be less likely to have been described by interviewees as having clear ideals well activated throughout the business – irrespective of reality.

Subjective concepts such as ‘ideals’ almost certainly introduce confirmation bias on the part of researchers – when there are no objective measures it’s near impossible for a researcher to stop themselves seeing what they want to see. The “unexpected discovery” of the causal effect of ideals, says Jim Stengel “corroborated what I had implicitly believed and acted upon throughout my career”. Hmm, of course it did.

With this ‘ideals’ hypothesis now firmly in place, the full ‘Stengel Study’ was then done, after Jim Stengel left P&G, by selecting 50 brands based on their excellent recent financial performance over 10 years. As a whole this group (referred to as “The Stengel 50”) “grew three times faster over the 2000s than their competitors…individually some of the fastest-growing of the Stengel 50, such as Apple and Google, grew as much as ten times faster than their competition from 2001 to 2011.”

Promotional material for Stengel’s book says that “over the 2000s an investment in these companies—“The Stengel 50”—would have been 400 percent more profitable than an investment in the S&P 500”. The implication is that this proves Stengel’s ‘ideals’ thesis – but Stengel picked these companies for their financial growth!

If they had been picked purely on some, ideally ‘hard’ (or intersubjectively certifiable), measure of being ‘ideals driven’, then correlations with financial performance might mean something – especially if this were future, not past, performance. But as these companies were picked for their financial performance, their stock price performance over the same period shows nothing.
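The circularity can be sketched in a few lines, using invented numbers. Give 500 imaginary stocks decade-long returns that are pure luck, select the 50 best performers over that decade, and the selected portfolio ‘beats the index’ over the very same decade – by construction, regardless of ideals, strategy or anything else.

```python
import random

random.seed(1)

# 500 hypothetical stocks whose decade returns are nothing but chance.
decade_returns = [random.gauss(0.0, 0.5) for _ in range(500)]

index_return = sum(decade_returns) / len(decade_returns)

# 'Discover' the 50 best performers - selected on the very outcome
# we are about to compare against the index.
top50 = sorted(decade_returns, reverse=True)[:50]
top50_return = sum(top50) / len(top50)

# The selected group outperforms over the selection period by
# construction; it says nothing about any shared trait, and nothing
# about future performance.
print(top50_return > index_return)
```

Any list of firms chosen for past growth will show this ‘400 percent more profitable’ pattern, which is why only future, out-of-sample performance could test the thesis.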

A team of four second-year MBA students being taught by Jim Stengel and Professor Sanjay Sood made the Stengel Study the subject of their required applied management research thesis; “this team crawled all over the Stengel 50 to test the role of ideals”, conducting interviews with executives, academics and consultants. No one should be surprised that they found what their instructors believed. They and the ‘Stengel Study’ both passed with flying colors, reports Jim Stengel (page 34).

There is one addition to the Stengel Study which is different from previous similar (flawed) studies of business success. Stengel arranged his leading brands into “five fields of fundamental human values that improve people’s lives” by (1) eliciting joy, (2) enabling connection, (3) inspiring exploration, (4) evoking pride, or (5) impacting society (sic). Millward Brown then used implicit and explicit association measures and found that the Stengel 50 brands are perceived as more associated with their selected ideals than competitors.

Again this is a staggering piece of circular logic. First analyse a select group of brands for what particular ideals they represent, then take these ideals into market research and voilà, these brands turn out to be more associated with these particular ideals. This is not a test that these ideals drive performance; it is simply a test of the researchers’ judgement of brand image. It merely shows that the researchers live in the same culture as the market research respondents. Jim Stengel thinks Blackberry ‘enables connection’ and so does the market; Jim Stengel thinks Mercedes-Benz ‘evokes pride’ and so do normal people.

Now one might reasonably argue that there is advantage in FedEx and Blackberry being more associated with a category benefit such as “enables connection” than their competitors. However, leading brands always show higher associations because they have more users, who use them more often. Behaviour has a powerful effect on attitudes and memory, for evidence see BIRD, M. & EHRENBERG, A. 1972 “Consumer Attitudes and Brand Usage – Some Confirmations”. Journal of the Market Research Society, 14, 57. RIQUIER, C. & SHARP, B. 1997 “Image Measurement and the Problem of Usage Bias” in proceedings of 26th European Marketing Academy Conference, Warwick Business School, U.K., 1067-1083. ROMANIUK, J. & SHARP, B. 2000 “Using Known Patterns in Image Data to Determine Brand Positioning”, International Journal of Market Research, 42, 219-230.

There is no mention of controlling for this effect.


In summary, the ‘Stengel Study’ makes the same or similar mistakes as much earlier flawed studies that claimed to uncover the secret of sustained financial success. Neither Jim Stengel nor any of his team appears to have read Philip Rosenzweig’s “The Halo Effect: … and the Eight Other Business Delusions That Deceive Managers”, which turns out to be a great pity. As he writes, “if the data aren’t of good quality, it doesn’t matter how much we have gathered or how sophisticated our research methods appear to be”. The Stengel Study is yet another study that is deeply flawed; it tries to look like science, but turns out to be merely a story, one that will appeal to many but tells us nothing reliable (or new) about the world.

A final note: based on the track record of previous such studies, I expect the financial performance of these ‘ideals driven’ companies to fall back in the near future. Some, such as Blackberry and HP, have already suffered very dramatic reversals of fortune.

Review of Jim Stengel’s disappointing book “Grow”

Research reveals the hidden secret to business success? No, sadly, this is pseudoscience that will only convince the most gullible of minds.

Jim Stengel seems a nice guy; he wants us to be passionate about our business and to feel that there is a greater purpose than simply making money.  Few would disagree.  But he also claims to have discovered the secret to sustained super profits – based on a flawed study dressed up as science.

Stengel is a marketing consultant, a famous one because he was formerly Chief Marketing Officer of Procter & Gamble (2001–08), until he surprisingly ‘retired’ to consult (and write this book). During the decade that Jim mostly presided over marketing at P&G the company was pretty successful, at least in comparison to the three-year period of unsuccessful restructuring and CEO turnover immediately before he became CMO. However, the success of the 2000s has been exaggerated. During Jim’s decade P&G’s stock price doubled, but that is a misleading overstatement due to the brief dramatic dip in 2000 (the reasons why are discussed here). Measured without that dip the year before Jim took over as CMO, the stock price only improved 20% over the full decade. That’s less impressive than the previous decade (the 90s), when the stock price increased five-fold (or three-fold when the brief dip of 2000 is considered); similar gains were also made in the prior decade (the 80s). So P&G’s performance during Jim’s tenure should perhaps more accurately be described as a mild turnaround, or partial restoration. This chart shows the full history of the stock price.

In all fairness though, Jim Stengel doesn’t ask us to believe his amazing discovery just because he was (like millions of others) a successful practitioner; his claims are based on what he calls an unprecedented 10-year empirical study of highly successful firms and the brands they own. But his study does have precedent: it joins a growing list of books that claim to have discovered a few simple rules for business that near-guarantee profit performance that will beat all rivals. Each of these books is based on severely flawed research that ‘proves’ just what the author wanted to say in the first place (which is the opposite of a surprising discovery). “In Search of Excellence” was one of the first of these books, and it was largely discredited when the excellent companies went on to make poor financial returns in the years after the book came out.

Professor Phil Rosenzweig exposes these flaws in his 2007 book “The Halo Effect: … and the Eight Other Business Delusions That Deceive Managers“.

I describe and critique “The Stengel Study”, which is the basis of Stengel’s book, here. A quick summary: to detect factors that might cause financial success, Stengel should at least have compared very carefully matched samples of both successful and unsuccessful firms, and developed hard objective measures of strategy – not relied almost entirely on interviews with experts. Also, to avoid confirmation bias, the researchers who described the firms and their strategies should not have been aware of which were the successful and which the unsuccessful ones. And finally, any resulting theories should have been tested against the future performance of the firms. Otherwise what looks like science turns out to be simply a story.

Tellingly, the ‘research’ takes up a small portion of Stengel’s book; the rest is a story: anecdote and assertion. Jim tells us what to do, but experienced marketers looking for strategic advice won’t find much new or particularly helpful. It’s pretty much the standard sort of consultant fare, such as “deliver a near-ideal customer experience”.

It’s well meaning though: Stengel wants us all to be passionate about our business and to feel that there is a greater purpose than simply making money (even if finding out how to make money was the motivation of his ‘research’).  This is a nice sentiment; however, the success of brands (and the large corporations behind them) is far more complex than Stengel’s book and its predecessors claim.

Zero Moment of Truth – Hype, Nonsense, and Pseudoscience

Shock, how amazing – new ‘research’ from Google shows that advertisers should be spending far more of their advertising dollars online with Google.

In a report that insults the intelligence of the marketing community Google tell us that consumers are doing more on-line product research than they did in the past (when they weren’t online). Unless you have been in a coma for the past decade you didn’t need Google to tell you that. But some quantitative insight would be useful – how much are consumers using on-line sources of product information, and which sources? Unfortunately Google’s research and data presentation are so shoddy we can glean nothing reliable.

They did an online survey (i.e. biased towards heavier online users) of various subsamples (e.g. 500 people who had bought an automobile, another 250 who had applied for a new credit card in the past six months, and so on).

All the data concerns claimed (recalled) behaviour and the sub-sample results are then often averaged into meaningless metrics.

The report highlights stupid, meaningless quotes like “70% of Americans say they look at product reviews before making a purchase”. Is this every purchase? Or have 70% at least once in their lives looked at a product review? Actually this quote is sourced from an equally sloppy 2009 study that isn’t even Google’s – why they chose it when they have their own “new research” puzzles me.

I could spend all day pointing out how meaningless the metrics are in the Google report, but I don’t think there is any need. Only extremely gullible marketers would rely on such a sloppy, blatant piece of self-promotion disguised as research.

In May, Professor Jerry Wind and I are hosting a conference at Wharton. If Google had some meaningful, reliable data on the value of online touchpoints we would be delighted to invite them to present.

More choices increase sales

Early this year I attended an excellent, thought-provoking presentation by the very lovely Professor Sheena Iyengar from Columbia Business School on her (small-scale) choice experiments.  The results seemed to suggest that consumers could easily experience choice overload.  And the implication for marketers was to beware of offering lots of choices, because this can actually depress sales.

It was this last implication that worried me because (a) it seemed to clash with the real-world evidence, and (b) there are good logical reasons why different consumers on different days might notice/want different things, so more choice should satisfy more people.

I asked Prof Jordan Louviere, director of the Centre for the Study of Choice, and one of the world’s top authorities.  He replied bluntly “it’s worthless. These guys do not understand how to run experiments properly and/or how to properly analyse data, so they draw totally inappropriate conclusions about their results.”

Supporting Jordan’s assessment, replications of Sheena’s experiments by other researchers have failed.

Now the Journal of Consumer Research has published a meta-analysis of 50 different choice-overload experiments (including Sheena Iyengar’s) across categories and countries.  The results show that more choice options led to more (not less) consumption, that there is no generalised choice-overload effect, and that no conditions could be identified to explain why different studies get different effects.


No doubt consumers can find choices bewildering at times.  Marketers need to help them out e.g. by giving them signposts.

But the conclusion that offering more choice can easily decrease sales is an incorrect message.  More choices increase sales.

Can There Ever Be Too Many Options? A Meta‐Analytic Review of Choice Overload
Author(s): Benjamin Scheibehenne, Rainer Greifeneder, Peter M. Todd
Journal of Consumer Research, Vol. 37, No. 3 (October 2010), pp. 409-425

US brands are not losing their loyal customers – even more misleading metrics

Oops, they did it again. Catalina Marketing have announced that packaged goods brands in the US have lost about half of their loyal customers – AGAIN. Oh no, how horrific. It’s a wonder they have any customers left. It’s a wonder that major brands aren’t tumbling out of the market. Sell your shares in Kraft, P&G, Unilever, Coke….

Of course it is complete nonsense. There is nothing wrong with the data, just faulty analysis.

This is the second time Catalina Marketing have made this mistake. They are misinterpreting the natural wobble in people’s purchasing histories as real change in their loyalties.

Marketing scientists have known about this wobble for decades; it can actually be quantified using the NBD-Dirichlet model. But Catalina, in ignorance, instead report that brands have lost nearly half their loyal customers. Oh no, the sky is falling!!! Nice headline, but it is wrong, plain wrong.

A few years ago, when they first reported this, they said it was an unusual event due to the GFC. Now, having noticed that it happens every year, they say it is a terrible indictment of marketing.

All of this is wrong, because this would still happen even in perfectly stationary conditions where no brands are growing or declining, and no consumers are changing their propensity to buy the category nor their loyalties to the particular brands in the category.

Let me explain. Each of us has a tendency to buy from the category; some of us are heavy category buyers and most of us are light. On top of this we each have our own particular loyalties, so we buy some brands more often than others. These two mixed distributions mean that there is a lot of diversity between consumers of any product category – diversity which is modelled extremely well by the Dirichlet.

On top of this we don’t buy like robots. I might have a tendency to buy chocolate bars 5 times a year, and have loyalties so that I buy Snickers 30% of the time, so that’s once or twice a year. But some years I’ll buy chocolate bars more than 5 times a year, and some years less – for thousands of random potential reasons. Plus some years I’ll give Snickers more than 30% of my purchasing and some years less.

So even if I don’t make any changes to my tastes, habits and loyalties I could buy zero Snickers in a year or 5+.
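The wobble can be put in numbers. A minimal sketch, under illustrative assumptions of my own (not stated in the post): category purchases follow a Poisson distribution with a mean of 5 per year, and each purchase goes to Snickers independently 30% of the time, so Snickers purchases are themselves Poisson with mean 1.5 (the thinning property of Poisson processes).

```python
import math

# Illustrative numbers only: Poisson category buying (mean 5/year),
# with each purchase going to Snickers with probability 0.3.
# Thinning a Poisson process gives Snickers buys ~ Poisson(5 x 0.3).
category_mean = 5.0
brand_share = 0.3
brand_mean = category_mean * brand_share   # 1.5 expected Snickers buys

def poisson_pmf(k, mean):
    """Probability of exactly k events under a Poisson distribution."""
    return math.exp(-mean) * mean ** k / math.factorial(k)

p_zero = poisson_pmf(0, brand_mean)
p_five_plus = 1 - sum(poisson_pmf(k, brand_mean) for k in range(5))

print(f"P(no Snickers this year) = {p_zero:.2f}")        # about 0.22
print(f"P(5+ Snickers this year) = {p_five_plus:.3f}")   # about 0.019
```

So a perfectly stable Snickers buyer has roughly a one-in-five chance of buying none at all in a given year, with no change in tastes whatsoever.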

If we classify people as “loyals” or “heavies”, or whatever, based on what they do in one year, then a lot of people are going to be misclassified. They aren’t really super-loyals; it’s just that in that year they were – perhaps they had a party, some friends visited… a thousand potential reasons. Next year they are likely to revert closer to their normal purchasing. It looks like they have changed when they haven’t. This is behind the phenomenon statisticians call regression to the mean. Catalina Marketing don’t seem to have learnt their basic statistics.

Catalina categorised someone as loyal if they gave 70% of their purchasing to the brand; if they didn’t in the next year, they were said to be lost. This means it is largely an analysis of lighter category buyers, as heavier buyers are less likely to give one brand such weight. So it’s people who bought the brand once out of one category purchase, 2 out of 2, 3 out of 3, 3 out of 4, or 4 out of 5. Buying the brand just once less, or buying the category just once more, means you get classed as lost, defected, no longer loyal. That’s why they get such high figures as 50% being ‘lost’.
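The illusion is easy to reproduce. A sketch, with made-up parameters of my own (not Catalina’s data): simulate perfectly stationary consumers – fixed category buying rate, fixed brand loyalty, forever – and count how many year-1 “loyals” fail the same 70% test in year 2 purely through random wobble.

```python
import math
import random

random.seed(42)

def poisson(mean):
    """Draw a Poisson-distributed count using Knuth's method."""
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def one_year(category_mean, brand_prob):
    """Return (brand purchases, category purchases) for one year."""
    n = poisson(category_mean)
    brand = sum(random.random() < brand_prob for _ in range(n))
    return brand, n

# Hypothetical stationary population: lightish buyers averaging 3
# category purchases a year, giving the brand 50% of them, forever.
CONSUMERS = 100_000
loyal_y1 = still_loyal_y2 = 0
for _ in range(CONSUMERS):
    b1, n1 = one_year(3.0, 0.5)
    if n1 > 0 and b1 / n1 >= 0.7:        # classed "loyal" in year 1
        loyal_y1 += 1
        b2, n2 = one_year(3.0, 0.5)      # identical tastes next year
        if n2 > 0 and b2 / n2 >= 0.7:
            still_loyal_y2 += 1

lost = 1 - still_loyal_y2 / loyal_y1
print(f"Year-1 'loyals': {loyal_y1}, apparently 'lost' next year: {lost:.0%}")
```

Well over half of the “loyals” appear to defect, even though, by construction, not one consumer changed anything.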

So it’s all an illusion. I explained this back in 2009; sad that I have to say it again. I’ll repeat what I said then: Catalina Marketing sell targeted marketing services based on this loyalty program data – which is a bit odd, because this fluctuation seriously undermines the capacity to target consumers based on their past buying.


Professor of Marketing Science

Director, Ehrenberg-Bass Institute

University of South Australia

See the official website for the book “How Brands Grow”

Laws of Marketing – to find them we have to look

“How Brands Grow” presents almost a dozen scientific laws relating to marketing and buying behaviour. Not laws like the Ries and Trout “thou shalt” laws based on anecdotes, but law-like regularities: relationships that keep on occurring across a wide range of conditions, so we can make predictions based on them. In science such laws are the building blocks of knowledge.

Marketing academia has for too long failed to look for laws, and has ignored those that have been discovered. Professor Shelby Hunt, marketing’s most famous student of the philosophy of science, was a big advocate for laws, yet to my knowledge he didn’t practice what he preached.

Marketing is awash with ‘theory’ based on speculation or on reading non-empirical literature. Theory not based on any empirical laws, much of it in ignorance of existing laws, and some in direct conflict with such laws.

Academia should be helping sort this all out, but we (and many other social sciences) are gripped by the model of doing research which says “do some weak theorising largely based on other theoretical literature (not empirical laws) and then conduct a weak empirical test – one that does not rule out many other potential explanations”.

And empirical work in marketing tends to be highly specific. In effect the data sets are tiny slices of the empirical phenomena of interest – one questionnaire, one country, one time.

It’s time we dropped this narrow, and wimpy, model of how to do research.

Professor Byron Sharp.

Links between music artists and brands – micro targeting nonsense

There are lots of people trying to sell all sorts of things to unsuspecting marketers.  Here is one I came across today, NPD Group offer a product called ‘Brand-Link’ which on their webpage says “Sheryl Crow fans are more likely to drive Jeep… which means that both Jeep and Sheryl Crow could benefit from partnering on promotions!”

The exclamation mark is theirs, not mine.  I’m underwhelmed, because if 5% of Americans are Sheryl Crow fans then an index of 142 for Jeep would mean that almost 7% of Jeep owners are Sheryl Crow fans (and 93% aren’t).

And the index for Sheryl Crow says that more of her fans drive Jeeps than does the general population. But not many people drive Jeeps, so again that index means that if she teams up with Jeep it might communicate something special to only a tiny proportion of her fan base.

Actually more Sheryl Crow fans drive Ford than Jeep.

Who cares about the index?  What Sheryl Crow should ask is: which car do more of my fans drive (i.e. in total numbers)?  And the answer will be Ford, Toyota or GM, because that’s what more Americans drive.
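The arithmetic is worth spelling out. Only the 5% fan rate and the 142 index come from the text above; the owner counts and Ford’s index below are made-up figures for illustration.

```python
# Hypothetical figures: 5% fan rate and the 142 index are from the
# text; the owner counts and Ford's index of 100 are invented.
fan_rate_overall = 0.05     # share of Americans who are Sheryl Crow fans
index_jeep = 142            # fans are 1.42x as likely to own a Jeep

# Share of Jeep owners who are fans: 5% x 1.42 = 7.1%
fans_among_jeep_owners = fan_rate_overall * index_jeep / 100
print(f"Fans among Jeep owners: {fans_among_jeep_owners:.1%}")   # 7.1%

# Totals depend on brand size, so a merely average index on a big
# brand beats a high index on a small one.
owners = {"Jeep": 3_000_000, "Ford": 30_000_000}
fan_index = {"Jeep": 142, "Ford": 100}    # Ford fans merely average

for brand, count in owners.items():
    fan_owners = count * fan_rate_overall * fan_index[brand] / 100
    print(f"{brand}: {fan_owners:,.0f} fan-owners")
# Jeep: 213,000 fan-owners; Ford: 1,500,000 -- despite the lower index.
```

Seven times as many fans drive the bigger brand, which is exactly why the index headline misleads.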

Oh dear, indices can be very misleading.  One might have hoped for more from a market research agency; after all, they are supposed to be experts in presenting and interpreting data.


2011 has been a good year for Starbucks – but where were the gurus’ predictions?

2011 has been good for Starbucks.  Its stock price has been rising.  Last month it reported that its growing customer base drove Q2 profits up 20%.  And Advertising Age now reports that “Last week Starbucks blasted past Wendy’s and Burger King to become the No. 3 restaurant chain, posting $9.07 billion in domestic restaurant sales last year, up 8.7% from 2009.”

I find this interesting because the consultancy Brand Keys offer a Starbucks case study as the main evidence of the predictive power of their ‘Customer Loyalty Engagement’ metric, a survey that you can (only) buy from them.  I previously examined all the predictive claims in their Starbucks case study and found that nowhere did they ever manage to predict a change in the firm’s fortunes (in either sales or profits) before it happened, only afterwards.

So Starbucks has had almost two years of rebound now, but I haven’t heard much positive news from Brand Keys (or anyone else, for that matter).  In fact, in February 2011 they once again listed Dunkin’ Donuts as the coffee shop with the highest ‘loyalty’.   See here for Dunkin’ Donuts’ proud announcement.  I’m guessing that they are a client of Brand Keys, and that Starbucks probably is not.

Where were the gurus in 2010, or better yet 2009, predicting the resurgence of Starbucks?  Does anyone know of any prescient predictions?

American marketers can now see the real sales effect of their advertising

Single-source (longitudinal, individual-level) data is now available in the USA, showing both buying and TV advertising exposure.

This is terribly exciting, because this data can, with careful analysis, provide a high-quality quasi-experiment. That is, without the effort and expense of devising a controlled experiment, you can use this live market data to give you the same experimental outcome. You do it by sorting category purchases into those that were preceded by recent exposures to your advertising and those that were not (further dividing the exposed occasions into 1 exposure, 2 exposures, and so on). Then you simply compare your brand’s share of these different groups of purchase occasions to see the real sales strength of your advertising. Your brand’s share, of course, should be higher amongst purchase occasions that were preceded by your advertising!
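The grouping-and-comparing step described above can be sketched in a few lines. The records here are made up for illustration; real single-source data from TRA or NCS would simply have many more purchase occasions of the same shape.

```python
from collections import defaultdict

# Sketch of the quasi-experimental comparison, on made-up records:
# each purchase occasion carries the count of recent exposures to
# our ads and the brand actually bought on that occasion.
purchases = [
    (0, "Ours"), (0, "Rival"), (0, "Rival"), (0, "Rival"),
    (1, "Ours"), (1, "Rival"), (1, "Ours"),
    (2, "Ours"), (2, "Ours"), (2, "Rival"),
]

totals = defaultdict(int)   # purchase occasions per exposure group
ours = defaultdict(int)     # of those, occasions where our brand won
for exposures, brand in purchases:
    totals[exposures] += 1
    ours[exposures] += brand == "Ours"

for exposures in sorted(totals):
    share = ours[exposures] / totals[exposures]
    print(f"{exposures} recent exposures: brand share {share:.0%}")
```

If the advertising is working, the brand’s share should climb as you move from the zero-exposure group to the exposed groups.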

This is vastly more trustworthy than trying to achieve the near impossible: quantifying the sales effect of a particular ad using statistical analysis of aggregate time-series data. It’s also much faster – you don’t have to wait a year to find out what the effect of last year’s advertising was supposed to have been.

TRA and Nielsen Catalina Solutions are two companies that currently offer single-source data by overlapping data from buying panels and TV viewing panels (i.e. some households are in both). NCS also monitors on-line and mobile media exposure.

These data let you identify which ads work better, so you can drop non-performing ads and drastically improve the effectiveness of your advertising. And you can use this sales-effectiveness data to learn how to make better ads.

And you can measure how much incremental effect is gained by additional recent exposures – i.e. is bunching exposures worth it?

And whether ads work better in different contexts, on different channels, in different pod positions. There is so much valuable information that can be learned once the true sales effect of advertising is known. Much R&D needs to be done.  The potential to improve the sales effectiveness of TV advertising is immense.

The decline of science in marketing

My colleague Dr Jenni Romaniuk has just met with famous US Professor David Schmittlein.  She sought his guidance on the US academic scene, who we (the Ehrenberg-Bass Institute) might collaborate with and so on.  David generously gave his time and frank views.

Sadly she wrote to me:

“He was quite pessimistic about the top US journals and the meaningfulness of the research that is published there.  He sees the marketing research orthodoxy polarising into two camps – the applied economists and the cognitive scientists.  Neither of which look to the ‘real world’ for guidance on modelling.”

So someone on the inside of the US system shares our assessment.  Actually it’s not an uncommon view, particularly among older marketing academics.  What is happening to our discipline?