Conflicts in the marketing system

I do sometimes hear ad agency people say “we don’t care about creative awards, we are totally dedicated to each client’s business objectives”, especially when in front of clients.  It makes me wonder whether they are lying (that’s bad), whether they are deluding themselves (which may be even worse), or whether they are admitting that they simply aren’t good enough to win creative awards (and that’s not good either).

I think it is important to be grown-up, honest and up-front about conflicts of interest.  For example, Martin Sorrell wants to sell marketers stuff; his empire (like its competitors) will sell whatever marketers will buy that it can deliver profitably.  This matters far more to the agency than whether or not it is the best way to build their clients’ brands.

Creatives want to win awards.  And if winning doesn’t sell a single extra unit of your product they aren’t really worried.

Media agencies want to do what they know and what’s easy, and they have to sell the media they have committed to buy.

Market research agencies want to sell standardised products, ideally ones that use automated data collection and analysis, or junior staff.  They can’t make big profits from work that requires in-depth analysis by expensive people.  They do far more R&D into reducing data collection costs than into better research.

Retailers want to win share from other retailers.  They don’t care if this means selling another box of your product or not.

So partners, yes.  But there are conflicts in the system.  This is fine: so long as everyone understands the conflicts, they can be managed – it’s possible for everyone to win.  But pretending these conflicts don’t exist is dangerous.

Professor Byron Sharp

July 2014.

Brand Equity twaddle

I occasionally send some friends interesting (both good and bad) articles from marketing academia.  This is an interesting reply.  I won’t name the academic paper.

Dear Byron,

Thank you for sending this paper. I think the correct response, using the scientific vernacular, is ‘utter twaddle’.

The framework below is very neat. It’s very sequential. But it’s also very wrong.

When marketing academics observe what really happens in the real world, they can make powerful discoveries that help further the discourse around how people behave and make choices. But when marketing academics start with a hunch (disguised as a testable hypothesis) and then find data to back it up, they are, at best, worthless, and at worst, damaging.

I wouldn’t waste my time critiquing each component in this model. What I will do is give you an example of a very real ‘real world’ observation about how people behave, despite what one might think they have in their heads regarding Brand Associations and so called Brand Equity.

I am lucky to live in a very nice suburb of southwest London called St. Margarets. It’s what one might call leafy and affluent. Its residents are, on the whole, fortunate to be significantly better off than the average of the UK population in socio-economic terms. Lots of doctors and lawyers and bankers and media types.

 The overwhelming majority of my St. Margaretian friends and acquaintances are well-educated and, again on the whole, politically liberal. Generally left of centre, having evolved from the armchair socialism of their more zealous, youthful days. I should put an important caveat in place here; I was never an armchair socialist, nor indeed a socialist of any kind really. Anyway, I digress.

There is a nice sense of community in St. Margarets and I have made many good friends here over the years. And in addition to these friends, there are plenty of others with whom I can enjoyably engage in pleasant and cordial passing conversations. As you can imagine, it’s fertile ground for many dinner parties and for gatherings in local hostelries.

Once the wine has started flowing, and the initial greetings and polite exchanges (such as how the kids are getting on) have been completed, conversations inevitably move on to the more ‘serious’ topics du jour. House prices, gossip about who Sally was seen with last week, standards of schooling, what Charles said to his accountant, how moral standards are becoming polarised between the haves and the have nots, what Carol was caught doing with Bob down by the river. You know the sort of thing, I’m sure.

Commerce will often have its place in this cauldron of righteousness as well. I distinctly recall more than a few conversations about business ethics. And a number of these have centred on a well-known retailer which has had the temerity to open one of its smaller store formats slap bang in the middle of St. Margarets. Right next door to the railway station. Outrage abounds.

“Tesco Express … they’re crucifying all our little local traders,” opines Gareth. “They bully farmers into bulk deals with derisory margins … Tesco is ruining our agriculture,” shrieks Camilla. “The way they treat their shop workers … it’s slave labour … they should be taken to the International Court of Human Rights,” booms Barry.

I listen with interest. Sometimes, I must admit, the odd fair point can be heard amongst the remonstrations and general distaste for having such a purportedly disreputable behemoth impose itself on our little suburban ‘village’ (as the estate agents like to describe it). But the overriding theme is one of deep-seated antipathy. A theme with which, I must say for the record, I disagree. I think Tesco is a great business and great for our economy.

The dinner parties end, the hostelries close, and we all go home to our beds. Watered, fed and safe in the knowledge that the world would be a better place … if only ‘they’ just listened to our wisdom.

When I travel home from work during the week, I frequently do so by train. Most of my friends and acquaintances do the same. We come in to St. Margarets station, wearied by the day’s travails, ready to put our feet up and watch the telly. We trudge up the station stairs to the street. As I start to walk down the street, I remember that Cathy called me to remind me to pick up a pint of milk and some chicken breasts for dinner. Ooh, and I can pick up a half decent bottle of wine too … why not!

I turn in to a shop which is already teeming with St. Margaretian commuters.

Before I can even reach down to pick up the chicken breasts I’m tapped on the shoulder. I turn around to see a smiling friend; it’s Camilla. “How lovely to see you”, she says, “(mwah mwah) feels like I saw you just two days ago at Barry’s for dinner.” We both laugh. “Oh, look, speak of the devil, Barry’s over there with Gareth at the check-out.”

“Anyway, see you soon I hope, Camilla,” I say, “We’re going round to the Greensmith’s next Saturday, probably see you there.”

As I leave Tesco, which is slap bang in the middle of St. Margarets, right next to the railway station (to which thousands of well-heeled St. Margaretians return every evening), I give a little wave to Sally, Charles, Carol and Bob. They have arrived back on the next train. They’re just popping in to Tesco to pick up some things before they go home.

As I open my front door, a question comes to mind: can the need to get a pint of milk, as easily as possible, really trump the most heartfelt attitudes expressed around a dinner table in St. Margarets only a day or so earlier? It would appear so.

‘There’s nowt as queer as folk,’ as the old Yorkshire saying goes.

People may claim to hold firm perspectives about brands. The truth is that there is a world of difference between what someone consciously says and what they actually decide (primarily subconsciously) to do.

So, yes, that paper is truly dreadful.

Cheers,

Seamus

 
On 4 May 2013, at 01:25, Byron Sharp wrote:

A poster-child for everything that is wrong with brand equity research.  If you can’t be bothered reading the article just look at the struggle they had to come up with any findings or implications.

Behaviours can be useful predictors of other behaviours

MCDONALD, Heath, CORKINDALE, David & SHARP, Byron 2003. Behavioral versus demographic predictors of early adoption: a critical analysis and comparative test. Journal of Marketing Theory and Practice, 11, 84-95.


Abstract

Predicting which consumers will be amongst the first to adopt an innovative product is a difficult task but is valuable in allowing effective and efficient use of marketing resources. This paper examines the accuracy of predictions made about likely first adopters based on the most widely accepted theory and compares them to predictions made by examining the relevant past behavior of consumers. A survey of over 1000 consumers examined adoption of an innovative technology: compact fluorescent lightglobes. The results show that variables which were derived from a utility and awareness perspective were a more accurate and managerially useful predictor than the demographic variables derived from the widely accepted theory based on the work of Rogers. It is suggested that these alternative variables could be utilized more readily by marketing managers in many circumstances.

Making marketing science easier to read & understand – suggested format for articles

I sometimes read academic articles from very different disciplines, like medicine and biology. Their article formats differ from ours in marketing. Often their articles are much shorter, yet just as detailed when it comes to describing the research, how it was done, and what the results were.

Why are marketing journal articles so long? And so obscure?

Do they need to be?

Now that publishing has gone online we don’t need to be subject to the same constraints as in the past. I wonder if the best format would be for articles to be about 800 words long, a clear exposition of what was done and what was found and what it might mean. After the article there could be a moderated/refereed Question & Answer section. This would be enormously useful, and take the pressure off authors to write perfectly, fully anticipating the needs of all readers on the first go.

Extensive details, like the whole questionnaire or data coding frame could be made available by links.

What’s wrong in the house of academia, and a suggestion how to fix it

Presented to the Australia & NZ Marketing Academy Conference December 2012.

Marketing has a small ‘crisis literature’ where academics themselves bemoan the lack of real-world importance of academic research into marketing. For example, “Is Marketing Academia Losing Its Way”, Journal of Marketing, 2009.

Back in 1989 John Rossiter documented the growing gap between marketing scientists and consumer researchers even though they were supposed to be studying similar things. He warned the consumer researchers in particular that they were in danger of retreating into an Ivory Tower detached from the empirical findings regarding mass buying behaviour. Yet the trend continued unabated.

I myself, and colleagues, have had articles rejected from good journals when they chiefly documented a substantive finding about the world.  My ‘favourite’ was when the editor of Marketing Science wrote to Jenni Romaniuk and myself about our work documenting the 60/20 law (i.e. it’s not 80/20).  Effectively he said, “great stuff, I’m going to use this in my teaching, but we can’t publish it in Marketing Science because the journal tries to feature leading-edge analysis, whereas what you did was simple and transparent”.  An open admission that Marketing Science is really a journal that values engineering above science.

Yet around the same time the Nobel Prize in Physics was awarded to two Russian-born scientists who found a way of producing graphene, a single-atom-thick layer of carbon.  This potentially extremely useful material had once been thought unlikely to exist in the real world.  They isolated graphene with a simple technique using common household sticky tape: placing bulk graphite between two sheets of Scotch Tape, they simply pulled the tape apart repeatedly, removing layers of atoms until they achieved graphene.  A colleague remarked that it showed you could still win a Nobel Prize “for mucking around in the lab”.  In physics there is still respect for substantive discoveries.

The defence or excuse from marketing academia is that we have been placing our emphasis on rigour over relevance.  But recent shocking findings in marketing academia, other social sciences, and even medical research have exposed the myth of improved quality.  There have been some high-profile examples of scientists disgraced for falsifying results (including in marketing), while 10% of psychologists admit to falsifying data (and they presumably evaded discovery), and most admitted to sometimes engaging in dubious practices like selectively reporting the studies “that worked” (and hiding those that did not support their hypotheses).  Relatively higher rates of dubious practice were found among neuroscientists and cognitive & social psychologists.  What do you think the rates would be in marketing?

A recent analysis (Wilhite and Fong 2012) of the dubious practice of journals encouraging (or bullying) authors to cite other articles from the same journal reported that the Journal of Retailing, Journal of Business Research, and Marketing Science were stand-outs at the very top of the suspect list – and that’s not a list of only marketing journals.  Indeed, marketing journals stood out from other disciplines for engaging in coercive citation to try to manipulate their citation impact scores.

In medical research standards are undoubtedly higher.  Yet when pharmaceutical companies seek to replicate findings reported in medical journals, in most cases it can’t be done – even though they try hard; after all, they are hoping to make money from the discovery.  Many of the findings for cancer drugs are highly specific to particular circumstances (e.g. patients with particular genetic profiles), but the researchers didn’t explore these conditions; they just got lucky with their so-far-unrepeatable finding.

In marketing, Hubbard & Armstrong (1994) documented that academics hardly ever try to replicate findings.  We simply assume they are true (or perhaps not worth bothering with).  Not surprisingly perhaps, when replications are done they are usually unable to repeat the original result.  The same sort of scandal has hit a number of famous psychology experiments.  “The conduct of subtle experiments has much in common with the direction of a theatre performance,” says Daniel Kahneman, a Nobel-prize-winning psychologist at Princeton University.  Trivial details such as the day of the week or the colour of a room could affect the results, and these subtleties never make it into the methods sections of research articles.  Hmm, what’s the difference between a result that is so sensitive to many trivial, unknown and unpredictable details and no result at all?  Why should we care about a finding that only occurs in highly particular circumstances?

This is a bigger problem than fraud and dubious research practice. We need to stop publishing one-off flukes and explore the generalizability of findings – where and when does a result hold? How does it vary across product categories, brand size, brand age, different types of consumers, at different times, and so on.

Even large and varied data sets are being wasted in marketing when results are presented as an average across many different conditions, e.g. “marketing orientation is associated with higher financial performance, r=0.28”.  This tells us little about the real world; the average may not actually apply in any of the major conditions.

We need to explore generalizability or otherwise our ‘discoveries’ tell us very little about the marketing world that we are supposed to be studying.

And we need to stop prematurely building shaky prescriptive theoretical edifices upon these doubtful, poorly documented findings.

If we don’t carefully and thoughtfully (call it ‘theory driven’ if you wish) examine a finding and how it varies (or not) across conditions, then we are stuck with findings that were probably one-off events – with no way of telling.  Currently we have to treat our findings either as applying to one historic data set covering one particular set of conditions that may never be seen again, OR as a result that generalises to all product categories, all countries, all seasons.  Both views are preposterous; something in between is far more likely, but there is a lot of land “in between”, and it needs to be explored.

The dubious research practices discussed above come partly from ‘confirmation bias’, the fact that (marketing) scientists want to find evidence to support their hypotheses – and they want “a positive result”, otherwise they lack the motivation to publish, or the belief that they will be accepted by any decent journal.  Brodie and Armstrong (2001) suggested researchers adopt multiple competing hypotheses as a way of overcoming this bias.  A worthy suggestion, but those implementing it tend to simply have their favoured hypothesis and its opposite – and they still obviously want to see their favoured hypothesis supported.  So I would like to make a different suggestion.  Let’s use research questions with the words “when”, “where” and “under what conditions”.  Rather than black-and-white “does X cause Y” type hypotheses, let’s ask “when does X cause Y?”, “does X cause Y in highly advertised categories?”, “is X more a cause of Y in developing economies?”.  This is the basic work of science: documenting patterns in the real world.  When do things vary, and when do they not?

If researchers use “when”, “where” and “under what conditions” research questions they aren’t trying to prove a proposition, so they don’t have to worry about failure, and so they should hopefully be less likely to tweak data and cherry-pick findings.  Also, very importantly, researchers will be documenting something useful about the world, because they will be exploring generalizability.

PS The Nobel Prizes for Physics are awarded in line with Alfred Nobel’s criterion “to those who, during the preceding year, shall have conferred the greatest benefit on mankind”, which explains the worthy emphasis on substantive findings.  Alexander Fleming’s accidental discovery of penicillin is another example of the Nobel prize committee valuing important discovery over display of academic prowess.

REFERENCES

ARMSTRONG, J. S., BRODIE, R. J. & PARSONS, A. G. “Hypotheses in marketing science: Literature review and publication audit.” Marketing Letters 12, 2 (2001): 171-187.

HUBBARD, R. & ARMSTRONG, J. S. “Replications and Extensions in Marketing: Rarely Published but Quite Contrary.” International Journal of Research in Marketing 11, (1994): 233-248.

REIBSTEIN, D. J., DAY, G. & WIND, J. “Guest editorial: is marketing academia losing its way?” Journal of Marketing 73, 4 (2009): 1-3.

ROSSITER, J. R. “Consumer Research and Marketing Science.” Advances in Consumer Research 16, (1989): 407-413.

WILHITE, A.W & FONG, E.A. “Coercive citation in academic publishing”. Science 335, (Feb 2012): 542-543.

The flawed Stengel Study of Business Growth

Here I describe the ‘Stengel Study of Business Growth’ using quotes from “Grow: How Ideals Power Growth and Profit at the World’s Greatest Companies” by Jim Stengel, published by Crown Business 2011. Along the way I point out the fatal flaws in the research design.

The ‘Stengel Study of Business Growth’ started in 2007 when Procter & Gamble’s CEO A.G. Lafley endorsed Jim’s idea to “commission a study to identify and learn from businesses that were growing even faster than we were, in whatever industry” (p. 24).

Initially the P&G team studied “the fastest growing brands over the previous five years” (p.24) identified in collaboration with market research agency Millward Brown Optimor using their BrandZ database. The team “assembled five-year financial trends on twenty-five businesses that had grown faster than P&G over that period. The teams then dug behind the numbers with additional research, including interviewing business executives, agency leaders, brand experts, and academics at Harvard, Duke and Columbia”. (p.25)

“We went in looking for superior financial growth, and only after that for whatever the top-ranked businesses were doing differently from the competition” (p26). Professor Philip Rosenzweig explains this classic sampling mistake as being like trying to learn about blood pressure by only looking at a small group of patients who all have high blood pressure.

Another very important mistake, one that various strategy researchers have made over the years, is to look for causes of success by interviewing managers and ‘experts’ for their opinions on firms that have been doing well.  Thanks to what is known as “the Halo Effect”, people tend to say that firms they know are performing well possess all sorts of desirable characteristics in terms of culture, leadership, values and more.  No one describes a known winner as having “unfocused strategy”, “weak leadership”, “lack of customer focus”, “lack of ideals”, or whatever else the researchers choose to ask opinions about.  As Philip Rosenzweig shows clearly in his book, “many things we commonly claim drive business performance are simply attributions based on past performance”.

“Successful companies will almost always be described in terms of clear strategy, good organization, strong corporate culture, and customer focus” (p.87). Rosenzweig dramatically shows how, when successful companies falter, experts abruptly change their assessment. Suddenly the previously described “strong culture” is now described as “rigid”, the previously declared “promising new initiatives” are now described as “straying”, the “careful planning” now in hindsight turns out to be “slow bureaucracy”, and so on. In reality large businesses change very slowly, but opinions about them change quickly and are largely based on current financial performance (which is itself largely due to environmental and competitor effects).

The Halo Effect is particularly strong for subjective, nebulous concepts such as ‘values’ and ‘ideals’. The ‘Stengel Study’ made no attempt to supplement their judgements with ‘hard’ objective measures.

In the Stengel study they ‘discovered’ that their chosen high-growth firms were ‘ideal driven’.  The central finding therefore was that “businesses driven by a higher ideal, a higher purpose, outperform their competition by a wide margin”.  Yet there is no mention of any systematic investigation of competitors; perhaps many of these lesser performers were also ‘ideal driven’?  What we can expect is that, because of the Halo Effect, less successful performers would be less likely to have been described by interviewees as having clear ideals well activated throughout the business – irrespective of reality.

Subjective concepts such as ‘ideals’ almost certainly introduce confirmation bias on the part of researchers – when there are no objective measures it’s near impossible for a researcher to stop themselves seeing what they want to see. The “unexpected discovery” of the causal effect of ideals, says Jim Stengel “corroborated what I had implicitly believed and acted upon throughout my career”. Hmm, of course it did.

With this ‘ideals’ hypothesis now firmly in place, the full ‘Stengel Study’ was then done, after Jim Stengel left P&G, by selecting 50 brands based on their excellent financial performance over the previous 10 years.  As a whole this group (referred to as “The Stengel 50”) “grew three times faster over the 2000s than their competitors…individually some of the fastest-growing of the Stengel 50, such as Apple and Google, grew as much as ten times faster than their competition from 2001 to 2011.”

Promotional material for Stengel’s book says that “over the 2000s an investment in these companies—“The Stengel 50”—would have been 400 percent more profitable than an investment in the S&P 500”.  The implication is that this proves Stengel’s ‘ideals’ thesis – but Stengel picked these companies for their financial growth!

If they had been picked purely on the basis of some, ideally ‘hard’ (or intersubjectively certifiable), measure of being ‘ideals driven’, then correlations with financial performance might mean something – especially if this were future, not past, performance.  But as these companies were picked for their financial performance, their stock price performance over the same period shows nothing.

A team of four second-year MBA students being taught by Jim Stengel and Professor Sanjay Sood made the Stengel Study the subject of their required applied management research thesis; “this team crawled all over the Stengel 50 to test the role of ideals”, conducting interviews with executives, academics and consultants.  No one should be surprised that they found what their instructors believed.  They and the ‘Stengel Study’ both passed with flying colors, reports Jim Stengel (page 34).

There is one addition to the Stengel Study which is different from previous similar (flawed) studies of business success. Stengel arranged his leading brands into “five fields of fundamental human values that improve people’s lives” by (1) eliciting joy, (2) enabling connection, (3) inspiring exploration, (4) evoking pride, or (5) impacting society (sic). Millward Brown then used implicit and explicit association measures and found that the Stengel 50 brands are perceived as more associated with their selected ideals than competitors.

Again this is a staggering piece of circular logic.  First analyse a select group of brands for the particular ideals they represent, then take these ideals into market research and voilà, these brands turn out to be more associated with these particular ideals.  This is not a test that these ideals drive performance; it is simply a test of the researchers’ judgement of brand image.  It merely shows that the researchers live in the same culture as the market research respondents.  Jim Stengel thinks Blackberry ‘enables connection’ and so does the market; Jim Stengel thinks Mercedes-Benz ‘evokes pride’ and so do normal people.

Now one might reasonably argue that there is an advantage in FedEx and Blackberry being more associated with a category benefit such as “enables connection” than their competitors.  However, leading brands always show higher associations because they have more users, who use them more often.  Behaviour has a powerful effect on attitudes and memory; for evidence see BIRD, M. & EHRENBERG, A. 1972 “Consumer Attitudes and Brand Usage – Some Confirmations”, Journal of the Market Research Society, 14, 57; RIQUIER, C. & SHARP, B. 1997 “Image Measurement and the Problem of Usage Bias”, in proceedings of the 26th European Marketing Academy Conference, Warwick Business School, U.K., 1067-1083; and ROMANIUK, J. & SHARP, B. 2000 “Using Known Patterns in Image Data to Determine Brand Positioning”, International Journal of Market Research, 42, 219-230.

There is no mention of controlling for this effect.

Conclusion

In summary, the ‘Stengel Study’ makes the same or similar mistakes as much earlier flawed studies that claimed to uncover the secret of sustained financial success.  Neither Jim Stengel nor any of his team appears to have read Philip Rosenzweig’s “The Halo Effect: … and the Eight Other Business Delusions That Deceive Managers”, which turns out to be a great pity.  As Rosenzweig writes, “if the data aren’t of good quality, it doesn’t matter how much we have gathered or how sophisticated our research methods appear to be”.  The Stengel Study is yet another study that is deeply flawed; it tries to look like science, but turns out to be merely a story, one that will appeal to many but tells us nothing reliable (or new) about the world.

A final note: based on the track record of previous such studies, I expect the financial performance of these ‘ideals driven’ companies to fall back in the near future.  Some, such as Blackberry and HP, have already suffered very dramatic reversals of fortune.

Review of Jim Stengel’s disappointing book “Grow”

Research reveals the hidden secret to business success? No, sadly this is pseudoscience – that will only convince the most gullible of minds.

Jim Stengel seems a nice guy: he wants us to be passionate about our business and to feel that there is a greater purpose than simply making money.  Few would disagree.  But he also claims to have discovered the secret to sustained super profits – based on a flawed study dressed up as science.

Stengel is a marketing consultant – a famous one, because he was Chief Marketing Officer of Procter & Gamble from 2001 to 2008, until he surprisingly ‘retired’ to consult (and write this book). During the decade that Jim presided over marketing at P&G the company was pretty successful, at least in comparison to the three-year period of unsuccessful restructuring and CEO turnover immediately before he became CMO. However, the success of the 2000s has been exaggerated. During Jim’s decade P&G’s stock price doubled, but that is a misleading overstatement due to the brief dramatic dip in 2000 (the reasons why are discussed here). Without that dip the year before Jim took over as CMO, the stock price improved only 20% over the full decade. That’s less impressive than the previous decade (the 90s), when the stock price increased five-fold (or three-fold when the brief dip of 2000 is considered); similar gains were also made in the decade before that (the 80s). So P&G’s performance during Jim’s tenure should perhaps more accurately be described as a mild turnaround, or partial restoration. This chart shows the full history of the stock price.

In all fairness though, Jim Stengel doesn’t ask us to believe his amazing discovery just because he was (like millions of others) a successful practitioner; his claims are based on what he calls an unprecedented 10-year empirical study of highly successful firms and the brands they own. But his study does have precedent: it joins a growing list of books that claim to have discovered a few simple rules for business that near-guarantee profit performance that will beat all rivals. Each of these books is based on severely flawed research that ‘proves’ just what the author wanted to say in the first place (which is the opposite of a surprising discovery). “In Search of Excellence” was one of the first of these books, and it was largely discredited when the excellent companies went on to make poor financial returns in the years after the book came out.

Professor Phil Rosenzweig exposes these flaws in his 2007 book “The Halo Effect: … and the Eight Other Business Delusions That Deceive Managers“.

I describe and critique “The Stengel Study”, which is the basis of Stengel’s book, here. A quick summary: to detect factors that might cause financial success, Stengel should at least have compared very carefully matched samples of both successful and unsuccessful firms, and developed hard objective measures of strategy – not relied almost entirely on interviews with experts. Also, to avoid confirmation bias, the researchers who described the firms and their strategies should not have been aware of which were the successful and unsuccessful ones. And finally, any resulting theories should have been tested against the future performance of the firms. Otherwise what looks like science turns out to be simply a story.

Tellingly, the ‘research’ takes up a small portion of Stengel’s book; the rest is a story: anecdote and assertion. Jim tells us what to do, but experienced marketers looking for strategic advice won’t find much new or particularly helpful. It’s pretty much the standard sort of consultant fare, such as “deliver a near-ideal customer experience”.

It’s well meaning though: Stengel wants us all to be passionate about our business and to feel that there is a greater purpose than simply making money (even if finding out how to make money was the motivation of his ‘research’).  This is a nice sentiment; however, the success of brands (and the large corporations behind them) is far more complex than Stengel’s book and its predecessors claim.