What’s wrong in the house of academia, and a suggestion for how to fix it

Presented to the Australia & NZ Marketing Academy Conference December 2012.

Marketing has a small ‘crisis literature’ in which academics themselves bemoan the lack of real-world importance of academic research into marketing. For example, “Is Marketing Academia Losing Its Way?”, Journal of Marketing, 2009.

Back in 1989 John Rossiter documented the growing gap between marketing scientists and consumer researchers even though they were supposed to be studying similar things. He warned the consumer researchers in particular that they were in danger of retreating into an Ivory Tower detached from the empirical findings regarding mass buying behaviour. Yet the trend continued unabated.

I myself, and colleagues, have had articles rejected from good journals when they chiefly documented a substantive finding about the world. My ‘favourite’ was when the editor of Marketing Science wrote to Jenni Romaniuk and me about our work documenting the 60/20 law (i.e. it’s not 80/20). Effectively he said, “great stuff, I’m going to use this in my teaching, but we can’t publish it in Marketing Science because the journal tries to feature leading-edge analysis whereas what you did was simple and transparent”. An open admission that Marketing Science is really a journal that values engineering over science.
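(For readers unfamiliar with the claim: the 60/20 law says that the heaviest 20% of a brand’s buyers typically account for roughly 60% of its purchases, not the 80% of Pareto folklore. The toy calculation below uses invented purchase counts, not anything from the actual paper, purely to show what such a concentration figure means.)

purchases = [1, 1, 3, 1, 1, 1, 10, 1, 1, 5,
             1, 1, 1, 6, 1, 1, 1, 1, 1, 1]   # invented purchase counts, one per buyer

purchases.sort(reverse=True)                  # heaviest buyers first
top_n = max(1, round(0.2 * len(purchases)))   # the heaviest 20% of buyers
top_share = sum(purchases[:top_n]) / sum(purchases)

print(f"top 20% of buyers account for {top_share:.0%} of purchases")  # 60% with these made-up counts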

Yet around the same time the Nobel Prize for Physics was awarded to two Russian-born scientists who found a way of producing graphene, a single-atom-thick layer of carbon. This potentially extremely useful material had once been thought unlikely to exist in the real world. They isolated graphene with a simple technique using common household sticky tape: placing bulk graphite between two pieces of Scotch Tape, they simply pulled the tape apart again and again, removing layers of atoms, until a single layer, graphene, remained. A colleague remarked that it showed you could still win a Nobel Prize “for mucking around in the lab”. In physics there is still respect for substantive discoveries.

The defence, or excuse, from marketing academia is that we have been placing our emphasis on rigour over relevance. But recent shocking findings in marketing academia, in other social sciences, and even in medical research have exposed the myth of improved quality. There have been some high-profile examples of scientists disgraced for falsifying results (including in marketing), while 10% of psychologists admit to falsifying data (presumably without being caught), and most admit to sometimes engaging in dubious practices such as selectively reporting the studies “that worked” (and hiding those that did not support their hypotheses). Relatively higher rates of dubious practice were found among neuroscientists and cognitive and social psychologists. What do you think the rates would be in marketing?

A recent analysis (Wilhite and Fong 2012) of the dubious practice of journals encouraging (or bullying) authors to cite other articles from the same journal reported that the Journal of Retailing, Journal of Business Research, and Marketing Science were stand-outs at the very top of the suspect list – and that is not a list of only marketing journals. Indeed, marketing journals stood out from those of other disciplines for their use of coercive citation to manipulate their citation impact scores.

In medical research, standards are undoubtedly higher. Yet when pharmaceutical companies seek to replicate findings reported in medical journals, in most cases it cannot be done – even though they try hard; after all, they are hoping to make money from the discovery. Many of the findings for cancer drugs turn out to be highly specific to particular circumstances (e.g. patients with particular genetic profiles), but the original researchers did not explore these conditions; they simply got lucky with a so-far unrepeatable finding.

In marketing, Hubbard and Armstrong (1994) documented that academics hardly ever try to replicate findings. We simply assume they are true (or perhaps not worth bothering with). Perhaps that is not surprising: when replications are done, they are usually unable to repeat the original result. The same sort of scandal has hit a number of famous psychology experiments. “The conduct of subtle experiments has much in common with the direction of a theatre performance,” says Daniel Kahneman, a Nobel-prizewinning psychologist at Princeton University. Trivial details such as the day of the week or the colour of a room could affect the results, and these subtleties never make it into the methods sections of research articles. Hmm, what is the difference between a result that is so sensitive to many trivial, unknown and unpredictable details and no result at all? Why should we care about a finding that only occurs in highly particular circumstances?

This is a bigger problem than fraud and dubious research practice. We need to stop publishing one-off flukes and explore the generalizability of findings – where and when does a result hold? How does it vary across product categories, brand sizes, brand ages, different types of consumers, different times, and so on?

Even large and varied data sets are being wasted in marketing when results are presented as an average across many different conditions, e.g. “marketing orientation is associated with higher financial performance, r = 0.28”. Such an average tells us little about the real world; it may not actually apply in any of the major conditions.
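To see why a single pooled number can mislead, here is a minimal sketch with entirely invented data (not from any real study). In one hypothetical product category the relationship between an ‘orientation’ score and a ‘performance’ score is strongly positive, in another it is strongly negative, yet the correlation pooled across both comes out near zero:

import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical firms per category

# Category A: positive relationship between orientation (x) and performance (y).
x_a = rng.normal(0, 1, n)
y_a = 0.6 * x_a + rng.normal(0, 1, n) + 2.0

# Category B: firms score higher on orientation, but the relationship is negative.
x_b = rng.normal(2, 1, n)
y_b = -0.6 * x_b + rng.normal(0, 1, n) + 3.0

x = np.concatenate([x_a, x_b])
y = np.concatenate([y_a, y_b])

print("pooled r:    ", round(np.corrcoef(x, y)[0, 1], 2))      # near zero
print("category A r:", round(np.corrcoef(x_a, y_a)[0, 1], 2))  # about +0.5
print("category B r:", round(np.corrcoef(x_b, y_b)[0, 1], 2))  # about -0.5

The pooled figure describes no category that actually exists, which is exactly the point: the average may not apply in any of the major conditions it was computed over.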

We need to explore generalizability, otherwise our ‘discoveries’ tell us very little about the marketing world that we are supposed to be studying.

And we need to stop prematurely building shaky prescriptive theoretical edifices upon these doubtful, poorly documented findings.

If we don’t carefully and thoughtfully (call it ‘theory driven’ if you wish) examine a finding and how it varies (or not) across conditions, then we are stuck with findings that were probably one-off events – with no way of telling. Currently we have to treat our findings either as applying only to one historic data set covering one particular set of conditions that may never be seen again, OR as a result that generalises to all product categories, all countries, all seasons. Both views are preposterous; something in between is far more likely, but there is a lot of land “in between” and it needs to be explored.

The dubious research practices discussed above come partly from ‘confirmation bias’: (marketing) scientists want to find evidence to support their hypotheses – and they want “a positive result”, otherwise they lack the motivation to publish, or the belief that they will be accepted by any decent journal. Armstrong, Brodie and Parsons (2001) suggested researchers adopt multiple competing hypotheses as a way of overcoming this bias. A worthy suggestion, but those implementing it tend to simply have their favoured hypothesis and its opposite – and they still obviously want to see their favoured hypothesis supported. So I would like to make a different suggestion. Let’s use research questions built around the words “when”, “where” and “under what conditions”. Rather than black-and-white “does X cause Y” type hypotheses, let’s ask “when does X cause Y?”, “does X cause Y in highly advertised categories?”, “is X more a cause of Y in developing economies?”. This is the basic work of science: documenting patterns in the real world. When do things vary and when do they not?

If researchers use “when”, “where” and “under what conditions” research questions they aren’t trying to prove a proposition, so they don’t have to worry about failure, and so they should be less likely to tweak data and cherry-pick findings. Also, very importantly, researchers will be documenting something useful about the world, because they will be exploring generalizability.

PS The Nobel Prizes for Physics are awarded in line with Alfred Nobel’s criterion “to those who, during the preceding year, shall have conferred the greatest benefit on mankind”, which explains the worthy emphasis on substantive findings. Alexander Fleming’s accidental discovery of penicillin is another example of the Nobel Prize committee valuing important discovery over displays of academic prowess.

REFERENCES

ARMSTRONG, J. S., BRODIE, R. J. & PARSONS, A. G. “Hypotheses in marketing science: Literature review and publication audit.” Marketing Letters 12, 2 (2001): 171-187.

HUBBARD, R. & ARMSTRONG, J. S. “Replications and Extensions in Marketing: Rarely Published but Quite Contrary.” International Journal of Research in Marketing 11 (1994): 233-248.

REIBSTEIN, D. J., DAY, G. & WIND, J. “Guest editorial: is marketing academia losing its way?” Journal of Marketing 73, 4 (2009): 1-3.

ROSSITER, J. R. “Consumer Research and Marketing Science.” Advances in Consumer Research 16 (1989): 407-413.

WILHITE, A. W. & FONG, E. A. “Coercive citation in academic publishing.” Science 335 (2012): 542-543.
