Advertisers want great ads, ads that generate lots of sales that otherwise wouldn’t have happened. Exposure to a brand’s advertising should increase the viewer’s propensity (likelihood) to buy that brand.
Yet this is hardly ever what gets measured.
Advertisers commonly look to charts of their sales figures in the hope that they can see blips caused by advertising. But they seldom see anything convincing. It’s all straight lines; or, if you look at weekly or daily figures, they bounce around, usually due to price promotions (ours and our competitors’). The advertising effects aren’t there to be seen.
Many marketers understand that sales figures are a messy, noisy indicator of advertising’s sales power. So they employ proxy measures, like advertising awareness or perception shifts. But these measures are messy and noisy too. Even in a fantasy world where people are affected only by your advertising, it still isn’t clear whether they’re measuring the quality of the advertising, the media placement, or whether the spend was appropriate. And proxy measures are just that; they’re not measures of the behavioural change in buying propensities.
Marketing mix modellers promise to find the signal in this cacophony of noise. But this is of little use, not just because the techniques are far from trustworthy but also because of a fundamental fact of advertising’s sales impact… Only a tiny fraction of the effect shows up in this week’s figures, because most of the consumers exposed to the advertising didn’t buy from the category this week. If, say, only one category buyer in ten buys in any particular week, then at most a tenth of the people the advertising nudged can possibly register in that week’s sales. What do this week’s sales figures tell us about the total (long-term) effect of this bit of advertising? Not much.
Many of our sales this week came from buyers who weren’t recently exposed to the advertising we’re trying to measure. It seems rather odd to be looking for increased sales from people we didn’t advertise to. The fact is that most of the sales we enjoy today come from advertising done long ago, while many of the people we did nudge with our advertising this week won’t buy for many weeks, or even months – ad effects that cannot show up in this week’s sales figures.
I hope I’ve convinced you that marketers, and market researchers, have largely been barking up the wrong tree for decades. The reason we know so little about the sales effects of advertising – and hence what is good advertising – is that we’ve been measuring the wrong things.
Behaviours (buying) are what we need to measure, but aggregate-level sales receipts, like weekly or monthly sales figures, are a lousy measure of the full sales power of each of our ads. One solution is controlled experiments, but these are difficult to organise. Another is single-source data, which captures each individual’s repeat-buying over time alongside their exposure to advertising.
With this data we can compare the brand’s share of the purchases made after exposure to its advertising against its share of the purchases made without exposure. So, like an experiment, we have a control baseline from which to judge the causal effect of the advertising, with other factors now controlled for. This captures only the tip of the iceberg of ad effects, but it is a standard-sized tip, so ads can be compared on their effectiveness. It measures the sales effect of the advertisement itself: its content, branding and creativity. Media effects (timing and number of exposures) can be measured by cutting different groups out of the data.
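To make that comparison concrete, here is a minimal sketch in Python with pandas. Everything in it is an illustrative assumption, not a description of any real single-source dataset: the toy panel, the column names, and the seven-day exposure window are all invented for the example.

```python
import pandas as pd

# Toy single-source panel (hypothetical data and column names):
# every category purchase occasion per panellist, and a separate
# log of when each panellist saw the brand's ad.
purchases = pd.DataFrame({
    "panellist": [1, 1, 2, 2, 3, 3, 4, 4],
    "day":       [3, 20, 5, 24, 8, 30, 6, 28],  # day of the purchase occasion
    "bought_us": [1, 0, 1, 1, 0, 0, 1, 0],      # 1 = our brand was bought
})
exposures = pd.DataFrame({
    "panellist": [1, 2, 2, 3],
    "day":       [1, 2, 22, 27],                # day the ad was seen
})

WINDOW = 7  # assumed: a purchase is "exposed" if an ad preceded it by <= 7 days

# Pair every purchase occasion with every ad exposure for that panellist,
# then flag the pairs where the ad fell in the window before the purchase.
merged = purchases.merge(exposures, on="panellist", how="left",
                         suffixes=("_buy", "_ad"))
merged["hit"] = ((merged["day_ad"] <= merged["day_buy"]) &
                 (merged["day_buy"] - merged["day_ad"] <= WINDOW))

# Collapse back to one exposed/unexposed flag per purchase occasion.
flags = (merged.groupby(["panellist", "day_buy"])["hit"].any()
               .rename("exposed").reset_index())
panel = purchases.merge(flags, left_on=["panellist", "day"],
                        right_on=["panellist", "day_buy"])

# Brand share among unexposed occasions is the control baseline;
# the exposed share, relative to it, is the ad's measured nudge.
control_share = panel.loc[~panel["exposed"], "bought_us"].mean()
exposed_share = panel.loc[panel["exposed"], "bought_us"].mean()
print(f"brand share without exposure: {control_share:.2f}")
print(f"brand share after exposure:   {exposed_share:.2f}")
print(f"index: {100 * exposed_share / control_share:.0f}")  # >100 suggests uplift
```

Cutting the same data by the number or timing of exposures, rather than by a single exposed/unexposed split, is how the media effects mentioned above would be read.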
This sort of analysis was pioneered in the UK by Colin McDonald and it’s based on a simple breakthrough idea: to judge the sales effect of last night’s advertising, we compare purchasers who were exposed to last night’s advertising with those who weren’t. Judging advertising by surveying people who didn’t see it just doesn’t make sense.