Since 2008 I have had the privilege of sitting on the judging panel of six different public sector communications awards. Typically the work involves sifting entries before the judging proper takes place, chiselling away at a great black slab of Lever Arch file in your spare time until you have revealed the shortlist.
Sifting is a particularly edifying process because you have an opportunity to see the good, the bad and the ugly. Sometimes, rather depressingly, the shortlist that you chisel out is very small indeed and you are left with a big dusty pile of rejects.
Which is an unfair descriptor, because entries can be sculpted around a sensible situation analysis, built on solid strategies and iridescent with tactical brilliance – and still not make the grade.
And very often they fail to do so for one reason – the evidence linking the communications activity with the outcome is either flawed or missing.
Entries of this type typically look like this:
- Our organisation faced (reputation, communications, marketing) Big Challenge
- We undertook some Robust Research to understand more about the problem
- From that Robust Research, we established a Clever Campaign – founded on Awesome Objectives in order to resolve the Big Challenge
- To meet those Objectives we devised and executed a Shrewd Strategy, underpinned by Terrific Tactics
- We achieved our Awesome Objectives and resolved the Big Challenge – all thanks to the Clever Campaign
In the context of, say, an education marketing campaign:
- We had struggled to recruit to certain degree programmes
- Primary research indicated that the majority of students who expressed an interest in studying those degrees with us (but eventually enrolled elsewhere) were heavily influenced by negative perceptions of the career prospects of those particular courses
- We devised a brilliant communications campaign targeted at applicants, potential applicants and their influencers to raise awareness of the diversity of rewarding and lucrative careers those courses lead to
- We met our recruitment targets to those courses
I’m sure you can see the cracks here. While in-depth research may well have taken place at one end, to design communications that best suit a particular problem, the research needed to demonstrate that it was the campaign ‘wot won it’ is missing.
There is no attempt to identify clearly what drove the recruitment, nor to discount alternative causes.
In the context of education marketing, the solution can be as simple as a few questions in the enrolment process: How did you hear about us? Which of the following factors influenced your decision? Who, if anyone, influenced that decision? And so on.
Even where evaluating what is driving campaign outcomes is more complex and costly, cutting back on this kind of research is a false economy. In the end you are going to have to present your case to a senior leadership team, and they will, quite rightly, ask for robust evidence of cause and effect. They are every bit as wary of hyperbole and the unsubstantiated as award judges.
And then there are the entries which include the line: “And our media coverage earned us £X thousands in equivalent advertising spend.” Which tend to be sifted into their own ugly pile quicker than you can say ‘Barcelona Principles’.