#PRstack – new ebook on PR (and other free tools)

I’m among the contributors to a new guide to modern PR tools published today, the My #PRstack ebook.

There are 18 contributors and 40+ practical examples of tools used in public relations, content marketing and search engine optimisation (SEO).

You can download the ebook for free.

The section I’ve written focuses on how PR practitioners can use the Government’s Nomis data tool to help define or understand the publics with whom they need to engage.

If you are interested in public relations research and evaluation, here’s something I wrote in 2013 about free online research resources for PR; it includes data sources on attitudes, media consumption, political opinion and trust.

Other free tools that I have come across since writing that piece include:

  • the fun YouGov Profiler, a free-to-use app built to showcase the (paid-for) YouGov Profiles segmentation and planning tool, which allows users to build target profiles using data from 200,000 YouGov members. (Just don’t mention the election.)
  • the London datastore for demographic data from the capital

I’d be very pleased to hear about other free demography, awareness, attitude and behaviour resources that practitioners find useful.

Creating online surveys that work – 15 quick tips

Most of our research work involves in-depth qualitative interviews – over the telephone or in person. But when the need arises we design and implement online surveys for clients.

These can be exceptionally good value research activities, particularly if a client wants an overview of opinions among a large group of respondents within a particular target group.

There is a wide range of really useful websites offering hints and tips about different elements of online surveying, and I’ve listed a sample of these at the end of this article.

But I thought it might be useful for fellow communications, PR and marketing practitioners if I listed some tips that you won’t necessarily find anywhere else.

Our surveys tend to be quite complex affairs and part of a wider body of study, so I’d recommend approaching an agency if that’s your requirement (I would say that, wouldn’t I!). However, if you are doing something fairly straightforward in-house, then these may come in handy…

1. Incentivisation – we find that mixed models of incentivisation work best, in which a respondent has the chance to gain something personally (e.g. entry into a prize draw) as well as raise money for a charitable cause by completing the survey.

2. Charity support – spending time considering which charity to support is a worthwhile investment; if it resonates strongly with the respondent group, it can have a really positive impact on response rates.

3. Split testing – where a target group is particularly large, it’s worth split-testing different email introductions and subject headings as part of a pilot stage, then running with the most successful for the bulk of the campaign.
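
By way of illustration only – and assuming you are managing the dispatch list yourself rather than inside a survey tool – the mechanics of a split test can be as simple as randomly assigning the pilot sample to variants and comparing completion rates. The file layout, column names and subject lines below are invented for the example.

```python
import csv
import random

# Hypothetical subject-line variants to pilot (illustrative only)
VARIANTS = ["A: Two minutes to shape our new service",
            "B: Your views, a prize draw and a donation to charity"]

def assign_variants(contacts_file, out_file, seed=42):
    """Randomly assign each pilot contact to a subject-line variant."""
    random.seed(seed)  # fixed seed so the assignment can be reproduced
    with open(contacts_file, newline="") as f_in, open(out_file, "w", newline="") as f_out:
        reader = csv.DictReader(f_in)  # assumes columns such as 'email' and 'first_name'
        writer = csv.DictWriter(f_out, fieldnames=reader.fieldnames + ["variant"])
        writer.writeheader()
        for row in reader:
            row["variant"] = random.choice(VARIANTS)  # roughly 50/50 split
            writer.writerow(row)

# assign_variants("pilot_contacts.csv", "pilot_with_variants.csv")
```

Whichever variant produces the higher completion rate in the pilot is the one to run with for the main dispatch.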

4. Pilots – running a two-stage pilot process can add significant value to a project. Typically this involves: an initial test of the survey with a friendly group of respondents (and it’s important this is neither you nor the ‘client’, as you will both be too close at this stage to see errors or issues); and split testing of invitation types (see above).

5. Watch out for betas – the functionality of most of the current crop of paid-for online survey tools is impressive. However, take care when using ‘beta’ test versions of sites as there is (naturally) a higher risk of glitches that could impact your campaign if you use them. Current issues with fonts in the SurveyMonkey beta email tool are a good case in point.

6. Invitations – there are lots of useful blogs about what to include in an email invitation, some of which are linked at the end of this article; a rough template sketch follows the list. In crude summary:

  • Personalisation ([First Name] etc.)
  • Thank you
  • Why you’re doing it, who for and how results will be used
  • Length of time to complete
  • Incentivisation
  • Deadline
  • Confidentiality assurances
  • Contact details
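
Purely as an illustration of how those elements might hang together in a mail-merge, here is a rough template – every field, link and sign-off below is invented for the example rather than recommended wording.

```python
# A rough invitation template covering the elements above (all details invented)
INVITATION = """Dear {first_name},

Thank you for agreeing to hear from {organisation}. We are running a short
survey to help improve {topic}. Your answers will be treated in confidence and
reported only in aggregate.

It takes about {minutes} minutes to complete. Everyone who finishes by
{deadline} will be entered into a prize draw, and we will donate to {charity}
for each completed response.

Start the survey here: {survey_link}

Any questions? Contact {contact_name} at {contact_email}.
"""

print(INVITATION.format(
    first_name="Sam", organisation="Example College", topic="our open days",
    minutes=5, deadline="Friday 20 June", charity="a local hospice",
    survey_link="https://example.org/survey", contact_name="Jo Bloggs",
    contact_email="jo.bloggs@example.org"))
```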

7. Subject headings – again, hints and tips abound (see below). In the end it comes down to good copywriting skills: brevity, relevance and an answer to the inevitable ‘what’s in it for me?’ question.

8. Invitation timings – again, it’s worth splitting invitation email dispatch between different days and times of day.

9. Reminders – in-house email management or survey software email tools will often build in reminders for you, and send only to those who haven’t completed the survey. Reminder timings should be closely related to deadlines – work backwards from the deadline when setting them, rather than forward from the dispatch time.
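
As a trivial sketch of working backwards from the deadline – the intervals here are illustrative, not a recommendation:

```python
from datetime import date, timedelta

def reminder_schedule(deadline, days_before=(7, 3, 1)):
    """Set reminder dates by counting back from the survey deadline."""
    return [deadline - timedelta(days=d) for d in sorted(days_before, reverse=True)]

# For a survey closing on 20 June 2014 (an invented date), this prints
# reminders a week, three days and one day before the close.
for d in reminder_schedule(date(2014, 6, 20)):
    print("Send reminder on", d.isoformat())
```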

10. Deadlines – beware the distant survey closure deadline, a gift to procrastinators. The vast majority of responses will be collected in the two or three days after receipt of an invitation or reminder.

11. In whose name should the email be sent? The person or organisation best known to the respondent group, even if you are using an agency like us, as this will boost response rates while minimising problems with spam or junk filters.

12. Don’t spam – not only is it poor practice, it’s a waste of everybody’s time. Only send survey invites to people who have agreed to receive communications from the organisation or, if you’ve bought a list, contact the recipients through your own email client to get their consent before sending the survey invite.

13. Beware contact lists provided by online survey companies – I have yet to hear a positive story about these. (Please do get in touch if you have a different tale to tell.)

14. Watch out for public links – on one occasion a client contact accidentally posted the link to an invitation-only, incentivised online survey in a public forum. It was very quickly attacked by spam bots and it took quite a while to clean up the results. When posting links to surveys, it’s best to publish them to a closed group.

15. Question order – place the most important questions at the front of the survey. Sometimes this may involve pushing demographic data down the running order.

Some useful links:

How do I write an effective survey introduction?

5 key messages to an online survey introduction

6 simple tips to write perfect subject lines


Comms and marketing evaluation – demonstrating that you made the difference

Since 2008 I have had the privilege of sitting on the judging panel of six different public sector communications awards. Typically the work involves sifting entries before the judging proper takes place, chiselling away at a great black slab of Lever Arch file in your spare time until you have revealed the shortlist.

Sifting is a particularly edifying process because you have an opportunity to see the good, the bad and the ugly. Sometimes, rather depressingly, the shortlist that you chisel out is very small indeed and you are left with a big dusty pile of rejects.

Which is an unfair descriptor, because entries can be sculpted around a sensible situation analysis, involve solid strategies and be iridescent with tactical brilliance – but they still don’t make the grade.

And very often they fail to do so for one reason – the evidence linking the communications activity with the outcome is either flawed or missing.

Entries of this type typically look like this:

  • Our organisation faced (reputation, communications, marketing) Big Challenge
  • We undertook some Robust Research to understand more about the problem
  • From that Robust Research, we established a Clever Campaign – founded on Awesome Objectives in order to resolve the Big Challenge
  • To meet those Objectives we devised and executed a Shrewd Strategy, underpinned by Terrific Tactics
  • We achieved our Awesome Objectives and resolved the Big Challenge – all thanks to the Clever Campaign

In the context of, say, an education marketing campaign:

  • We had struggled to recruit to certain degree programmes
  • Primary research indicated that the majority of students who expressed an interest in studying those degrees with us (but eventually enrolled elsewhere) were heavily influenced by negative perceptions of the career prospects of those particular courses
  • We devised a brilliant communications campaign targeted at applicants, potential applicants and their influencers to raise awareness of the diversity of rewarding and lucrative careers those courses lead to
  • We met our recruitment targets to those courses

I’m sure you can see the cracks here. While there may be some in-depth research taking place at one end in order to design communications that will best suit a particular problem, the research needed to demonstrate that it was the campaign ‘wot won it’ is missing.

There is no attempt to identify clearly what drove the recruitment, nor to discount alternative causes.

In the context of education marketing, the solution can be as simple as a few questions in the enrolment process: How did you hear about us? Which of the following factors influenced your decision? Who, if anyone, influenced that decision? Etc.
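
As a minimal sketch of what that could look like once the answers are collected – assuming, purely for illustration, a CSV export of enrolment responses with 'heard_via' and 'influences' columns:

```python
import csv
from collections import Counter

def tally_enrolment_answers(enrolment_file):
    """Count how enrolled students heard about us and what influenced their decision."""
    heard_via, influences = Counter(), Counter()
    with open(enrolment_file, newline="") as f:
        for row in csv.DictReader(f):
            heard_via[row["heard_via"].strip()] += 1
            # multiple influence factors assumed to be stored as a semicolon-separated list
            for factor in row["influences"].split(";"):
                if factor.strip():
                    influences[factor.strip()] += 1
    return heard_via, influences

# heard, influenced = tally_enrolment_answers("enrolment_2014.csv")
# print(heard.most_common(5))
# print(influenced.most_common(5))
```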

Even if evaluating what is driving campaign outcomes is more complex and costly, cutting back on this kind of research is still a false economy. Because in the end you are going to have to present your case to a senior leadership team and they will, quite rightly, ask for robust evidence of cause and effect.  They are as wary of hyperbole and the unsubstantiated as award judges.

And then there are the entries which include the line: “And our media coverage earned us £X thousands in equivalent advertising spend.” Which tend to be sifted into their own ugly pile quicker than you can say ‘Barcelona Principles’.

FE isn’t a brand – and why that matters

Earlier this month the TES published a double-page spread (and splashed the story) about a six-month study of further education reputation undertaken by Richard Gillingwater, of corporate communications agency Acrue Fulton.

In the article Richard ‘says FE’s national brand needs to be rebuilt, and unveils his plan to help the sector make people sit up and take notice’ (to quote the TES blurb).

While I applaud any media taking interest in further education and recognise Richard’s impeccable credentials, the available evidence suggests that it is impossible to rebuild the further education brand. That is because further education, with one important exception, is not a brand.

It is at best a sector and most probably a system.

There are numerous, occasionally conflicting, definitions of ‘brand’. It is one of those words, as Jerry McLaughlin delicately puts it, ‘that is widely used but unevenly understood’. Where academics and practitioners tend to agree is that a brand is a product, concept or service publicly distinguished from other products, concepts or services. “A brand is what a firm, institution, or collection of products and services stands for in the hearts and minds of its target audience.”[1]

Brands, as the derivation from branding-iron suggests, are commonly expressed through the medium of a brand name, a trademark, a logo.

FE is not ‘publicly distinguished’. It has no recognised logo, no trademark. More importantly, all but one of its target audiences (the exception being those who work in it) are insufficiently aware of it – who it serves, its constituent parts, its ‘key facts’ for want of a better phrase – for it to qualify as a brand.

In the past decade a handful of studies examining FE’s reputation have been commissioned. They all tell pretty much the same story – like this one from 2007. If you look under the bonnet of each of those studies, the respondents typically have some understanding of further education and FE as a concept[2]. Sometimes this is deliberate, as in the case of this 2012 study of FE employees.

To my knowledge (and according to reviews of available literature like this one from Anne Parfitt at Huddersfield Uni and this paper from David Roberts at the Knowledge Partnership) there has been no audit of further education’s reputation among a general population. By that I mean parents, students, prospective students, client and non-client employers. More basically, people who don’t work in organisations involved in the delivery or receipt of further education.

No such study has been commissioned, I’d suggest, because potential investors think it would be a waste of time and money. In 2011, the Association of Colleges and polling company ICM undertook a study of college reputations among such a general public. Two thirds of respondents thought Trinity College Cambridge was an FE college, and half said that colleges are still under local authority control and not inspected by Ofsted. In those other studies among ‘stakeholder’ audiences, respondents demonstrate a higher level of awareness of colleges than they do of FE. So it follows that a general public would demonstrate an even lower level of awareness of FE than they did of colleges in 2011.

You can undertake a completely unscientific test of this proposition yourself by asking three people who aren’t an FE lecturer, manager or service provider the question: “What is further education?” If any of the answers correspond, buy yourself a drink.

None of this is meant to detract from Richard Gillingwater’s research and points about FE reputation per se. It’s just that when it comes to branding, FE never made it onto the ranch. This matters, because if Government or its agencies (for instance) want to bolster reputations they should focus on FE’s constituent parts rather than the whole. And in doing so they should recognise that a strong brand depends on a minimal level of awareness – which, by the way, is why the continued, deliberate fragmentation of the term ‘college’ through the proliferation of new forms of institution is likely to prove so corrosive in the longer-term.


[1] A quote from Luc Speisser of Landor – whose 2012 blog entry on explaining a brand I would highly recommend.

[2] Take a look, for instance, at the list of respondents on page 3 of this 2007 study, commissioned from Ipsos MORI by the then head of the Learning and Skills Council, Mark Haysom (who, by the way, now writes critically acclaimed novels).

Who and what influences choice in further education?

In the past couple of years we have specialised in helping clients study attitude, awareness or behaviours among groups important to their organisation.

We also help clients adapt according to the results of the research.

Projects include studies for further education (FE) colleges – typically focusing on recruitment and seeking to help a client understand and respond to who and what influences student choice in their area.

We’ve found a number of patterns across our work in this field and thought that FE colleagues might find it useful if we set ten of them out here.

  1. The decline of the influencer. In 2012 a national study of students aged 11 to 21 and their parents (in which I was involved) indicated that parents exerted a high level of influence on student choice of institution[1]. In our subsequent studies on behalf of colleges – as the agency YouthSight suggests in relation to university applicants – the influence of other people on post-16 student choice of place of study appears to be in general decline. In our latest study (of a 3000+ population of higher education applicants to a large GFE) just under half of respondents said they had not been influenced by anyone. Where third parties do influence choice, mum and dad and family friends most commonly top the rankings.
  2. The rise of search. Online search is overtaking the prospectus as the channel applicants find the most useful for finding out about a prospective place of study. This shift and trend #1 are probably linked – rather than asking or expecting advice from friends or family on study options, students are more commonly actively searching online for institutions which fit their requirements. So colleges need to know what information potential applicants are looking for in order to make an informed decision, and ensure it is easy to find on their website. Online search is commonly also the most useful channel for applicants who have yet to commit to an institution and want more information – so keeping a website up to date may be the most effective ‘keep warm’ tactic for any college. Online search, by the way, dominates where full cost recovery provision is concerned. Social media discussions, adverts and newspaper articles are typically cited as the least useful sources of information about a prospective place of study.
  3. The power of course. A good reputation for teaching is, typically, the third most important factor for 16+ students considering where to study. Locational factors – where a college is based and the transport network which feeds it – are commonly cited as the second most important factor. Course most regularly tops the rankings. Students may compromise on sports opportunities, on the time taken to travel, on the way buildings look or the facilities within them, but they are unlikely to make concessions on the subject and type of course they want to study. Which highlights the importance of teaching excellence and market research for colleges – while providing another depressing piece of evidence for those of us concerned about the black hole that is schools-based careers advice.
  4. Gender differences in influence. Where we have explored this issue, we’ve seen notable differences in the way males and females make decisions about where to study. In crude summary, female applicants to further education courses are more discerning – they commonly take more factors into account than males when considering their options. They are also more likely to be informed in their institutional choice by school or college tutors than their male counterparts, who are more inclined to be influenced by friends.
  5. Hedging bets. This phenomenon first came to light in a study we undertook of a population of 7000 students who applied to a college but enrolled elsewhere in early 2013. 20% of applicants considered the college as a ‘back-up’ choice. In the majority of cases, according to qualitative responses, they were encouraged by school tutors to apply to more than one institution. There appears to be a corresponding general growth in the number of institutions applied to – but we haven’t adequately tested that proposition to be sure. There are ramifications for conversion rates here, and related expectations about the effectiveness and performance of recruitment activities.
  6. Last-minute change of mind. In the same piece of research, 10% of applicants changed their mind about the course they wanted to study in the period between applying to college/s and enrolling. This change of mind led to a change of institution (because, as we have seen, course is the most important factor in choice, and in the case of this 10% they – rightly or wrongly – didn’t think the college in question delivered the course they had now settled on). Which means that colleges need to make applicants aware of the broad range of courses available (or at least of the mechanism for finding out), even if an applicant seems pretty sure about what she wants to do with the rest of her life.
  7. Uncommon applications. Looking for ‘insurance’ offers may sound more like the behaviour of a university applicant than a prospective college student. Whereas university applicants have a system in place – UCAS – to standardise those applications, that is not the case for (non-HE) college applicants. The differences in the application processes between colleges and schools can be confusing, and the more students ‘shop around’ the more puzzling it can be. In one study among non-enrolled applicants, a significant minority expressed low levels of awareness of the particular hoops – application, assessment, interview or audition – that constituted the application process according to course type. Setting out the college processes – including what applicants can expect in terms of entry requirements, timings for interviews and communications from the college – in ways that are easy for applicants to understand is clearly important.
  8. Silence is goodbye. We are sometimes asked to test (via mystery shopping or quantitative research) if open day, interview and enrolment practices are up to scratch. Where colleges most commonly ‘lose’ applicants is in the period between application and interview, when the responsibility for a prospective student is passed from (say) a central recruitment or marketing department to administrators in a school or course area responsible for booking interviews. Where there is a delay in processing an application (say, prior to interview), most students do not follow this up – they assume they have not got a place. Similarly, where there is no response after an interview, most have been offered a place at another college or school, and they don’t chase the college in question either but apply elsewhere. The impact of poorly managed communications is clear.
  9. But goodbye may not be forever. In two separate studies this year (2014) we’ve asked non-enrolled applicants whether they would be interested in hearing about courses at the institution they rejected for another. In both cases a significant minority said they would. A majority of alumni, asked a similar question, were interested in further study. When applicants reject an institution in favour of another it does not necessarily mean they have a low opinion of that college – it may be a case of right place, wrong time. Or that the course they wanted to study was not available. These results also hint that FE alumni networks may be significant and (as yet) overlooked sources of recruitment.
  10. The timing of communications matters. The majority of our education research is undertaken among groups of students aged between 16 and 21. We have experimented with different methodologies depending on the client, the geography and types of students. Generally speaking, research is most fruitful when we’re contacting respondents by mobile phone between 5pm and 8pm. Where colleges are able to raise an expectation among student groups that they may be asked to take part in research, the response rate is (much) better. There are ramifications for data management and protection and communications planning here.

[1] ‘Parent Power Dominates Education Choices’ – Chartered Institute of Public Relations Education and Skills Group.

When automated customer service goes very bad – a BT case study

I am usually wary of reputation management case studies born of a PR practitioner’s personal experience because they are (necessarily) anecdotal and, typically, anger masquerading as advice.

And this example, hypocritically, is no different – but however livid I may be about the terrible service received from BT when moving house, it’s still a fine example of avoidable and expensive customer service failure.

On 15 May we moved six doors down the road, from a small cottage that we had outgrown. BT were due to reconnect the telephone line on 27 May but the deadline passed and the line was dead. There was, we were told after being diverted to a call centre somewhere in India, a fault.

A week later and the line is still dead. BT, it emerges, have been sending engineers to the wrong address (our old house) to find and fix faults that don’t exist. Quite why is a mystery as we have only ever registered the new address with them. So far I’ve spent over 130 minutes either on hold – regularly to be cut off – or talking to call centre representatives who assure me, by rote, that everything will be fine.

At 2pm yesterday, as another deadline passes for fixing the fault, I walk outside to see if I can spot a BT van. There it is, parked outside my old house.

“Are you Ben?” says the engineer as he steps down from behind the wheel.

“Yes. I was told you were coming between 8am and 1pm.”

“Don’t blame me. I was only booked at 1pm.”

I am reminded of the works of Franz Kafka.

“You’ve not been booked for Number 32 have you?” I ask.

“Yes.”

“Well I live at number 20.”

There is nothing the engineer can do to help me, even though he is standing on my street, a few metres from my house. “We can only visit properties that we’ve been booked for.” He drives away. A sound, like the death rattle of a thousand small beetles, leaves my mouth.

This morning another engineer calls. His name is Pete and he is chirpy.

“We appear to have the wrong address for you. To book a visit by an engineer for the right address you need to call customer services.”

“Can you do it?”

“No, we’re not BT.”

“But you’re called BT Openreach.”

“We’re a BT company, but we’re not BT.”

“Do you have any idea how silly that sounds?”

“Erm….yes.”

What’s the lesson here (beyond ‘avoid BT’) for anyone involved in customer services and reputation management?

It could be the failure to integrate standard checks into sales. At no point in the three weeks prior to the deadline for transferring the line did BT check it for faults, and yet it took only a couple of minutes for a man in another continent to determine that the line was, indeed, faulty. One simple check and none of this would have happened.

It could be the failure to tailor the customer service journey to the customer. Instead, it is designed for the company’s benefit in order to maximise the efficiency of UK call centres. As a result, anyone wanting to talk to the faults department has to spend a long time on hold while someone in the north of England explains their predicament to someone in India. Calls are often cut off.

It could be the failure to integrate the domestic and international operation. Every time I speak to a UK operator I repeat my address as part of the security procedure. Yet the faults team spent a week sending engineers to the wrong address. They also made promises in relation to compensation and timings that their UK colleagues told me were simply not true.

It could be the muddling of brand between BT and Openreach that, through buck-passing, inevitably exacerbates customer services issues like mine.

It could be the operating costs, which must far outstrip the investment required to fix these issues. Engineers sent to the wrong addresses, fixing faults that don’t exist. Call centres tied up in hours of checks and explanations for one customer alone. Every one of the company’s representatives has tried their best at all points, but none of them is empowered to fix the problem.

No, what makes this a classic case study of failure to manage reputation is that, as all of this has played out, I have received the following emails and texts:

64364 – 12/05/14 11:54 AM

Hello, BT here. Just to confirm you’ve set up online billing with BT. If you didn’t, please let us know at bt.com/letusknow.

64364 – 22/05/14 8:50 AM

64364 – 27/05/14 9:15 AM

BT here. Your broadband service is ready for you. If you’re expecting new kit, it should be with you by now, so just follow its user guide to set it up. If it hasn’t arrived yet, find out where it is at bt.com/ordertracking. Thanks for choosing BT.

64364 – 27/05/14 9:29 AM

BT here. Your phone service is ready for you. You can find out a phone’s number by calling 17070 from it. Thanks for choosing BT.

64364 – 02/06/14 2:16 PM

Hello, BT here.

Sorry about your fault. Your phone should be back to normal now. If you set up Call Diversion, you can cancel it by dialling #21# from your landline. If you’ve got broadband, you might need to restart your hub and wait up to three days for your broadband speed to get back to normal. If you need any more help, go to bt.com/help

64364 – 02/06/14 3:09 PM

Hello, BT here.

Sorry about your fault. Your phone should be back to normal now. If you set up Call Diversion, you can cancel it by dialling #21# from your landline. If you’ve got broadband, you might need to restart your hub and wait up to three days for your broadband speed to get back to normal. If you need any more help, go to bt.com/help

64364 – 03/06/14 8:53 AM

Hello, BT here.

Sorry about your fault. Your phone should be back to normal now. If you set up Call Diversion, you can cancel it by dialling #21# from your landline. If you’ve got broadband, you might need to restart your hub and wait up to three days for your broadband speed to get back to normal. If you need any more help, go to bt.com/help

Of course, the phone service was never ready. And the fault never tested, let alone fixed. It will be another 48 hours before an engineer is sent to the right address.

Problems like mine are, apparently, common. Yet at no point has anyone thought to introduce a check which turns off automated emails when a fault is reported.

Reputation is often described as the difference between expectation and delivery. When customer service fails, automated communications that do not take this into account stretch the gulf between these even further. They gall, so smartly, because they are untrue. An automatic lie, if you will. And nothing corrodes trust quicker than an inability or failure to tell the truth.

Given the monopolistic position of BT Openreach I very much doubt any number of blog posts would change these behaviours. But if you work in customer services or reputation management and are now feeling smug (or chastened), then at least something came of this shambles.


Science PR and communication – headlines from our background research

I am shamefully late in posting to this blog – we have been extremely busy with a number of research projects, but that’s a poor excuse.

Those projects include a study of science public relations and communication that we are conducting for the Department for Business, Innovation and Skills (BIS) and Chartered Institute of Public Relations (CIPR).

In this predominantly qualitative piece we’re looking to understand more about the dominant themes in science PR and communication through the experiences, attitudes and behaviours of practitioners.

At the time of writing, we are particularly keen to find in-house private sector practitioners in order to better represent that population in the study – do please get in touch if you fit that bill and you would like to volunteer for a telephone interview (or recommend a colleague or contact).

We are currently sifting and analysing the study data and hope to publish in May.

In the meantime, here’s a selection of headlines from our background research to chew over:

  1. Television news and factual programming continue to drive public understanding of science.

However, where we get our information will depend, in part, on whether we are actively seeking it, and information sources do differ by age group, according to the BIS/Ipsos MORI Public Attitudes to Science 2014. (There’s an astute summary of this research by Alice Bell in the Guardian’s science blog).

  2. Research into, and public commentary on, science PR is dominated by analysis of media relations.

But PR practitioners working in or for organisations involved in natural, applied or formal sciences are no more likely to be engaged in media relations work than other types of practitioners, according to a break-down of the CIPR’s 2014 State of the Profession Survey.

  3. The Cardiff University School of Psychology is investigating ‘the potential role of press releases in creating misleading reports of science in the press’.

The team studied 2011 Russell Group press releases, the associated peer-reviewed journal articles that instigated them and, in turn, the news stories that arose. Results are due to be published soon.

  4. If you want to understand the impact of changes in the print and broadcast media business on specialist journalists, then take a look at this 2009 Nature survey of 493 science journalists.

In particular the answers to the question: Do you have any other comments or thoughts that you would like to share regarding science journalism? (in the ‘Open ends’ tab). Often beleaguered, coruscating, sad.

  5. The most prolific sources of science and technology stories in the UK media are publicly funded science or medical research.

Followed by ‘industry’, non-Governmental organisations (NGOs) and other civic groups, then the UK Government, according to a study published in 2007 by Cardiff University’s School of Journalism.

Hearty thanks to the individuals and organisations that have helped us with the study so far – a proper set of thank yous to come.