Successful HE in FE recruitment – what can research tell us?

Our work includes research and consultancy projects that help further education colleges boost their recruitment to higher education courses.

In the past three years we have worked with a range of clients on HE in FE research projects and below we set out five conclusions that are common across all of the studies and which resonate with the wider literature.

  1. Focus on internal progression – but only where progression is possible

It is a well-known maxim that marketing to an unfamiliar group or audience requires around ten times as much effort to achieve the same results as marketing to a familiar one. Which is one reason why colleges are wise to focus their HE recruitment efforts on their current Level 3 cohort. (Another being the large number of Level 3 students served by most colleges that also deliver higher education.) However, communicating HE opportunities to students for whom there is no natural progression route from Level 3 to Level 4 is typically a waste of time and energy. This is a truism, but colleges still commonly fail to differentiate or focus their efforts. Worse still, there are institutions where Level 3 tutors actively criticise the quality of the Level 4 provision and enthusiastically recommend alternatives to their students even where internal progression pathways exist. No marketing activity will resolve the quality issues at play here.

  2. Course is the dominant factor in institution selection

As the excellent research underpinning the Government’s 2012 ‘higher education in further education’ paper demonstrates, students who study a higher education course in a college setting are primarily motivated to do so by considerations of career[1]. Their choice of course is determined, most commonly, by a mix of personal interest in the subject matter and judgments about how helpful this particular course will be in furthering that career. (The mix differs according to age and mode of study). In our research and the literature, choice of institution is dominated by course, followed by location. Which is why marketing to internal or external students for whom no progression pathway exists is so pointless.

No matter how much someone likes a college, they won’t spend two or three years studying there if they do not think the course is right for them.

It is also why researching the higher education ambitions of a Level 3 cohort is such a valuable exercise, as it can provide immediate, robust evidence as to how a college might enhance its HE curriculum and who constitutes the internal market.

  3. Take into account the different types of HE in FE student – gender and mode of study

There are significant differences within the HE in FE cohort. Those taking Foundation degrees are most likely to be female, aged under 20 and studying full-time. Conversely, HNC/HND students are most likely to be male and to be studying part-time while in full-time employment. Course preferences differ by gender. Students sponsored by their employer are typically differently motivated and face different barriers to those whose course is funded by a loan. The mix of student types differs by institution.

According to our own studies, what students want and expect from a college higher education course, and how their view is formed before they arrive, differs according to gender and the type of course on which they are enrolling. For instance, female applicants typically take more factors into account, making a more ‘considered’ assessment when choosing HE institutions, than males. In some cases, attitudes differ according to where students lived when they applied to study.

  4. Barriers vs motivating factors

In our studies and in the BIS research, over two-thirds of HE in FE students do not apply anywhere other than the college where they are studying. Where choices are constrained, they are largely limited by students’ unwillingness or inability to study away from home because of living and travel costs (rather than tuition fees) or because of work and domestic commitments. In other words, there are typically two key factors driving choice for this group, one of them positive and the other negative:

‘This is the right course for me and I cannot afford to study it away from home’.

Older students, married students, white students and students with low entry-level qualifications are less likely to have made other applications, generally because they are less likely to have also applied to universities. Those most likely to also apply to universities are aged under 20, single, white, and come from families where at least one parent has had some experience of higher education. For these students, what attracts them to a college over a university is most commonly the smaller college class size.[2]

In order for their HE in FE campaigns to succeed, colleges need to understand the composition of their cohort and their target market, shaping their campaigns around their particular motivators, influencers, barriers and preferred channels.

  5. Validation matters

Our research, and the literature, shows that a significant proportion of HE in FE students value the ‘university’ brand associated with their Foundation or Bachelor’s degree.

Indeed the BIS research suggests, rather alarmingly, that 10% of HE in FE students actually think they are studying at a university.

This phenomenon has ramifications for colleges looking to gain foundation or taught degree awarding powers and for the development of a central validation service. It is also, presumably, the reason why colleges seek validation agreements with universities, even though the latter have a habit of pulling out of those agreements at short notice, to the considerable inconvenience of the former.

[1] Department for Business, Innovation and Skills, ‘Understanding higher education in further education colleges’, BIS Research Paper Number 69, 2012, Chapter 5.

[2] Ibid., p. 135.

Four findings from research on employers and apprenticeships

Over the summer we conducted three separate research projects for different clients on the subject of apprenticeships.

While each project was tailored to the client and included different elements – such as a major catchment area review in one and lead generation in another – they all included detailed qualitative research among (different types of) employers.

What I personally find most interesting are the similarities between projects. Here are four trends, and why we think they are important:

  1. Creating a typology of organisations most likely to take on apprentices is difficult – for a good reason

Pages 18 to 27 of BIS research paper 204 from December 2014 provide a detailed profile of the types of apprenticeship employer; the paper also includes data on what employers will look like in the future (as does the UKCES Employer Perspectives Survey 2014). It is tempting for individual providers to think that their local or regional profile will match this national data.

That is very unlikely because:

  1. Individual catchment areas don’t typically mirror England in terms of industrial sectors and it is (the patterns of) these sectors that most heavily influence what apprenticeship provision looks like.
  2. When you are researching at a smaller scale it quickly becomes very clear that organisational culture, personal experiences of apprenticeships, the frequency of junior vacancies and preconceptions about how young people behave are as important as organisational structure when it comes to predicting propensity to take on an apprentice.

To put it another way, among supporters of apprenticeships there’s typically a commitment that extends beyond the practical. While (in our research and the literature) businesses are most commonly hiring apprentices in order to fill specific current or projected skills gaps (and generally not, by the way, in order to get hold of cheap labour), the decision-makers very often want to get involved because they or the wider business has an affinity with youth development or workforce diversity as social issues – and they believe apprenticeships are a positive force in this regard.

There are two reasons why we would argue it is imperative that providers understand this distinction. Firstly, it’s vital to understand how the practical drivers for taking on an apprentice differ from the emotive ones in order to tailor marketing communications and your broader employer engagement. Ultimately, people make decisions, not business units. Secondly, the passion that apprentice employers display is a major opportunity for providers: which of them might volunteer to help improve your tutor CPD, or provide equipment in kind, in order to ensure their apprentices are getting the highest quality training?

  2. Apprentices (and providers) are ambassadors in more ways than you might think

We were particularly struck during all three projects by the extent to which the views of decision-makers were shaped by their previous experience of individual apprentices. This included businesses that had never employed an apprentice but whose HR directors, say, had previously worked somewhere that had. More surprising still was the influence of training provider sales teams, and the frequency with which, across all three projects, employers said that their view of apprenticeships as a concept had been negatively affected by pushy, poorly informed sales teams. In this respect, all providers are in this together and your competitors are shaping your reputation.

  3. Understanding the word does not mean businesses understand the concept

A quick look at Google Trends will demonstrate the growth in public awareness of apprenticeships as a term. That awareness does not necessarily translate into detailed understanding. Confusion among businesses not currently employing apprentices (85% of organisations nationally) was widespread in our studies, with a particularly noticeable propensity to conflate apprenticeships and internships. Awareness of higher apprenticeships was markedly low, and again this is borne out by the literature. There is obviously still much work to be done in this regard despite the very laudable efforts of Government, agencies and PR consultancies over the past decade. This point relates back to our findings at #1: if first-hand experience of an apprentice is a major driver of awareness and opinion, and the vast majority of organisations do not yet employ an apprentice, the confusion is not much of a surprise.

  4. Population change as a short-term threat

There is a lot of talk in HE circles at the moment about the impact on university recruitment of the big drop in the youth population up to 2020 – the population of UK 18-year-olds is set to fall by around 80,000 (11%) by that time. There appears to be less noise in relation to apprenticeships, but it’s just as big an issue in terms of under-19 and, later on, 19+ apprenticeship recruitment. If you are a college or training provider and are not taking population change into account in your planning, you’re probably making a big mistake.
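The scale of this is easy to back-solve from the figures above. The sketch below does nothing more than that arithmetic, plus an entirely hypothetical example of what a constant market share would mean for a single provider’s intake (the 400-learner figure is invented for illustration).

```python
# Illustrative arithmetic only, back-solved from the figures quoted above.
drop = 80_000          # projected fall in UK 18-year-olds by 2020
drop_share = 0.11      # the ~11% quoted above
implied_base = drop / drop_share        # roughly 727,000
population_2020 = implied_base - drop   # roughly 647,000

# Hypothetical provider: a college recruiting 400 learners a year from this
# age group that simply holds its current market share sees the same 11%
# fall flow straight through to its intake.
current_intake = 400
projected_intake = current_intake * (1 - drop_share)   # 356

print(f"Implied base population: {implied_base:,.0f}")
print(f"Implied 2020 population: {population_2020:,.0f}")
print(f"Intake at constant market share: {projected_intake:.0f}")
```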

Creating online surveys that work – 15 quick tips

Most of our research work involves in-depth qualitative interviews – over the telephone or in person. But when the need arises we design and implement online surveys for clients.

These can be exceptionally good value research activities, particularly if a client wants an overview of opinions among a large group of respondents within a particular target group.

There is a wide range of really useful websites giving hints and tips about different elements of online surveying, and I’ve listed a sample of these at the base of this article.

But I thought it might be useful for fellow communications, PR and marketing practitioners if I listed some tips that you won’t necessarily find anywhere else.

Our surveys tend to be quite complex affairs and part of a wider body of study, so I’d recommend approaching an agency if that’s your requirement (I would say that, wouldn’t I!). However, if you are doing something fairly straightforward in-house then these tips may come in handy…

1. Incentivisation – we find that mixed models of incentivisation work best, in which a respondent has the chance to gain something personally (e.g. entry into a prize draw) as well as raise money for a charitable cause by completing the survey.

2. Charity support – spending time considering which charity to support is a worthwhile investment; if it resonates strongly with the respondent group, it can have a really positive impact on response rates.

3. Split testing – where a target group is especially large, it’s worth split-testing different email introductions and subject headings as part of a pilot stage, and running with the most successful for the bulk of the campaign.
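For anyone scripting this in-house, the pilot-stage logic is simple. The sketch below is a minimal illustration of assigning pilot contacts to one of two subject-line variants and comparing completion rates afterwards; the variant wording, sample data and field names are all invented, and most paid-for survey platforms will handle the assignment and tracking for you.

```python
import random

# Hypothetical pilot-stage split test: assign each pilot contact to one of
# two subject-line variants, send, then compare completion rates before
# committing to a variant for the bulk of the campaign.
VARIANTS = {
    "A": "Have your say on next year's courses (takes two minutes)",
    "B": "Help us improve our courses - and thank you with a prize draw",
}

def assign_variants(contacts, seed=42):
    """Randomly split the pilot contacts across the subject-line variants."""
    rng = random.Random(seed)
    return {email: rng.choice(sorted(VARIANTS)) for email in contacts}

def response_rates(assignments, completed):
    """Completion rate per variant; `completed` is the set of addresses
    that finished the survey."""
    rates = {}
    for variant in VARIANTS:
        sent = [email for email, v in assignments.items() if v == variant]
        done = sum(1 for email in sent if email in completed)
        rates[variant] = done / len(sent) if sent else 0.0
    return rates

# Made-up example run
pilot = [f"respondent{i}@example.com" for i in range(200)]
assignments = assign_variants(pilot)
completed = set(random.sample(pilot, 60))   # stand-in for real completions
print(response_rates(assignments, completed))
```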

4. Pilots – running a two-stage pilot process can add significant value to a project. Typically this involves: an initial test of the survey with a friendly group of respondents (and it’s important this is neither you nor the ‘client’, as you will be too close at this stage to see errors or issues); and split testing of invitation types (see above).

5. Watch out for betas – the functionality of most of the current crop of paid-for online survey tools is impressive. However, take care when using ‘beta’ versions of these tools, as there is (naturally) a higher risk of glitches that could affect your campaign. The current issues with fonts in the SurveyMonkey beta email tool are a good case in point.

6. Invitations – there are lots of useful blogs about what to include in an email invitation, some of which are listed below. In crude summary:

  • Personalisation [First Name] etc
  • Thank you
  • Why you’re doing it, who for and how results will be used
  • Length of time to complete
  • Incentivisation
  • Deadline
  • Confidentiality assurances
  • Contact details

7. Subject headings – again, hints and tips abound (see below). In the end it comes down to good copywriting skills: brevity, relevance and an answer to the inevitable ‘what’s in it for me?’ question.

8. Invitation timings – again, it’s worth splitting invitation email dispatch between different days and times of day.

9. Reminders – in-house email management tools or survey software will often build in reminders for you, and send only to those who haven’t completed the survey. Reminder timings should be closely related to deadlines – work backwards from the deadline when setting them, rather than forward from the dispatch date.
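If you are setting reminder dates yourself rather than relying on the tool’s defaults, the ‘work backwards’ logic looks something like the sketch below. The offsets are purely illustrative, not a recommended cadence.

```python
from datetime import date, timedelta

def reminder_schedule(deadline, offsets_days=(7, 3, 1)):
    """Work backwards from the survey deadline to set reminder dates.

    The offsets are illustrative (a week, three days and one day before
    close); any reminder date that has already passed is dropped.
    """
    today = date.today()
    return [deadline - timedelta(days=d)
            for d in sorted(offsets_days, reverse=True)
            if deadline - timedelta(days=d) > today]

# Hypothetical example: a survey closing two weeks from today
deadline = date.today() + timedelta(days=14)
print(reminder_schedule(deadline))
```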

10. Deadlines – beware the distant survey closure deadline, a gift to procrastinators. The vast majority of responses will be collected in the two or three days after an invitation or reminder is received.

11. In whose name should the email be sent? That of the person or organisation best known to the respondent group, even if you are using an agency like us – this will boost response rates while minimising problems with spam or junk filters.

12. Don’t spam – not only is it poor practice, it’s a waste of everybody’s time. Only send survey invites to people who have agreed to receive communications from the organisation or, if you’ve bought a list, contact the recipients through your own email client to get their consent before sending the survey invite.

13. Beware contact lists provided by online survey companies – I have yet to hear a positive story about these. (Please do get in touch if you have a different tale to tell.)

14. Watch out for public links – on one occasion a client contact accidentally posted the link to an invitation-only, incentivised online survey in a public forum. It was very quickly attacked by spam bots and it took quite a while to clean up the results. When posting links to surveys, it’s best to publish them to a closed group.

15. Place the most important questions at the front of the survey. Sometimes this may involve pushing demographic data down the running order.

Some useful links:

How do I write an effective survey introduction?

5 key messages to an online survey introduction

6 simple tips to write perfect subject lines


Comms and marketing evaluation – demonstrating that you made the difference

Since 2008 I have had the privilege of sitting on the judging panel of six different public sector communications awards. Typically the work involves sifting entries before the judging proper takes place, chiselling away at a great black slab of Lever Arch file in your spare time until you have revealed the shortlist.

Sifting is a particularly edifying process because you have an opportunity to see the good, the bad and the ugly. Sometimes, rather depressingly, the shortlist that you chisel out is very small indeed and you are left with a big dusty pile of rejects.

Which is an unfair descriptor, because entries can be sculpted around a sensible situation analysis, involve solid strategies and be iridescent with tactical brilliance – but they still don’t make the grade.

And very often they fail to do so for one reason – the evidence linking the communications activity with the outcome is either flawed or missing.

Entries of this type typically look like this:

  • Our organisation faced (reputation, communications, marketing) Big Challenge
  • We undertook some Robust Research to understand more about the problem
  • From that Robust Research, we established a Clever Campaign – founded on Awesome Objectives in order to resolve the Big Challenge
  • To meet those Objectives we devised and executed a Shrewd Strategy, underpinned by Terrific Tactics
  • We achieved our Awesome Objectives and resolved the Big Challenge – all thanks to the Clever Campaign

In the context of, say, an education marketing campaign:

  • We had struggled to recruit to certain degree programmes
  • Primary research indicated that the majority of students who expressed an interest in studying those degrees with us (but eventually enrolled elsewhere) were heavily influenced by negative perceptions of the career prospects of those particular courses
  • We devised a brilliant communications campaign targeted at applicants, potential applicants and their influencers to raise awareness of the diversity of rewarding and lucrative careers those courses lead to
  • We met our recruitment targets for those courses

I’m sure you can see the cracks here. While there may be some in-depth research taking place at one end in order to design communications that will best suit a particular problem, the research needed to demonstrate that it was the campaign ‘wot won it’ is missing.

There is no attempt to identify clearly what drove the recruitment, nor to discount alternative causes.

In the context of education marketing, the solution can be as simple as a few questions in the enrolment process: How did you hear about us? Which of the following factors influenced your decision? Who, if anyone, influenced that decision? Etc.
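Reading those answers back is just as straightforward. The sketch below is a minimal, hypothetical example of turning exported enrolment-survey responses into an influence breakdown; the file name, column name and answer labels are invented, and a real evaluation would dig deeper than raw shares.

```python
from collections import Counter
import csv

def influence_breakdown(path="enrolment_survey.csv", column="influence"):
    """Share of enrolees citing each influence, most common first.

    Assumes a CSV export with one row per enrolee and a column capturing
    'Which of the following factors influenced your decision?'.
    """
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row[column].strip()] += 1
    total = sum(counts.values())
    return {answer: n / total for answer, n in counts.most_common()}

# e.g. {"Saw the careers campaign": 0.31, "Open day": 0.24, ...} - the kind
# of cause-and-effect evidence a leadership team (or awards judge) expects.
```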

Even if evaluating what is driving campaign outcomes is more complex and costly, cutting back on this kind of research is still a false economy. Because in the end you are going to have to present your case to a senior leadership team and they will, quite rightly, ask for robust evidence of cause and effect. They are as wary of hyperbole and the unsubstantiated as award judges.

And then there are the entries which include the line: “And our media coverage earned us £X thousands in equivalent advertising spend.” Which tend to be sifted into their own ugly pile quicker than you can say ‘Barcelona Principles’.

FE isn’t a brand – and why that matters

Earlier this month the TES published a double-page spread (and splashed the story) about a six-month study of further education’s reputation undertaken by Richard Gillingwater, of corporate communications agency Acrue Fulton.

In the article Richard ‘says FE’s national brand needs to be rebuilt, and unveils his plan to help the sector make people sit up and take notice’ (to quote the TES blurb).

While I applaud any media taking interest in further education and recognise Richard’s impeccable credentials, the available evidence suggests that it is impossible to rebuild the further education brand. That is because further education, with one important exception, is not a brand.

It is at best a sector and most probably a system.

There are numerous, occasionally conflicting, definitions of ‘brand’. It is one of those words, as Jerry McLaughlin delicately puts it, ‘that is widely used but unevenly understood’. Where academics and practitioners tend to agree is that a brand is a product, concept or service publicly distinguished from other products, concepts or services. “A brand is what a firm, institution, or collection of products and services stands for in the hearts and minds of its target audience.”[1]

Brands, as the derivation from branding-iron suggests, are commonly expressed through the medium of a brand name, a trademark, a logo.

FE is not ‘publicly distinguished’. It has no recognised logo, no trademark. More importantly, all but one of its target audiences – the exception being those who work in it – are insufficiently aware of it (who it serves, its constituent parts, its ‘key facts’ for want of a better phrase) for it to qualify as a brand.

In the past decade a handful of studies examining FE’s reputation have been commissioned. They all tell pretty much the same story – like this one from 2007. If you look under the bonnet of each of those studies, the respondents typically have some understanding of further education and FE as a concept[2]. Sometimes this is deliberate, as in the case of this 2012 study of FE employees.

To my knowledge (and according to reviews of the available literature, like this one from Anne Parfitt at Huddersfield Uni and this paper from David Roberts at the Knowledge Partnership) there has been no audit of further education’s reputation among a general population. By that I mean parents, students, prospective students, and client and non-client employers – more basically, people who don’t work in organisations involved in the delivery or receipt of further education.

No such study has been commissioned, I’d suggest, because potential investors think it would be a waste of time and money. In 2011, the Association of Colleges and the polling company ICM undertook a study of college reputations among just such a general public. Two-thirds of respondents thought Trinity College Cambridge was an FE college, and half said that colleges are still under local authority control and not inspected by Ofsted. In those other studies among ‘stakeholder’ audiences, respondents demonstrate a higher level of awareness of colleges than they do of FE. So it follows that a general public would demonstrate an even lower level of awareness of FE than it showed of colleges in 2011.

You can undertake a completely unscientific test of this proposition yourself by asking three people who aren’t an FE lecturer, manager or service provider the question: “What is further education?” If any of the answers correspond, buy yourself a drink.

None of this is meant to detract from Richard Gillingwater’s research and points about FE reputation per se. It’s just that when it comes to branding, FE never made it onto the ranch. This matters, because if Government or its agencies (for instance) want to bolster reputations they should focus on FE’s constituent parts rather than the whole. And in doing so they should recognise that a strong brand depends on a minimal level of awareness – which, by the way, is why the continued, deliberate fragmentation of the term ‘college’ through the proliferation of new forms of institution is likely to prove so corrosive in the longer-term.


[1] A quote from Luc Speisser of Landor – whose 2012 blog entry on explaining a brand I would highly recommend.

[2] Take a look, for instance, at the list of respondents on page 3 of this 2007 study, commissioned from Ipsos MORI by the then head of the Learning and Skills Council, Mark Haysom (who, by the way, now writes critically acclaimed novels).

Who and what influences choice in further education?

In the past couple of years we have specialised in helping clients study attitudes, awareness and behaviours among groups important to their organisation.

We also help clients adapt according to the results of the research.

Projects include studies for further education (FE) colleges – typically focusing on recruitment and seeking to help a client understand and respond to who and what influences student choice in their area.

We’ve found a number of patterns across our work in this field and thought that FE colleagues might find it useful if we set ten of them out here.

  1. The decline of the influencer. In 2012 a national study of students aged 11 to 21 and their parents (in which I was involved) indicated that parents exerted a high level of influence on student choice of institution[1]. In our subsequent studies on behalf of colleges – as the agency YouthSight suggests in relation to university applicants – the influence of other people on post-16 student choice of place of study appears to be in general decline. In our latest study (of a 3000+ population of higher education applicants to a large GFE) just under half of respondents said they had not been influenced by anyone. Where third parties do influence choice, mum and dad and family friends most commonly top the rankings.
  2. The rise of search. Online search is overtaking the prospectus as the channel applicants find the most useful for finding out about a prospective place of study. This shift and trend #1 are probably linked – rather than asking or expecting advice from friends or family on study options, students are more commonly actively searching online for institutions which fit their requirements. So colleges need to know what information potential applicants are looking for in order to make an informed decision, and ensure it is easy to find on their website. Online search is commonly also the most useful channel for applicants who have yet to commit to an institution and want more information – so keeping a website up to date may be the most effective ‘keep warm’ tactic for any college. Online search, by the way, dominates where full cost recovery provision is concerned. Social media discussions, adverts and newspaper articles are typically cited as the least useful sources of information about a prospective place of study.
  3. The power of course. A good reputation for teaching is, typically, the third most important factor for 16+ students considering where to study. Locational factors – where a college is based and the transport network which feeds it – are commonly cited as the second most important factor. Course most regularly tops the rankings. Students may compromise on sports opportunities, on the time taken to travel, on the way buildings look or the facilities within them, but they are unlikely to make concessions on the subject and type of course they want to study. Which highlights the importance of teaching excellence and market research for colleges – while providing another depressing piece of evidence for those of us concerned about the black hole that is schools-based careers advice.
  4. Gender differences in influence. Where we have explored this issue, we’ve seen notable differences in the way males and females make decisions about where to study. In crude summary, female applicants to further education courses are more discerning – they commonly take more factors into account than males when considering their options. They are also more likely to be informed in their institutional choice by school or college tutors than their male counterparts, who are more inclined to be influenced by friends.
  5. Hedging bets. This phenomenon first came to light in a study we undertook of a population of 7,000 students who applied to a college but enrolled elsewhere, in early 2013. Some 20% of applicants had considered the college as a ‘back-up’ choice. In the majority of cases, according to qualitative responses, they were encouraged by school tutors to apply to more than one institution. There appears to be a corresponding general growth in the number of institutions applied to – but we haven’t adequately tested that proposition to be sure. There are ramifications for conversion rates here, and related expectations about the effectiveness and performance of recruitment activities.
  6. Last-minute change of mind. In the same piece of research, 10% of applicants changed their mind about the course they wanted to study in the period between applying to college/s and enrolling. This change of mind led to a change of institution (because, as we have seen, course is the most important factor in choice, and in the case of this 10% they – rightly or wrongly – didn’t think the college in question delivered the course they had now settled on). Which means that colleges need to make applicants aware of the broad range of courses available (or at least of the mechanism for finding out), even if an applicant seems pretty sure about what she wants to do with the rest of her life.
  7. Uncommon applications. Looking for ‘insurance’ offers may sound more like the behaviour of a university applicant than a prospective college student. But whereas university applicants have a system in place – UCAS – to standardise those applications, there is no equivalent for (non-HE) college applicants. The differences in the application processes between colleges and schools can be confusing, and the more students ‘shop around’ the more puzzling it can be. In one study among non-enrolled applicants, a significant minority expressed low levels of awareness of the particular hoops – application, assessment, interview or audition – that constituted the application process according to course type. Setting out the college processes – including what applicants can expect in terms of entry requirements, timings for interviews and communications from the college – in ways that are easy for applicants to understand is clearly important.
  8. Silence is goodbye. We are sometimes asked to test (via mystery shopping or quantitative research) whether open day, interview and enrolment practices are up to scratch. Where colleges most commonly ‘lose’ applicants is in the period between application and interview, when responsibility for a prospective student is passed from (say) a central recruitment or marketing department to administrators in a school or course area responsible for booking interviews. Where there is a delay in processing an application (say, prior to interview), most students do not follow it up – they assume they have not got a place. Similarly, where there is no response after an interview, most will by then have been offered a place at another college or school and, rather than chasing the college in question, simply go elsewhere. The impact of poorly managed communications is clear.
  9. But goodbye may not be forever. In two separate studies this year (2014) we’ve asked non-enrolled applicants whether they would be interested in hearing about courses at the institution they rejected for another. In both cases a significant minority said they would. A majority of alumni, asked a similar question, were interested in further study. When applicants reject an institution in favour of another it does not necessarily mean they have a low opinion of that college – it may be a case of right place, wrong time. Or that the course they wanted to study was not available. These results also hint that FE alumni networks may be significant and (as yet) overlooked sources of recruitment.
  10. The timing of communications matters. The majority of our education research is undertaken among groups of students aged between 16 and 21. We have experimented with different methodologies depending on the client, the geography and the types of students involved. Generally speaking, research is most fruitful when we’re contacting respondents by mobile phone between 5pm and 8pm. Where colleges are able to raise an expectation among student groups that they may be asked to take part in research, the response rate is (much) better. There are ramifications for data management and protection and communications planning here.

[1] ‘Parent Power Dominates Education Choices’ – Chartered Institute of Public Relations Education and Skills Group.

(Pretty) new research into public opinion and attitude – a digest of free sources

I’ve been asked to sketch out a plan for a couple of workshops in communications, reputation and public relations research.

As part of the prep, I’ve been researching free sources of data on public opinion and attitude (as opposed to stats on our behaviours) that might help PR or communications practitioners:

  • Benchmark their own studies against a larger or different population
  • Improve research design (as a source of sample questions)
  • Craft campaigns

During the workshops (as they stand) I will be taking students through some of these in much more detail, but there are so many that I thought it might be useful to list some here.

Attitudes – general

Every year the British Social Attitudes Survey asks 3,000+ people what it’s like to live in Britain, exploring their social, political and moral attitudes. It’s a big study with a long history that looks at attitudes to issues as diverse and important as health, welfare, immigration and transport.

The Office for National Statistics is a treasure trove of public opinion surveys, although only a minority study attitudes (as distinct from behaviours). These include our attitudes over time to policing and to the communities we live in.

Media consumption

Ofcom’s annual communications market reports examine trends in media consumption and attitudes as well as industry revenue and market share data. It is usefully split into different age groups. Their consumer experience reports measure the choice, price and range of products available to consumers, take-up and awareness of media use, as well as attitudes to comparing, switching and protection.

Political opinion – general

Quite a few of the bigger polling companies publish research archives online and these can include recent studies with respectable respondent numbers, intelligent sampling and sound analysis. The topics range from royal babies to housing costs but the most common are studies into our political predilections. They include:

Ipsos MORI
YouGov
ComRes
ICM
Survation
Angus Reid Global (UK research)
Opinium
TNS BMRB

I think I’ve commissioned research from all of these at some point in the past. They are all members of the British Polling Council, incidentally. (Which has a handy FAQ on public polling on its website.)

Trust

The Edelman Trust Barometer is an international instrument that examines the trust placed in politicians, media and business (to name a few) by 31,000 respondents from 27 countries. Most of the analysis focuses on global results but Edelman’s UK site includes a press release on in-country results.

Energy and Climate

The Department of Energy and Climate Change set up a tracking survey in March 2012 to better understand the UK public’s attitudes to energy and environmental issues. There have been five waves of research so far. The research forms part of the TNS omnibus survey, which uses a random location sampling methodology, and results are weighted. Roughly 2,000 UK adults are surveyed each week.

Social Mobility

The Social Mobility and Child Poverty Commission published new research into public attitudes to social mobility issues in June 2013. This was another TNS/BMRB omnibus piece of work.

Labour market information

Even though this isn’t strictly an attitudinal study, I’ve thrown this one into the pot because I lecture on it from time to time and it’s so incredibly useful. For anyone interested in UK demographic information the Nomis wizard tool is, excuse the pun, magic. It includes the detailed breakdown of Census 2011 data, down to ward and postcode level (in most cases), allowing researchers to answer such esoteric questions as ‘How many people born in Poland have access to a van in this parish?’. It’s also the portal which allows researchers to mine the Annual Population Survey (a residential labour market survey focusing on qualifications and economic activity, among other things) and the Annual Survey of Hours and Earnings.
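For anyone who would rather pull the same figures programmatically than through the wizard, Nomis also offers a web API. The sketch below shows only the general shape of a query; the URL pattern, dataset identifier and parameter names are placeholders that need checking against the API documentation on the Nomis website.

```python
import requests

# Placeholder query: the base URL pattern, dataset id and parameter names
# are illustrative only - verify them against the Nomis API documentation.
BASE = "https://www.nomisweb.co.uk/api/v01/dataset"

def fetch_dataset(dataset_id, **params):
    """Fetch a Nomis dataset as JSON (the structure varies by dataset)."""
    url = f"{BASE}/{dataset_id}.data.json"
    response = requests.get(url, params=params, timeout=30)
    response.raise_for_status()
    return response.json()

# Hypothetical usage - substitute a real dataset id and geography codes:
# data = fetch_dataset("NM_XXX_X", geography="...", time="latest")
```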

Please do comment or share if you know of other national (or regional?) studies that might be useful for comms bods. Thanks in advance.