
The Benefits of Random Probability Sampling: The 2015 BES Face-to-Face

The British Election Study Team
12/11/2015

This post reveals the BES 2015 reported vote figures for the face-to-face survey and discusses them in the context of representativeness achieved via random probability sampling and efforts to interview hard-to-reach respondents.

The face-to-face survey is an address-based random probability sample of eligible voters living in 600 wards in 300 Parliamentary Constituencies in England, Scotland, and Wales. 2,987 people completed the face-to-face survey. The fieldwork was conducted by GfK between May 8th 2015 and September 13th 2015 and achieved an overall response rate of 55.9%. The face-to-face dataset also includes a self-completion Comparative Study of Electoral Systems (CSES) module answered by 1,567 respondents (52.4% of survey respondents). Full details of the methodology and fieldwork are available in the technical report that accompanies the data release; the release note and questionnaire are also available alongside it.

The face-to-face survey was designed with two primary aims: first, to achieve a representative sample of the British population – something online and phone polling methods are struggling to achieve – and second, to provide continuity with the long-running series of British Election Study post-election face-to-face surveys, which stretches back to 1964.

In order to maximise the representativeness of the survey, all available resources were directed towards achieving the highest possible response rate from the original sample of 6,072 addresses, rather than simply obtaining a larger sample by issuing a reserve sample of addresses. A slightly longer fieldwork period was used, and targeted incentives were offered to respondents and interviewers to increase the response rate amongst those who are generally harder to contact, including respondents in London and the South East.

The end result made the extra effort worthwhile. Unlike almost every other pre- and post-election survey (including our own internet panel), the reported vote in the BES face-to-face survey reflects the 2015 lead of the Conservatives over Labour and the full result more accurately, as the table below shows:

Table 1: Party share of the vote at the 2015 General Election and in the 2015 BES face-to-face survey


It is particularly worth noting that the weighting makes very little difference to the overall results, suggesting that the underlying sample is representative of the electorate. The results of the survey are not perfect: we actually overestimate the Conservative-Labour lead. We are still in the process of conducting a vote validation exercise – checking the turnout of our respondents against the marked electoral registers – and we will withhold final judgement on how close our survey gets to the results of the election until we have the results of that process.
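To make the weighting comparison concrete, here is a minimal sketch of how weighted and unweighted vote shares can be compared. The data frame and its `vote` and `weight` columns are purely illustrative assumptions, not the actual BES release variables:

```python
import pandas as pd

# Hypothetical respondent-level data: one row per respondent who reported a vote.
# Column names ('vote', 'weight') are illustrative, not the BES release variables.
df = pd.DataFrame({
    "vote":   ["Con", "Lab", "Con", "LD", "Lab", "Con", "SNP", "Con"],
    "weight": [1.1, 0.9, 1.0, 1.2, 0.8, 1.0, 1.3, 0.7],
})

# Unweighted shares: simple proportion of respondents naming each party.
unweighted = df["vote"].value_counts(normalize=True)

# Weighted shares: sum of weights per party over the total weight.
weighted = df.groupby("vote")["weight"].sum() / df["weight"].sum()

comparison = pd.DataFrame({"unweighted": unweighted, "weighted": weighted})
print(comparison.round(3))
```

If the two columns are close for every party – as they are in our face-to-face survey – the weights are doing little work, which is a sign that the raw sample is already balanced.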

Asking people about what they have done in the past might seem easier than asking what they might do in the future. However, this does not explain why our face-to-face survey is more accurate than the pre-election polls. Vote recall is not necessarily more reliable than vote intention – sometimes people forget who they voted for. Nor is there any evidence of a late swing in the polls, which might otherwise explain why a post-election survey is more accurate than pre-election surveys: other surveys, including our own internet panel, were equally wrong before and after the election. We are confident that our face-to-face survey gets closer to the result of the election because it is more representative, not because it was conducted after the event.

Another possibility is that our post-election survey gets different results because people know the election result and give different answers accordingly (the bandwagon effect). We were in the field for four months. The long fieldwork period was necessary to maximise the representativeness of the sample, but it runs the risk of exaggerating the effect of people forgetting who they voted for (something true not just of this survey, but of all post-election BES face-to-face surveys).

Our data reveals an increasing Conservative lead in the face-to-face survey as fieldwork progressed. The green line in Figure 1 below plots this lead against the respondents in our survey, in the order they were surveyed, and shows that after about 200 voters had been surveyed (this analysis excludes non-voters), the lead moves in a linear fashion from a 4% Labour lead over the Conservatives to an almost 8% Conservative lead over Labour. However this trend, which appears to be driven by time, is actually an artefact of geography. Even with a perfectly representative survey in which every respondent remembered their vote accurately, the fact that different parties have different levels of support in different places means that sampling different areas at different times can create the appearance of change over time. The orange line in Figure 1 illustrates this point by showing what the Conservative-Labour lead looks like using the constituency results of the respondents, rather than their reported vote, by the order in which respondents were surveyed. Again it shows an increasing Conservative lead over time. Our survey does not match this line exactly, but it is remarkably close, and the trend towards an increasing Conservative lead is very similar in both lines. In other words, the increasing Conservative lead over time is not an issue of recall, but an artefact of when people in different constituencies were surveyed.

 

Figure 1: Trend in Conservative-Labour lead in reported vote and constituency results by the order respondents were surveyed
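For readers who want to replicate the logic behind Figure 1, here is a rough sketch of the expanding-lead calculation. It uses synthetic data and assumed column names (`reported_vote`, `con_share_const`, `lab_share_const`) rather than the actual BES release variables:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000

# Synthetic stand-in for the survey: one row per voter, already sorted by
# interview order. Column names are illustrative, not BES release variables.
df = pd.DataFrame({
    "reported_vote":   rng.choice(["Con", "Lab", "Other"], size=n, p=[0.4, 0.33, 0.27]),
    "con_share_const": rng.uniform(0.2, 0.6, size=n),  # Con share in respondent's seat
    "lab_share_const": rng.uniform(0.2, 0.6, size=n),  # Lab share in respondent's seat
})

# Green line: expanding Con-Lab lead in reported vote among the first n voters.
reported_lead = (
    (df["reported_vote"] == "Con").astype(float).expanding().mean()
    - (df["reported_vote"] == "Lab").astype(float).expanding().mean()
) * 100

# Orange line: the same expanding average, but substituting each respondent's
# constituency result for their reported vote.
constituency_lead = (df["con_share_const"] - df["lab_share_const"]).expanding().mean() * 100

print(round(reported_lead.iloc[-1], 1), round(constituency_lead.iloc[-1], 1))
# If the two series track each other, the apparent over-time trend reflects
# which constituencies were fielded when, not faulty recall.
```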


 

From our perspective, one of the most important achievements of the face-to-face survey is that we managed to secure responses from large numbers of non-voters. The reported turnout in the face-to-face survey is 73.3% – higher than the actual turnout at the election, but considerably lower than the 91.2% reported turnout in our internet panel (again, we will wait until the vote validation exercise is complete before making our final judgement about turnout in our survey). Achieving a balanced and representative sample that includes a large number of non-voters was a primary goal, and we think this result is a marker of the success of our strategy for targeting hard-to-reach groups. As Figure 2 below shows (where reported turnout is presented for respondents interviewed after one interviewer call, two interviewer calls, and so on), had we not followed up those who were difficult to reach, and instead topped up our survey with an easier-to-reach reserve sample, the proportion of non-voters in our survey might have been much lower.

 

Figure 2: Overall reported turnout by number of attempts to contact respondent before they completed the survey
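A minimal sketch of the calculation behind Figure 2 – what overall reported turnout would have looked like had fieldwork stopped after a given number of calls – using toy data and assumed `calls` and `voted` columns:

```python
import pandas as pd

# Toy stand-in for the survey: one row per respondent, with the number of
# interviewer calls needed to complete the interview and reported turnout.
# Column names ('calls', 'voted') are assumptions, not BES release variables.
df = pd.DataFrame({
    "calls": [1, 1, 1, 2, 2, 3, 3, 4, 5, 6],
    "voted": [1, 1, 1, 1, 0, 1, 0, 0, 0, 0],
})

# Reported turnout had fieldwork stopped after k calls: an expanding mean of
# 'voted' over respondents sorted by call count, read off at each value of k.
cumulative_turnout = (
    df.sort_values("calls")
      .assign(running=lambda d: d["voted"].expanding().mean() * 100)
      .groupby("calls")["running"].last()
)
print(cumulative_turnout.round(1))
# Turnout falls as harder-to-reach respondents (more calls) are included,
# which is why stopping early would have under-represented non-voters.
```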


 

Interviewing a large number of non-voters is important for research into political disengagement: we can only know why people don't vote if we actually talk to them. Other survey methods, such as our internet panel and phone polls (particularly in an era of rapidly declining response rates), are not very good at contacting non-voters. They may use some form of quota sampling and weighting, and so different surveys may appear to be representative when, under the surface, they are not. The age distribution in our face-to-face and panel surveys is a case in point. As the figure below shows, the full samples of the face-to-face and panel surveys contain very similar proportions of young people. However, when we look only at the voters in each survey, the face-to-face has considerably fewer young voters than the panel, and the proportion of young voters in the panel is very similar to the proportion of young people in the full panel sample. We know that young people vote less than older people, so there should be proportionally fewer young voters than young people – exactly what we see in the face-to-face survey. Although the panel contains the right proportion of young people, those young people are not representative of young people in general: too few of them are non-voters.

 

Figure 3: Distribution of age in the full sample and amongst only those who said they voted in the face-to-face and internet panel surveys.
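The comparison in Figure 3 boils down to tabulating age bands twice: once for the full sample and once for reported voters only. A rough sketch, with toy data and assumed column names (`age`, `voted`):

```python
import pandas as pd

def age_profile(df: pd.DataFrame) -> pd.Series:
    """Share of respondents falling in each age band."""
    bands = pd.cut(df["age"], bins=[17, 24, 34, 44, 54, 64, 120],
                   labels=["18-24", "25-34", "35-44", "45-54", "55-64", "65+"])
    return bands.value_counts(normalize=True).sort_index()

# Toy respondent-level data; column names are illustrative assumptions.
f2f = pd.DataFrame({"age":   [22, 70, 45, 30, 66, 19, 52],
                    "voted": [0, 1, 1, 1, 1, 0, 1]})

full_profile  = age_profile(f2f)                     # everyone interviewed
voter_profile = age_profile(f2f[f2f["voted"] == 1])  # reported voters only

# In a representative sample, the young should make up a visibly smaller share
# of voters than of the full sample, because turnout rises with age.
print(pd.DataFrame({"full sample": full_profile, "voters": voter_profile}).round(2))
```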


None of this is to suggest that our internet surveys are not hugely valuable. Our ability to track the same respondents over time, across different elections, with large sample sizes and an almost unparalleled number of waves means the online BES surveys offer outstanding opportunities to examine the evolution of political choices. But we need a representative survey to calibrate the online panel (a process we will be working on over the coming months) and to better understand the implications of sampling for the quality of social surveys in general, not least for the UK polling industry. See Jon Mellon and Chris Prosser's blog on the implications of the BES face-to-face for understanding the polling miss.

High-quality representative survey data is vital for world-class research and, as the polling miss showed, can have wider implications for British politics. The 2015 face-to-face survey continues the long tradition of delivering a high-quality, nationally representative random probability sample that allows direct comparison with earlier (and future) BES surveys. We have great confidence in the quality of our face-to-face survey and look forward to the insights the British Election Study research community will draw from our data.