BES Vote Validation Variable added to Face to Face Post-Election Survey

The British Election Study Team
12/02/2016

It may be a truism to say that elections are decided by those who turn out to vote, but from the impact of local campaigns to understanding why the polls went wrong, knowing who votes and who doesn’t is key to understanding British elections. The easiest way to find out who votes and who doesn’t is to ask people in surveys. Whilst the vast majority of people will accurately tell you whether or not they voted, a small but non-trivial minority will say they voted when they did not, and an even smaller number will say they did not vote when they actually did. This misreporting has important consequences for research into political participation and can lead to erroneous conclusions about why people vote. We therefore conducted a ‘vote validation’ exercise – to check whether our respondents voted in the 2015 general election. All information is then re-anonymised in the survey data.

Today we are releasing a new version of the BES Face to Face Post-Election Survey containing a validated turnout variable. The data were collected and linked to the BES face-to-face survey as part of a partnership between the BES and the Electoral Commission.

The name and address information of face-to-face respondents who had given their permission for their information to be linked with the electoral registers was matched against the marked electoral registers by our team of BES Research Assistants (Arthur Hunter, Jac Larner and Jessica Smith). The project involved a substantial investment of programming time by senior BES Researchers Jon Mellon and Chris Prosser, and was overseen by Ed Fieldhouse and Jane Green. We are very grateful to Emma Noyes and Phil Thompson from the Electoral Commission, who requested and gathered the registers, and to Nick Moon at GfK, who provided information for linking survey respondents. The validated turnout variable provides information about all those survey respondents who were successfully linked to a record in the marked electoral registers.

In order to assess the reliability of the validation process, a subset of respondents was coded by a second research assistant. This process suggests that reliability is very high: in the double-coded cases, the two coders report the same outcome 94.8% of the time.
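As a rough illustration of what this reliability check involves, the sketch below computes simple percent agreement between two coders. It is a minimal example with made-up data; the column names coder1_outcome and coder2_outcome are placeholders, not variables in the released dataset.

```python
# Minimal sketch of a percent-agreement check between two coders.
# The data and column names are illustrative placeholders, not BES variables.
import pandas as pd

double_coded = pd.DataFrame({
    "coder1_outcome": ["Voted", "Voted", "Not voted", "Ineligible", "Voted"],
    "coder2_outcome": ["Voted", "Not voted", "Not voted", "Ineligible", "Voted"],
})

# Share of double-coded cases where both coders record the same outcome.
agreement = (double_coded["coder1_outcome"] == double_coded["coder2_outcome"]).mean()
print(f"Percent agreement: {agreement:.1%}")
```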

We can also assess the extent to which the validation suffers from false positives. It is plausible to assume that most of our false positives (people who did not vote, but whom we validated as having voted) will report not having voted, as the vast majority of non-voters report their turnout accurately. Within the group of respondents we have coded as voting, 1.5% report having not voted. Given that some of these people may actually have mis-recalled their own turnout, we feel confident that the false positive rate in the vote validation is very low.
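For readers who want to run the same check on the released file, it amounts to a conditional proportion: among respondents validated as having voted, the share who report not having voted. The sketch below uses made-up data; validatedTurnout is the released variable, while reportedTurnout is a placeholder name for the self-reported turnout variable, not necessarily its name in the dataset.

```python
# Sketch of the false-positive check described above: among respondents we
# validated as having voted, what share say they did not vote?
# The data are made up; "reportedTurnout" is a placeholder variable name.
import pandas as pd

bes = pd.DataFrame({
    "validatedTurnout": ["Voted", "Voted", "Not voted", "Voted", "Ineligible"],
    "reportedTurnout":  ["Voted", "Not voted", "Not voted", "Voted", "Not voted"],
})

validated_voters = bes[bes["validatedTurnout"] == "Voted"]
share_reporting_no_vote = (validated_voters["reportedTurnout"] == "Not voted").mean()
print(f"Validated voters who report not voting: {share_reporting_no_vote:.1%}")
```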

The false negative rate (those whom we code as not having voted but who actually did vote) is harder to quantify, because vote validation is done precisely because we do not fully trust the recall of respondents who claim to have voted. Our coders reported very few problems with determining whether someone on the register had voted (i.e. the marks were generally very clear). We therefore feel confident that there are very few false negatives among those whom we confirmed as being registered.

Despite the heroic efforts of the Electoral Commission, many local authorities did not supply their marked electoral registers. In all, we were missing information for around 15% of the face-to-face respondents who agreed to be matched. The areas for which we are missing registers are disproportionately Conservative (based on the 2015 general election constituency results), which introduces a slight Labour bias to the reported vote amongst those who had their vote validated.

The Validated Vote Variable

The vote validation information can be found in the validatedTurnout variable. We classify the respondents for whom we had the necessary information to locate them on the electoral register into one of three categories:

  • Voted: The respondent appeared on the electoral register and was marked as having voted.
  • Not voted: The respondent appeared on the electoral register and was eligible to vote but was not marked as having voted.
  • Ineligible: The respondent appeared on the electoral register but was marked ineligible to vote in the general election.

The distribution of the validation outcome for respondents by how they reported voting is shown in the table below.

[Table: validation outcome by self-reported turnout]
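A table like this can be reproduced from the released data with a straightforward cross-tabulation. The sketch below is indicative only: validatedTurnout is the released variable name, while the file name and reportedTurnout are placeholders for the released file and the self-reported turnout variable.

```python
# Sketch of reproducing the table above: validated outcome by self-reported vote.
# "validatedTurnout" is the released variable; the file name and "reportedTurnout"
# are placeholders, not the actual names in the BES release.
import pandas as pd

bes = pd.read_stata("bes_f2f_post_election.dta")  # hypothetical file name

table = pd.crosstab(
    bes["reportedTurnout"],   # rows: self-reported turnout
    bes["validatedTurnout"],  # columns: Voted / Not voted / Ineligible
    margins=True,             # add row and column totals
)
print(table)
```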

We could not match the remaining respondents to the electoral register. This does not necessarily mean these respondents did not vote or were not registered to vote – it also includes respondents for whom we had insufficient information in the marked registers (missing pages or polling stations). It is important to remember that those respondents whose records we could not match in the appropriate location in the marked registers may have been less (or more) likely to vote than those who were successfully matched.

Future releases of the data will include additional information about those respondents for whom we think we have enough information to say that they are not on the electoral register. This work is still ongoing.