
Using the British Election Study to Understand the Great Polling Miss

Jon Mellon
11/06/2015


Competing explanations have been put forward for the large polling errors on May 7th. In this post, we examine five relatively plausible explanations of the polling errors, looking at the existing evidence for each of these possible explanations and what the British Election Study (BES) might be able to tell us in the future.

In the next few months, we will have three sources of BES data to draw on:

  1. Online Campaign wave: we interviewed ~1,000 respondents each day, allowing us to track opinion across the course of the campaign. This data is available here.
  2. Online Post-election wave: we are recontacting all respondents who were interviewed during the campaign wave, giving us a picture of how they ended up voting.
  3. Face-to-face post-election survey: this provides a gold standard survey to compare with other polls and our online survey waves.

So far, we only have the data from the campaign wave so this post outlines what we have learnt from that data, as well as what we will be able to learn from the post-election waves.

1: Late swing

The “late swing” hypothesis is that the underestimation of the Conservative vote occurred because voters changed their minds right at the end of the campaign, after pollsters had finished contacting them.

What we know

To test this, we look at BES respondents’ vote intentions on each day of the campaign. The figure below, based on the respondents interviewed on each day, shows a possible upswing in the share intending to vote Conservative over the last two days of the campaign.

[Figure: BES daily vote intention estimates across the campaign]

While this data is consistent with a late swing, it is not conclusive. It does not seem to be the case that previous supporters of one party were more willing to take our survey on some days (e.g. when their party had a good day in the campaign) than on others, which could have exaggerated changes in support.

Nonetheless, with only around 1000 respondents each day, these estimates still have substantial sampling error.
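To put that sampling error in numbers, here is a quick back-of-envelope calculation; the vote share used is illustrative rather than an actual BES estimate:

```python
import math

# Back-of-envelope sampling error for one day's estimate.
# Illustrative figures: ~1,000 respondents per day and a
# Conservative share of around 34% (not an actual BES estimate).
n = 1000
p = 0.34

se = math.sqrt(p * (1 - p) / n)  # standard error of a proportion
ci_half_width = 1.96 * se        # half-width of a 95% confidence interval

print(f"SE = {se * 100:.1f} points; 95% CI = +/- {ci_half_width * 100:.1f} points")
# Roughly +/- 2.9 points, so a two-point day-to-day movement is
# well within ordinary sampling noise for any single day.
```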

The BES evidence for a late swing also needs to be set against the lack of evidence of a late swing among other pollsters (including the recontact survey conducted by YouGov and the day-by-day breakdown from ICM). If there was a late swing, we would need to explain why the BES was the only survey that saw it.

What BES data will tell us

Once we have data from the BES post-election survey, we should be able to tell whether there was a last-minute change of heart among British voters. If the uptick is real, there should be substantial numbers of our respondents who switched from intending to vote Labour in the campaign wave to actually voting Conservative in the post-election wave.
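As a sketch of what that check will look like, the snippet below cross-tabulates campaign-wave vote intention against post-election reported vote. The file and column names are hypothetical stand-ins, not actual BES variable names:

```python
import pandas as pd

# Hypothetical merged panel file with one row per respondent and
# columns for campaign-wave intention and post-election vote.
panel = pd.read_csv("bes_panel_merged.csv")

# Row-normalised transition table: each cell is the share of a
# party's campaign-wave intenders who reported each actual vote.
flows = pd.crosstab(
    panel["vote_intention_campaign"],
    panel["vote_postelection"],
    normalize="index",
)

# A real late swing should show up as a sizeable
# Labour-intention -> Conservative-vote cell.
print(flows.loc["Labour", "Conservative"])
```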

2: Differential turnout/registration

There are mountains of evidence that voters systematically overstate their likelihood of turning out to vote. Consequently, pollsters have to predict which respondents will actually go to the polls and which will stay at home. Pollsters appear to have been expecting a rise in turnout this election, meaning their models may have been overweighting non-voters. For instance, Ipsos MORI said that they expected a turnout of 72-74%, compared to the actual figure of 66.1%.

This election also saw a substantial shift in the voter registration system which may have led some people to think they were registered when they were not. This probably isn’t a large enough effect to swing the polls by itself, but it could be a contributing factor.

What we know

When we restrict the BES campaign survey to just those respondents who say they are “very likely to vote”, the sample becomes more Conservative-leaning, especially towards the end of the campaign. It remains to be seen whether pollsters correctly accounted for this bias in 2015.
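As a rough sketch, the restriction described above amounts to comparing party shares among all respondents with shares among self-described likely voters. The file and column names here are hypothetical, not the actual BES variable names:

```python
import pandas as pd

# Hypothetical campaign-wave file with a stated likelihood-to-vote
# item and a vote intention item.
wave = pd.read_csv("bes_campaign_wave.csv")

all_shares = wave["vote_intention"].value_counts(normalize=True)
likely_shares = wave.loc[
    wave["likelihood_vote"] == "Very likely", "vote_intention"
].value_counts(normalize=True)

# Positive values are parties that gain when the sample is
# restricted to self-described likely voters.
print(((likely_shares - all_shares) * 100).round(1))
```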

What BES data will tell us

In the BES face-to-face survey we will be matching respondents to the marked electoral register, showing whether or not they actually voted on Election Day. Because of the high-quality sample and vote validation, this will be the most in-depth data on whether there was differential turnout among party supporters, and also on whether certain supporters were more likely to overstate their likelihood of voting.

We are also in the process of matching all online BES respondents with registration records before and after the change in the registration system to check whether they are actually registered or not and whether they were deregistered under the new system. This matching will let us see whether deregistration contributed to the polling gap.

3: Biased samples or weighting

Pollsters start off with very unrepresentative samples from phone or Internet recruitment and have to apply a lot of weighting. Weighting has become more important than ever as response rates for phone polls have declined and the use of non-randomly selected Internet panels has increased.
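One common calibration technique is raking (iterative proportional fitting), which repeatedly adjusts weights until the sample’s margins match known population targets. The sketch below is a minimal toy implementation with made-up targets, not any pollster’s actual weighting scheme:

```python
import pandas as pd

def rake(df, margins, weight_col="weight", n_iter=25):
    """Minimal raking (iterative proportional fitting): adjust weights
    until the weighted margin of each variable matches the population
    targets in `margins` (a dict of {variable: {category: share}})."""
    df = df.copy()
    df[weight_col] = 1.0
    for _ in range(n_iter):
        for var, targets in margins.items():
            shares = df.groupby(var)[weight_col].sum()
            shares = shares / shares.sum()
            factors = {cat: targets[cat] / shares[cat] for cat in targets}
            df[weight_col] *= df[var].map(factors)
    return df

# Toy sample and made-up population targets (not real census figures).
sample = pd.DataFrame({
    "sex": ["F", "F", "M", "M", "F", "M"],
    "age": ["18-34", "35+", "35+", "18-34", "35+", "35+"],
})
margins = {
    "sex": {"F": 0.51, "M": 0.49},
    "age": {"18-34": 0.28, "35+": 0.72},
}
print(rake(sample, margins))
```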

In this election, poll weighting may well turn out to have been miscalibrated in some way.

Since our Internet panel is conducted by YouGov, weighting problems that affect public opinion polls would also affect our sample. If this is the case, we will update our weights accordingly.

What we know

So far, we do not know much about whether sampling and weighting was the primary cause of the polling errors, although ICM have indicated that biased samples and weighting are their top guess at what went wrong in their polls.

What BES data will tell us

The BES face-to-face survey will be much more representative of the general population and will require less weighting. If it shows approximately the right vote choice figures while the post-election wave of the BES Internet panel still under-represents the Tories, it would be a strong indication that sampling and weighting are to blame for the polling miss.

The difficulty of correcting the weighting and sampling (both in polls and our Internet survey waves) will depend on what exactly went wrong with the sample calibration. It may be as simple as adding an extra variable to the weighting scheme or could be a more difficult task.

4: Differential “don’t knows”

Another possible cause of the polling miss is that respondents who said they “don’t know” who they would vote for chose very differently on Election Day from respondents who had already made up their minds.

What we know

In our campaign wave sample, 7% of respondents said that they did not know who they would vote for. However, there are conflicting signals about which party these respondents were nearest to.

In some ways these voters seem closer to the Conservatives. Among “don’t knows”, David Cameron was somewhat more popular than Ed Miliband (3.8 compared with 3.3 on a 0–10 scale). “Don’t knows” were also more likely to name the Conservatives (10%) than Labour (5%) as the best party on their most important issue. Many of the “don’t knows” in the campaign were also UKIP supporters in previous waves, and our previous research has found these voters are more likely to have the Conservatives as a second preference.

On the other hand, more “don’t knows” stated their party identification as Labour (17%) than as Conservative (10%), so we might expect that more of these voters would end up with Labour.

Overall, the evidence so far is murky about the likely behaviour of “don’t knows” and we have the additional difficulty that many “don’t knows” will not have voted at all.
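Some illustrative arithmetic shows why the allocation rule matters. Below, the 7% of “don’t knows” are allocated under two different rules; the decided-voter shares are made up for illustration, while the party-ID split (17% Labour, 10% Conservative) comes from the figures above:

```python
# How different allocation rules for the 7% "don't knows" move the
# headline Conservative-Labour gap. Decided shares are illustrative.
dk = 0.07
decided = {"Con": 0.34, "Lab": 0.33}  # hypothetical decided-voter shares

# Rule 1: allocate proportionally to decided support.
total = sum(decided.values())
proportional = {p: s + dk * s / total for p, s in decided.items()}

# Rule 2: allocate by the don't-knows' own party ID (17% Labour vs
# 10% Conservative), with the remainder assumed not to vote for
# either main party.
party_id = {"Con": 0.10, "Lab": 0.17}
by_id = {p: decided[p] + dk * party_id[p] for p in decided}

for rule, shares in [("proportional", proportional), ("party ID", by_id)]:
    gap = (shares["Con"] - shares["Lab"]) * 100
    print(f"{rule}: Con-Lab gap = {gap:+.1f} points")
# The gap moves by roughly half a point depending on the rule chosen.
```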

What BES data will tell us

Once we have the vote choices of our panel respondents in the post-election wave, we will be able to see what “don’t knows” actually ended up doing. If they systematically switched in one direction, the “don’t know” shift may end up looking like a more plausible hypothesis.

5: “Shy Tories”

The final explanation is simply that respondents said one thing to pollsters while intending to do something else. This effect is often called the “Shy Tory effect” (unwilling to admit you plan to vote Conservative), but it could also be a more complicated series of mistruths.

What we know

One possible indication that respondents have been lying to pollsters is that the BES campaign wave saw a possible late swing towards the Conservatives that did not appear in the polls. YouGov president Peter Kellner suggests that this difference may be because the BES asks for a respondent’s vote intention after asking about party leaders and issues. In standard YouGov polls, substantial numbers of “inconsistent” respondents say that they will vote Labour but rate the Conservatives higher on the economy. There are fewer of these “inconsistent” voters in the BES campaign survey. The hypothesis is that some of the voters who were “inconsistent” during the campaign switched their votes to the Conservatives on Election Day. For this explanation to be true, we would have to assume that voters are not inconsistent when they actually vote.

There is also weak evidence against shy Tories in the BES. Respondents who score highly on the social desirability scale (which measures the tendency to give answers that make the respondent look good) show higher levels of support for the Conservatives: the opposite of what we would expect if people were concealing their Conservative support.
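As a sketch, this check boils down to the sign of the association between the social desirability score and Conservative support. The file and column names below are hypothetical stand-ins:

```python
import pandas as pd

# Hypothetical campaign-wave file with a social desirability scale
# score and a vote intention item.
wave = pd.read_csv("bes_campaign_wave.csv")

wave["intends_con"] = (wave["vote_intention"] == "Conservative").astype(int)

# A "shy Tory" pattern would imply a negative association (high
# scorers hiding Conservative support); the BES finding described
# above is the opposite, a positive one.
print(wave["soc_desirability"].corr(wave["intends_con"]))
```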

What BES data will tell us

The face-to-face survey will help us determine whether voters are actually “inconsistent”. If we find that the levels of “inconsistency” in the face-to-face survey more closely match the BES campaign data than the polls, it would suggest that asking vote intention after issues is a better way of getting truthful vote intentions from respondents.

Conclusions so far

With the existing BES data, we only have initial indications about what might have caused the polling errors. Our data provide some support for late swing and perhaps differential turnout but contradictory evidence about untruthful voters and “don’t know” swings. Once the BES face-to-face survey, post-election online wave and validated turnout matching are completed, we will know a lot more about what did and didn’t go wrong with the polls.