Why the polls got it wrong and the British Election Study face-to-face survey got it (almost) right (by Jon Mellon and Chris Prosser)

The British Election Study Team
12/11/2015

The release of the British Election Study post-election face-to-face survey allows us to revisit the question of why the polls went wrong before the 2015 General Election. Based on our internet panel, we previously examined five possible explanations for the polling miss and concluded that the main culprits were differential turnout and the representativeness of polling samples. We stand by those conclusions, but based on our new data we now think that the representativeness of samples is even more important than we previously suggested: the primary reason the polls went wrong before the election is that they were drawn from unrepresentative samples. In this blog we focus on the representativeness of samples; you can find an updated version of our full paper – including further evidence against the ‘Shy Tory’ theory – here.

The BES face-to-face survey uses a random sample design – the gold standard of survey research methods – and was designed to try to reach respondents who are usually hard to include in surveys and to maximise representativeness (for more details on the design and representativeness of the survey, see here). By comparing the face-to-face survey with our internet panel (which shows very similar results to the pre-election polls), we can assess where the polls failed to achieve representativeness and how this affected their results.

To start with, we can see that the face-to-face survey was much closer to the actual result than the internet panel. Unlike the panel survey, which underestimated the Conservative–Labour lead by more than 6 points, the face-to-face survey actually overestimated the Conservative lead, by 1.47 points. The face-to-face survey is not perfect: it overestimates the Conservative and Labour shares of the vote and underestimates the Liberal Democrat and UKIP shares. We will have to wait until we have finished our vote validation process before we can say whether this reflects systematic error, people misreporting whether they voted, or simply sampling error. Nonetheless, the errors are considerably smaller than in the internet panel.

[Figure: estimated vote shares in the face-to-face survey and the internet panel compared with the 2015 election result]

Why does the face-to-face survey do a better job of estimating the vote shares than the panel? In short, we think it is because it is much better at getting people who did not vote in the election to answer our survey: reported turnout in the face-to-face survey is 73.3%, higher than the actual turnout at the election but considerably lower than the 91.2% reported turnout in the panel. It may seem counter-intuitive that including more people who did not vote affects the apparent party support amongst those who did, but the reason is actually quite simple.

All surveys, whether they are conducted in person, by phone, or on the internet, aim to achieve a representative sample. They might do this in a number of ways, such as using a random sampling design or sampling quotas to ensure there are the correct numbers of certain types of respondents. Once the survey has been conducted, any remaining deficiencies in representativeness can be corrected by weighting the data.
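As a rough illustration of that weighting step, the sketch below (in Python, with invented age-group figures rather than any BES or census data) computes the standard cell weight: a group's share of the population divided by its share of the sample.

```python
# Minimal sketch of demographic weighting (invented figures, not BES or census data):
# each respondent receives weight = population share / sample share for their group,
# so the weighted sample matches the chosen population targets.

population_share = {"18-34": 0.28, "35-54": 0.34, "55+": 0.38}  # census-style targets
sample_share     = {"18-34": 0.22, "35-54": 0.33, "55+": 0.45}  # shares observed in the survey

weights = {group: population_share[group] / sample_share[group] for group in population_share}

for group, weight in weights.items():
    print(f"{group}: weight = {weight:.2f}")
# Under-represented groups (here 18-34) are weighted up (> 1);
# over-represented groups (here 55+) are weighted down (< 1).
```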

Demographic targets for quotas and weighting are usually taken from information about the British population as a whole, from sources like the census. However, those who turn out to vote are not actually representative of the population as a whole – they tend to be older, more affluent, and better educated. The problem for political polling is that those who turn out to vote are also more likely to answer surveys. This may sound like an odd ‘problem’ but, when combined with survey targets based on the population rather than the electorate, it can lead to a distortion in the polls. For example, if young people who vote are more likely to answer surveys than young people who do not vote, then a survey might end up with too many young voters, even when it appears to have the right number of young people. Given that young people tend to be more Labour leaning, this might inflate the Labour share of the vote.
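To make the mechanism concrete, here is a toy calculation (invented numbers, not BES estimates): the hypothetical sample below contains exactly the right share of young people, but because its young respondents are disproportionately voters, the implied Labour share among voters comes out too high.

```python
# Toy illustration (invented numbers, not BES estimates) of how a sample can look
# demographically correct overall while over-representing young *voters*.

young_share = 0.30                       # share of young people, same in sample and population
old_turnout = 0.75                       # turnout among older people, assumed correct in the poll
labour_young, labour_old = 0.55, 0.35    # Labour share among young and older voters

def labour_share_among_voters(young_turnout):
    """Labour share among those who voted, given the turnout rate of young people."""
    young_voters = young_share * young_turnout
    old_voters = (1 - young_share) * old_turnout
    labour_votes = young_voters * labour_young + old_voters * labour_old
    return labour_votes / (young_voters + old_voters)

true_young_turnout = 0.40    # in the real electorate, relatively few young people vote
poll_young_turnout = 0.70    # in the poll, the young respondents are mostly voters

print(f"True Labour share among voters:   {labour_share_among_voters(true_young_turnout):.1%}")
print(f"Polled Labour share among voters: {labour_share_among_voters(poll_young_turnout):.1%}")
# The poll has the right number of young people, but too many of them are voters,
# so the Labour share among voters is overstated.
```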

The table below illustrates this problem by comparing the distribution of age in the face-to-face and panel surveys. When looking at the full sample (voters and non-voters), both surveys are very similar and representative of the population. However, if we only look at those who said they voted, large differences emerge between the two surveys. In particular, there are more young voters and fewer older voters in the panel.

[Table: age distribution of the full sample and of reported voters, face-to-face survey vs internet panel]

If we compare the full and voter-only columns for each survey, it is easy to see why: the full and voter-only columns for the face-to-face survey differ from each other much more than the same columns do for the panel. The sum of the absolute differences between the full and voter-only samples is 14.4 for the face-to-face survey but only 3.9 for the panel. Given that older people vote at much higher rates than young people, we should expect the sort of differences between the full sample and the voter-only sample that the face-to-face survey shows. The panel does not contain enough non-voting respondents, and so these differences do not emerge.
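For readers who want the arithmetic spelled out, the sum of absolute differences is simply the per-age-group gaps between the full-sample and voter-only percentage distributions, added together. A minimal sketch using placeholder percentages (not the values from the table above):

```python
# Sum of absolute differences between the full-sample and voter-only age
# distributions (placeholder percentages, not the values from the table above).

full_sample = {"18-24": 12.0, "25-34": 17.0, "35-44": 16.0,
               "45-54": 18.0, "55-64": 15.0, "65+": 22.0}
voters_only = {"18-24":  8.0, "25-34": 14.0, "35-44": 15.0,
               "45-54": 18.0, "55-64": 17.0, "65+": 28.0}

total_abs_diff = sum(abs(full_sample[g] - voters_only[g]) for g in full_sample)
print(f"Sum of absolute differences: {total_abs_diff:.1f} percentage points")
# A larger value means the survey's voters look more different from its full sample --
# exactly what we should see when non-voters are properly covered.
```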

In our full paper we conduct the same analysis on other demographic groups, including subjective social class, income, and working status, and find similar results. These all point to the same conclusion: because surveys undersample those who do not vote, by making a survey look representative of the population, pollsters may actually be making their polls less representative of those who turn out to vote.

  • Gareth Davies

    Jon and Chris, is there a danger here of creating a methodology that is good for a past event (the 2015 election) but not so good for future events (the 2019/20 election)? The low turnout of younger voters hurt Labour but, surely, there is nothing fixed or innate about low turnout among younger voters. The lesson Labour has taken from defeat is the need to mobilise hitherto unreachable parts of the electorate. How would a potentially new ferment be captured by your approach? Presumably your approach will just result in more sophisticated post-survey adjustment of the same old telephone, internet and panel samples, because face-to-face surveying is too expensive for the fast-food polling culture.

    • Jon Mellon

      Thanks for your comment. There’s definitely a danger in any fix, and if the underlying patterns in the electorate change, then a fix that might be good for one election could work less well in future.

      The up-weighting of non-voters is an important fix that doesn’t assume anything about the composition of non-voters. The tricky part comes in identifying who these non-voters are, which requires prediction prior to elections. This is the one part where we currently do make assumptions that the demographics of non-voters will be somewhat similar across elections.
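      One way to read the up-weighting idea is as an extra weighting dimension: scale reported voters and non-voters so that reported turnout matches an external estimate of actual turnout. A rough sketch under that reading (illustrative figures, not the BES weighting scheme):

      ```python
      # Rough sketch of up-weighting non-voters so that reported turnout in a survey
      # matches an external turnout estimate (illustrative figures, not the BES scheme).

      reported_turnout = 0.912   # e.g. reported turnout in the internet panel
      target_turnout = 0.66      # external estimate of actual turnout (illustrative)

      voter_weight = target_turnout / reported_turnout
      nonvoter_weight = (1 - target_turnout) / (1 - reported_turnout)

      print(f"Weight for reported voters:     {voter_weight:.2f}")
      print(f"Weight for reported non-voters: {nonvoter_weight:.2f}")
      # Non-voters are scarce in the sample and so get a much larger weight; the hard
      # part, as noted above, is predicting who the non-voters will be before election day.
      ```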

      We would therefore suggest that pollsters run diagnostic checks. If engagement hugely increases among the young, we’ll get signs of this in polling and in second-order elections in the run-up to 2020. If that was the case, then we would probably want to remove age as a factor in turnout and would want to check that any weighting correction that included turnout is robust to a higher overall turnout.

      However, I’m skeptical that we would see as large a change as that in a single election. Previous work suggests that people’s turnout is fairly consistent across elections, so while we could get a very engaged first-time electorate in 2020 (although I’m not saying that we will), it would be unprecedented for the turnout of other young voters to increase as much. Therefore it’s still very likely that young voters would be less engaged overall in 2020, even if Corbyn’s approach has some success.

      • Gareth Davies

        Thanks Jon. Your explanation is really helpful.

        Is there an increased acceptance amongst psephologists and pollsters that, at least in the 2015 election, the misleading/incorrect pre-election polls may have impacted the actual election result?

        By consistently suggesting that the contest would be too close to call, did the polls stimulate a greater voter turnout for the Tories (leveraging more engaged groups of voters) and also reduce turnout in respect of Labour support (amplifying complacency amongst less engaged groups of voters)?

        Can the survey data clarify whether or not the pre-election polls did have this feedback effect? It’s certainly generally recognised that how the polls were received by the media – as well as by politicians and the public – did powerfully shape the nature of the public debate in the weeks before the election, possibly to the detriment of a balanced engagement with the issues that mattered.

        Could, moving forward, polling methodology contain some sort of voter-intention impact measure – questions that try to capture the likelihood of voting on the basis of complacent acceptance, or active rejection, of the predicted result?

    • carlos jones

      Whether or not the young will be energised by the very 20th-century socialist dogmas advocated by Corbyn is very open to doubt.