
Youthquake – a reply to our critics

The British Election Study Team
12/02/2018

In the wake of the release of the 2017 BES face-to-face data, and our analysis that showed no evidence of a surge in youth turnout at the 2017 election, some commentators have challenged our findings.  

The key message of our original article (published on BBC online and also as a working paper here) was simply that the best evidence we have to date on the age-turnout relationship comes from the British Election Study face-to-face validated sample, and this shows no surge in youth turnout. Indeed, as might be expected with a small change in overall turnout, the change was fairly evenly spread across age groups: the 2017 age-turnout relationship looked much as it has for many years. We also showed that there was a notable surge in Labour voting amongst the younger sections of the electorate that did vote (and in Conservative voting amongst the older sections that voted). When we embarked on this exercise we expected to see evidence of a youthquake, but the data did not support it. As social scientists, our duty is to report what we see. In their criticism of our findings, some colleagues seem to have misunderstood various aspects of our analysis, and others seem not to have noticed key aspects that we emphasised.

There are three key concerns:

  1. That we ignore other polling evidence or wider evidence of a youthquake.
  2. That our sample size is too small to say anything specific about turnout amongst young people.
  3. That we ignore the fact that Labour voting went up dramatically amongst young people.

We respond to each in turn.


1) Other evidence of youthquake

As far as we are aware, there are three survey-based claims of a surge in youth turnout, all of which we refer to in our paper: 1) An NME ‘exit poll’ that claimed a 12 point rise in 18 to 24 year old turnout. 2) An Ipsos MORI estimate of a 16 point rise in 18 to 24 year old turnout. 3) An estimate of a 19 point rise in 18 to 29 year old turnout from the Essex Continuous Monitoring Survey (CMS).

How reliable are each of these sources?

NME reports that its figures were ‘obtained by The Stream… a nationally representative panel of millennials, surveying 1,354 respondents in total, all aged between 18-34.’ Very little information is available about The Stream, though it appears to be a commercial audience research panel launched in 2016. We cannot find any methodological detail about how these surveys are conducted, so we have no basis on which to judge their reliability, but some major questions naturally arise. The most pertinent is how a panel that seems to have begun in 2016 can compare 2017 turnout with what happened in 2015. It is also clear that this cannot be an ‘exit’ poll as described, because exit polls are conducted as people leave polling stations having voted. By design, exit poll respondents are only those who have turned out to vote, so such a poll cannot be used to measure turnout.

The Ipsos MORI data are not from post-election polling. We note that Ipsos MORI is the only source showing a rise in youth turnout to have released detailed information about its methodology and findings, and it should be applauded for doing so. Ipsos MORI make clear that ‘these are estimates only, based on people’s answers to pre-election surveys during the campaign.’ They are themselves very cautious about what they say about turnout, noting that:

‘…estimating turnout is one of the hardest challenges when relying solely on survey data… polls may still be more likely to interview politically engaged people than those who are disengaged, people may over-estimate their likelihood of voting, and they may think they are registered when in fact they are not. For post-election analysis we have the advantage of knowing how many people turned out in the end, but we still have to identify them in our data on the basis of the answers they gave before the election. This means that the turnout estimates given below should be treated with particular caution, including taking into account the voter validation results from the British Election Study when these are published.’

Without intending any criticism of Ipsos MORI, who have been admirably cautious in what they have said about turnout, the one thing we do know about their pre-election polls is that they, along with almost all others, were wide of the mark in both 2015 and 2017.

Ipsos MORI also report a 21 point rise, a figure used by James Sloam and Rakib Ehsan in their report arguing that there was a youthquake. This 21 point figure is an estimate of turnout amongst registered voters only. Ipsos MORI themselves say that the 16 point rise they report amongst all resident 18 to 24 year olds ‘is both more reliable and more meaningful’. Our turnout estimates are also made as a proportion of the eligible population, not just those who are registered. We do not know which type of turnout the NME exit poll or the Essex CMS are reporting.

The data from the Essex CMS are not publicly available. We know that their surveys are conducted by YouGov (as is our own BES internet panel) and are a continuation of the CMS that formed part of the BES prior to 2014, when the BES was run from Essex. This sort of data is invaluable for examining the evolution of attitudes over time. Our own experience suggests, however, that one thing it is not as good at is measuring turnout. Over-sampling of voters is a very common problem with both non-probability internet and telephone polls, and was indeed one of the main reasons behind the 2015 polling miss. This is why we do not analyse turnout using our own 30,000 person BES internet panel, despite its considerably larger sample size. The pre-2014 BES CMS data are available online: in the June 2010 wave, the last post-election dataset available for us to examine, 91% of the sample claim to have voted.

Peter Kellner, Marianne Stewart, Harold Clarke, Matthew Goodwin and Paul Whiteley (Stewart et al.) discuss the high quality of YouGov data, which was used to forecast the outcome of the 2017 election successfully. YouGov made considerable efforts to improve their sampling after the 2015 polling miss. While this might improve estimates of turnout in 2017, it makes the task of comparing turnout with the 2015 data challenging and prone to error. Although YouGov published a breakdown of turnout by age in 2017, they did not do so in 2015, meaning we cannot assess what they say about the change in turnout. We should note, however, that the 2017 age-turnout relationship in the YouGov data is very different from that in the Ipsos MORI data.

YouGov’s multilevel regression and poststratification (MRP) model did indeed perform remarkably well in 2017 (unlike YouGov’s conventional polls, which had errors like many others). One important feature of the MRP model is that it estimated turnout for particular demographic groups based on 2010 and 2015 BES face-to-face data and assumed that they would not change much in 2017. In other words, the only successful 2017 election forecast was made on the assumption of no youthquake.
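To make the poststratification step concrete, here is a minimal sketch in Python. The cells, population counts, and probabilities below are invented for illustration and are not YouGov’s figures; their actual model fits a multilevel regression over many demographic and constituency variables to produce the cell-level estimates.

```python
# Minimal sketch of the poststratification step in MRP (illustrative only:
# the cells, counts, and probabilities are hypothetical, not YouGov's).
# A multilevel regression supplies per-cell turnout and vote-choice
# probabilities; poststratification weights them by each cell's share of
# the population, so the turnout assumptions directly shape the forecast.

cells = [
    # (population count, est. turnout probability, est. Labour share of voters)
    (1200, 0.45, 0.60),  # e.g. a younger demographic group
    (2100, 0.65, 0.45),
    (1700, 0.80, 0.30),  # e.g. an older demographic group
]

expected_voters = sum(n * turnout for n, turnout, _ in cells)
labour_votes = sum(n * turnout * labour for n, turnout, labour in cells)

print(f"Predicted Labour share: {labour_votes / expected_voters:.1%}")
```

Because each cell’s contribution is multiplied by its turnout probability, holding those probabilities at the 2010 and 2015 levels is exactly what it means for the forecast to assume no youthquake.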

YouGov’s success with MRP further demonstrates that the future of research probably lies in combining high quality traditional surveys with cheaper large scale surveys. The large scale online surveys are most useful when they can be calibrated against representative data.


2) Sample size and uncertainty

It has been argued that our sample size is too small. As Kellner puts it, ‘we can say nothing sure about the change in turnout among under 25s in these two elections.’ There is indeed a large degree of uncertainty around our data, which is precisely why we avoid making specific claims about the level and change of turnout in either year. In our BBC article we write:

‘Among the youngest voters, the margin of error means that we cannot rule out a small increase – or decrease – in 2017. In both years, turnout among the youngest voters was between 40% and 50%.’

Similarly, our graph of the estimated relationship between age and turnout in our paper and its reproduction on the BBC is drawn with confidence intervals, making it clear that there is considerable uncertainty in our turnout estimates. This is rather different to claims of a surge in youth turnout which have so far been reported without discussion of uncertainty or error.
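To illustrate why a small subsample produces such a wide interval, here is a back-of-the-envelope calculation in Python. The counts are hypothetical stand-ins rather than the actual BES subsample sizes, and the real analysis uses survey weights and design-based standard errors.

```python
import math

# Hypothetical subsample: 80 young respondents, 36 of them validated voters.
# (Illustrative numbers only, not the actual BES counts.)
n, voted = 80, 36
p = voted / n                          # point estimate of turnout
se = math.sqrt(p * (1 - p) / n)        # simple-random-sampling standard error
lo, hi = p - 1.96 * se, p + 1.96 * se  # 95% confidence interval
print(f"turnout = {p:.0%}, 95% CI ({lo:.0%}, {hi:.0%})")  # 45%, (34%, 56%)
```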

Our critics have homed in on the weakest part of our analysis: the pairwise comparison of turnout amongst 18-24 year olds in 2015 and 2017. The sample size for this comparison is indeed small and the uncertainty high, something we make no effort to hide. Were our conclusions based on this single comparison, that would indeed be a problem. But it is just one of many ways we analyse our data.

We compare turnout using broader age bands and we analyse the overall age-turnout gradient using non-parametric and regression-based statistical methods. We run all of these comparisons using both our self-reported vote data and our validated vote data. All of these comparisons were made to increase the power of our statistical analysis. In all of these, the answer is the same: there is no evidence of a substantial change in the relationship between age and turnout in 2017.
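As a rough illustration of the regression-based version of this check, the sketch below simulates respondents with a stable age-turnout gradient and tests whether that gradient differs by year. The variable names and data-generating process are hypothetical; the actual analysis uses the BES validated-vote data with survey weights.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate respondents with the SAME age-turnout gradient in both years,
# i.e. no youthquake. (Hypothetical data, not the BES sample.)
rng = np.random.default_rng(0)
n = 4000
df = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "year": rng.choice([2015, 2017], n),
})
p_vote = 1 / (1 + np.exp(-(-1.5 + 0.04 * df["age"])))
df["voted"] = rng.binomial(1, p_vote)

# A youthquake would show up as a significant age-by-year interaction:
# the age gradient would flatten in 2017 as young turnout rose.
fit = smf.logit("voted ~ age * C(year)", data=df).fit()
print(fit.params)
```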

The margin of error does not mean that any value lying within it is equally likely to be true. The statistical properties of sampling error are well defined and allow us to test specific hypotheses. We can, for example, estimate the probability of observing the data we do in the 2015 and 2017 face-to-face surveys if the proposed 19 point increase in turnout amongst 18 to 29 year olds were true.

The answer is 0.03%.
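To show the logic of such a test, here is a sketch using a simple two-proportion comparison. The subsample sizes and turnout figures are hypothetical stand-ins, so it will not reproduce the 0.03% figure exactly; the real calculation uses the actual BES subsamples and weights.

```python
import math
from scipy.stats import norm

# Hypothetical 18-29 subsamples (not the actual BES numbers):
n_2015, p_2015 = 250, 0.44   # 2015: 250 respondents, 44% validated turnout
n_2017, p_2017 = 230, 0.46   # 2017: 230 respondents, 46% validated turnout

observed_rise = p_2017 - p_2015          # ~2 points observed
claimed_rise = 0.19                      # the proposed 19 point surge
se = math.sqrt(p_2015 * (1 - p_2015) / n_2015
               + p_2017 * (1 - p_2017) / n_2017)

# Probability of seeing a rise this small or smaller if the true
# increase really were 19 points:
z = (observed_rise - claimed_rise) / se
print(f"{norm.cdf(z):.5f}")
```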

We transparently acknowledge that there is a wide degree of uncertainty due to sampling error, but we think it better practice to report less biased data with uncertainty than heavily biased data with a high level of precision. In other words, it is better to be approximately correct than precisely wrong. Had precision been our only concern, we could have reported turnout from our large panel study, which has a tiny margin of error. But the BES face-to-face survey is designed specifically to maximise representation of those less interested in politics, as well as the politically engaged: it is a random probability sample collected across almost 300 constituencies, with a highly respectable response rate of 46% despite the snap election, and it includes vote validation.


3) Age and vote choice

Perhaps the most perplexing criticism we have encountered is that we are ignoring the importance of age at the 2017 election.

Rakib Ehsan, James Sloam, and Matt Henn write that ‘To consider (and dismiss) the idea of a youthquake solely on the basis of turnout is rather narrow.’ Stewart et al. say ‘Of course, increased turnout is only one part of Youthquake advocates’ claim. Another consideration concerns massive age differences in Labour (and Conservative) support in 2017.’

By examining turnout we are not trying to narrow the definition of youthquake but are following the claims made by others. Sloam and Ehsan write that ‘The 2017 general election result was described as a ‘youthquake’ – a shock result founded on an unexpected surge in youth turnout.’

Furthermore, far from ignoring the importance of age to vote choice in 2017, we reported the strengthening relationship between age and vote. Figure 3 in our paper and two of the three charts in our BBC article and blog show the relationship between age and vote. We clearly highlighted this relationship and made it central to our writing and analysis, so it is a pity that it has been overlooked.

Our data suggest that Labour’s share of the vote increased amongst all voters under the age of 70. Additionally, there was a sharp increase in Conservative voting amongst the over 60s. The idea that the result of the 2017 election can be explained simply by looking at the voting behaviour of the youngest group of voters misses most of the story.


Turnout and youthquake

The size of the increase in youth turnout claimed by proponents of ‘youthquake’ is enormous, and such claims need good evidence to back them up. What is the evidence for a large surge in youth turnout?

  1. A small relationship at the aggregate level between the number of young adults living in a constituency and the increase in turnout in 2017. As we show in our analysis, we should be incredibly cautious in drawing conclusions about individual behaviour from aggregate data: there is a stronger relationship between the number of toddlers living in a constituency and turnout change than there is with young adults, and once you control for population density the apparent relationship between young adults and turnout change disappears (see the sketch after this list).
  2. Reweighted pre-election polls that are known to have been wrong, that have particular problems with over-sampling voters, and whose methodological changes between elections will likely influence comparisons between years.
  3. Two post-election polls whose methodology is unclear and for which analysis by others is not yet possible.
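To illustrate the confounding problem in point 1, the simulation below builds constituencies in which population density drives both the share of young adults and the change in turnout. The naive aggregate regression then shows a ‘relationship’ with young adults that vanishes once density is controlled for. The variable names and data-generating process are hypothetical, not our actual constituency data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated constituencies where density drives BOTH the young-adult share
# and the turnout change; young adults have no direct effect at all.
rng = np.random.default_rng(1)
n = 632  # roughly the number of GB constituencies
density = rng.normal(size=n)
df = pd.DataFrame({
    "density": density,
    "young_share": 0.6 * density + rng.normal(size=n),
    "turnout_change": 0.5 * density + rng.normal(size=n),
})

naive = smf.ols("turnout_change ~ young_share", data=df).fit()
controlled = smf.ols("turnout_change ~ young_share + density", data=df).fit()
print(naive.params["young_share"])       # spuriously positive
print(controlled.params["young_share"])  # near zero once density is held fixed
```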

Turnout only increased by a few points between 2015 and 2017. A youthquake suggests that this increase was almost entirely concentrated amongst young voters, perhaps energised by the pledge to abolish tuition fees (although only one third of 18 to 24 year olds are university students) or by opposition to Brexit. If this were the case, it would imply that almost all of the Leave-supporting voters who were mobilised to vote for the first time by the referendum were happy to stay at home again on Election Day. An alternative explanation is that, following the EU referendum (which saw a much larger increase in turnout over 2015 than the 2017 election did), voters from a wide range of ages and with different political views remained mobilised and voted again in 2017 because Brexit was an important issue in the election. This would fit the vast academic literature showing that voting is habit forming: people who vote once tend to go on voting in later elections.

There are other reasons to be sceptical about attributing Labour’s dramatic 2017 vote increase to a sharp rise in youth turnout. In our paper we highlight the fact that people aged 18 to 24 make up 11% of the electorate, about 5.2 million people in total, while Labour won 3.5 million more votes in 2017 than in 2015. If we are wrong about the level of turnout amongst young people and it was in fact as high as 72% (a level higher than our critics suggest, and one claimed on election night without any apparent basis in evidence), this would amount to about 1.2 million more young voters than in 2015. Even if every single one of these new voters voted Labour (an absurdly strong assumption), this would account for only about one third of Labour’s vote gains in 2017, and it does not take into account countervailing flows away from Labour. This is the upper bound of possible youthquake effects: even if there was a massive increase in youth turnout that our data have not picked up, it would still leave most of the increase in Labour’s vote unexplained.
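The upper-bound arithmetic above can be laid out explicitly. The 49% starting point for 2015 is an assumption consistent with our 40% to 50% estimate; the other figures are as stated in the paragraph.

```python
# Upper bound on what a youthquake could explain (figures as in the text;
# the 2015 turnout level is an assumption within the BES 40-50% range).
young_electorate = 5.2e6   # 18-24 year olds, about 11% of the electorate
labour_gain = 3.5e6        # Labour's additional votes, 2017 vs 2015
turnout_2015 = 0.49        # assumed 2015 level (within our estimated range)
turnout_2017 = 0.72        # the unevidenced election-night claim

extra_young_voters = young_electorate * (turnout_2017 - turnout_2015)
print(f"extra young voters: {extra_young_voters / 1e6:.1f} million")   # ~1.2m
print(f"share of Labour's gain if every one voted Labour: "
      f"{extra_young_voters / labour_gain:.0%}")                       # ~34%
```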

Where all sides agree is that there was a dramatic change in the relationship between age and vote choice. This is reflected both in non-probability samples such as the BES Internet Panel and the Essex CMS, and confirmed by the BES face-to-face survey. This is an important change that is vital to study, and we look forward to reading the work of our colleagues to help understand this phenomenon.