The polls: getting it right (or not)

Ipsos MORI had a mixed election day. Our exit poll for the broadcasters (which we conducted in conjunction with GfK NOP and a team of academic political scientists, who make the predictions from the data we collect) was a jaw-dropping success. Once again, almost nobody believed it. Paddy Ashdown, former Liberal Democrat leader, announced he would eat his hat, and by the next morning was doing so on live TV. But our pre-election poll in the Evening Standard, like all the other pre-election polls, was off the mark: we had the Conservatives on 36% instead of 37.7%. It was not a complete failure: we were right about the Liberal Democrats, UKIP and the SNP in Scotland – all of which we had worried about before the election.

But on the gap between Labour and the Tories, which is the most important thing to get right, we were too high on the Labour share in particular, putting them on 35% instead of 31%. We take little pleasure in being the least inaccurate pollsters in Britain.

If we can get the exit poll right, we are sometimes asked, why can’t we get the ‘normal’ polls right? But they are entirely different types of exercise, facing different challenges. Exit polls have difficulties of their own, but by interviewing voters as they come out of polling stations they avoid the biggest difficulty we face in a national poll, which is distinguishing between those people who are going to vote and those who are not.

So what did go wrong with the pre-election polls, including our final poll for the Evening Standard? Many of the ‘explanations’ offered – small sample sizes, not mixing telephone and internet samples, or not including candidate names in the poll questions – do not stand up to much scrutiny. The combined sample size of the various polls during the campaign ran well into six figures, and they all found much the same result; there was no significant difference between the telephone and internet poll findings; and the constituency polls that did include candidate names were, on the whole, even worse than those that didn’t!

Nor does it make much sense to suggest that our participants were lying to us about their plans to vote Labour – after all, if they were doing that in England, why weren’t they doing it in Scotland, where, if anything, the polls underestimated Labour’s vote? We would like to be able to say that there was a last-minute swing to the Conservatives and that the polls were right at the moment they were taken – that would give us some excuse for being wrong – but the evidence seems to be against it: the polls that did look for movement in the last 24 hours didn’t find any.

Looking at our final poll, it is fairly easy to see what went wrong. Contrary to the general supposition, we did not underestimate the number of Conservative voters: we overestimated the number of Labour voters and, crucially, underestimated the number who would not vote at all.

Why? We overstated the Labour vote because we overestimated the proportion of Labour supporters who would vote; and that was because we relied on how certain they said they were to vote. At the last three elections, and at the Scottish Referendum of 2014, our final poll on the eve of polling day had shown on average around ten percentage points more claimed turnout than actually materialised, leading us to believe the gap was fairly constant. This time, 82% told us the day before that they were certain to vote – some 84% of Conservative supporters and 86% of Labour supporters. That claimed certainty was the only substantial change between our polls in 2010 and 2015.

A close election, we thought, with enthused Labour supporters. But we were wrong: turnout barely rose, up just one percentage point to 66%. For some reason Labour voters were much more likely than usual to overstate their likelihood of voting. The over-claim rose from 11 points in 2010 to 16 points, and it was concentrated among Labour supporters. So we are fairly sure that the problem with our polls in 2015 was that three people in every 100 told us they were going to vote Labour and then did not vote at all.
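To make that arithmetic concrete, here is a rough back-of-envelope sketch in Python – purely illustrative, and not our actual weighting or adjustment method. It takes the headline shares from our final poll and the actual GB shares quoted above (the ‘Other’ figure is simply the residual) and asks what the headline figures would have looked like if three of every 100 respondents who said they would vote Labour had in fact stayed at home.

```python
# Back-of-envelope sketch only: not Ipsos MORI's actual adjustment, just the
# arithmetic implied by "three people in 100 said Labour but did not vote".

poll = {"Conservative": 36.0, "Labour": 35.0, "Other": 29.0}   # final poll shares (%); Other is the residual
actual = {"Conservative": 37.7, "Labour": 31.0}                # actual GB vote shares (%)

# Treat 3 points of the Labour share as people who did not vote, then
# re-percentage the remaining 97 points of genuine voters.
over_claim = 3.0
base = 100.0 - over_claim
adjusted = {
    party: ((share - over_claim) if party == "Labour" else share) / base * 100
    for party, share in poll.items()
}

for party in ("Conservative", "Labour"):
    print(f"{party}: poll {poll[party]:.1f}%, "
          f"adjusted {adjusted[party]:.1f}%, actual {actual[party]:.1f}%")
# Conservative: poll 36.0%, adjusted 37.1%, actual 37.7%
# Labour: poll 35.0%, adjusted 33.0%, actual 31.0%
```

On those illustrative numbers, removing the phantom Labour voters widens the poll’s Conservative lead from one point to around four – most, though not all, of the way towards the 6.7-point gap in the actual result.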

Some of the other pollsters have come to similar conclusions from their own data, and initial analysis of the British Election Study sample supports them, although we are still waiting for the conclusions of the British Polling Council investigation, which will report next year.

Unfortunately, understanding what the problem was does not offer an immediate solution, but we are working on it. The challenge in redesigning our polls for the next election is to find better ways of distinguishing people who will vote from people who won’t. We have already found that tightening the criteria for including participants in our final poll would have dealt with the problem: counting only those who were both certain to vote and told us they “always” vote would have given us very accurate figures in 2015 (a rough sketch of that kind of filter follows below). All in all, it’s a reminder that polling is as much art as science – and a timely one, as we prepare for the next big test of polling: the EU Referendum.
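For illustration, the tightened filter described above amounts to requiring two conditions rather than one before a respondent counts towards the headline figures. The sketch below is hypothetical – the field names and the 1–10 certainty scale are assumptions made for the example, not our actual questionnaire coding.

```python
# Hypothetical sketch of a tightened turnout filter. Field names and the
# 1-10 certainty scale are illustrative assumptions, not Ipsos MORI's
# actual questionnaire coding.
from dataclasses import dataclass
from collections import Counter

@dataclass
class Respondent:
    vote_intention: str      # e.g. "Conservative", "Labour", "Lib Dem", ...
    certainty_to_vote: int   # 1-10 scale, 10 = "absolutely certain to vote"
    usually_votes: str       # e.g. "always", "usually", "sometimes", "never"

def filter_2015(r: Respondent) -> bool:
    """2015-style filter: include anyone who says they are certain to vote."""
    return r.certainty_to_vote == 10

def filter_tightened(r: Respondent) -> bool:
    """Tightened filter: certain to vote AND reports always voting."""
    return r.certainty_to_vote == 10 and r.usually_votes == "always"

def headline_shares(sample, include) -> dict:
    """Vote shares (%) among respondents who pass the turnout filter."""
    voters = [r.vote_intention for r in sample if include(r)]
    counts = Counter(voters)
    return {party: 100 * n / len(voters) for party, n in counts.items()}
```

Applied to 2015, the stricter condition would have screened out many of those who said they were certain to vote Labour but, in the event, stayed at home.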