At the time of writing, Labour is projected to win more than the 61 seats needed to govern alone. Statistician Peter Ellis calculates a 0.1 per cent chance that National can form the next government. These numbers may sound fanciful, whatever your politics, but they are based on highly credible data from the country's two most successful polling companies.
In the past nine months, 1News/Colmar Brunton and Newshub/Reid Research have released a total of seven polls between them. They have told more or less the same story. In the aftermath of the first lockdown, support for Labour reached historic levels, while National collapsed to under 30 per cent. Act has surged, the Greens are perilously close to the threshold, and NZ First languishes around 3 per cent.
With Labour ahead by such a wide margin, the election appears to be more or less a foregone conclusion. But is it really? In 2017, the final Reid Research poll's estimates of support for the main parties differed from the official result by an average of just 0.7 percentage points. Colmar Brunton and Roy Morgan were out by an average of 1.4 and 2.7 points respectively.
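For the curious, an "average discrepancy" of this kind is simply the mean absolute difference between a poll's party estimates and the official result. A minimal sketch in Python – the official figures approximate the actual 2017 result, but the poll figures are invented for illustration:

    # Mean absolute error between a poll and the official result.
    # Poll figures are invented; official figures approximate the
    # actual 2017 party vote.
    poll = {"Labour": 37.5, "National": 45.0, "Greens": 7.0, "NZ First": 5.0}
    official = {"Labour": 36.9, "National": 44.4, "Greens": 6.3, "NZ First": 7.2}

    errors = [abs(poll[party] - official[party]) for party in poll]
    print(f"Average discrepancy: {sum(errors) / len(errors):.1f} points")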
While these differences are usually within the reported margins of sampling error, a percentage point or two can be crucial. If, for example, National had maintained its election-night support of 46 per cent in the final count, it is quite possible Bill English would still be the Prime Minister. That is why polls are more useful for reading trends than for making predictions.
In 2020, commentators and journalists have dismissed the possibility of a National victory. The received wisdom is that most voters have now made up their minds and the next month is unlikely to see much change in public opinion. But this overlooks the number of undecided and wavering voters. In the 2017 NZ Election Study, for example, around 20 per cent reported making up their minds during the final week (including election day itself).
In the last Colmar Brunton poll, 10 per cent of respondents said they were undecided and 4 per cent refused to answer. The headline results (e.g. Labour 53 per cent) are calculated by excluding those who either "don't know" or refuse to say. If the undecideds were included in the base of the calculation, Labour would be on 47 per cent. Those undecided voters could at least determine whether or not Labour governs alone. Furthermore, it is impossible to know how committed individual respondents are to voting a particular way – or even to voting at all.
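The rebasing arithmetic behind that 47 per cent is straightforward. A minimal sketch, using the rounded published figures (the pollster's unrounded data will land it slightly differently):

    # Rebasing the headline figure to include undecided respondents
    # (refusals remain excluded). Published figures are rounded, so
    # the result is approximate.
    labour_headline = 53.0   # % among decided respondents
    undecided = 10.0         # % of all respondents
    refused = 4.0            # % of all respondents

    decided_base = 100 - undecided - refused        # 86% of respondents
    labour_share_of_all = labour_headline * decided_base / 100   # ~45.6
    new_base = decided_base + undecided             # 96%
    print(f"{labour_share_of_all / new_base * 100:.1f}%")  # about 47.5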
Although respondents are asked "how likely" they are to vote, neither Colmar Brunton nor Reid Research takes into account the effect of non-voting. In other words, no assumption is made about the probability that someone will vote based on their demographic profile. This means that while their samples are representative of the general population, it is difficult to know how representative they are of the voting public.
Some groups are a lot more likely to vote than others. For example, over-70s had a turnout rate of 86 per cent at the last election, compared with only 69 per cent for 18-24-year-olds. It is possible that unrepresentative sampling of certain age groups might explain historic discrepancies between polled and actual support for NZ First and the Greens. Last time, Colmar Brunton underestimated support for NZ First by a significant 2.3 points, while Roy Morgan overestimated Green support by 2.7 points.
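To illustrate what a turnout adjustment would look like (neither pollster does this, as noted above), here is a hypothetical sketch that weights respondents by their age group's turnout rate. Only the two turnout figures come from the paragraph above; the sample shares and support figures are invented:

    # Hypothetical turnout weighting by age group. Only the 86% and
    # 69% turnout rates come from the column; the rest is invented.
    groups = {
        # age group: (share of sample, turnout rate, % support for party X)
        "18-24":   (0.10, 0.69, 45.0),
        "over-70": (0.15, 0.86, 25.0),
        "25-69":   (0.75, 0.78, 35.0),   # assumed turnout for this group
    }

    unweighted = sum(s * v for s, _, v in groups.values())
    weighted = (sum(s * t * v for s, t, v in groups.values())
                / sum(s * t for s, t, _ in groups.values()))
    print(f"Unweighted: {unweighted:.1f}%")         # raw sample estimate
    print(f"Turnout-weighted: {weighted:.1f}%")     # likely-voter estimate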
The reported margin of sampling error typically means we can be 95 per cent confident that a poll is no more than "plus or minus" a few percentage points from true public opinion. However, that figure applies to a result of 50 per cent; the margin narrows for parties polling well below that. In the Colmar Brunton example above, the margin of error for NZ First was about 1.4 percentage points – yet the poll was out by 2.3 points, well outside the margin. A miss of that size is supposed to happen only five times out of a hundred.
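The formula behind those figures is the standard one for a simple random sample. A sketch, assuming roughly 1,000 respondents (a typical size for these polls):

    # 95% margin of sampling error for a proportion p, sample size n,
    # under simple random sampling: 1.96 * sqrt(p * (1 - p) / n).
    import math

    def margin_of_error(p, n=1000):
        return 1.96 * math.sqrt(p * (1 - p) / n) * 100   # in points

    print(f"{margin_of_error(0.50):.1f} points")  # ~3.1 at 50 per cent
    print(f"{margin_of_error(0.05):.1f} points")  # ~1.4 at 5 per cent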
But the margin of sampling error does not measure other possible sources of error such as interviewer effects and question wording. There is also the problem of how reliable those surveyed are. In 1992, after polls failed to predict a Conservative victory in Britain, an inquiry found that some respondents had probably lied about their voting intention ("the shy Tory factor"). Such effects are impossible to quantify.
However, more recent experience from Britain (2015) and the United States (2016) suggests that systematic polling error is most likely to result from assumptions regarding turnout. To a large extent, polling for the 2016 presidential election failed to register Trump support in the so-called "Rust Belt" states because pollsters did not sample enough non-college-educated white voters.
After the 2015 British general election, an independent review determined that pollsters had significantly undersampled over-70s. This was at least in part down to the use of online panels such as that employed by Reid Research to supplement its telephone sample. Interestingly, some evidence was also found that those people most likely to answer the phone were much less inclined to vote Conservative.
The fact that Colmar Brunton and Reid Research make no assumptions about turnout could be a strength. But in the end, polling is not an exact science. No survey design can fully capture all the complexities of human psychology and voting behaviour. There will always be a degree of uncertainty. The extent to which any given poll is right or wrong may in fact come down to how it is reported and framed by the media.
To better inform the public, TVNZ and Newshub should report the estimated range of party support rather than a single figure. They could also disclose the response rate (likely to be under 30 per cent) and provide a full disclaimer about the limitations of polling. But that would mean less sensationalism.
So, can we trust the polls? The answer will just have to wait until election night.
• Josh Van Veen is a former member of NZ First and worked as a parliamentary researcher to Winston Peters from 2011 to 2013. He has a Master's in Politics from the University of Auckland. His thesis examined class voting in Britain and New Zealand.
This column was originally published by the Democracy Project