Over the weekend I was asked by the Election Watch team at the University of Melbourne to comment on the polling conducted for the Brexit referendum.
The article that came out of this can be found somewhere on their blog, I’m sure. Here are the comments I provided in full (questions are bolded):
**1. What do the Brexit polls tell us about the accuracy of polling generally?**
Not a lot (about polling for elections in Australia, at least).
There were two complications for pollsters trying to understand public attitudes towards Brexit that those measuring vote intention in Australia generally do not face.
First, voting in Britain is not compulsory, as it is in Australia. Therefore, organisations conducting public opinion surveys not only have to try to get a representative sample of voters, they also need to work out what a representative sample is. This can be quite difficult to do, as turnout can be quite volatile.
In Australia, the Australian Electoral Commission collects data on the age and gender breakdown of everyone on the electoral rolls of each division. Since approximately 95 per cent of voters actually vote, pollsters in Australia just need to make sure their sample of respondents matches the electoral roll and they can be fairly confident it will be close to the makeup of the electorate that actually turns out to vote (this is harder than it sounds, but much easier than the context faced by those operating in Britain).
The difficulties encountered by pollsters in a country with voluntary voting are compounded in a referendum. With elections we have a number of reference points to work from (past elections). Pollsters know which groups of voters are more likely to turn out and vote in a generic election, and even roughly how those groups tend to vote.
Britain had never conducted a referendum on leaving the EU before, so knowing which groups — rural or urban, university graduates or those who had not finished high school, the young or the old — would turn out and vote is difficult. Nor had pollsters had a chance to check their results against actual outcomes, as they can with elections.
**2. What do they tell us about voter volatility?**
Public opinion on Brexit, at least as it was measured by surveys, appeared to move around a lot, with a fair amount of volatility. This may be the result of a lack of fixed opinion on the question. Topics like health and education spending, and taxes, for instance, tend to be real and immediate concerns for people. As a result they tend to have pretty fixed opinions on these kinds of policy areas, and their opinions are more difficult to change. Brexit was probably a little more abstract, and as a result a strong argument either way could probably have changed some voters' positions.
**3. Do the Brexit polls, or voting patterns, tell us anything useful about how voters in Australia can be expected to behave?**
Close to the vote the polls indicated the result would be roughly 50 per cent for both leave and stay, with a slightly larger number voting stay than leave. However, several per cent were undecided, and it appears most of these voters broke towards leave at the last minute. This to me suggests the polls did reasonably well under difficult conditions, but in a close race where we have few relevant comparisons with which to calibrate our data, it can be difficult to know what the eventual outcome will be. We could expect that polling on similar issues here would be comparably difficult (though with the benefit of compulsory voting).
**4. Any other observations about what it could mean for Australia?**
We saw a similar process at work during the republican referendum in 1999. For years surveys had said a majority of Australians were in favour of a republic, but then they voted to stick with the monarchy. A possible reason may have been the issue was relatively abstract for most people, so they could be swayed either way. The pro-monarchy campaign in that case was able to argue the republican model being voted on (with a president elected by the parliament) was not a good outcome. They were successful. We could imagine similar tactics could be used in future referenda here, for instance the same-sex marriage plebiscite, or for Indigenous recognition, with those without strong opinions either way swayed by a convincing argument for one side or the other.
As a follow-up I was also asked: **The vote was 48-52. If it had been that result in favour of Stay, it would be accurate to say it was close. However, referenda (in Australia anyway) almost always fail – that is, voters decide to stick with the status quo. Do you therefore think it's fair to say that actually the Stay vote suffered a huge defeat, and that in that sense, since the polls predicted a win for Stay, they've really got it wrong?**
I would characterise the outcome as reasonably close. If two per cent of voters who chose leave had chosen stay instead, the latter would have won. The polls in aggregate generally had leave and stay neck and neck all month (with leave ahead for most of June). Right before the vote the polling aggregators generally had leave at 44 per cent and stay at 46 per cent, and around 10 per cent undecided. While many commentators may have concluded this meant stay was ahead, I would never have characterised the polls as saying this. The polls suggested it was too close to call (with perhaps a slightly higher chance of stay winning).
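The sensitivity of a close result to how undecided voters break can be sketched with a toy calculation. The 44/46/10 split is the aggregate polling figure quoted above; the assumed shares of undecideds breaking each way are hypothetical, chosen only to illustrate the point:

```python
def final_split(leave, stay, undecided, leave_share_of_undecided):
    """Allocate undecided voters between the two camps and return
    the final (leave, stay) percentages."""
    final_leave = leave + undecided * leave_share_of_undecided
    final_stay = stay + undecided * (1 - leave_share_of_undecided)
    return final_leave, final_stay

# Aggregate polling figures quoted above: leave 44, stay 46, undecided 10.
# If undecideds split evenly, stay stays narrowly ahead:
print(final_split(44, 46, 10, 0.5))  # (49.0, 51.0)

# If 80 per cent of undecideds break towards leave (a hypothetical
# share), the result flips to the actual 52-48 outcome:
print(final_split(44, 46, 10, 0.8))  # (52.0, 48.0)
```

With the race that tight, the polls were consistent with either outcome, which is why "too close to call" is the fairer reading.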
An interesting anecdote helps make my case. Apparently there were quite severe storms and flooding across much of the south, the Midlands and the east, with London and some other pro-stay areas hit at peak voting times. Some polling stations were flooded and closed for hours as a result. This may have hampered turnout in these areas and, with only two percentage points in the result, changed the outcome. I'm not saying it did change the outcome (I don't know who chose not to vote as a result, though it may have been young people in urban areas without a car, who were more likely to vote stay and perhaps less willing to brave the storm), but it was close enough that it could have.
Additionally, after I commented to Election Watch I found this blog post on the same topic by Andrew Gelman. He probably did a better job on this than I did (no surprise).