Polls


William Bowe’s Bludgertrack and Poll Bludger

At the end of last week Labor’s TPP lead in Bludgertrack was 51.9:48.1. By Friday this week it had extended to 53.0:47.0. Labor’s support is slowly rising, while the Coalition’s support is falling quickly: it seems to be shedding support in all directions, including to the far right.

In TPP terms Labor is now one percent ahead of its 2022 election outcome, but even if this lead holds for the next seven days, it does not necessarily translate into a majority government outcome. The world is more complex than it was in the days of two-party dominance.

You can keep up with a flood of polling, updated daily, on Bowe’s Poll Bludger. You may also find his post on preference deals of interest. His basic message is that we should not get too excited by preference deals because in the House of Representatives the preferences of the main parties rarely get distributed, and voters for minor parties don’t necessarily pay much attention to how-to-vote cards. Most independents have the good sense to avoid making any preference deals or suggestions. Nevertheless, as Bowe explains, preference deals can be consequential in some seats where support for independents is close to support for major party candidates.

Preference deals can also say a lot about parties’ values. The Coalition is preferencing One Nation in the vast majority of seats. You can hear Pauline Hanson discussing her ideas on preferences and her complaints about Labor and Clive Palmer on ABC News Radio.

We are yet to see any polling on candidates for the papacy.


For whom the polls tell – a guide to polling

Perhaps the most famous polling blunder was in the 1948 US presidential election, when pollsters confidently predicted that Republican Thomas Dewey would defeat the incumbent Democrat Harry S Truman. The Chicago Tribune even had a print run with the headline “Dewey defeats Truman”.

Truman won by a comfortable margin.

Emory University in Atlanta covers this and other polling failures on its site Famous statistical blunders in history. The main problem in the Dewey-Truman poll was that it was conducted by telephone, and telephones were then found mainly in more affluent households, where people were more likely to vote Republican. The polls were based on an unrepresentative sample.

Here in Australia in the 2019 federal election pollsters incorrectly predicted a Labor victory, because their sample was unrepresentative. They had oversampled educated and politically engaged voters, who tend to vote against the Coalition. The 51.5:48.5 two-party outcome was almost a mirror image of pollsters’ predictions. While statisticians searched for the source of the bias, Scott Morrison attributed his victory to divine intervention.

In response to this failure Australia’s pollsters came together to establish the Australian Polling Council, to which six of our seven regular pollsters belong (the pollsters whose results are reported on William Bowe’s Poll Bludger site). The Council does not prescribe polling methods, but it requires disclosure of sample sizes, fieldwork dates, and the organisations commissioning the polls. Pollsters claim to have improved their methods, drawing attention to their much better performance in 2022.

Guiding us through all this the ABC’s Maani Truu has a post: Inside the political polling machine: How pollsters capture the federal election mood.

She covers the basics of sampling error and biases, and the ways pollsters use weightings to adjust for their samples’ known unrepresentativeness – all basic Stats 101. She also goes into specific issues similar to those in the 1948 US election. Who responds to messages on cellphones? Can a poll conducted on the Internet be re-weighted so that it is as representative as a random sample? How can support for minor parties and independents be estimated?
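The re-weighting described above can be sketched in a few lines. This is a minimal illustration of post-stratification weighting; the groups, population shares and vote intentions are invented for illustration, not real polling data.

```python
# A minimal sketch of post-stratification weighting, the standard fix for an
# unrepresentative sample. All figures below are invented for illustration.

# Population shares (e.g. from census data) vs shares in the poll sample.
population_share = {"degree": 0.30, "no_degree": 0.70}
sample_share = {"degree": 0.50, "no_degree": 0.50}  # degree-holders oversampled

# Observed support for a party within each group of respondents.
support_in_group = {"degree": 0.60, "no_degree": 0.40}

# Raw (unweighted) estimate simply averages over the sample as collected.
raw = sum(sample_share[g] * support_in_group[g] for g in sample_share)

# Weighted estimate: each group's responses count in proportion to its
# share of the population, not its share of the sample.
weighted = sum(population_share[g] * support_in_group[g] for g in population_share)

print(f"raw estimate:      {raw:.1%}")       # 50.0%
print(f"weighted estimate: {weighted:.1%}")  # 46.0%
```

The oversampled group drags the raw figure towards its own preferences; weighting by population shares removes that distortion, but only for characteristics the pollster knows about and measures.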

The answer to that last question is “not very well”: the smaller a party’s support, the larger the relative sampling error in estimating it – an unavoidable mathematical limitation, not a fault of any particular pollster.

Maani Truu, inadvertently perhaps, shows that polling has what physicists call an “observer effect”. Polling and reporting on polling are likely to have an influence on political opinion.

Around the start of this year polls started to turn away from the Coalition. That was possibly a “tipping point” – a point at which a system starts to change from one state to another, to use the terminology of Thomas Schelling, who developed the idea of “tipping”. Truu, like so many journalists, gets the terminology wrong, because the tipping point occurs long before the system actually switches, but she is right in saying that about three weeks ago “the narrative switched”. We usually don’t notice that a system has changed state until long after it has tipped.

There is a plethora of views about how polls influence voting behaviour, and they point in different directions. In this case the bandwagon effect would favour Labor, but it could be offset by voters’ desire not to give a government too large a majority. In this election there is the further complication that a sizeable group of voters may prefer a Labor minority government over a Labor majority government. (Also, in countries with voluntary voting, opinion polls can have strong effects on voter turnout.)

For these reasons there is an argument, based on defensible logic, to ban political opinion polling. Former UK Conservative MP Peter Heaton-Jones outlines the arguments in a Constitution Society post: Opinion polls during elections: to ban or not to ban?. He acknowledges that a ban would be unenforceable, and would violate rights to political activity and free speech.

A ban is unlikely to happen. Polls carry valuable information, but we should read them with scepticism.