With two months still to go before Nigeria's landmark 2023 presidential election, a string of pollsters has consistently put Mr. Peter Obi of the Labour Party in the lead, generating excitement and condemnation on different sides of the political divide.
For those who celebrate these pre-election polls, Mr. Obi’s consistent lead aptly captures the current mood in the country and is proof positive that Nigeria is headed for a historic, third-party electoral upset early next year.
But there are others who strongly disagree with these polls, querying not just their accuracy as measures of the country’s political reality or as predictors of next year’s voting patterns but also the methods and the motives of the pollsters.
Both sides cannot be right. However, we will not know for sure which side is right until the votes are cast and counted in February. Ahead of the clarity that late February will offer, it is important to provide some caveats that some of the pollsters and the partisans are not offering the rest of us.
The first is that there is nothing wrong with being a partisan pollster, contrary to accusations from one side of the divide. Most professional pollsters in the United States of America have political leanings. This does not detract from the rigour of their methods or the robustness of their findings.
In any case, all serious parties and candidates will have their internal polls which they use in making critical decisions about where to allocate time and resources so as to either maintain their lead or give themselves a fighting chance in the race.
When a poll is meant for public consumption, however, the political leaning or preference of the pollster should be clearly disclosed. This does not necessarily take anything away from the poll itself but it affords the end users the opportunity to know what discounts to apply.
The second caveat is that even the most accurate pre-election polls are time-bound: they only attempt to approximate voters’ expressed preference if the elections were held that day. But things can change quickly in politics.
Even a poll undertaken a few days before the election may not align with the outcome of the election. And this may not necessarily be because the pollster’s methodology or motive is suspect or because the election is manipulated. It may just be because a day is a long time in politics. Some voters change their minds often, some voters don’t decide until late in the day, and some consequential event may swing a sizeable portion of the electorate.
The third caveat is that election polling is not just science but also art. Pollsters try to answer two key questions: who is likely to vote and who they are likely to vote for. Accurately gauging voter turnout can be tricky, beyond what even the most robust sampling methods can predict.
Sophisticated pollsters deploy patterns and even models to arrive at considered decisions on expected turnout. They may be right or wrong. This is more art than science. An unanticipated change in the turnout of some demographic groups can significantly alter the eventual outcome of the election.
Experienced politicians know that voter turnout is a critical success factor. That is why they invest a lot in get-out-the-vote drives, stylised as GOTV campaigns. This is also the reason some politicians deploy the dark tactic of voter suppression.
The fourth caveat is that pre-election opinion polls can get it wrong for many technical or non-technical and deliberate or non-deliberate reasons. Let’s start with the obvious: some polls are thinly disguised campaign materials merely dressed up as scientific polling, designed to shape or influence the outcome of the elections. The intention here is to signal momentum and create a bandwagon effect, especially where voters don’t like wasting their votes.
But not all the polls that get it wrong set out to manipulate. Some just get their samples wrong. Sampling is what differentiates a poll from a census, and having a sample that is representative of the polled population is critical to the accuracy of polls. This in turn is a function of having an up-to-date database of likely voters, the sample size and the sampling method.
This is why a poll conducted online or with an app may strain to pass the representativeness test, as this by default leads to the over-representation of some demographic groups and the under-representation of others. The same issue arises when respondents from a single state are used as the sample for a politically and religiously diverse zone.
The willingness of respondents to participate in pre-election polls may also affect the representativeness of the sample and may skew the outcome.
The way questions are framed and arranged, and the perception of the questioner’s or pollster’s expectations, may colour the responses. This is called the framing effect. Leading questions may elicit predictable answers, and respondents may tell pollsters what they think they want to hear.
Pollsters may thus get it wrong not because of their methodological error or evident bias but simply because they are misled by the respondents. Sometimes, this happens when respondents feel the need to save face.
For example, someone who doesn’t have a voter’s card and doesn’t intend to vote may lie that they intend to vote because that answer aligns with what is expected of a responsible citizen. Respondents may also deliberately mislead so as not to appear racist or tribalistic. This is called the Bradley effect or the Wilder effect.
Also, when there is a social or economic cost to disclosing their preference, some likely voters may say they are yet to decide even though they had made up their minds. In this case, they are masking their preference to avoid being shamed, being ‘dragged’ or being harmed.
The fifth and final caveat I want to share is that the record of pre-election polls has been spotty lately. Most of the pre-election polls in the 2022 presidential election in Kenya gave the lead to Raila Odinga, who eventually lost to William Ruto. In the 2016 Brexit vote, the pollsters got it wrong too.
Also, pollsters were blindsided about the victory of Donald Trump in the US presidential election in 2016 and they overstated the support for Joe Biden while understating the support for Trump in the 2020 election. These errors are significant because the US is the gold standard on public opinion polling.
These back-to-back errors on presidential elections elicited a lot of handwringing and soul-searching within and outside the polling community in the US. Gallup, for example, decided to stop doing pre-election polling and concentrate fully on public opinion polls.
Writing in her column in the Washington Post of 4th November 2020, Margaret Sullivan stated that “we should never again put as much stock in public opinion polls, and those who interpret them, as we’ve grown accustomed to doing. Polling seems to be irrevocably broken, or at least our understanding of how seriously to take it is.”
In Politico on the same day, Jake Sherman and Anna Palmer declared, rather dramatically, that: “The polling industry is a wreck, and should be blown up.”
Josh Clinton, a professor who co-leads the polling service of Vanderbilt University and who served as the chair of a professional taskforce on the errors in the 2020 pre-election polls in the US, was more tempered. There is a need, Clinton stated, for more sophistication from pollsters and users of polls alike.
“The reality is that there are a lot of errors that can accumulate in a single poll, based upon small decisions about what you assume about voters—which can actually have enormous consequences within a polarised electorate,” he said.
“A bunch of small errors can end up producing consequential polling errors… One of the takeaways I hope people get from this report is that there are a lot of complexities that go into polling that are quite variable.”
According to Joseph Campbell, author of ‘Lost in a Gallup: Polling Failures in US Presidential Elections,’ the recent polling errors pale in comparison with the epic polling failures of 1936 and 1948. In 1936, the Literary Digest predicted a comfortable victory for Alf Landon over Franklin Roosevelt. Landon lost the presidential election by 24 percentage points.
The polling industry became more scientific after this fiasco, learning from the magnificent stumble by the Literary Digest which incidentally had successfully predicted election results since 1924.
But the ‘scientific’ pollsters (including Crossley, Gallup, Roper) produced another polling fiasco in 1948 when they predicted that incumbent Harry Truman would lose to challenger Thomas E. Dewey. Truman went on to win the 1948 presidential election, and celebrated his victory by cheekily holding aloft a copy of the Chicago Daily Tribune newspaper with the screaming headline: “Dewey Defeats Truman.”
The point here is that pre-election polls do not have a 100% accuracy record, even in a place with a long tradition of polling like the US and even with refinements to the science and art of polling over time. The only difference between America and a place like Nigeria is that polling errors pose different levels of risk.
In the US, the risk is to the credibility of the pollsters and the polling industry. In Nigeria, the risk may be to the credibility of the election, the legitimacy of the eventual winner and the possibility of a breakdown of law and order after the elections. These are no mean risks.
But as stated above, not all polling errors, if and when they occur, arise from mischief or from the desperation of some partisans to force the issue.
So, while it is expected that pollsters will be taken to task by partisans and others on their methodologies and motives, the best way to deal with possible collateral risks to the country from polling errors, whatever their provenance, is to ensure that the election is so clean that the real choice of the majority of actual voters will not be in doubt. That way, the room for mischief is significantly narrowed or eliminated.
Source: First published in Thisday Newspaper