Do poll survey results influence voters and election outcomes?

The reputable Social Weather Stations released an analysis of the effect of its survey results on the voting plans of the electorate for the 2007 Senatorial elections and came out with this conclusion: 

‘In the last Senatorial elections, surveys affected the plans of very few voters, and moved these very few slightly towards underdog candidates, according to analysis of the SWS May 2-4, 2007 pre-election survey.’

Thus the effects are minimal.  But is this really so?

A closer look at the breakdown by socio-economic class indicates the influence to be most significant for the ABC group.

_____________________________________

VOTING EFFECTS OF ELECTION SURVEY NEWS (in percent)

                                     RP   ABC    D    E
Aware of election survey news        48    77   49   39
     no effect on vote               32    55   33   26
     some effect on vote             16    22   16   13
Unaware of election survey news      52    23   51   61

_____________________________________

In computing the national summaries, or weighted averages, the ABC class carries less than 10 percent of the weight.  The D class has over 60 percent and the E class about 30 percent, so these two classes heavily dictate the results.  It should be noted, however, that while their numbers are limited, those in the ABC class possess most of the nation's wealth and earn most of the income generated in the economy.
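To see how the class weights drive the national figure, here is a minimal sketch in Python. The exact weights (9/61/30) are my illustrative readings of "less than 10", "over 60", and "about 30" percent, not SWS's actual weights:

```python
# Illustrative class weights (assumed, not the actual SWS values).
weights = {"ABC": 0.09, "D": 0.61, "E": 0.30}

# "Some effect on vote" percentages per class, from the table above.
some_effect = {"ABC": 22, "D": 16, "E": 13}

# Weighted national average: the D and E figures dominate.
national = sum(weights[c] * some_effect[c] for c in weights)
print(round(national, 1))  # lands close to the published RP figure of 16
```

Even though the ABC figure (22) is well above the others, its small weight barely moves the national average, which is the point made above.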

In the 2003 Family Income and Expenditures Survey conducted by the National Statistics Office, the top 1 percent of the income distribution (about 165 thousand families) earned more income (PhP 235 billion) than nearly a third of all families combined, the lowest income earners (about 5.3 million families), who earned some PhP 227 billion.  I hope you agree with me that this is a graphic picture of the lopsided income distribution in the Philippines.  It is also not difficult to imagine the immense socio-economic-political power in the hands of the richest.

The SWS statistics are correct, but its analysis did not sufficiently point out that the effect of its results on the electorate mattered most where it should: on the ABC class.  It is a pity that the survey weights were based on the distribution of population (as is standard practice) and not on income.  I am certain that the overall results would have been different.

Note that the final count depends on the number of votes, but the campaign to get those votes depends to a large part on the resources a candidate can muster to carry out the campaign strategy.  These include, among others, the capability to cover the country wherever necessary, to support campaigners' needs, to touch base with people and parties of clout and influence, to produce and place effective campaign ads in the tri-media to build awareness and commitment, to get the right profile and sufficient exposure in the media, and to protect the votes cast during the canvassing.

It would be safe to assume that those in (and with) power scan the results and based on these, tend to choose whom to support and give assistance and contribute to the cause of the ‘winnable’ candidates.

Another group affected by the results is the candidates' supporters: the army of volunteers ("pure" and those working below market rates).  I remember that when our Senatorial candidate moved out of the magic circle of 12 and landed 15th in the survey a week before the elections, many of them were disheartened.  Needless to say, their energy diminished, and the drop in morale could be felt at headquarters.

Let me also pursue the issue of total survey error, and not only sampling error, in exit polls and the possibility of its effect on the results.

Watch out for Total Survey Error.

Total Survey Error includes Sampling Error and three other types of errors that you should be aware of when interpreting poll results: Coverage Error, Measurement Error, and Non-Response Error.  See http://www.ropercenter.uconn.edu/education/polling_fundamentals_error.html

Sampling Error is the calculated statistical imprecision due to interviewing a random sample instead of the entire population. The margin of error provides an estimate of how much the results of the sample may differ due to chance when compared to what would have been found if the entire population was interviewed.
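The margin of error quoted in poll releases typically comes from the standard simple-random-sampling formula. Here is a minimal sketch in Python; the sample size used below is illustrative only, and real exit polls use clustered designs whose true margins are wider than this formula suggests:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of size n.
    p = 0.5 gives the most conservative (largest) margin."""
    return z * math.sqrt(p * (1 - p) / n)

# An illustrative sample of about 2,400 respondents yields roughly a
# 2-percentage-point margin, similar to the figure SWS cites for its exit poll.
print(round(100 * margin_of_error(2400), 1))  # -> 2.0
```

Note that the margin shrinks only with the square root of the sample size: quadrupling the sample merely halves the margin.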

Coverage Error…is the error associated with the inability to contact portions of the population. For example, telephone surveys usually exclude people who do not have landline telephones in their household, the homeless, and institutionalized populations. This error includes people who are not home at the time of attempted contact because they are on vacation, …, along with a variety of other reasons that they are unreachable for the period the interviewing (with call-backs) takes place.

Measurement Error is error or bias that occurs when surveys do not measure what they intended to measure. This type of error results from flaws in the instrument, question wording, question order, interviewer error, timing, question response options, etc. This is perhaps the most common and most problematic collection of errors faced by the polling industry.

Non-response Error results from not being able to interview people who would be eligible to take the survey. For example in a telephone survey, many households now have answering machines and caller ID that prevent easy contact; other people simply do not want to respond to calls sometimes because the endless stream of telemarketing appeals make them wary of answering. Non-response bias is the difference in responses of those people who complete the survey vs. those who refuse to for any reason. While the error itself cannot be calculated, response rates can be calculated and there are countless ways to do so. The American Association for Public Opinion Research (AAPOR web site) provides recommended procedures for calculating response rates along with helpful tools and related definitions to assist interested researchers.

The SWS Exit Poll of 10 May 2004 was able to call the eventual results within its 2 percent sampling error. (See http://www.sws.org.ph/pr051904.htm).  Below is a comparison with the congressional canvass and the partial NAMFREL (National Movement for Free Elections, a 'non-partisan' citizen quick count) figures.

________________________________________

Comparative Results of Counted Votes for President (in %)

Choices                     Congress   SWS   NAMFREL
Macapagal-Arroyo, Gloria        40.0    41      39.0
Poe, Fernando, Jr.              36.5    32      37.0
Lacson, Panfilo                 10.9     9      10.8
Roco, Raul                       6.5     5       7.0
Villanueva, Eduardo              6.2     5       6.2
No answer                                8

_______________________________________

Compared with the congressional canvass, the 8 percent 'no answers' in the SWS exit poll imply that SWS underestimated the votes for Poe (by 4.5 percentage points), Lacson (1.9), Roco (1.5), and Villanueva (1.2), and overestimated that for Arroyo (by 1 point).  Poe had the worst luck with the survey results.
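The under- and over-estimates above are simple differences between the canvass and exit-poll columns; a quick Python sketch using the table's figures:

```python
# Counted votes for President (%), from the comparison table above.
congress = {"Arroyo": 40.0, "Poe": 36.5, "Lacson": 10.9, "Roco": 6.5, "Villanueva": 6.2}
sws      = {"Arroyo": 41,   "Poe": 32,   "Lacson": 9,    "Roco": 5,   "Villanueva": 5}

# Positive gap = SWS underestimated relative to the congressional canvass.
gaps = {name: round(congress[name] - sws[name], 1) for name in congress}
print(gaps)  # Poe shows by far the largest shortfall
```

The gaps sum to roughly the 8 percent of exit-poll respondents who gave no answer, which is why those missing answers could plausibly absorb all the discrepancies.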

But knowing that total survey error is more than the sampling error, the number of outright refusals in its exit poll is disconcerting to this writer.  From the distribution of survey respondents below and from the quoted SWS release, refusals amounted to about 20 percent.

Total personally encountered by field staff: 7425
Less: Refusals: 1462
Equals: Consented to be Interviewed: 5963
Less: Did not vote: 1139
Equals: Voted: 4824
Less: Invalid votes: 379
Equals: Valid votes: 4445

So this particular exit poll had a sampling error of 2 percent and a non-response (refusal) error of 19.7 percent.  There was a targeted set of 10,620 respondents; of these, only 5,963 interviews were conducted.  The coverage error would then involve the 4,657 targeted respondents, almost 44 percent, who were not available for interview from 3 to 6 p.m. on May 10, after polling places closed.  In Metro Manila, many did not return home after voting, preferring to go malling and avoid the heavy downpour that occurred.  We can assume, though, that measurement error was negligible, per the findings of experts.
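These rates follow directly from the respondent breakdown; a small Python check, using the field figures as quoted from the SWS release:

```python
# Figures from the SWS 10 May 2004 exit poll release, quoted above.
encountered = 7425   # total personally encountered by field staff
refusals    = 1462
consented   = encountered - refusals   # 5,963 interviews conducted
target      = 10620                    # targeted set of respondents

refusal_rate = refusals / encountered          # non-response (refusal) rate
coverage_gap = (target - consented) / target   # share never interviewed at all

print(round(100 * refusal_rate, 1))   # about 19.7 percent
print(round(100 * coverage_gap, 1))   # about 43.9 percent
```

The two rates are computed on different bases (those encountered versus those targeted), which is why they cannot simply be added into one total error figure.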

A review committee, organized at the initiative of SWS and composed of four members from the academe and three from the Marketing and Opinion Research of the Philippines, released its findings on the exit poll in October 2004.  It found that the sample selection and interview instruments (ballots followed by questionnaires) of the Exit Poll were methodologically sound.  In addition, an audit of the field operations indicated that the design was faithfully executed and that encoding errors in candidate choices were minimal and not deliberate.

I will not go further into an overall quantification of error because I doubt that the above errors are additive and the aggregation may become too complex.   

Pulse Asia also conducted an exit poll for ABS-CBN for the 14 May 2007 Senatorial elections.  However, information similar to that provided above is not available, perhaps because it is the property of the broadcast corporation.  Information obtained personally on the response rates indicates a 75 percent response rate: 16 percent of the respondents were not available and another 9 percent refused to be interviewed.  Pulse Asia thus expressed guarded reservations about the results.

From a posting on inquirer.net on 16 May 2007, “Pulse Asia said last night that there was a statistical chance that the last four candidates on the list could be replaced by other senatorial candidates.”   

And together with the concern over COMELEC counts, are we now sure that the true winners of the 2004 presidential elections and the 2007 Senatorial elections were really the ones sworn into office?  It is difficult for me to accept that these election surveys have little effect on the voting patterns of the electorate, both directly and indirectly.  Pollsters, you know that they do, and it is no coincidence that your peak business comes in the weeks and months leading up to the elections.

However, the above analysis does not diminish my view of the professionalism of these outfits.  This piece would not have been possible were it not for their adherence to professional ethics and their providing adequate technical information on their surveys.  I acknowledge the important role of professional pollsters, such as those in SWS and Pulse Asia, in the exercise of suffrage and do not call for a halt to these kinds of research.  But we can be more discerning in reading the results instead of being drawn in by the screaming headlines that usually accompany these releases.

I hope that results can be released more objectively and soberly; let the media put the spin on these.

My dwindling dollar’s take on the issue.  

  
