Personal experience – Susan Horner
I supervised a postgraduate dissertation when I was working at César Ritz
Hotel School in Switzerland. The student had planned to go to India to
interview a sample of key managers in a 4* hotel from which he had already
gained permission. When he arrived at the hotel (having travelled at his
own expense), the managers refused to see him, giving the excuse that
their director had overruled them on company participation. Luckily, the
student had a back-up hotel to approach and the research proceeded,
although the comparative study that he had planned was no longer possible
within the time frame.
The lesson here is: always send your aims and objectives, the details of
your study and your questions in advance of the research, and obtain
written permission from the sample respondents and their managers, even if
they are your friends. Do not use your friends as respondents unless you
have gained permission from their managers or directors!
There are undoubtedly examples of studies where insufficient attention is given to
these considerations and worthless information is produced from ‘biased samples’
(that is, those samples which differ in a fundamental way from the population from
which they are drawn). However, it should be remembered that although sampling is
often referred to as a problematical area in research, it is not the only area where bias
or error can occur. In the preceding chapters, errors in the selection of methods of
data collection and poor planning were shown to produce poor results. In the
following chapters, it will be demonstrated that weaknesses in questionnaire
design and data analysis can also result in useless information being produced.
There are some good examples of dramatically biased samples. Perhaps the best
known of these is the public opinion poll of the 1936 presidential election in the
United States (Young, 1966; Moser and Kalton, 1993; Frankfort-Nachmias and
Nachmias, 1996). Here, an incorrect result was predicted because of a major error in
the sampling frame. Ten million people were identified from sources such as
telephone directories. In 1936 few poor people had telephones and hence these voters
were excluded from the survey. On election day, whilst the prediction had
suggested a victory for Landon, poorer voters backed Roosevelt in large
numbers. Hence, the sample was not representative of the voting population.
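To see how a flawed sampling frame produces a confident but wrong prediction, consider the minimal Python sketch below. The vote shares, group sizes and telephone-ownership rates are invented purely for illustration; they are not the historical 1936 figures.

```python
import random

random.seed(42)

# Hypothetical electorate (illustrative figures, not historical data):
# poorer voters lean towards Roosevelt, richer voters towards Landon.
population = (
    [("poor", "Roosevelt")] * 60_000
    + [("poor", "Landon")] * 20_000
    + [("rich", "Roosevelt")] * 8_000
    + [("rich", "Landon")] * 12_000
)

# Sampling frame built from telephone directories: assume only 10% of
# poor voters, but 90% of rich voters, own a telephone.
frame = [
    (group, vote)
    for group, vote in population
    if random.random() < (0.10 if group == "poor" else 0.90)
]

# Even a large sample cannot rescue a biased frame.
sample = random.sample(frame, 1_000)

def roosevelt_share(voters):
    return sum(vote == "Roosevelt" for _, vote in voters) / len(voters)

print(f"True Roosevelt share:   {roosevelt_share(population):.1%}")  # ~68%
print(f"Predicted from sample:  {roosevelt_share(sample):.1%}")      # ~51%
```

However large the sample drawn from such a frame, the prediction remains wrong: the error lies in who can be selected at all, not in how many are selected.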
Similar issues were observed with the opinion polls that were carried out
during the 2015 General Election in the UK, as you can see in Illustration
4.2. For a number of reasons, the polls did not predict the correct
outcome. A similar phenomenon occurred in the EU referendum of June 2016,
when opinion polls seemed to suggest a Remain result, whilst the majority
of citizens voted for the UK to leave the EU ('Brexit').
Illustration 4.2 The polls were wrong!
When the exit poll dropped at 10 pm, the numbers seemed unbelievable.
A few hours later, that forecast was looking overly cautious.
The initial figures, which predicted the Tories would win 316 seats,
ended up underestimating the party. David Cameron won an overall
majority – an outcome deemed near impossible based on pre-election
polling.
It is clear that the polls and, as sure as night follows day, the forecasts
modelled on polling, had a bad election. The question is: why did the
polls get it wrong?
In the end, the debate over whether online or phone polls are better, and
discussions about different methodologies to weight undecided voters
and filter for certainty to vote, all proved irrelevant. Although phone polls
during the course of the campaign had shown several Tory leads, the final
crop of polls was roughly anticipating a tie.
Across the polls, there appear to have been at least three errors:
1. Labour significantly underperformed compared with expectations
set by the polls. Support for Miliband’s party averaged 34% in the
final polls, 3.5 points above the actual result. The figures for UKIP
(12.5%), the Lib Dems (8%) and the Greens (4%) were within the
polls’ margins of error. Although the Conservatives’ average in the
final pre-election polls (34%) was also roughly three and a half
points shy of the party’s actual result, several companies – including
Ipsos Mori, Opinium and ComRes – had the party’s share at 35–36%.
2. The Lib Dems’ result was catastrophic, even in their strongholds.
The party held on to only eight of their 57 seats, which is in stark
contrast to the snapshots provided by constituency polling.
3. Although turnout saw a one-point increase on 2010, the level (66%)
was significantly lower than that implied in most polls, meaning that
the opinion of non-voters weighed on polling numbers.
The net effect of these trends was that Labour gained only 10 seats from
the Tories, a quarter fewer than expected, and even lost eight constituencies
to Cameron’s party.
The collapse of the Lib Dems, who lost 26 seats to the Tories (more
than double the expected number) and 12 to Labour (which, on the other
hand, was in line with expectations), provided the Conservatives with the
final push they needed to get over the line.
Source: Adapted from The Guardian online, 9 May 2015
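The polls’ ‘margins of error’ mentioned in the illustration can be checked with the standard formula for a proportion estimated from a simple random sample. The sketch below assumes a typical poll of around 1,000 respondents; that sample size is an assumption for illustration, not a figure from the article.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p estimated
    from a simple random sample of size n (z = 1.96 for 95% confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

# Assumed poll of 1,000 respondents reporting a 34% share.
print(f"+/- {margin_of_error(0.34, 1_000):.1%}")  # roughly +/- 2.9 points
```

On these assumptions the interval is roughly plus or minus three points, which is why a 3.5-point miss for Labour falls outside the margin of error while the smaller UKIP, Lib Dem and Green misses do not.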
In analysing the reasons for the errors, it is true that a time component is applicable
to any survey and external factors may change the result. For example, a holiday
company may have strongly favourable results for a new product, but a change in
exchange rates, terrorism, pollution or some other external factor not easily predicted
could reverse this. The fact that people may have lied is something which is perhaps
a researcher’s worst nightmare. However, as previously mentioned, a range of
different methods of data collection and careful questioning can reduce this problem.
For opinion pollsters to be guilty of sampling error would appear surprising,
given the reputations of the organisations involved. What this example
stresses is the need to select from the population a sample of appropriate
size which is representative and free from bias; how this ideal can be
achieved will now be considered.
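As a foretaste of those methods, the sketch below illustrates the principle behind a simple random sample, using a hypothetical sampling frame of numbered respondents: when every member of a complete frame has an equal chance of selection, no group can be systematically excluded.

```python
import random

# Hypothetical sampling frame: a complete, accurate list of the
# population of interest (here, simply numbered respondents).
sampling_frame = [f"respondent_{i}" for i in range(1, 501)]

# A simple random sample: each of the 500 frame members has an
# equal chance of being one of the 50 selected.
sample = random.sample(sampling_frame, 50)
print(sample[:5])
```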