Suppose you were a psychologist who was interested in whether there is a relationship
between smoking and anxiety. Would it be reasonable to simply look at a group
of smokers and measure their anxiety using some rating scale? Probably not. It
clearly would be more informative if you compared their anxiety with the anxiety
exhibited by a group of nonsmokers.
Once you decided to observe anxiety in two groups of people, you would have
to determine just who would be your subjects. In an ideal world with unlimited
resources, you might contact every smoker and nonsmoker because these are the
two populations with which you are concerned. A population consists of all the
members of a group of interest. Obviously, however, this would be impossible
because of the all-encompassing size of the two groups; instead, you would limit
your subjects to a sample of smokers and nonsmokers. A sample, in formal statistical
terms, is a subgroup of a population of interest that is intended to be representative
of the larger population. Once you had identified samples representative
of the population of interest to you, it would be possible to carry out your study
that would yield two distributions of scores—one from the smokers and one from
the nonsmokers.
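As a purely hypothetical sketch of the population/sample distinction, the following Python fragment draws a simple random sample, in which every member has an equal chance of being chosen, from a population represented as a list of ID numbers. The population size, sample size, and seed are all invented for illustration.

```python
import random

# Hypothetical illustration: the "population" is just a list of ID numbers
# standing in for all the members of a group of interest.
population = list(range(10_000))

# A simple random sample: every member has an equal chance of selection,
# which is one common way to make a sample representative of the population.
random.seed(42)  # fixed seed so the example is reproducible
sample = random.sample(population, k=100)  # without replacement

print(len(sample))       # 100 subjects drawn
print(len(set(sample)))  # no duplicates, since sampling is without replacement
```

In practice a researcher would sample people rather than ID numbers, and would worry about whether the sampling frame really covers the whole population; the fragment only illustrates the mechanics.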
The obvious question is whether the two samples differ in the degree of anxiety
their members display. The statistical procedures that we discussed earlier are helpful
in answering this question because each of the two samples can be examined in
terms of central tendency and variability. The more important question, though, is
whether the magnitude of difference between the two distributions is sufficient to
conclude that the distributions truly differ from one another, or if, instead, the
differences are attributable merely to chance.
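To make the first step concrete, such a comparison of central tendency and variability might look like the sketch below. The anxiety ratings are invented for illustration (no real data appear in this passage), and the computation uses Python's standard statistics module.

```python
import statistics

# Hypothetical anxiety ratings (0-10 scale) for two small samples;
# the numbers are invented purely for illustration.
smokers = [6, 7, 5, 8, 6, 7, 9, 6, 7, 8]
nonsmokers = [4, 5, 3, 6, 4, 5, 4, 3, 5, 4]

# Central tendency (mean) and variability (standard deviation)
# for each of the two distributions of scores.
for name, scores in [("smokers", smokers), ("nonsmokers", nonsmokers)]:
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)  # sample standard deviation
    print(f"{name}: mean = {mean:.2f}, SD = {sd:.2f}")
```

Describing each sample this way is only the starting point; it does not yet tell us whether a difference between the two means is larger than chance alone would produce.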
To answer the question of whether samples are truly different from one another,
psychologists use inferential statistics. Inferential statistics is the branch of statistics
that uses data from samples to make predictions about a larger population, permitting
generalizations to be drawn. To take a simple example, suppose you had two
coins that both were flipped 100 times. Suppose further that one coin came up heads
41 times, and the other came up heads 65 times. Are both coins fair? We know that
a fair coin should come up heads about 50 times in 100 flips. But a little thought
would also suggest it is unlikely that even a fair coin would come up heads exactly
50 times in 100 flips. The question is, then, how far a coin could deviate from 50 heads
before that coin would be considered unfair.
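This question can be explored with the exact binomial distribution. The short Python sketch below (the helper names are my own) computes how probable various head counts are in 100 flips of a fair coin, using the fact that each of the 2^100 equally likely flip sequences contains exactly k heads in C(100, k) ways.

```python
from math import comb

def prob_exactly(k, n=100):
    """Exact probability of exactly k heads in n flips of a fair coin."""
    return comb(n, k) / 2**n

def prob_at_least(k, n=100):
    """Exact probability of k or more heads in n flips of a fair coin."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2**n

print(f"P(exactly 50 heads) = {prob_exactly(50):.4f}")  # even 50/50 itself is uncommon
print(f"P(exactly 53 heads) = {prob_exactly(53):.4f}")  # near the peak: quite probable
print(f"P(90 or more heads) = {prob_at_least(90):.2e}")  # astronomically small
```

Outcomes near 50 heads turn out to be individually modest in probability but collectively common, while 90 or more heads has a probability on the order of 10^-17, which is why such a result would lead us to call the coin unfair.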
Questions such as this—as well as whether the results found are due to chance
or represent unexpected, nonchance findings—revolve around how “probable”
certain events are. Using coin flipping as an example, 53 heads in 100 flips would
be a highly probable outcome because it departs only slightly from the expected
outcome of 50 heads. In contrast, if a coin was flipped 100 times and 90 of those
times it came up heads, that would be a highly improbable outcome. In fact, 90 or more
heads should occur by chance only about once in every 65 quadrillion trials of 100 flips
of a fair coin. Ninety heads in 100 flips, then, is an extremely improbable outcome;