Quite deliberately, the design of the study was a test rather than an experiment.
There were two reasons for this. The first was that the
tests were being used in a
functionally appropriate way and would be viewed by the candidates as tests rather
than as experiments. Artificiality was therefore somewhat controlled. The second
reason was that these ELTS tests are ‘social facts’. They are (or were in the mid-1980s)
used in part as a means of determining adequacy in English proficiency levels for overseas
students seeking admission to UK universities. Whether or not they are in themselves
adequate statements about distinct registers of English, they were used as if they
were. What the researchers did was straightforward, as good research so often is.
They administered the tests of different ESPs to different
groups of candidates with
different kinds of ‘background knowledge’ who would normally have taken one of
the modules.
Drawing together their results from all three studies, the researchers concluded
that ‘academic background can play an important role in test performance. However
the effect has not been shown to be consistent … the studies have also shown the
need to take account of other factors such as linguistic consistency’ (ibid: 202).
But the most interesting conclusion they reach is the distinction they make
between direct and overview questions in relation to accessing the content area
under test. They report:
when these students were familiar with the content area,
they were able to answer
direct and overview questions with equal ease; when this familiarity with the
content area was lacking they could still answer direct questions, but their ability
to answer overview questions was greatly reduced.
(ibid: 202)
Although they do not say so, the implication of this finding is that background
knowledge matters: direct questions do not in themselves probe sufficiently into
background knowledge whereas overview questions do. That is why, in some of the
comparisons they make, there was no distinction in test
results between groups who
had background knowledge and those who did not, because what was at stake was a
preponderance of direct questions.
This finding, properly muted though it is by the researchers, aware of the
inadequacy of the tests themselves as valid representations of their content areas, does
in fact match the earlier research finding we referred to, that is, that the unitary
competence hypothesis could not be supported. Similarly, what was indicated
by this research was that there are indeed real differences between language varieties.
The researchers carefully point to the need to distinguish
between linguistic proficiency (which is the subject of their study) and linguistic
competence. On the basis
of their study (and perhaps too in the unitary competence research) what is established
is that there are different proficiencies, not that there are different competences.
As Alderson and Urquhart say: ‘the part played … by linguistic competence …
remains unknown’ (1985).