Indeed, it is implicit in much of our analysis in this book (cf. Chapters 2 and 10)
that judgements can only adequately be made when local occurrences are related
to global requirements and global trends are seen to be reflected in local items.
Perception of an ironic intention, for example, may not be assessable by
attaching a symbol to an individual word or phrase in the test taker’s target text
since these local occurrences will merely support an overall pragmatic action. In
such cases, it is beyond-the-sentence appropriateness which must be assessed
(cf. Chapter 10). A crucial addition to the set of symbols used in marking scripts
will therefore be a means of indicating the portion (item/phrase/sequence/text)
of the entire response to which the symbol refers.
From our perspective, a flaw in each of the systems of assessment reviewed so
far is their use of the term ‘error’ or (French) faute. As suggested earlier, this is
not a helpful description for the majority of instances in which some measurable
distinguishing feature might occur in a test response. For example, in judging the
extent to which the source text values of reference-switching were or were not
relayed in four published translations of Sample 7.1, there is no sense in which
‘error’ would have been an appropriate term to use. Rather, translators’ choices
may be seen as more or less appropriate for the particular purposes to be served.6
The term error may then be reserved for two categories of actual mistake made
by translators and referred to by House (1981) as ‘overt errors’, namely (1)
significant (unmotivated) mismatches of denotational meaning between source
and target text (subdivided into omissions, additions and substitutions); and (2)
breaches of the target-language system (e.g. orthography, grammar). In all other
cases, it is a matter of making judgements about the relative acceptability of the
range of options from which the translator chooses.7 Such judgements can, of
course, never be completely objectivized. But those who are professionally
involved in translating might expect to achieve a considerable degree of
consensus in assessing the relative adequacy of variant translations—especially
if, as suggested earlier, a well-defined focus is provided for each translation task
set as a test. This might involve, for example, specifying an initiator and an end-use
or status for the resulting translation.8 Thus, in the case of text samples 9.4–6
quoted in Chapter 9, where significant divergence between source and target
texts (Le Roy Ladurie’s Montaillou) was noted, the translator’s decisions can
only be judged against whatever brief the translator was given, including the
need to produce a selective reduction of the source text,9 suitable for publication
in paperback for the British market. In this sense,
skopos (Reiss and Vermeer 1984) includes both specification of task and what we
have referred to (cf. Chapters 4 and 5) as audience design.