the various stages static. Rather, what is involved in stages one, two and three
should be viewed as a set of organizing principles which generally help us to avoid
the randomness inherent in some approaches to curriculum design in translator
training.
SUMMARY
In this chapter, we have explored possible applications of text linguistics to
translator training. Syllabus design, with the advanced translator trainee in mind,
was the main theme of the discussion and the basic question raised was: on what
basis could the selection, grading and presentation of materials for the training of
translators be carried out most effectively? It was argued that one way of tackling
the issues involved in this area of translator training would be to adopt a text
linguistic approach to the classification of texts.
The notion of ‘rhetorical purpose’ was used as the basis of a typology yielding
a set of text types (e.g. argumentation), a number of major sub-types (e.g. the
counter-argument) and a suggested list of text forms to illustrate the various
categories and sub-categories (e.g. the objective counter-argument). To
complement this primary categorization with a set of materials graded according
to degree of evaluativeness, another scale was introduced to account for the
degree of markedness, envisaged primarily in terms of departures from norms.
This approach to curriculum design was essentially informed by a basic
hypothesis, namely that different text types seem to place different demands on
the translator, with certain types and forms being more demanding than others.
The notion of ‘demand’ was defined in terms of the different translation
procedures employed to meet different criteria of adequacy demanded by
different text types.
Chapter 12
Assessing performance
The assessment of translator performance is an activity which, despite being
widespread, is under-researched and under-discussed. Universities, specialized
university schools of translating and interpreting, selectors of translators and
interpreters for government service and international institutions, all set tests or
competitions in which performance is measured in some way. Yet, in comparison
with the proliferation of publications on the teaching of translating (and an
emergent literature on interpreter training), little is published on the ubiquitous
activity of testing and evaluation. Even within what has been published on the
subject of evaluation, one must distinguish between the activities of assessing the
quality of translations (e.g. House 1981), translation criticism and translation
quality control on the one hand and those of assessing performance (e.g. Nord
1991:160–3) on the other. But while all of these areas deserve greater attention,
it is unhelpful to treat them as identical, or even similar, since each has its own
specific objectives (and consequences).
In this chapter, we shall concern ourselves only with issues relating to the
evaluation of performance and, because of the vastness of the subject, we shall
orientate our discussion mainly to the implications for performance evaluation of
the hypotheses advanced in this book. For example, it will be apparent to the
reader that some important issues in translating and interpreting, such as
specialized terminology and documentation, have not been among our
preoccupations. They are adequately covered in other publications.
Correspondingly, we do not propose, in what follows, to consider methods for
testing these particular translator/interpreter skills. But in each of Chapters 3 to
9 above, we have applied to some particular mode or field of translating activity
an aspect or aspects of the model of communication presented in Chapter 2. In
doing so, we have implicitly raised questions which are of relevance to the
business of assessment. Moreover, Chapter 10 has shown how important it will
be to incorporate beyond-the-sentence ‘errors’ into any scheme for assessment.
Before we can consider these questions and make proposals in response to them,
we need to have an appreciation of (1) what is unsatisfactory about the current
situation of translator (and interpreter) testing; (2) what insights and principles
from general theories of testing (including language testing in particular) need to
be brought to bear on the design and implementation of tests; and (3) what
proposals have been made from the perspective of translation studies for
imposing some kind of order and systematicity on assessment procedures. In the
light of these considerations, we shall then make some (necessarily tentative)
suggestions for moving translator performance assessment in the direction of
greater reliability and validity.