2. Deviations from normal (upwards and downwards): observe historical trends over 3–18 months, depending on the activity.
3. Cluster-based: a jump in the data may constitute a natural threshold.
4. Gradually reaching an ideal objective (e.g. control effectiveness): when a control fails, say, 30% of the time, setting up gradual quality criteria may be more realistic than demanding high performance from day one.
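To make the deviations-from-normal approach concrete, here is a minimal sketch (hypothetical numbers and function names) that derives symmetric amber and red bands from a short history of monthly KRI observations; the 2- and 3-standard-deviation multipliers are illustrative assumptions, not a prescription:

```python
from statistics import mean, stdev

def deviation_thresholds(history, amber_sd=2.0, red_sd=3.0):
    """Derive amber/red bands from a KRI's historical observations.

    `history` is a list of past monthly values (3-18 months of data,
    per the guidance above). Bands are symmetric, so both upward and
    downward deviations from normal are flagged.
    """
    mu, sd = mean(history), stdev(history)
    return {
        "normal": (mu - amber_sd * sd, mu + amber_sd * sd),
        "amber": (mu - red_sd * sd, mu + red_sd * sd),
        # anything outside the amber band rates red
    }

def rate(value, bands):
    """Rate one observation against the derived bands."""
    lo, hi = bands["normal"]
    if lo <= value <= hi:
        return "green"
    lo, hi = bands["amber"]
    return "amber" if lo <= value <= hi else "red"

# 12 months of, say, monthly reconciliation breaks (illustrative data)
history = [4, 5, 6, 5, 4, 5, 6, 5, 4, 6, 5, 5]
bands = deviation_thresholds(history)
print(rate(5, bands), rate(7, bands), rate(12, bands))  # green amber red
```

The point of deriving bands from history rather than picking a round number is that the threshold then reflects what "normal" actually looks like for that activity.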
In my experience, the reason firms struggle with KRI thresholds is often linked
to the definition of the metric itself rather than the choice of threshold. Many KRIs
FIGURE 14.3 Example of a KRI dashboard – errors in credit handling (key risk errors in the credit handling process). Columns: risk cause/sub-risk; KRI; measures required; thresholds; actual score (G/A/R); comments/action. Rows:

1. Legal risk/late involvement of legal – KRI: process stage at which legal is contacted by the credit department – Measures: credit process structure, time of contact between legal and credit department – Thresholds: Stage 2 / Stage 3 – Score: A – Action: plan a resolution.
2. Legal risk/legal complexity – KRI: multiple jurisdictions per deal – Measures: group structure and geographical spread of clients – Thresholds: 3 / 5 – Score: G.
3. Legal risk/exposure/pace of change – KRI: # updates in rules and regulations impacting the department per quarter – Measures: legal and compliance update log – Thresholds: TBD with legal – Score: G – Action if elevated: reinforce resources in the legal team.
4. Legal risk/familiarity and expertise – KRI: number of similar deals previously executed – Measures: deals categorization and recording – Threshold: 2 (if 0: yellow) – Score: A – Action: elevate controls.
5. Legal risk/missing documentations – KRI: # of missing documents in file – Measures: sampling – Score: R – Action: investigate and solve for the future.
6. Process/data capture error – KRI: # reconciliation breaks between processing stages – Measures: control testing (sampling) – Thresholds: 2% / 3% / 5% – Score: A – Action: plan a resolution.
7. Process/data capture error – KRI: # missing reconciliations between processing stages – Measures: control results – Thresholds: 15% / 20% / 25% – Score: A – Action: improve processes and training.
8. Human error/competency (mistakes) – KRI: drop in average # years' experience in the business – Measures: years of activity in the department, per staff member – Threshold: <25% drop – Score: G.
9. Human error/competency (mistakes) – KRI: # of key clients assigned to inexperienced (<1 yr) account managers – Measures: key client identification, account manager, years of activity – Threshold: 0 (red: 3) – Score: A – Action: provide support/mentoring.
10. Human error/overload (slips) – KRI: # of key clients per account manager – Measures: key client identification, account manager – Threshold: 10 (red: 20) – Score: G.
are defined in numbers: number of sick days, turnover ratio, number of vulnerabilities,
number of administrative rights, and so on. So where do you begin and where do you
end? When KRIs are defined across such a wide spectrum, thresholds are very hard
to define. The rule of thumb I recommend in these instances is “Your metric is a KRI
when even one is an issue.” Let’s take IT vulnerabilities: is 10 the right number? What
about 100, or 1,000? I know an organization with 50,000 IT vulnerabilities. Even if
50,000 seems a bit much, KRI thresholds on absolute numbers of vulnerabilities seem
like shooting in the dark. A better KRI would be: number of critical vulnerabilities
unpatched within the required deadline. Even one is an issue: it is a control breach and
a breach of policy. Equifax’s internal policy is to patch critical vulnerabilities within 48
hours, which is pretty standard in the industry. However, it identified a vulnerability that
was then left unpatched for two months, which allowed hackers to access the personal
details of 145 million customers.
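The redefined metric is easy to operationalise. The sketch below (hypothetical record layout and function name) counts critical vulnerabilities that have been open, or were closed, beyond a 48-hour patch deadline; any non-zero value is a policy breach and therefore rates red:

```python
from datetime import datetime, timedelta

PATCH_DEADLINE = timedelta(hours=48)  # critical vulnerabilities, per policy

def overdue_critical(vulns, now):
    """KRI: number of critical vulnerabilities not patched within deadline.

    Each record is a dict with 'severity', 'discovered' and 'patched'
    (None if still open). Even one overdue item is a control breach.
    """
    overdue = 0
    for v in vulns:
        if v["severity"] != "critical":
            continue
        closed = v["patched"] or now  # still open: measure age so far
        if closed - v["discovered"] > PATCH_DEADLINE:
            overdue += 1
    return overdue

now = datetime(2018, 10, 30, 12, 0)
vulns = [
    {"severity": "critical", "discovered": now - timedelta(hours=12), "patched": None},
    {"severity": "critical", "discovered": now - timedelta(days=60), "patched": None},
    {"severity": "low", "discovered": now - timedelta(days=90), "patched": None},
]
print(overdue_critical(vulns, now))  # 1
```

Note that the low-severity backlog, however large, does not move this KRI: the metric isolates exactly the items where "even one is an issue".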
A U.S. regulator on one of my courses in New York was investigating KRIs
around IT administrator rights. An absolute number of those rights would be unhelp-
ful. Instead, as a KRI, we considered using the number of rights beyond what the
organization might need – say one or two per department. KRIs related to access
management, a key control in IT security, are very relevant, such as overdue access
revision or unchanged access following a change of job, to mention just two examples.
Table 14.2 shows some of the ways KRI metrics could be transformed to make the
definition of threshold easier and more linked to risk appetite.
Governance
Governance around indicators is common and simple, and broadly aligned with the
response to colors in the RCSA matrix (green: do nothing; amber: monitor; red: act).
Some firms have four colors and some act on amber. The practice has moved away from
the shades of red concept, where the severity was judged according to the department
or the type of metric considered. Nowadays, red means red for all KRIs: there are no
levels of severity – at least in best practice. This underlines the importance of selecting
the right thresholds for indicators. Thresholds can vary per department or business unit
when risk appetite varies, but governance must be uniform across the firm; a red is
a red.
Like all directives, governance and control must be defined ahead of time. It’s
not when an indicator turns red that firms should wonder who is in charge of what.
Typically, KRIs are identified and designed in collaboration with the business and the
risk function, and thresholds are signed off by the business. Indicators have an owner,
in charge of taking actions when the value of the indicator enters a risky zone – which
will be amber or red, depending on the firm. To avoid conflicts of interest, KRI values
ideally should be captured automatically, or be directly and objectively observable, so
that the indicator owner is not tempted to report a value slightly under the threshold,
to avoid having to take action.
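The principle that thresholds may vary while governance stays uniform can be sketched as follows (hypothetical thresholds and names, assuming a KRI where higher values are worse):

```python
# Uniform RAG governance: thresholds may differ per business unit,
# but the response to each colour is the same firm-wide - a red is a red.
RESPONSES = {"green": "do nothing", "amber": "monitor", "red": "act"}

def rag_status(value, amber_threshold, red_threshold):
    """Map a KRI value to a colour using unit-specific thresholds."""
    if value >= red_threshold:
        return "red"
    if value >= amber_threshold:
        return "amber"
    return "green"

# Same KRI value, unit-specific thresholds, uniform governance:
for unit, (amber, red) in {"retail": (5, 10), "corporate": (3, 6)}.items():
    colour = rag_status(4, amber, red)
    print(unit, colour, "->", RESPONSES[colour])
```

The same value of 4 rates green in one unit and amber in another, because risk appetite differs; what never differs is what each colour obliges the indicator owner to do.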
TABLE 14.2 Examples of more risk-sensitive KRIs for easier thresholds