3. KRIs and issue monitoring: these are dashboard tables with thresholds and corresponding status and colors.
4. Risk appetite KRIs: now a general practice in firms, risk appetite statements are complemented by KRI metrics, as detailed in the previous chapter.
5. Emerging risks/horizon scanning: this practice became general 3–5 years ago. It is commonly applied to regulatory risk and changes in the compliance and regulatory environment but should not be limited to that.
6. Action plans and follow-up: tracking mitigating action plans, following large incidents or for risks assessed above appetite, is essential in risk monitoring and reporting.
Best practice includes clear accountability and time frames for each action and action owner, with regular documentation and tracking, typically monthly. Most disciplined firms have a “big zero” objective: zero overdue action plans and zero overdue audit recommendations. Overdue ratios are rightly called “discipline indicators.”
Examples of poor practice include firms that commonly miss deadlines or continually
postpone planned dates.
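The dashboard mechanics described in points 3 and 6 can be sketched in a few lines of code: a metric value is compared against thresholds to produce a red-amber-green status. This is a minimal illustration only; the KRI names and threshold values below are invented, not taken from any firm's actual dashboard.

```python
# Illustrative KRI dashboard rows: map each metric value to a
# red/amber/green status against hypothetical thresholds.

def kri_status(value, amber_threshold, red_threshold):
    """Return a RAG status, assuming higher values mean more risk."""
    if value >= red_threshold:
        return "red"
    if value >= amber_threshold:
        return "amber"
    return "green"

kris = [
    # (name, current value, amber threshold, red threshold)
    ("Overdue action plans", 0, 1, 5),            # the "big zero" objective
    ("Overdue audit recommendations", 3, 1, 5),   # a "discipline indicator"
    ("System downtime (hours/month)", 2.5, 2.0, 4.0),
]

for name, value, amber, red in kris:
    print(f"{name}: {value} -> {kri_status(value, amber, red)}")
```

Note that the "big zero" objective corresponds to an amber threshold of 1: a single overdue item is already off-green.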
RISK REPORTING CHALLENGES¹
Management reporting is challenging – risk reporting even more so. It is often hard
to find the right balance between too much information and too little. When you have
too much, you may end up with 200-page risk documents that are seldom read, which
means important insights are lost. If you have too little, information becomes so thin
that it is meaningless.
One of the challenges of risk reporting is how to filter risk information upwards
and what form it should take. Group management and different departments
and business units don’t all need the same type, or amount, of risk information.
Reporting information to the next level of management implies choosing what
to escalate and what to aggregate. Some key information deserves to be communicated to the next decision level without alteration, while the rest can be summarized
¹ A previous version of this section was published in Chapelle, Oct 2015, “Have your cake and eat it”, risk.net; reproduced in Chapelle, 2017, Reflections on Operational Risk Management, Risk Books.
in aggregated statements. High risks, near misses and defective key controls will typically be escalated without alteration, while other information will be summarized in aggregated reporting.
Separating Monitoring and Reporting
There is a difference between risk monitoring and risk reporting: not everything that is monitored needs to be reported. Monitoring typically remains at operations level; only alerts requiring escalation, alongside summary data, are reported to the next level of management.
Monitoring applies to risk assessment units (RAUs), either per business line or per process. Increasingly, institutions try to operate process-based risk assessments, even though they are organized vertically, in order to analyze risks at the touch points between these processes. Best practice for operational risk monitoring focuses on controls
(design, effectiveness and control gap) more than on risks. Controls are observable
while risks are not. Mature banks require evidence of control testing to rate a control
“effective” and do not solely rely on unevidenced self-assessment. In some banks, risk
control self-assessment is simply called risk control assessment (RCA).
Risk registers containing risk assessments are not necessarily included in the cen-
tralized reporting. The risk function maintains the risk register for all risks at inherent
levels, controls and residual levels. Best practice in reporting follows a clear risk taxonomy (categories of risk, causes, impacts and controls) and assesses and reports against that taxonomy. Risk taxonomy does not have to follow the Basel categories, which are
now fairly outdated given the evolution of the sector. The only requirement, from a
regulatory perspective, is for it to be possible to map the firm’s risk categories with the
Basel categories.
Best practice in mature firms is to select and report risk information on a “need to
know” basis, depending on the audience and the level of management:
■ Process management and risk management levels: “All you need to know to monitor.”
  ■ Process management and risk management access a full set of metrics to perform their day-to-day tasks of monitoring activities and risks. Only alerts needing escalation and synthetic data are reported to the next management level.
■ Department heads: “All you need to know to act.”
  ■ Out of the full set of monitoring metrics, division heads receive only the information that requires them to take action, such as process halts needing intervention or incidents needing early mitigation. The rest is periodically summarized to show the global picture, but this will not raise any particular concerns or require any particular action.
■ Executive committee: “All you need to know to decide.”
  ■ Executive directors and board members take the decisions that will influence the direction of the firm. They need the most appropriate type of information to help them fulfill their mission – for example, a set of leading indicators telling top management where to focus more attention and resources; trends showing deteriorating performance; information on progress against plans; and any unexpected good or bad results reflecting the materialization of risks against objectives. Where top-down environment screening impacts the firm’s strategy, it should also be part of the risk reporting to executives.
I would advise against reporting solely on red flags without the balance of a
summary report, as it can give a biased, overly pessimistic view of the firm’s risk
level. If 80% of the indicators are green, this should be reported, alongside the more
problematic issues, to give a balanced view of the situation. Most firms now adopt
this approach.
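The balanced-reporting principle above can be sketched as a small filtering routine: escalate red items individually, but also report the overall status distribution so that a mostly-green picture remains visible alongside the problems. The indicator names and statuses below are invented for illustration.

```python
# Hypothetical sketch of balanced upward reporting: red items are
# escalated by name, while the full RAG distribution is summarized
# so green results are reported alongside the problematic ones.
from collections import Counter

indicators = {
    "payment processing": "green",
    "client onboarding": "green",
    "data quality": "amber",
    "third-party outage": "red",
    "reconciliation breaks": "green",
}

def summarize(indicators):
    counts = Counter(indicators.values())
    total = len(indicators)
    summary = {status: f"{100 * counts.get(status, 0) / total:.0f}%"
               for status in ("green", "amber", "red")}
    escalations = [name for name, s in indicators.items() if s == "red"]
    return summary, escalations

summary, escalations = summarize(indicators)
print(summary)      # distribution across green/amber/red
print(escalations)  # red items escalated without alteration
```

The report to the next level then carries both elements: the named escalations and the percentage of indicators at each status.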
Aggregating Risk Data
Unlike financial risk reporting, operational risk reporting faces the additional challenge of aggregating qualitative data. Risk scores, red-amber-green ratings and other indicators
are discrete, qualitative and completely unfit for any arithmetic manipulation. A risk
rated “5” (extreme) alongside a risk rated “1” (low) is not at all equivalent to two risks
rated “3” (moderate). Calculating sums on ratings brings to mind the old joke about
recording a fine average body temperature when your head is in the oven and your
feet are in the refrigerator. Even when expressed in numbers, risk ratings are no more
quantitative or additive than colors or adjectives.
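The non-additivity point can be seen numerically in a few lines. The portfolios and scores below are illustrative only: two sets of risks with identical average ratings carry very different risk profiles, which is exactly why arithmetic on ordinal scores misleads.

```python
# Why ordinal risk ratings are not additive: identical averages,
# very different risk profiles. Scores are illustrative.

portfolio_a = [5, 1]   # one extreme risk, one low risk
portfolio_b = [3, 3]   # two moderate risks

mean_a = sum(portfolio_a) / len(portfolio_a)
mean_b = sum(portfolio_b) / len(portfolio_b)
assert mean_a == mean_b == 3.0   # the average hides the extreme risk

# Worst-case aggregation, by contrast, distinguishes them immediately:
print(max(portfolio_a))  # 5: the extreme risk surfaces
print(max(portfolio_b))  # 3: moderate across the board
```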
Three options are worth considering for aggregating qualitative data:
■ Conversion and addition: qualitative metrics are converted into a common monetary unit, which can then be quantitatively manipulated. Some large banks convert the non-financial impact results of their RCSA (reputation, regulatory, etc.) into monetary units so that impacts can be summed and risks aggregated, for additivity. A variant of this approach is presented in the case study, where KRIs are converted into a percentage score above risk appetite. This approach requires a number of assumptions and approximations that some firms find uncomfortable, while others happily apply it.
■ Worst-case: the worst score of a dataset, such as a group of key risk indicators, is reported as the aggregated value, e.g. all is red if one item is red. It is the most conservative form of reporting. This is appropriate when tolerance to risk is minimal, when data are reasonably reliable and when indicators are strong predictors of a risk. This approach has the advantage of being prudent but the disadvantage of being potentially over-alarming, and even unsafe if it generates so many alerts that management simply disregards red alerts or is unable to distinguish the signal from the noise.