The Second Way: The Principles of Feedback
While the First Way describes the principles that enable the fast flow of work from left to right, the Second Way describes the principles that enable the reciprocal fast and constant feedback from right to left at all stages of the value stream. Our goal is to create an ever safer and more resilient system of work.
This is especially important when working in complex systems, where the earliest opportunity to detect and correct errors is typically when a catastrophic event is already underway, such as a manufacturing worker being hurt on the job or a nuclear reactor meltdown in progress.
In technology, our work happens almost entirely within complex systems with a high risk of catastrophic consequences. As in manufacturing, we often discover problems only when large failures are underway, such as a massive production outage or a security breach resulting in the theft of customer data.
We make our system of work safer by creating fast, frequent, high-quality information flow throughout our value stream and our organization, which includes feedback and feedforward loops. This allows us to detect and remediate problems while they are smaller, cheaper, and easier to fix; avert problems before they cause catastrophe; and create organizational learning that we integrate into future work. When failures and accidents occur, we treat them as opportunities for learning rather than as causes for punishment and blame.
To achieve all of the above, let us first explore the nature of complex
systems and how they can be made safer.
WORKING SAFELY WITHIN COMPLEX SYSTEMS
One of the defining characteristics of a complex system is that it defies any
single person’s ability to see the system as a whole and understand how all
the pieces fit together. Complex systems typically have a high degree of interconnectedness of tightly coupled components, and system-level behavior cannot be explained merely in terms of the behavior of the system components.
Dr. Charles Perrow studied the Three Mile Island crisis and observed that it was impossible for anyone to understand how the reactor would behave in all circumstances and how it might fail. When a problem was underway in one component, it was difficult to isolate from the other components and quickly flowed through the paths of least resistance in unpredictable ways.
Dr. Sidney Dekker, who also codified some of the key elements of safety
culture, observed another characteristic of complex systems: doing the same
thing twice will not predictably or necessarily lead to the same result. It is this
characteristic that makes static checklists and best practices, while valuable,
insufficient to prevent catastrophes from occurring. (See Appendix 5.)
Therefore, because failure is inherent and inevitable in complex systems, we
must design
a safe system of work, whether in manufacturing or technology,
where we can perform work without fear, confident that any errors will be
detected quickly, long before they cause catastrophic outcomes, such as
worker injury, product defects, or negative customer impact.
After he decoded the causal mechanism behind the Toyota Production System as part of his doctoral thesis at Harvard Business School, Dr. Steven Spear stated that designing perfectly safe systems is likely beyond our abilities, but we can make it safer to work in complex systems when the following four conditions are met: