That left NASA rewarding luck and repeating problematic practices, failing
to rethink what qualified as an acceptable risk.
It wasn’t for a lack of ability.
After all, these were rocket scientists. As Ellen Ochoa observes, “When you
are dealing with people’s lives hanging in the balance,
you rely on
following the procedures you already have. This can be the best approach in
a time-critical situation, but it’s problematic if it prevents a thorough
assessment in the aftermath.”
Focusing on results might be good for short-term performance, but it
can be an obstacle to long-term learning. Sure enough, social scientists find
that when people are held accountable only for whether the outcome was a
success or failure, they are more likely to continue with ill-fated courses of
action. Exclusively praising and rewarding results
is dangerous because it
breeds overconfidence in poor strategies, incentivizing people to keep doing
things the way they’ve always done them. It isn’t until a high-stakes
decision goes horribly wrong that people pause to reexamine their practices.
We shouldn’t have to wait until a space shuttle explodes or an astronaut
nearly drowns to determine whether a decision was successful. Along with
outcome accountability, we can create process accountability by evaluating
how carefully different options are considered as people make decisions. A
bad decision process is based on shallow thinking. A good process is
grounded in deep thinking and rethinking, enabling people to form and
express independent opinions. Research
shows that when we have to
explain the procedures behind our decisions in real time, we think more
critically and process the possibilities more thoroughly.
Process accountability might sound like the opposite of psychological
safety, but they’re actually independent. Amy Edmondson finds that when
psychological safety exists without accountability, people tend to stay
within their comfort zone, and when there’s accountability but not safety,
people tend to stay silent in an anxiety zone. When we combine the two, we
create a learning zone. People feel free to experiment—and to poke holes in
one another’s experiments in service of making them better. They become a
challenge network.
One of the most effective steps toward process accountability that I’ve
seen
is at Amazon, where important decisions aren’t made based on simple
PowerPoint presentations. They’re informed by a six-page memo that lays
out a problem, the different approaches that have been considered in the
past, and how the proposed solutions serve the customer. At the start of the
meeting, to avoid groupthink, everyone reads the memo silently. This isn’t
practical in every situation, but it’s paramount when choices are both
consequential and irreversible. Long before the results of the decision are
known, the quality of the process can be evaluated based on the rigor and
creativity of the author’s thinking in the memo and in the thoroughness of
the discussion that ensues in the meeting.
In
learning cultures, people don’t stop keeping score. They expand the
scorecard to consider processes as well as outcomes:
Even if the outcome of a decision is positive, it doesn’t necessarily
qualify as a success. If the process was shallow, you were lucky. If the
decision process was deep, you can count it as an improvement: you’ve
discovered a better practice. If the outcome is negative, it’s a failure only if
the decision process was shallow. If the result was negative but you
evaluated the decision thoroughly, you’ve run a smart experiment.
The ideal time to run those experiments is when
decisions are relatively
inconsequential or reversible. In too many organizations, leaders look for
guarantees that the results will be favorable before testing or investing in
something new. It’s the equivalent of telling Gutenberg you’d only bankroll
his printing press once he had a long line of satisfied customers—or
announcing to a group of HIV researchers that you’d only fund their
clinical trials after their treatments worked.
Requiring proof is an enemy of progress. This is why companies like
Amazon use a principle of disagree and commit. As Jeff Bezos explained it
in an annual shareholder letter, instead of demanding convincing results,
experiments start with asking people to make bets. “Look, I know we
disagree on this but will you gamble with me on it?”
The goal in a learning
culture is to welcome these kinds of experiments, to make rethinking so
familiar that it becomes routine.
Process accountability isn’t just a matter of rewards and punishments.
It’s also about who has decision authority. In a study of California banks,
executives often kept approving additional loans to customers who’d
already defaulted on a previous one. Since the bankers had signed off on the
first loan, they were motivated to justify their initial decision. Interestingly,
banks were more likely to identify and write off problem loans when they
had high rates of executive turnover. If you’re not the person who greenlit
the initial loan, you have every incentive to rethink the previous assessment
of that customer.
If they’ve defaulted on the past nineteen loans, it’s