Just back from an amazing week spent with thinkers, doers and executives considering culture at Novartis. Amy Edmondson, my dear friend, was there, and both of us remain stunned that people are still confused about failure after we have spent literally decades writing about it. For the record – there are intelligent failures, stupid failures and complex systems failures, all of which have different implications.
I should mention – we’re not dinosaurs, even though there is one in the background!
Intelligent failures – Here’s what I wrote in Harvard Business Review in – ahem – 2010!
Despite widespread recognition that challenging times place unpredictable demands on people and businesses, I still run across many managers who would prefer to avoid the logical conclusion that stems from this: failure is a lot more common in highly uncertain environments than it is in better-understood situations. Instead of learning from failures, many executives seek to keep them hidden or to pretend that they were all part of a master plan and no big deal. To those executives, let me argue that an extraordinarily valuable corporate resource is being wasted if learning from failures is inhibited.
Naturally, to an executive raised on the concept of “management by exception,” any failure at all seems intolerable. This world view is reinforced by the widespread adoption of quality techniques such as Six Sigma, in which the goal is to stamp out variations (by definition, failures) in the pursuit of quality. Managers are supposed to be right, aren’t they? And having the right answer is just as valuable in management as it was in grade school, right?
For many years, scholars such as my esteemed colleague Sim Sitkin of Duke (see his article “Learning through failure: The strategy of small losses”) have been studying how organizations learn, and they have come to the conclusion that intelligent failures are crucial to the process of organizational learning and sense-making. Failures show you where your assumptions are wrong. Failures demonstrate where future investment would be wasted. And failures can help you identify those among your team with the mettle to persevere and creatively change direction as opposed to charging blindly ahead. Further, failures are about the only way in which an organization can re-set its expectations for the future in any meaningful way.
Not all failures, of course, are going to be useful from a learning point of view. The concept of intelligent failure makes a difference here. Sitkin’s criteria for intelligent failures are:
- They are carefully planned, so that when things go wrong you know why
- They are genuinely uncertain, so the outcome cannot be known ahead of time
- They are modest in scale, so that a catastrophe does not result
- They are managed quickly, so that not too much time elapses between outcome and interpretation
- Something about what is learned is familiar enough to inform other parts of the business
I would add a couple of other criteria:
- Underlying assumptions are explicitly declared
- These assumptions can be tested at specific checkpoints, identified in advance, since planned results may not match actual outcomes
If your organization can approach uncertain decisions as experiments and adopt the idea of intelligently failing, so much more can be learned (so much more quickly) than if failures or disappointments are covered up.
So ask yourself: are we genuinely reaping the benefit of the investments we’ve made in learning under uncertain conditions? Do we have mechanisms in place to benefit from our intelligent failures? And, if not, who might be taking advantage of the knowledge we are depriving ourselves of?
But we’re still hearing about how fear of failure is depriving organizations of valuable learning
And yet, even in organizations that are determined to embrace learning under uncertainty, fear of failure persists.
Perhaps, and I’m hopeful, Amy’s forthcoming book Right Kind of Wrong will help people in organizations figure this out.
Let’s review it one more time.
In genuinely uncertain conditions, the only way to learn is by trying things out. That means having a hypothesis about what could happen, undergoing an experience that tests that hypothesis, and then finding out whether the hypothesis was supported or not. As I’ve said for a long time, failing (as in trying something out that didn’t work the way you’d hoped) is the only way to learn in genuinely uncertain environments. This is what Sim Sitkin called “intelligent failure,” and it is indeed one of the classes of failure that Amy discusses in her book.
Then we have the failures that Amy calls “basic.” Yup. These are the stupid ones that we incur due to inattention, carelessness, or just the wrong thing happening at the wrong time. Dropping a tray’s worth of glasses. Missing the turn-off from the highway. And any number of other human moments of just basically screwing up. These don’t teach you much, other than perhaps that next time you should make a checklist, double-check the instructions or just pay attention!
As she says in an HBR article of nearly the same vintage as mine, “most failures in this category can indeed be considered ‘bad.’ They usually involve deviations from specification in the closely defined processes of high-volume or routine operations in manufacturing and services. With proper training and support, employees can follow those processes consistently. When they don’t, deviance, inattention, or lack of ability is usually the reason.
But in such cases, the causes can be readily identified and solutions developed. Checklists (as in the Harvard surgeon Atul Gawande’s recent best seller The Checklist Manifesto) are one solution. Another is the vaunted Toyota Production System, which builds continual learning from tiny failures (small process deviations) into its approach to improvement. As most students of operations know well, a team member on a Toyota assembly line who spots a problem or even a potential problem is encouraged to pull a rope called the andon cord, which immediately initiates a diagnostic and problem-solving process. Production continues unimpeded if the problem can be remedied in less than a minute. Otherwise, production is halted—despite the loss of revenue entailed—until the failure is understood and resolved.”
The scariest failures of all are systemic. These are the ones in which small and often unnoticed events intersect to create system meltdowns. The big lesson here is that you absolutely need to catch the small failures early, because they become the early warning signs for potentially catastrophic failures later on.
I’ll quote Amy here again:
“My research has shown that failure analysis is often limited and ineffective—even in complex organizations like hospitals, where human lives are at stake. Few hospitals systematically analyze medical errors or process flaws in order to capture failure’s lessons. Recent research in North Carolina hospitals, published in November 2010 in the New England Journal of Medicine, found that despite a dozen years of heightened awareness that medical errors result in thousands of deaths each year, hospitals have not become safer.”
Fortunately, there are shining exceptions to this pattern, which continue to provide hope that organizational learning is possible. At Intermountain Healthcare, a system of 23 hospitals that serves Utah and southeastern Idaho, physicians’ deviations from medical protocols are routinely analyzed for opportunities to improve the protocols. Allowing deviations and sharing the data on whether they actually produce a better outcome encourages physicians to buy into this program. (See “Fixing Health Care on the Front Lines,” by Richard M.J. Bohmer, HBR April 2010.)…
So, a helpful way of thinking about failure?
Stop the blame game. Design experiments that reflect real life. Stop imposing the expectation of meeting plans on people doing things in unpredictable environments. And recognize that failure may need to be redefined.