The title of this post comes from a paper by Andrew J. Foy, M.D., where he looks at the use of a specific piece of medical equipment (a pulmonary artery catheter) as an example of physicians being biased towards taking action, despite there being no evidence that it helps – this is known as intervention bias (which we encountered briefly here).
People are, in certain conditions, biased towards taking action, even when all the evidence shows that doing nothing would lead to better results. Unfortunately, things get more complicated; under different conditions people show “omission bias”, where they prefer doing nothing. So what are these biases, when do we show them and how can we get around them?
A significant segment of the business world has become very uncomfortable talking about intervention bias since the publication of “In Search of Excellence” by Tom Peters and Robert Waterman Jr. That book, which is genuinely one of the seminal texts on management, highlighted a “bias for action” as one of the eight themes of high-level performance. It revealed that, across a range of businesses, it is those who are prepared to challenge the status quo and attempt new things who are most successful – without action and risk-taking there’s no progress. This can make leaders feel concerned that either a) awareness of intervention bias diminishes a bias for action or b) people won’t understand how the two are compatible.
These two concepts do fit together and awareness of both is important. The bias for action that Peters and Waterman wrote about relates to having the courage to decide to explore something; the decision to try something new. Intervention bias (or “action bias” – as opposed to the desirable “bias for action”) relates to failing to reliably evaluate whether that action is or will be better than doing nothing.
We should talk about intervention bias; it can have extremely serious consequences. Three meta-analyses (quoted in Foy’s paper) of the use of pulmonary artery catheters found that their use was actually associated with increased mortality rates, except in one specific type of case (those complicated by cardiogenic shock) where it made no difference either way. Robert Myers found that government suffers from intervention bias in setting agricultural policy (specifically the setting of price support levels), meaning government isn’t spending its money effectively. Anthony Patt and Richard Zeckhauser found the phenomenon in relation to conservation and the environment. It’s even seen in football goalkeepers!
Intervening gives us a sense of control, even though it’s only a false sense. We also have a natural desire to do something when things seem to be going wrong; sitting around doing nothing doesn’t feel like a plausible option. Sometimes, however, we have to ride out a storm. We noted how businesses struggle when they move away from their fundamentals in the post I referenced earlier – when things start trending downwards businesses suddenly try to open up new markets, sell different products/services or hire and fire staff at random.
On the other hand, there are situations where people have a strong preference for doing nothing over action. There are the obvious social situations where this occurs (e.g. when you’re new at work you’re much less likely to suggest changes than a few months in), but it’s also seen in broader, and more worrying, situations.
Bazerman, Baron and Shonk found omission bias in the drug-approval policies of the U.S. Food and Drug Administration, in resistance to beneficial trade agreements (replicating Baron’s earlier finding) and in the neglect of world poverty (also found by Unger). Ritov and Baron found that it resulted in people failing to vaccinate their children – parents chose not to vaccinate even though the risk of death from the disease was much higher than the risk of harm from the vaccine (they preferred the lack of action).
This is related to the status quo bias, where we favour things staying the same over change. Samuelson and Zeckhauser found this in a range of scenarios, while identifying some handy examples. When New Coke was in the testing phase, executives could imagine their bonuses growing exponentially as they saw the taste test results (190,000 tests were done at a cost of $4m); people loved the sweeter taste of New Coke over traditional Coca-Cola. Unfortunately the executives didn’t account for status quo bias – consumers were attached to Coca-Cola, and the combination of marketing and a more desirable taste made no difference. They made the mistake of expecting (or maybe just hoping for) a rational response.
When Do We Show Each Bias?
Each of us differs in which of these biases we show and when we show them, but there are some circumstances that increase the likelihood we’ll favour one of these biases over the other.
This is because action bias and omission bias are not really opposites at all, but derive from the same thing (Patt and Zeckhauser again). People show action bias because they attach greater value to positive outcomes that they’ve played a role in than to those they haven’t. And people show omission bias because they attach greater value to negative outcomes that they’ve played a role in than to those they haven’t.
Personal involvement amplifies the salience of both positive and negative events, so we seek action where we anticipate a positive outcome and we avoid it where we think the outcome is a negative one.
This amplification throws our judgement off, so we sometimes prefer to take action that achieves a lesser benefit than one which could be achieved if we did nothing at all and vice versa. This feeds through to the framing effect, as shown by Kahneman and Tversky.
Kahneman and Tversky drafted positively and negatively framed versions of the same life or death scenario – 600 people have a deadly disease – and gave participants two treatment options to choose from (the table below, taken from Wikipedia, is a neat way of presenting it):
| Framing | Option A | Option B |
|---------|----------|----------|
| Positive | “Saves 200 lives” | “A 33% chance of saving all 600 people, 66% possibility of saving no one.” |
| Negative | “400 people will die” | “A 33% chance that no people will die, 66% probability that all 600 will die.” |
The mathematically-minded amongst you will notice that all four options have the same expected outcome – running through the numbers, on average 200 people live and 400 die in every case.
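The arithmetic is quick to check; a minimal sketch using the numbers from the scenario above:

```python
# Expected outcomes for the 600-person disease scenario.
total = 600

# Option A: a certain outcome in both framings.
option_a_saved = 200                    # positive frame: "saves 200 lives"
option_a_die = total - option_a_saved   # negative frame: "400 people will die"

# Option B: a 1-in-3 chance everyone is saved, 2-in-3 chance no one is.
p_all_saved = 1 / 3
option_b_saved = p_all_saved * total        # expected survivors: 200
option_b_die = (1 - p_all_saved) * total    # expected deaths: 400

# Both options have identical expected outcomes.
print(option_a_saved, option_b_saved)  # 200 and 200.0
print(option_a_die, option_b_die)      # 400 and 400.0
```

The only difference between the options is certainty versus risk; the expected numbers of lives saved and lost are the same.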
That’s not how people responded though; in the positively framed scenario, people preferred the certainty of saving 200 lives (selected by 72%), while in the negative scenario people preferred having some chance of nobody dying, even if it risked everybody dying (chosen by 78%). Given what we saw above, this makes sense – people attach so much value to positive outcomes that they want the certainty of getting a positive, while people abhor a negative so will take the risk of everyone dying for the chance of everybody surviving.
Can We Do Anything About It?
I’ve previously mentioned using your null hypothesis (or “do nothing” option) as a serious option when assessing making a change. This helps you handle both action and omission bias – it forces you to consider it as a possibility when you get over-excited about doing something positive and it makes you do a proper analysis of the impact of doing nothing, when you’re trying to avoid doing something negative (even though the outcome of action would be less negative than doing nothing). Even when we show omission bias, we’re not actively choosing to do nothing; we’re choosing to avoid doing something.
I’ve seen a lot of business cases that include a null hypothesis. Almost all of them include it out of obligation, rather than with any serious consideration – there should be as much analysis of this option as of any other. What are the trends that we’re already seeing? What are the chances that something prompting action was a fluke event that will fade away? What could the resource not tied up in delivering the active options achieve otherwise? (Businesses often use net present value, but this only works properly if the do-nothing option has been analysed seriously.)
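A net present value comparison only means anything if the do-nothing baseline is costed as seriously as the active option. The sketch below is purely illustrative – the discount rate and every cash flow are invented for the example – but it shows the shape of the comparison: doing nothing usually has its own (non-zero) cash flows, and it is the *difference* in NPV that matters.

```python
def npv(rate, cashflows):
    """Net present value of a series of cash flows, cashflows[0] occurring now."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

rate = 0.08  # hypothetical discount rate

# Hypothetical annual cash flows, year 0 first.
invest = [-500, 180, 220, 260]    # upfront cost, then projected returns
do_nothing = [0, 120, 110, 100]   # the existing business, slowly fading

difference = npv(rate, invest) - npv(rate, do_nothing)
print(round(difference, 2))
# A positive difference favours acting; a negative one favours the status quo.
```

With these invented numbers, doing nothing comes out ahead – the point being that treating the baseline as “zero” would have made the investment look attractive by default.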
I introduced framing earlier because it has a role to play here – not in nullifying the biases, but harnessing them. If you frame a situation positively then you are more likely to see action, while if you frame things negatively then you won’t. If we are looking for people, for example, to put forward ideas for change, then we need to frame that around the positives that can be gained rather than the negatives that can be avoided – for example, we shouldn’t be saying that ideas have helped us get less things wrong, but that they increase the number of things we get right. And if we want people to seriously consider not taking action – such as the medical example I opened with – then we need to highlight what could be lost by taking action.
So we should think about action and omission bias both in relation to ourselves (where we’re trying to limit them) and others (where we might be trying to either limit or harness them). The next time people around you start to flap or panic, think about the situation and which of these biases are being shown, and you’ll significantly increase the chance your organisation makes a good decision.