Don’t Just Do Something, Stand There!

To cut or not to cut…

The title of this post comes from a paper by Andrew J. Foy, M.D., in which he looks at the use of a specific piece of medical equipment (a pulmonary artery catheter) as an example of physicians being biased towards taking action, despite there being no evidence that it helps – this is known as intervention bias (which we encountered briefly here).

People are, in certain conditions, biased towards taking action, even when all the evidence shows that doing nothing would lead to better results. Unfortunately, things get more complicated: under different conditions people show “omission bias”, where they prefer doing nothing. So what are these biases, when do we show them, and how can we get around them?

Intervention/Action Bias

A significant segment of the business world has become very uncomfortable talking about intervention bias since the publication of “In Search of Excellence” by Tom Peters and Robert Waterman Jr. That book, which is genuinely one of the seminal texts on management, highlighted a “bias for action” as one of the eight themes of high-level performance. It revealed that, across a range of businesses, it is those who are prepared to challenge the status quo and attempt new things who are most successful – without action and risk-taking there’s no progress. This can make leaders feel concerned that either a) awareness of intervention bias diminishes a bias for action or b) people won’t understand how the two are compatible.

These two concepts do fit together and awareness of both is important. The bias for action that Peters and Waterman wrote about relates to having the courage to decide to explore something; the decision to try something new. Intervention bias (or “action bias” – as opposed to the desirable “bias for action”) relates to failing to reliably evaluate whether that action is or will be better than doing nothing.

We should talk about intervention bias; it can have extremely serious consequences. Three meta-analyses (quoted in Foy’s paper) of the use of pulmonary artery catheters found that their use was actually associated with increased mortality rates, except in one specific type of case (those complicated by cardiogenic shock) where it made no difference either way. Robert Myers found that government suffers from intervention bias in setting agricultural policy (specifically the setting of price support levels), meaning government isn’t spending its money effectively. Anthony Patt and Richard Zeckhauser found the phenomenon in relation to conservation and the environment. It’s even seen in football goalkeepers!

Intervening gives us a sense of control, even though it’s only a false sense. We also have a natural desire to do something when things seem to be going wrong; sitting around doing nothing doesn’t feel like a plausible option. Sometimes, however, we have to ride out a storm. We noted how businesses struggle when they move away from their fundamentals in the post I referenced earlier – when things start trending downwards businesses suddenly try to open up new markets, sell different products/services or hire and fire staff at random.

Omission Bias

On the other hand, there are situations where people have a strong preference for doing nothing over action. There are the obvious social situations where this occurs (e.g. when you’re new at work you’re much less likely to suggest changes than a few months in), but it’s also seen in broader, and more worrying, situations.

Bazerman, Baron and Shonk found omission bias in the drug-approval policies of the U.S. Food and Drug Administration, in resistance to beneficial trade agreements (replicating Baron’s earlier finding) and in the neglect of world poverty (also found by Unger). Ritov and Baron found that it resulted in people failing to vaccinate their children – parents chose not to vaccinate even though the risk of death from the disease was much higher than the risk of harm from the vaccine (they preferred the lack of action).

This is related to the status quo bias, where we favour things staying the same over change. Samuelson and Zeckhauser found this in a range of scenarios, while identifying some handy examples. When New Coke was in the testing phase, executives could imagine their bonuses growing exponentially as they saw the taste test results (190,000 tests were done at a cost of $4m); people loved the sweeter taste of New Coke over traditional Coca-Cola. Unfortunately they didn’t account for status quo bias – consumers were attached to Coca-Cola, and the combination of marketing and a more desirable taste made no difference. Executives made the mistake of expecting (or maybe just hoping for) a rational response.

When Do We Show Each Bias?

Each of us differs in which of these biases we show and when we show them, but there are some circumstances that increase the likelihood that we’ll favour one of these biases over the other.

This is because action bias and omission bias are not really opposites at all, but derive from the same thing (Patt and Zeckhauser again). People show action bias because they attach greater value to positive outcomes that they’ve played a role in than those they haven’t. And people show omission bias because they attach greater value to a negative outcome that they’ve played a role in than those that they haven’t.

Personal involvement amplifies the salience of both positive and negative events, so we seek action where we anticipate a positive outcome and we avoid it where we think the outcome is a negative one.

This amplification throws our judgement off, so we sometimes prefer to take action that achieves a lesser benefit than one which could be achieved if we did nothing at all and vice versa. This feeds through to the framing effect, as shown by Kahneman and Tversky.

Kahneman and Tversky drafted positively and negatively framed versions of the same life-or-death scenario – 600 people have a deadly disease – and gave participants two treatment options to choose from (the table below, taken from Wikipedia, is a neat way of presenting it):

Framing  | Option A              | Option B
Positive | “Saves 200 lives”     | “A 33% chance of saving all 600 people, 66% possibility of saving no one.”
Negative | “400 people will die” | “A 33% chance that no people will die, 66% probability that all 600 will die.”

The mathematically-minded amongst you will notice that these all have the same expected outcome – running through the numbers, every option leaves 400 people dying on average.
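The arithmetic behind that claim can be sketched in a few lines of Python (the probabilities and payoffs come straight from the scenario; the function name is just for illustration):

```python
# Expected outcomes for Kahneman and Tversky's disease scenario (600 people).

def expected_survivors(outcomes):
    """Expected number of survivors, given (probability, survivors) pairs."""
    return sum(p * s for p, s in outcomes)

option_a = [(1.0, 200)]            # "Saves 200 lives" - a certainty
option_b = [(1/3, 600), (2/3, 0)]  # 1/3 chance all saved, 2/3 chance none saved

# Both options leave an expected 200 survivors - i.e. 400 expected deaths.
print(expected_survivors(option_a))  # 200.0
print(expected_survivors(option_b))  # 200.0
```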

That’s not how people responded though; in the positively framed scenario, people preferred the certainty of saving 200 lives (selected by 72%), while in the negative scenario people preferred having some chance of nobody dying, even if it risked everybody dying (chosen by 78%). Given what we saw above, this makes sense – people attach so much value to positive outcomes that they want the certainty of getting a positive, while people abhor a negative so will take the risk of everyone dying for the chance of everybody surviving.

Can We Do Anything About It?

I’ve previously mentioned treating your null hypothesis (or “do nothing” option) as a serious option when assessing a change. This helps you handle both action and omission bias – it forces you to consider doing nothing as a possibility when you get over-excited about doing something positive, and it makes you properly analyse the impact of doing nothing when you’re trying to avoid doing something negative (even though the outcome of action would be less negative than the outcome of inaction). Even when we show omission bias, we’re not actively choosing to do nothing; we’re choosing to avoid doing something.

I’ve seen a lot of business cases that include a null hypothesis. Almost all of them put it in out of obligation rather than with any serious consideration – there should be as much analysis of this option as of any other. What are the trends that we’re already seeing? What are the chances that something prompting action was a fluke event that will fade away? What would the resource not tied up in delivery of the active options be capable of delivering otherwise? (Businesses often use net present value, but this only works properly if the do-nothing option has been analysed seriously.)
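To illustrate that last point, here is a minimal net-present-value sketch (all cash flows and the discount rate are hypothetical) in which the do-nothing option is forecast properly rather than assumed to be zero:

```python
# Minimal NPV comparison with hypothetical figures. If "do nothing" were
# lazily modelled as all zeros, the intervention would look like an easy win;
# forecasting the baseline seriously can reverse the conclusion.

def npv(rate, cashflows):
    """Net present value of yearly cash flows, starting at year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

rate = 0.08
do_nothing = [0, 40, 35, 30]     # existing business, declining slowly
intervene = [-100, 60, 60, 60]   # upfront investment, then higher returns

# In this made-up case the do-nothing option actually comes out ahead.
print(npv(rate, do_nothing) > npv(rate, intervene))  # True
```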

I introduced framing earlier because it has a role to play here – not in nullifying the biases, but in harnessing them. If you frame a situation positively then you are more likely to see action, while if you frame things negatively then you won’t. If we are looking for people, for example, to put forward ideas for change, then we need to frame that around the positives that can be gained rather than the negatives that can be avoided – for example, we shouldn’t be saying that ideas have helped us get fewer things wrong, but that they increase the number of things we get right. And if we want people to seriously consider not taking action – such as the medical example I opened with – then we need to highlight what could be lost by taking action.

So we should think about action and omission bias both in relation to ourselves (where we’re trying to limit them) and others (where we might be trying to either limit or harness them). The next time people around you start to flap or panic, think about the situation and which of these biases are being shown, and you’ll significantly increase the chance your organisation makes a good decision.

Watch Out for the HiPPO – Avoid Automatically Doing Whatever the Boss Thinks

It can be hugely frustrating at work to have your opinion cast aside so lightly when the boss thinks something different. Your organisation has fallen foul of the law of the HiPPO – the Highest Paid Person’s Opinion tends to win out.

The term HiPPO was coined by Avinash Kaushik in Web Analytics: An Hour a Day, to explain what happens if there is an absence of data (as an aside, I feel like it’s worth mentioning that Kaushik donates all proceeds from that book to charity). If you’ve ever been in a meeting where people have looked to the chair or the most senior person for a decision because there’s not enough information to make an informed choice, then you’ve witnessed the HiPPO effect in action.

Why Does the HiPPO exist? – The Followers’ Role

The other type of hippo – from San Diego Zoo

There are some sensible reasons why people might choose to agree with the most senior person – they may well be more knowledgeable or skilled at that particular task; after all, they have been promoted to that senior position (though we’ve already seen that doesn’t necessarily mean that they’re great at what they’re doing now). It might also be the safest place to be for your career, depending on how open-minded your boss is…

This can, and does, result in bad decisions being made, as well as employees becoming disengaged. It happens because of a few different biases (such as the desire to conform and loss aversion – we value not looking stupid over being right), but the big one is authority bias.

Authority Bias – We have an in-built tendency to believe those who we perceive as “experts”. It’s completely understandable for the basic functions we need to keep society going – listening to your seniors about what to eat, how to look after children etc – but it does leave everyone thinking in exactly the same way. That’s not great for making big steps forward in business.

One of the most famous psychology experiments of all time provides a terrifying example of how obedient we can be to authority, known as the Milgram experiment. The set-up for the study was that the participants were helping with an experiment about learning – they were to administer electric shocks, of increasing strength, to the learner when they got an answer wrong.

However, both the researcher, who oversaw the participants’ performance, and the “learner” were actors looking to test how far people would go with the shocks. So the participant and the learner were put in different rooms and the experiment began. As the shock increased the “distress” of the learner rose too – eventually the learner stopped responding at all.

The shock generator used in the Milgram experiment

Amazingly, 26 of the 40 participants proceeded to the maximum shock level – long after the learner appeared to be either unconscious or dead – with encouragement from the researcher (both beforehand, with an explanation of why the experiment was so important, and during the experiment, with reminders that they needed to carry on if the participant started to hesitate). Those involved in the study did whatever the scientist told them, even though it meant they “killed” somebody – which is pretty scary. The experiment was then repeated in a number of different studies and the results showed the same thing again and again; people do what they’re told by authority figures.

To emphasise this obedience effect the study was conducted in a lab and the researcher wore a lab coat, but it highlights how biased we can be towards authority. At a more mundane level you probably see advertisers trying to use the authority bias every time you watch TV – there’s always some doctor or dentist recommending this or that skincare/toothbrush/whatever somebody’s trying to sell.

In a meeting room we see the same thing; when there’s uncertainty we tend to look to the most senior person to decide.

Why Does the HiPPO exist? – The Leaders’ Role

Self-Serving Bias – This is where our cognitive or perceptual processes are distorted in order to maintain or increase our self-esteem. Most likely we’ve all felt this at some time, whether it’s initial resistance to negative feedback, remembering more about our contribution than others or seeking out information to support our own theory (which I try to bear in mind when writing, but I’m most likely still guilty of).

There have been some challenges to the universality of the self-serving bias, so Mezulis, Abramson, Hyde and Hankin conducted a meta-analysis of 266 studies, all of which had results on how people attributed positive and negative results (to fit with self-serving bias we’re expecting positive results to be deemed as more due to oneself and more likely to happen again than negative outcomes).

(For context, the most common methodology for testing attribution is to make someone do a test, then give them a random set of results, but the participant is told that they are genuine. They are then asked to assess what influenced their performance and the researcher judges whether the factors chosen are internal or external).

There are global differences in self-serving bias

They found that the self-serving bias was universal, but that its scale was influenced by a number of factors – children and older adults showed the largest bias, those from the US showed a bigger bias than those from Western Europe, and Asians showed an effect that was smaller still.

They also found supporting evidence for one of the main theories for why the self-serving bias exists – that it enables better mental health by distorting reality to make us feel better – because those with psychopathology had a smaller bias, with depression showing the lowest bias of all the conditions reviewed.

Further research, by Campbell and Sedikides, showed that the self-serving bias is magnified when our self-perception is under threat – i.e. if you’re challenged then your bias gets even greater. For example, if someone sees themselves as in charge, but feels like their authority is under threat…

This bias is closely related to confirmation bias (searching for, interpreting or recalling information that supports your beliefs or theories), choice-supportive bias (the tendency to assign positive attributes to a choice, after the choice has been made) and egocentric bias (the tendency to believe that we are more responsible for outcomes than we are and that other people think like us).

In summary, this means that leaders have a tendency to believe their own hype – they get a distorted view of their own abilities, using their promotions, previous achievements and the common support of their juniors as evidence. They start to really believe that they’re more capable than the other people in the room – particularly when those surrounding them agree with their opinions.

So the juniors tend to agree with their seniors, and this adds to senior people believing in their superiority. It’s easy to see how this quickly becomes a vicious cycle – so what can we do?

What Can We Do About It?

Find Data – The term HiPPO was created to describe what happens when there’s a lack of data, so this is an obvious one. Preparing objective evidence is a great way to take the emotion and opinion out of a wide range of situations. You need to stay aware of confirmation bias in order to make sure it’s a fair discussion, but evidence will almost always win out over a strong opinion.

Try to think creatively about what data is out there – if there’s not exactly what you’re after then try to come up with a proxy. Has something similar happened before? Is there something in a different sector that is useful? Any academic research (use a specific academic search engine, even if it’s only Google Scholar)? And if there’s nothing that can give a hint, then it’s always worth proposing a trial. This doesn’t only relate to your own ideas/thoughts – if you’re at a meeting and you can feel the HiPPO moving in, then suggest that the group try to find some data to enable an informed decision. 

Seek Disagreement – Alfred Sloan, the long-term president, chairman and CEO of General Motors, had a strong belief about making decisions: they shouldn’t be made until someone had expressed why the “preferred” option might not be the right one. He actively used to delay decisions if he didn’t feel there had been enough disagreement – a pretty amazing commitment.

When we’re in a position where we are the highest paid person then we should follow his advice. We should be encouraging people to disagree and be as open as possible. If needed, ask people to play devil’s advocate. You can do this when you’re not the HiPPO too – seek a wide range of views. There’s a natural tendency to be positive about your own ideas, so you need others to supply the balance; however uncomfortable, it’ll pay off longer term.

Seek Consensus – I accept this seems like the opposite of the above, but I’ll explain why they are complementary. Here I’m referring to trying to build support for, or surface disagreement with, a concept before the formal meeting happens.

At Valve, they tried to remove the HiPPO by getting rid of bosses entirely. The idea was that if there were no more bosses, then the best ideas would win out rather than the organisation just doing what a few senior people at the top say. People simply have to convince others to work with them on their ideas – theoretically a true idea meritocracy. To facilitate it, people even have desks that wheel around, so they can join up with new “teammates”.

While that’s clearly only suited to a limited number of fields (and if you push people who work at Valve, you can still detect a hierarchy even there), it is an extreme example of something that’s relevant to us all. If our idea is good then we should start talking to people about it before getting to a decision point – finding out whether people will support it, while also discovering some of its flaws. By the time you get to the crunch-time meeting, you already know that others in the room think it’s a good idea and you can bring them in to offer support. The risk of the HiPPO is reduced when there is broad group support.

We should welcome disagreement, then, so that we can see flaws and improve our ideas, but we should also seek consensus in order to reduce the risk that a flash decision from the highest paid person leads to a viable idea getting flushed away.

Everyone slips up from time to time

Remind Ourselves of What’s Gone Wrong Before – The self-serving bias means that we’re much better at remembering our successes than our failures. Most of the time that’s useful for our mental health – as seen in the relationship between depression and reduced self-serving bias – but it isn’t helpful in the workplace.

To perform as well as possible, we need to remember what went wrong in the past. Firstly, it helps us avoid making the same mistakes again and again (e.g. organisations continually expect projects to deliver without delays – for reasons we explored here). Secondly, and more relevant to this post, it reminds us that we’re not perfect. We can only increase our chances of success by making the most of the people around us, but sometimes we need a reminder.

Remind Ourselves of the Role Others Have Played – We find it easier to remember our own contributions to successes than those of others. That same research showed that this is truly a memory effect; when participants were given reminders about the role others played, they attributed less of the success to themselves and more to others. We should note down how others have helped us, as well as what we’ve done ourselves. We should also ask others what they think they contributed, so we can both celebrate their successes and give ourselves a prompt about how others help us. Combined with the action above, this reduces the chance that you’ll be the person playing the HiPPO.

Finally, if you want a specific example of a HiPPO then have a look at this Forbes article.