The Fallacy of Precision and Why We Shouldn’t Worry About It

Last week I was involved in some forecasting work – both looking at projections for the outside world and the delivery we’d expect within the organisation. As discussions persisted in pursuit of spurious accuracy, I was reminded of a few things: 1) Humans crave certainty (or the illusion thereof); 2) We are consistently pretty bad at estimating both the impact of events and their likelihood; and 3) that doesn’t matter, because the real value of planning and forecasting isn’t in creating something perfect, but in the act of planning itself.

A Craving for Certainty

One of the basic drives of mankind is the desire to have an explanation for what’s going on in the world (and pretty much any explanation is better than none). In an evolutionary sense, it’s easy to see why – you’d be paralysed into total inaction if you didn’t quickly abstract patterns from the environment to inform your future actions. Without drifting too far into philosophy, inductive reasoning can never provide absolute proof, but someone who chose not to use it would be entirely incapacitated.

Arie Kruglanski described this as a desire for ‘cognitive closure’ and defined that as “individuals’ desire for a firm answer to a question and an aversion toward ambiguity”. Kruglanski conceptualises this in two stages: 1) “seizing”, where we grab information in order to come up with an explanation (‘urgency tendency’); and 2) “freezing”, where we try to hold onto our explanation for as long as possible (‘permanence tendency’). Alongside this conceptual work, research by Tania Lombrozo has shown that we react to uncertainty by spontaneously generating possible explanations, and, more interestingly, we let this bias our decision making – once we have an explanation we start to assign value to it, as if it were evidence itself.

There’s also neurological evidence of the impact ambiguity has on us. A 2005 study showed that levels of ambiguity correlated positively with amygdala and orbitofrontal cortex activation, and negatively with ventral striatum activation. This pattern suggests that ambiguity burdens our decision-making faculties, dulling the sensation of reward and even triggering a fear response.

This leaves us desperate to find a precise answer when faced with uncertainty, and once we’ve come up with one we really don’t want to let go. To counter this:

1) Be aware that you’re craving an answer, even when it’s impossible to have a definitive one.

2) Try to accept a more open-ended solution – for example, using a range when calculating benefits, or presenting options based on different scenarios.

3) When you’re coming up with possible hypotheses, note them all down. It keeps you aware that there are other possibilities, lets you moderate your forecasts (by comparison with what the other possibilities would suggest) and forces you to think about the evidence that drove you to settle on ‘the’ explanation (so you can review whether that evidence still holds up as time passes).

4) Monitor real-world outcomes against the world your ‘explanation’ would predict. This isn’t meant to get us down, but to force reality upon us – when we come to make our next forecast, we often forget how accurately (or not) we’ve forecast in the past.

An Inaccuracy in Estimation

This is a huge topic – much too large to cover in any detail here – so I’ll only highlight a few of the ways in which we make mistakes. It’s worth noting that I’m not denying the utility of some of these biases; they can enable us to take action when inaction might prove fatal, make us more optimistic (and hence driven to act) and help us make decisions quickly.

Illusion of Control – We believe we have more control over events than we really do. Langer showed that, even when we know – rationally – that outcomes are random, we still feel we have control. One experiment either allowed participants to choose their own lottery ticket or gave them a ticket that had been selected for them. Both groups were then offered the opportunity to swap their ticket and enter a different lottery with better odds of winning. Those who had chosen their own ticket were far less likely to switch into the new lottery, despite the increased chance of winning – they appeared to think that they were “good” at choosing. This effect shows up in a range of scenarios and differs from general over-optimism: it is a belief that we have control over things, and that this control improves the likelihood of positive events.

Overconfidence Effect – We are more confident in our own judgement than we should be. There are three elements to this: 1) overestimation of one’s own performance; 2) overestimation of one’s ability relative to others; and 3) overestimation of how precise our estimates are.

Confirmation Bias – We look for evidence that supports our hypothesis. This means we constantly build support for our theories and ignore anything that would disprove them.

Availability Heuristic – We rely on what comes to mind when making a judgement. This biases us towards things that spring to mind easily – those that are particularly salient, that happened recently, and so on.

Overall, there’s not much we can do to counter these biases beyond being aware of them and mitigating them through that awareness. For example, to counter the availability heuristic you can separate the assessment of relevant topics from the forecasting itself – by drawing out all those topics you make them all equally available. Or to counter overconfidence, you can bring in others (from outside your normal working area) to assess the same tasks or events (while remaining aware that they’ll also be overconfident in their judgement – potentially even more so).

The Real Value of Planning (or Forecasting)

One of the biases that I left out above was the ‘planning fallacy’ – the consistent underestimation of how long it takes to deliver a project. There are a number of possible explanations for this, including some of the biases above (illusion of control, overconfidence and availability). Not only do we exhibit the planning fallacy all by ourselves, organisations often encourage it. We underestimate delivery time because we don’t put enough ‘slack’ in our plans, yet managers, customers and executives want delivery plans to be as short as possible – they want a justification for every block of time, and “because something is likely to go wrong” doesn’t normally cut it. Further, we’re usually asked to, seemingly sensibly, build our plans on the set of outcomes that seems most likely. So we take each individual task and judge whether it’s more likely to be delayed or not – and most individual tasks are more likely to go smoothly than not. The problem is that, in aggregate, it’s very likely that at least one of them will go wrong – we just have no idea which one. The single most likely sequence of events may well be everything going right, but the chance of that might only be, say, 20% – every other specific sequence is individually less likely, yet collectively the chance of something going wrong is far higher than the chance of nothing going wrong.
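To make that concrete, here’s a minimal sketch in Python. The task count and the per-task success rate are illustrative assumptions, not figures from any real plan – but with 30 tasks each 95% likely to go smoothly, a completely clean run has only about a 20% chance:

```python
# Illustrative numbers only: 30 tasks, each with a 95% chance of going to plan.
n_tasks = 30
p_smooth = 0.95

# Assuming the tasks are independent, the probability that *all* of them
# go smoothly is the product of the individual probabilities.
p_all_smooth = p_smooth ** n_tasks

print(f"P(everything goes right): {p_all_smooth:.2f}")     # ~0.21
print(f"P(something goes wrong):  {1 - p_all_smooth:.2f}")  # ~0.79
```

Any particular pattern of failures is rarer still, so ‘everything goes right’ really is the single most likely outcome – it’s just heavily outnumbered by all the ways something can go wrong.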

The rest of this post might seem like a bit of a downer, but here’s the positive – being accurate isn’t as important as it feels. The process matters more than the answer you arrive at. At a superficial level, it’s not worth worrying about the last few percentage points when developing estimates – we’re building on top of so many assumptions that it’s a false economy (a good example of Pareto’s Law – most of the time is often spent on the fine, and low value, pseudo-accuracy). Thinking in terms of a realistic range is much more helpful than spending hours generating a falsely precise figure. If the world progresses in the way you expect, you’ll be in the right ballpark – and if it doesn’t, you’ll be off massively anyway.
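One cheap way to think in ranges is a quick Monte Carlo pass over rough three-point estimates. This is a minimal sketch with entirely made-up task durations – the `tasks` list and every number in it are hypothetical, and the triangular distribution is just a convenient stand-in for optimistic/likely/pessimistic spreads:

```python
import random

# Hypothetical tasks: (optimistic, most likely, pessimistic) durations in days.
tasks = [(3, 5, 10), (2, 4, 9), (5, 8, 20), (1, 2, 6)]

def simulate_total(tasks):
    # random.triangular(low, high, mode) samples one duration per task.
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)

totals = sorted(simulate_total(tasks) for _ in range(10_000))
p10, p50, p90 = (totals[int(len(totals) * q)] for q in (0.10, 0.50, 0.90))
print(f"Delivery estimate: {p10:.0f}-{p90:.0f} days (median {p50:.0f})")
```

Notice that the sum of the ‘most likely’ values is 19 days, while the simulated median comes out noticeably higher – the long pessimistic tails dominate, which is the planning fallacy in miniature.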

At a deeper level, there is value in planning because you mentally simulate whatever series of events or plans you’re looking at – as Dwight D. Eisenhower put it, “No battle was ever won according to plan, but no battle was ever won without one… Plans are useless, but planning is indispensable”. The process of planning (or forecasting) forces you to think about the factors in play – the requirements, the dependencies, the risks and how you might mitigate them – in more detail than you would otherwise.

There’s a skill to doing this properly – your ability to mentally simulate a scenario is limited by two key factors: 1) your imagination and 2) your knowledge. Both can be helped by bringing other people into your planning process (using the time saved by being less concerned about the fine detail of your benefit or forecast figures). You need a diverse group to get the most out of planning. To broaden imagination, bring in people with experiences very different from your own (we tend to think within our own paradigm, which is set by our experiences). To broaden knowledge, you need subject matter experts across the plan’s elements (e.g. if you were building a football stadium, you don’t only want people who’ve built football stadia – they’re useful, but you also want people with knowledge of delivering large construction projects, of the leisure/entertainment industry, of turf and the conditions that affect it, and so on).

The more you’ve simulated both your preferred option and your range of options, the better prepared you are to deliver the project – whether it goes to plan or (infinitely more likely) not.

When things go wrong we often worry, unsurprisingly, about the fact that things have gone wrong. But that worry doesn’t generate progress for the project (although it might deliver some learning). Mental simulation leaves you better prepared to handle events when they go off the expected path, because you don’t have to rely on impulsive decision making – you already have at least a rough idea of what to do.

Worry Less About the Output and More About the Process

Planning is a hugely useful thing to do, but only when time and effort are spent in the right way. Desperately hoping to get your delivery schedule right to the exact day, or your benefit figures to a precise number, leaves you on a hiding to nothing – there are too many unknowns in the world, and our reasoning carries all sorts of biases. We just have to be more laid back about that (as well as relaxing about whether people meet our projections – if we consistently over-deliver, it’s because we’re consistently under-promising).
