Are You Afraid? – Trends in Fear At Work

Throughout history leaders, including those of the military, business and state, have used fear to make people fulfil their wishes. Paul Austin, CEO of Coca-Cola between 1966 and 1980, once described his management style as liking “to pull all the legs off the centipede and see what he’s really like”, saying that “a certain degree of anxiety and tension has to exist for people to function at the highest level of their potential”.

Fortunately, thanks to our increased understanding of people and the re-balancing of power between leaders and followers (the ability, and acceptability, of moving between employers; the reduction in severe poverty in states with developed welfare systems; the increased accessibility of knowledge), the majority of organisations have moved away from a fear-based model.

Now, fear can be a short-term motivator, but, as we saw with the impact of money on motivation, extrinsic motivators tend to lead to both long-term and short-term problems – and the impact of fear is much more extreme. Fear has a broad range of negative impacts, from reduced quality to distorted organisational learning, and from a fear of attempting anything new (as Steve Ross once noted) to “disastrous financial performance”.

Given the above, and our own common sense, hopefully we all agree that fear is not a long-term management strategy. In this article, however, I want to highlight a few trends in fear in the workplace, starting with how prevalent it is, but then focusing on people’s fear of redundancy.

Seventy Percent Believe That Raising Concerns Will Have Adverse Consequences

Fear has a number of definitions, but here I’ll keep it as broad as possible – “an unpleasant emotion caused by a threat”. A number of studies fall within this definition and show how amazingly common fear is in the workplace.

In one study, which involved interviewing 260 people from 22 organisations, the researchers sought to identify what percentage of individuals feared that raising on-the-job concerns might result in adverse repercussions. Amazingly, 70% of those interviewed confessed to having this anxiety. The majority of these highlighted sensitive issues that needed resolution, but which everyone was too scared to talk about. The feedback mechanism between staff and ‘management’ is essential to continuous improvement, so if 70% of staff are afraid to raise concerns then there must be substantial untapped potential for both employers and employees.

To add to this, findings from the 2012 UK Skills and Employment Survey showed that 31% of employees were fearful of unfair treatment at work, while 52% felt anxious about losing their job.

It’s clear that fear, in some form or other, is prevalent at work. Given the negative consequences of fear and the extent of fear in the workplace, this becomes an area that we should invest in improving.

Public Sector Employees Feel as Insecure as Those in the Private Sector

The survey mentioned above showed another trend, which, while understandable, reflects a new challenge for management within the public sector. This study took data from five different years – 1986, 1997, 2001, 2006 and 2012 – and compared public sector and private sector responses on a number of questions. One of these questions asked whether people feared losing their job and becoming unemployed. In 1986, 2001 and 2006 the private sector clearly feared redundancy more than the public sector. In 1997 (potentially in line with uncertainty due to the general election) both sectors felt equally secure/insecure. In 2012, however, public sector insecurity was significantly greater than that in the private sector. It will be interesting to see whether this trend continues – or the gap broadens further – with increased austerity.

Some might argue that this has been too long coming and that the public sector has been far too comfortable, but, irrespective of your personal views, it represents a new challenge for those managing within government. There is an opportunity to harness this shift, particularly for people who have become too relaxed in their role and need a jolt, but it’s important to provide clarity and confidence to those who are performing well. As discussed in this post, it’s about the right approach for the right people, but there may be substantial opportunities for the public sector to learn from private sector colleagues, given their experience of handling a workforce concerned about this uncertainty.

In contrast to this, fear of unfair treatment (formed of anxiety about arbitrary dismissal, discrimination or victimisation by management) was significantly higher in the private sector than the public sector throughout, suggesting learning opportunities in the other direction.

Women are Less Worried About Losing Their Job

Alongside the Skills and Employment Survey, Glassdoor run a quarterly employment confidence survey in the UK and US, giving us a much more frequent and up-to-date picture. Please accept my apologies for being a tad parochial, but I’m going to concentrate on the UK stats.

One of the consistent trends in both these surveys is that women are less worried about losing their job than men. Stereotypically, men are over-confident and cocky, while women often miss out on career opportunities by being overly anxious – so what’s going on?

There are a range of possible explanations, but one piece of evidence shows a clear example of a third factor being in play. Those in part-time work are significantly less worried about being made redundant – and that applies equally to men and women. However, a higher proportion of women than men are in part-time positions, so women’s overall average anxiety level is pulled down by this higher proportion of part-time workers. I’ve not seen, unfortunately, results for men and women in similar jobs, so don’t know what the like-for-like comparison shows.
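To make that composition effect concrete, here’s a minimal sketch in Python – every figure in it is invented (the surveys don’t publish this breakdown), purely to show how two groups with identical like-for-like anxiety rates can still report different overall averages:

```python
# Illustrative only: every percentage here is invented to show the
# composition effect, not taken from either survey.
anxiety = {"full_time": 0.40, "part_time": 0.20}  # fear of redundancy by contract type
mix = {
    "men":   {"full_time": 0.90, "part_time": 0.10},
    "women": {"full_time": 0.60, "part_time": 0.40},
}

for gender, weights in mix.items():
    avg = sum(weights[contract] * anxiety[contract] for contract in anxiety)
    print(f"{gender}: {avg:.0%} average anxiety")

# men: 38%, women: 32% - the like-for-like rates are identical, yet women's
# overall average is pulled down purely by the larger part-time proportion.
```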

It’s worth mentioning, however, that the gap appears to be closing between men and women – while men consistently express higher anxiety about losing their job, the Skills and Employment Survey showed a much bigger increase in anxiety for women than for men between 2006 and 2012. Again there are open questions about whether changes in type of employment played a role.

As a brief aside, while men displayed less confidence in keeping their job, they consistently displayed more confidence across a range of other questions, including the positivity of their business’ outlook and the chance of them getting a pay rise.

Our Fears Fluctuate

The Glassdoor survey suggests that the percentage of people fearing for their job varies hugely from quarter to quarter – in 2014 the results were 21% in Q1, 29% in Q2, 19% in Q3 and 35% in Q4. 2014 was, unfortunately, the first year in which Glassdoor separated out UK results, so we don’t know whether this pattern of rise, drop, rise repeats year on year (e.g. could our anxiety be seasonal?) or whether being concerned is very sensitive to external factors.
Whatever the reason, it raises some questions about snapshot surveys and how useful they are for judging employee anxiety, particularly trends. It suggests we need a broader range of data (both number of people surveyed and number of surveys) than the survey above has so far, if we want to draw strong conclusions about trends. Organisations that compare year-on-year employee survey results should be aware of this high level of variation.
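One way to sanity-check a snapshot figure is to ask how much movement plain sampling noise could produce. A rough sketch – note that Glassdoor’s UK sample size isn’t given here, so the n below is purely an assumption:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a simple random sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1000  # assumed sample size - the actual UK figure isn't stated here
for p in (0.21, 0.29, 0.19, 0.35):  # the four 2014 quarterly results
    print(f"{p:.0%} +/- {margin_of_error(p, n):.1%}")

# At n = 1000 each estimate is only good to within roughly +/-2.5-3.0
# percentage points, so sampling noise alone can't explain a 16-point
# swing - but a smaller sample, or a shifting respondent pool, could.
```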

You’re More Likely to be Fired Than I Am

While it’s not surprising (we tend to think we’re the best at everything), it’s good to see the data – we believe that other people are more likely to be fired than we are. The Glassdoor survey shows that, despite the variation in anxiety, a consistently higher percentage of people are concerned about others being fired than are concerned about losing their own job. Over 2014, the gap was at least 7 percentage points every quarter (i.e. in relative terms, at least 20% more people were concerned about others’ jobs than about their own).

None of These Fears Are Universal

To reiterate a focus from my previous post, there are different groups of people with different points of view. For example, we’re not all worried about losing our job – in Q4 2014, 35% feared redundancy, leaving 65% who did not. At the same time, 39% said they would leave their job if they did not receive a pay rise in the next 12 months – presumably the majority of these weren’t feeling too negative about how their employer sees them! And we’re even optimistic about how our organisations will perform – 34% thought things would improve for their business, compared to only 11% expecting things to get worse.

What Does This All Mean? 

We’ve seen that fear negatively impacts on your business and we’ve had a look at some of the trends in fear. The statistics show that fear matters – with such a high proportion of the workforce worried about something, it’s an area where we can make a difference. We need to think about who’s afraid, how we want them to feel and how we can help staff feel more productive emotions (as well as better emotions for their own well-being).

What Can We Do?

The focus of this post has been on trends – specifically in relation to fear of losing a job, but also, hopefully, some broader learning about how we can look at phenomena.

It wouldn’t feel complete, however, without at least giving a few pointers on how to make things better (and a few links with some more detail):

  • Make self-awareness essential
  • Promote emotional intelligence
  • Encourage honest and constructive confrontation
  • Be more transparent and provide clarity where possible
  • Give up unnecessary restrictions and controls on your staff
  • Practise facing fear
  • Train on the basics – and rely on them when the going gets tough
  • Develop an open environment at work – encourage conversation, laughter and positive emotional expression.

Feel free to add more – particularly any practical examples!


The Pygmalion Effect, the Impact of Expectations and the Importance of Context

Pygmalion by Jean-Baptiste Regnault, 1786, Musée National du Château et des Trianons

The Pygmalion of Greek myth was a sculptor. He carved an ivory statue – Galatea – so beautiful and realistic that he fell in love with it. At the altar of Aphrodite, the goddess of love, he wished that his statue would come to life, as he had no interest in other women. When he returned to the statue, she came to life and they went on to marry and have children. His name has been lent to the Pygmalion Effect – the more positive the expectation placed upon someone, the better they perform.

The Pygmalion Effect

This is also known as the Rosenthal Effect, after Robert Rosenthal, who conducted a number of experiments into self-fulfilling prophecies. In 1968, he, alongside Lenore Jacobson, published “Pygmalion in the Classroom”, incorporating a study at Oak School in California. In the experiment they randomly identified around 20% of the students as having particularly high potential for the coming year and then informed the teachers about these talented individuals. The results showed that, for younger pupils, being labelled as high potential delivered a statistically significantly larger increase in IQ than that achieved by ‘normal’ students. Thus the first evidence for the impact of expectations on performance was found and a new area of research was born.

This study, however, created just as many questions as answers – why were only the younger children affected? Does that mean expectations play no role for adults? Could it be due to teachers giving more support to those students (and thus less to the rest), meaning the effect can’t be used to benefit everyone in a group? Was any effect due to the teachers’ expectations or was it due to the students identifying themselves as strong performers?

Research at around the same time (1968 and 1970) by Major Wilburn Schrank looked into these questions, to varying extents. He used similar treatments to the experiment above, but Schrank’s experiment was conducted at the US Air Force Academy Preparatory School (ages 17 and over) and labelled whole ‘sections’ (i.e. classes) as more or less talented. In the 1968 study the teachers were told the labels were authentic, whereas in the 1970 study they knew it was random, while everything else was kept the same. In the 1968 study, the same effect was seen as for the younger pupils at Oak School, showing that the impact of expectations continued beyond youth and that whole groups could benefit (one major caveat: there was no control for how much effort the teachers invested in each group outside of the classroom, such as lesson planning). In the 1970 version there was no effect, revealing that teachers’ expectations were the key variable rather than the students’ beliefs. This still only looked, however, at a narrow age range and a very specific work environment.

In 2000, both Nicole Kierein & Michael Gold and Brian McNatt conducted meta-analyses of the Pygmalion Effect within work organisations, providing us with some more thorough answers. Kierein & Gold looked at 13 different studies within organisations and found that there was a significant effect (for the statisticians out there, the overall effect size was a d value of 0.81 – for the non-statisticians, Cohen said anything over 0.8 represented a “large” effect), while McNatt looked at 17 studies, finding an effect size of 1.13.
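For the non-statisticians, Cohen’s d is just the difference between group means expressed in pooled standard deviation units. A minimal sketch, with scores invented purely to land near the meta-analytic value:

```python
import statistics

def cohens_d(treatment: list[float], control: list[float]) -> float:
    """Difference in group means, scaled by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = statistics.stdev(treatment), statistics.stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

# Invented performance scores, chosen purely to illustrate the scale:
high_expectation = [68, 71, 73, 75, 78]
control_group    = [65, 68, 70, 72, 75]
print(f"d = {cohens_d(high_expectation, control_group):.2f}")  # d = 0.79
```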

So overall we can be confident that, when looking across a whole population, there really is a Pygmalion Effect. It is not, however – and as ever – that simple…

The Opposite – Golem’s Emergence

Every silver lining has its cloud, and the Pygmalion Effect’s counterpoint is the Golem Effect – if we have negative expectations of someone then they are dragged down by them. In fact, both meta-analyses found that the Golem Effect exceeded the effect size of its Pygmalion brother (to reassure readers that psychologists aren’t too evil, the Golem Effect isn’t tested by randomly designating people as incompetent. Researchers ask managers what their expectations are for their staff, and then actively tell them that their expectations should be average rather than low for a randomly selected portion of those designated ‘low expectation’. Therefore the ‘control’ group is the one that experiences the Golem Effect and the ‘treatment’ group is de-Golem’ed).

A representation of a Golem

Therefore restraining the negative is at least as important as maximising the positive. This is particularly important if expectations are part of a zero sum game (i.e. if designating some as ‘high expectation’ means others become ‘low expectation’). This can happen naturally; we baseline our expectations based on the people around us and then judge people relative to the baseline. If you suddenly start working with an exceptional employee, then you might reasonably wonder why the others aren’t as good – even though before working with the exceptional employee you thought they were all fine.

That’s one reason why holding expectations for a group can be useful, particularly if it’s the group whose performance is most important to you (and, equally, the group which is most influenced by your expectations). By placing expectations on a group you avoid the need to compare members within that group to define your baseline.

Differences Matter

It turns out that the Pygmalion Effect, while prevalent, varies in size depending on who you are and the environment you’re in. The meta-analyses found three specific moderators on effect size (I really can’t emphasise enough that it is only possible to identify moderators that we record, by definition, so we’re unlikely to see anything too subtle. Only the very basics about participants are normally recorded – there are a number of other potential, but unproven, moderators).

The first moderator they identified is gender. Both analyses found that men were more strongly influenced by expectations than women. It seems likely that this is a societal and cultural effect – and these analyses are now 15 years old and relate to research older than that. Senior levels within organisations were more male-dominated then and men had greater opportunities; this may have motivated men towards trying to fulfil expectations, or the men in leadership positions may have invested disproportionately more in ‘high expectation’ men – the reasons aren’t clear.

More recent research (2008) by Gloria Natanovich and Dov Eden showed that the gender of the ‘expecter’ was not, at least in a specific environment, an important variable, while also suggesting that ‘expectee’ gender didn’t matter either. As workplace culture becomes increasingly balanced, these results suggest that gender biases in the Pygmalion Effect will gradually fade away. In the interim, however, we need to be aware of how workplace biases can play a role.

Secondly, effect size is influenced by initial expectations and/or performance. The lower the initial expectations, the larger the impact of the Pygmalion Effect. This follows logically – the greater the scope for change in a variable, the larger any effect should be.

Thirdly, workplace environment has a significant influence. In particular, both analyses found a stronger impact in military environments than in other workplaces. Perhaps this is due to the hierarchical environment in the military leading to people investing more in living up to their manager’s expectations, in comparison to the less ingrained and less linear (you’re not as strongly ‘owned’ by any one person) relationship in non-military organisations.

There are individual differences everywhere

This reflects a broader concept – one which stretches far beyond the Pygmalion Effect – that I believe could use a lot more attention: analysis of the differences in the impact of psychological phenomena on different groups (here I am simply defining a group as a number of people who share any specific trait). It is both easy and useful to have a global analysis; it provides more straightforward actions, while only needing random samples from the total population to identify an effect. It’s also controversial and complicated to start to look at actions that would be targeted at specific segments. It could, however, help both employees and employers and it is, in my opinion, something we should explore.

Maintaining Awareness

As a final twist in the Pygmalion tale, we need to think about assessment of the quality of someone’s performance. In general terms, our perception of performance follows this equation (this applies equally to products as it does to people):

  • Perceived Performance = Actual Performance – Expectations

Therefore the higher our expectations, the lower our perception of the quality of the same actual performance (for example, if I bought a watch for £10 and it broke after a couple of years then I’d think that was fair enough. If I bought a watch for £500 and it lasted a couple of years, then I’d feel ripped off).

The Pygmalion Effect means that raising expectations of employees (or students) can lead to increased actual performance, as we know. But if the increase in expectations is larger than the increase in performance, then we can still end up perceiving a decrease in performance. Therefore we are disappointed in delivery, probably express this to others and suppress future performance.
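A toy illustration of that trap, plugging invented numbers into the equation above:

```python
# Perceived performance = actual performance - expectations (arbitrary units).
# Invented numbers: actual performance genuinely improves, but expectations
# have been raised faster than performance followed.
before = {"actual": 6, "expected": 5}
after  = {"actual": 7, "expected": 9}

for label, scores in (("before", before), ("after", after)):
    perceived = scores["actual"] - scores["expected"]
    print(f"{label}: perceived performance = {perceived:+d}")

# before: +1 (pleasantly surprised); after: -2 (disappointed) - even though
# actual performance went up, perception went down.
```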

We need to be aware of this and do what we can to correct for it. Using objective metrics enables us to make direct comparisons, but mostly we just need to reflect on this when thinking about how individuals (and groups) are performing.

By understanding our expectations, and how to use them, we can improve performance, increase individual motivation and appropriately recognise people for their successes, making this a topic worth exploring – after all, Pygmalion did, eventually, get the girl.

Reaching “Final Placement” – The Peter Principle and Promotion

In 1969, Laurence Peter and Raymond Hull wrote the satirical management book “The Peter Principle: Why Things Always Go Wrong”, and its key principle is only becoming more relevant. That overarching concept is “every employee tends to rise to his own level of incompetence” – i.e. we each get promoted until we’re no longer good at our job and then we get stuck in that role forever more. This leads not only to organisational ineffectiveness, but also employee unhappiness and anxiety. There are multiple plausible explanations for this and, helpfully, some research which explores the reality of the Peter Principle.

The Peter Principle

The greatness of Peter and Hull’s book was the balance between genuine insight and off-the-wall comedy. The theories within it were intuitive and the writing hugely readable. My training, however, is as a skeptical scientist, so I wanted to find a bit of evidence to support it.

In theoretical terms, there are a number of models, from Faria (2000) to Lazear (2001). Pluchino, Rapisarda and Garofalo (2009) even ran an agent-based simulation showing that promoting people at random was better than promoting the best current performers – though it’s worth noting they won an Ig Nobel Prize (the simulation did assume that: 1) you promote the most competent from their current roles; and 2) performance in the current role doesn’t predict performance in the job above). These only show, however, that the Peter Principle is possible, not that it actually happens.
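To give a flavour of that kind of result, here’s a minimal toy version of such a simulation – emphatically not Pluchino et al.’s actual model. It assumes a ten-person team, one upper role weighted double, and a ‘Peter hypothesis’ under which competence in the new role is redrawn at random rather than carried over:

```python
import random

def mean_efficiency(promote_best: bool, peter_hypothesis: bool,
                    trials: int = 10_000) -> float:
    """Average organisational competence after one promotion (toy model)."""
    total = 0.0
    for _ in range(trials):
        team = sorted(random.gauss(50, 10) for _ in range(10))
        promoted = team.pop(-1 if promote_best else random.randrange(10))
        # Peter hypothesis: new-role competence is unrelated to the old role;
        # common-sense hypothesis: competence carries over.
        upper = random.gauss(50, 10) if peter_hypothesis else promoted
        total += 2 * upper + sum(team)  # upper role weighted double
    return total / trials

random.seed(1)
for peter in (False, True):
    best = mean_efficiency(promote_best=True, peter_hypothesis=peter)
    rand = mean_efficiency(promote_best=False, peter_hypothesis=peter)
    print(f"peter_hypothesis={peter}: best={best:.0f}, random={rand:.0f}")

# Roughly: if competence carries over, promoting the best wins (~565 vs ~550);
# under the Peter hypothesis, random promotion wins (~550 vs ~535), because
# promoting the best strips the lower level of its best worker for no gain.
```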

Dickinson and Villeval (2007) showed that the Peter Principle exists in a lab setting, as long as performance has both random and skill-based elements to it. This not only showed the Peter Principle at work, but also that it became stronger as the importance of the random element increased. Further, it showed that using a promotion rule (i.e. people got promoted when they hit certain performance criteria) was still better than self-selection – as we’ve seen in other posts, people are bad at making judgements, particularly those relating to their own ability (in this case, a particular problem was attributing chance-related performance to their own ability).

Finally, alongside the many anecdotal examples, there has been some research into the real-world presence of the Principle. Barmby, Eberth and Ma (2006) conducted a review of over a decade’s worth of data from a large financial firm – this indicated that there was a drop in performance after promotion and that two-thirds of this could be accounted for by the Peter Principle. Earlier, Abraham and Medoff (1980) found that subjective performance was lower the longer people had been in a job, while Gibbs and Hendricks (2001) found that raises and bonuses fall with tenure – both these findings reflected that the people who performed worse in their role stayed there for longer (or that staying in the role too long decreases performance). Together these pieces of research suggest that people in a role for a long time are worse at their job, while those who get promoted perform worse in their next job.

The Many Theories Behind the Peter Principle

There seems to be, therefore, a consensus that the Peter Principle is a real world occurrence. There is not, however, agreement over what creates it. Explanations are many and varied, including: the change in skills required between different roles; regression to the mean; a reduction in motivation after a promotion; promoting people out of the way; strategic incompetence; and even that super-competence is less desired within an organisation than incompetence.

Different Jobs, Different Skills

The most immediately logical reason for a person to perform worse after a promotion is where the new job demands different skills to the last one. A common example is where those who are great at actually doing work are then promoted into a management role, but this applies each and every time someone moves into a new role.

This is particularly relevant in competency-based promotion systems, which are designed to emphasise cross-cutting skills, increase the available candidate pool and generate common standards across an organisation – clearly worthwhile goals. For organisations that have committed, however, to a largely competency-based recruitment and promotion system, such as the UK Civil Service, this presents a real challenge. How can you really assess a candidate’s ability to use very different skills, relying only on their past experience? Either you have to accept that you recruit people who already have experience in the same kind of role (defeating a key objective of the system) or you have to accept that you’ll sometimes promote people who don’t have the skills you’re after.

Our Culture

Our society constantly informs us that it is a positive to climb the career ladder, meaning promotion becomes an aspirational thing. This can lead to people either: a) being unhappy in roles that they’d really enjoy, if they didn’t have the weight of aiming for promotion hanging over them; or b) moving from roles that they are happy in and into a job they hate because it was a promotion.

To compound this, we’re hugely uncomfortable with the idea of either an organisation demoting someone or someone choosing to move back to an old role. That can lead to people being fired, when we know they can be highly productive within the organisation, or staying in a job they hate until they retire, when they know there’s a job within the organisation they enjoy. One of the bravest people I know chose to take a demotion out of a management position back into their previous role – as a result they’re also one of the happiest.

This is too broad a topic to digest here, but tools that can help include the recent focus on mindfulness, openness in the workplace and emphasising any successful examples.

Not Incompetence, but Stress

The Harvard Business Review wrote in 1976 (“The Real Peter Principle: Promotion to Pain”) that the problem didn’t arise from people failing to have the right technical, academic or interpersonal skills to succeed, but from them moving into a role where stress and anxiety suppress their performance. This drives both an organisational decrease in productivity and an individual loss of well-being. Be aware that this doesn’t always become apparent through stereotypical over-emotional ways (e.g. temperamental behaviour, crying in the office), but can also be expressed through increasing passivity and detachment.

Returning to Normal Performance

Lazear argues that the Peter Principle is simply due to regression to the mean, and to those involved in promoting failing to recognise that there is variation in performance (i.e. the regression fallacy). He argues that, as a promotion suggests a performance standard has been met, performance is likely to be lower in your new role. You are promoted because of your exceptional performance, which means one of two things: either your ‘normal’ performance is exceptional (or better) and you were performing normally; or your ‘normal’ performance is less than exceptional, but, for a period of time, your performance deviated from the mean by enough to reach that exceptional hurdle.

If you’re in the second group, then after promotion your performance is likely to regress back to the mean, so your performance is likely to fall below the expectations set when you were promoted. This is particularly apparent in the sporting world because you can measure someone’s performance – as an example, between the start of the English Premier League in 1992 and the end of the 2012/13 season, 33 players scored 20 goals in a season (the unwritten benchmark for a top-quality striker). Only 12 of those were able to do it more than once; the vast majority regressed back to the mean.
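The mechanism is easy to demonstrate in a toy model. This sketch assumes observed performance is stable ability plus one-off luck, in equal measure (all figures invented):

```python
import random

random.seed(42)

# Observed performance = stable ability + one-off luck (equal variances).
abilities = [random.gauss(50, 5) for _ in range(1000)]
observed = [a + random.gauss(0, 5) for a in abilities]

# "Promote" the top 100 on observed performance.
cutoff = sorted(observed, reverse=True)[100]
promoted = [i for i, p in enumerate(observed) if p > cutoff]

at_promotion = sum(observed[i] for i in promoted) / len(promoted)
next_period = sum(abilities[i] + random.gauss(0, 5) for i in promoted) / len(promoted)
print(f"at promotion: {at_promotion:.1f}, next period: {next_period:.1f}")

# Typically ~62 at promotion vs ~56 afterwards: with half the variance coming
# from luck, half the excess over the average of 50 evaporates on the retest.
```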

Luck

This is a subset of the underestimation of variance in people’s performance, but here I’m only referring to variation due to luck (rather than our own performance). Given that this is a subset of the above, it again has a particular impact on competency-based systems.

As a purely anecdotal example, about two years ago I was working on six outline business cases with one other person. We shared the workload and both contributed to all of the proposals, but eventually had to (for process reasons) decide who was in the ‘lead’ for each option, so we split them down the middle – three each. I would have judged our work as of near identical quality. Yet, due to circumstances beyond my (or his) control, ‘my’ three business cases were accepted, while none of his were. For my ‘success’ I received both praise and the highest performance marking, but due to my colleague’s ‘failure’ he was largely forgotten.

It was only luck that separated us, but people judged us only on the outcome, not the quality of work in getting there. This bias towards outcome is prevalent and it’s easy to get blinded by big numbers and impressive endings, but it’s important to take a step back. We neglect the role luck plays in candidates’ successes and failures – and, as Dickinson and Villeval showed earlier on, the bigger the role luck plays, the larger the effect of the Peter Principle.

Lost Interest After Promotion

This is a fairly obvious one – people work as hard as possible to get a promotion and once they’ve achieved that they no longer have a clear goal, so lose motivation. I won’t spend too long on this, as I wrote about motivation in this post, but it makes particular sense in the context that article set – not only is the motivation of promotion removed, but the focus on achieving promotion is also likely to reduce your intrinsic motivation in doing the job itself.

You’re Too Good; It’s Showing Us Up!

This one takes us back to Peter himself – he stated that “in most hierarchies, super-competence is more objectionable than incompetence.” This one is impossible to find evidence for, beyond the anecdotal, but the argument is this – people who perform too well (or would clearly be better at their seniors’ jobs) disrupt the hierarchy of an organisation. The organisation fights to maintain that hierarchy by acting against those who are “super-competent”. Therefore managers find spurious reasons not to promote staff or to avoid giving them strong performance reviews (this circular nature is part of what makes it difficult to find evidence – the response to super-competence is to find a way to deny any super-competence).

Further, peers often become suspicious of over-performers and can leave them isolated, which can result in organisations losing some of their most talented staff to places where they feel more ‘normal’.

Move Someone Up – and Out of the Way

Another slightly cynical one, but one lots of us have seen – people get promoted because their current business area wants to get rid of them. There are arguments that within certain types of firm (such as technology), it’s an organisational theme rather than just a series of one-offs; people are moved into management because they lack the specific skills to do the skilled front-line work. Putt’s Law and the Successful Technocrat, published under the pseudonym Archibald Putt, pursues exactly this theory – incompetence is “flushed out of the lower levels”, leaving two kinds of people in technology: “those who understand what they do not manage and those who manage what they do not understand”.

As a slight aside, it’s worth noting some of the more innovative thinking about organisational structures – Joel Spolsky proposes that we should move away from the concept of a central management team being the top of an organisation and move towards it being ‘led’ by those doing front-line activity. In his world view (tinted towards technology, clearly) the ‘management team’ should not believe itself to be an executive, decision-making function, but a support function. Therefore it should be seen as “administration” – they do the things that help work get done. I’ll have a look at some organisational structure concepts in later posts.

“Strategic Incompetence”

Jared Sandberg came up with this term while writing in The Wall Street Journal (sorry, can’t find a link). His description was this – “Strategic incompetence isn’t about having a strategy that fails, but a failure that succeeds. It almost always works to deflect work one doesn’t want to do—without ever having to admit it.” He meant this in a range of situations, including people simply choosing to be terrible at a specific task (say, doing the ironing) so they don’t have to do it anymore. In this context, however, we’re talking about when the most suitable people for promotion choose to perform badly enough to avoid it. This might not be by failing in their core responsibilities – presumably a major driver for doing this is that you enjoy your job more than the one you’d be promoted into – but can be by purposefully showing themselves to be unsuitable for the next step up, such as refusing to engage in the expected ‘networking’ activity or acting out at team events.

This behaviour results in less suitable people being promoted, but does mean that individuals using strategic incompetence avoid the Peter Principle – by purposefully avoiding promotion they avoid being moved up into their maximally incompetent role (or “final placement” as Peter called it). It opens up an interesting topic, e.g. is this an effective personal strategy (as you avoid the stress noted above)? Or could it even be good for the organisation as a whole (as you have people in roles where they’re still highly competent)? This leads on to my final section; how can we try to avoid, or at least dampen the effect of, the Peter Principle.

Is There Anything We Can Do?

Here are a few thoughts, to provide a starting point, on how we can counter the Principle:

Focus on Proof of Performance in the New Role – Sorry for stating the blindingly obvious, but this is still the main area where organisations go wrong. The focus should not be on how good the employee has been, but on how good they’re going to be in the new role (which can clearly include consideration of previous performance). One way to do this is to complement competencies with tailored tests (whether specific skill, situational judgement, personality, or scenario-based testing). Another way is to really narrow down the focus on competencies to the new role. This often means trying to forget about the outcomes and concentrating on the elements of the process that are relevant.

Finally, it’s often possible (though you need to consider whether the role merits the cost) to run some sort of “real life interview” over a couple of days, by assessing performance in a more life-like scenario. This can include the obvious – like asking candidates to research and deliver a presentation or analyse some part of your business – but can also involve something a bit different, like allowing them to genuinely perform a business operation for a short period or giving them some funding and letting them show what they can deliver with it.

Adjusting for Luck and Regression to the Mean – We need to avoid focusing purely on outcomes, particularly where luck is a major determinant. By asking people with knowledge of the area the candidate’s competency examples come from, you can start to assess the impact luck could have had. To adjust for regression to the mean, we need to avoid relying on just a few one-off examples or merely someone’s performance over the last period – find ways to take a range of information on board (and simulate their real-world performance, as suggested above). Allowing someone to merely present a few of their highlights is bound to give a very unbalanced picture (and is likely to be biased against those who deliver at a high level consistently – they’ll appear worse than those with occasional exceptional work, even if those people deliver indifferently most of the time. On a different note, it’s also biased towards those who are comfortable lying).

Have Genuine Skill-Specific Career Paths – This is a fairly well-trodden road, but still a worthwhile one and still one that is poorly executed. Organisations need to recognise the value of specific skills to them – and accept that sometimes people can still be worth the big bucks without having to manage people. By opening up different career paths (rather than the typical process of starting as a worker, becoming a skilled worker, then managing workers), organisations can make the most of their people, while rewarding them appropriately. One of the skills that is often forgotten about is the ability to manage well – this is a skill in itself and should be recognised individually. Just because someone is a great manager, it doesn’t mean they’ll be a great strategic thinker. This is where I think Joel Spolsky’s approach can help managers; acknowledging a distinct “administration” function of a business leads towards the acceptance that specific skills are needed to keep everything else operating.

Typically organisations start this process in good faith, but let it become contaminated as other areas of the business want to have roles at the newly created level – until eventually it just becomes another level across the whole organisation (which is a worse-than-neutral result, as it simply adds an extra layer into the hierarchy). The purpose behind creating a specific career path must be clearly defined and that purpose must be maintained for the organisation to get a pay-off.

Probationary Promotions – This is a cultural (maybe even societal?) challenge, as it would remove the big-deal nature of promotion and make movement both up and down an organisation more common. It is, however, clearly the best way to know whether someone can really do the job; and lets them know whether they want to!

Generate Movement – There’s a simple solution to people getting stuck in jobs they’re ill-suited to: create a system that forces movement onto people. This has downsides – you’re also moving people out of jobs they’re very well suited to, some people like feeling settled and you break up teams that work well together – but it does avoid people staying in their “maximum incompetence” role. A number of firms do this in a couple of ways: some companies fire and promote fixed percentages of their workforce on a regular basis, whilst others mandate ‘rotation’ periods (e.g. everyone has to move role every two years).

Taking Responsibility for Yourself – We all need to look after ourselves a bit more and really think about our career moves and what we want from them. In reality, I get that it’s very difficult not to get swept up in the excitement of being offered a new position and the societal belief that moving up an organisational ladder is a good thing (never mind the money!). We need, however, to go into new jobs with our eyes open and as prepared as possible – we should think about the possibility of promotion early and consider what we want from work. We can’t just lay the blame at our employer’s feet if we find ourselves in a job we don’t enjoy – we have to take some responsibility.

So what do you think? Are there any other reasons the Peter Principle exists? Is it, to some extent, inevitable? And, most importantly, is there anything else we can do to mitigate it?

The Fallacy of Precision And Why We Shouldn’t Worry About It

Last week I was involved in some forecasting work – both looking at projections for the outside world and the delivery we’d expect within the organisation. As discussions persisted in pursuit of spurious accuracy, I was reminded of a few things: 1) humans crave certainty (or the illusion thereof); 2) we are consistently pretty bad at estimating the impact of events and their likelihood; and 3) that doesn’t matter, because the real value of planning and forecasting isn’t in creating something perfect, but in the act of planning itself.

A Craving for Certainty

One of the basic drives for mankind is the desire to have an explanation for what’s going on in the world (and pretty much any explanation is better than none). In an evolutionary sense, it’s easy to see why – you’d be paralysed into total inaction if you didn’t quickly abstract patterns from the environment to inform your future actions. Without drifting too far into philosophy, inductive reasoning can never provide absolute proof, but someone who chose not to use it would be entirely incapacitated.

Arie Kruglanski described this as a desire for ‘cognitive closure’ and defined that as “individuals’ desire for a firm answer to a question and an aversion toward ambiguity”. Kruglanski conceptualises this in two stages: 1) “seizing”, where we grab information in order to come up with an explanation (‘urgency tendency’); and 2) “freezing”, where we try to hold onto our explanation for as long as possible (‘permanence tendency’). Alongside this conceptual work, research by Tania Lombrozo has shown that we react to uncertainty by spontaneously generating possible explanations, and, more interestingly, we let this bias our decision making – once we have an explanation we start to assign value to it, as if it were evidence itself.

There’s also neurological evidence that shows the impact ambiguity has on us. A 2005 study showed that levels of ambiguity correlated positively with amygdala and orbitofrontal cortex activation, and negatively with activation in the ventral striatum. This reflects ambiguity creating a burden on our decision-making faculties, leading to reduced reward sensation and even causing a fear response.

This leaves us desperate to find a precise answer when faced with uncertainty, and when we come up with one we really don’t want to let go. To counter this, we first need to be aware that we’re craving an answer, even when it’s impossible to have a definitive one. Secondly, we have to try to accept a more open-ended solution – for example, using a range when calculating benefits or presenting options based on different scenarios. Thirdly, when coming up with possible hypotheses, note them all down; it keeps you aware that there are other possibilities, enables you to moderate your forecasts (by comparison with what the other possibilities would suggest) and forces you to think about the evidence that drove you to decide upon ‘the’ explanation (so that you can review whether that evidence still holds up as time passes). Fourthly, monitor real-world outcomes against the world your ‘explanation’ would predict – this isn’t meant to get us down, but to force reality upon us; when we come to making our next explanation, we often forget how accurately (or not) we’ve forecast in the past.

An Inaccuracy in Estimation

This is a huge topic – much too large to cover in any detail here – so I’ll only highlight a few of the ways in which we make mistakes. It’s worth noting that I’m not denying the utility of some of these biases; they can enable us to take action when inaction might prove fatal, make us more optimistic (and hence driven to take action) and help us make decisions quickly.

Illusion of Control – We believe that we have more control over events than we really do. Langer showed that, even where we know – rationally – that outcomes are random, we still feel we have control. One experiment either allowed participants to choose their own lottery ticket or gave them a ticket that had been selected for them. The two groups were then offered the opportunity to switch their ticket and enter a different lottery, which offered better odds of winning. Those who chose their own ticket at the start were far less likely to switch into the new lottery, despite the increased chance of winning – they appeared to think that they were “good” at choosing. This effect is seen in a range of scenarios and differs from general over-optimism – it is a belief that we have control over events and that this control improves the likelihood of positive outcomes.

Overconfidence Effect – We are more confident in our own judgement than we should be. There are three elements to this: 1) overestimation of one’s own performance; 2) overestimation of one’s ability relative to others; and 3) overestimation of how precise our estimates are.

Confirmation Bias – We look for evidence that supports our hypothesis. This means we constantly build support for our theories and ignore anything that would disprove them.

Availability Heuristic – The reliance upon what comes to mind in making a judgement. This makes us biased towards things that spring to mind, such as those that are particularly salient, have happened recently etc.

Overall, there’s not too much we can do to counter these biases, apart from being aware of them and actively trying to mitigate them. For example, to counter the availability bias you can separate out the assessment of relevant topics from the forecasting itself – by drawing out all those topics you make them all available. Or to counter the overconfidence bias, you can bring in others (outside your normal working area) to assess those same tasks or events (although maintain awareness that they’ll also be overconfident in their judgement – and potentially it’ll be even more extreme).

The Real Value of Planning (or Forecasting)

One of the biases that I left out above was the “planning fallacy” – the consistent underestimation of the amount of time it takes to deliver a project. There are a number of possible explanations for this, including some of the biases above (illusion of control, overconfidence and availability). Not only do we show the planning fallacy all by ourselves, but organisations often encourage it even more. We tend to underestimate delivery time because we don’t put enough ‘slack’ in our plans, yet managers, customers and executives want to drive delivery plans to be as short as possible – they want a justification for every block of time and “because something is likely to go wrong” doesn’t normally cut it.

Further, we’re normally asked to, seemingly sensibly, build our plans on the set of outcomes that seems most likely. So we take each individual action and judge whether it’s more likely to be delayed or not – and most individual events are more likely to go smoothly than not. The problem is that, in aggregate, it’s likely one of them will go wrong – we just have no idea which one. The single most likely set of events may well be everything going right, but the chance of that is, let’s say, 20% – each other individual set of events is less likely, but the chance of something going wrong is still much higher than the chance of nothing going wrong.
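The arithmetic behind that is worth seeing once. With invented figures – ten independent tasks, each individually 90% likely to go smoothly:

```python
# Invented figures: ten independent tasks, each 90% likely to run smoothly.
p_smooth, tasks = 0.9, 10

p_all_smooth = p_smooth ** tasks
print(f"P(everything goes right)   = {p_all_smooth:.0%}")      # ~35%
print(f"P(at least one goes wrong) = {1 - p_all_smooth:.0%}")  # ~65%

# Every task, taken alone, is more likely to go smoothly than not - yet a
# plan with no slack built in fails roughly two times out of three.
```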

The rest of this post might seem to be a bit of a downer, but here’s the positive – it’s not as important to be accurate as we feel it is. The process is more important than the answer you come to. At a superficial level, it’s not worth worrying about the last few percentage points when developing estimates – we tend to be building on top of so many assumptions that it’s a false economy (it’s a good example of Pareto’s Law – most time is often spent on fine, low-value pseudo-accuracy). Thinking in terms of a realistic range is much more helpful than spending hours generating a falsely precise figure. If the world progresses in the way you expect then you’ll be in the right ballpark – if it doesn’t, you’ll be off massively anyway.

At a deeper level, there is value in planning because you mentally simulate whatever series of events or plans you’re looking at – as Dwight D Eisenhower said “No battle was ever won according to plan, but no battle was ever won without one… Plans are useless, but planning is indispensable”. The process of planning (or forecasting) forces you to think about the factors in play – you have to think about the requirements, the dependencies, the risks and how you might mitigate them in more detail than you would otherwise.

There’s a skill to doing this properly – your ability to mentally simulate a scenario is limited by two key factors: 1) your imagination and 2) your knowledge. Both of these can be helped by bringing other people into your planning process (you can use the time saved by being less concerned about the fine detail of your benefit or forecast figures). You need diverse groups of people to get the most out of planning – to broaden imagination it’s important to bring in people with very different experiences to your own (we tend to think within our own paradigm, which is set by our experiences) and to broaden knowledge we require subject matter experts across the plan’s elements (e.g. if you were building a football stadium, you don’t only want people who’ve built football stadia – they’re useful, but you also want people with knowledge of delivering large construction projects, of the leisure/entertainment industry, of turf and the conditions that impact on it, etc.).

The more you’ve simulated both your preferred option and your range of options, the better prepared you are to deliver the project – whether it goes to plan or (infinitely more likely) not.

When things go wrong we often worry, unsurprisingly, about things having gone wrong. But that doesn’t generate progress for the project (although it might deliver some learning). Mental simulation leaves you more prepared to handle events when they go off the expected path because you don’t have to rely on impulsive decision making – you already at least have a rough idea of what to do.

Worry Less About the Output and More About the Process

Planning is a hugely useful thing to do, but only when time and effort are spent in the right way. Desperately hoping to get your delivery schedule right to the exact day, or your benefits down to a precise figure, leaves you on a hiding to nothing – there are too many unknowns in the world and we have all sorts of biases in our reasoning. We just have to be more laid back about that (as well as relaxing about whether people meet our own projections – if we consistently over-deliver, it’s because we’re consistently under-promising).