Are You Afraid? – Trends in Fear At Work

Throughout history, leaders – military, business and state alike – have used fear to make people fulfil their wishes. Paul Austin, CEO of Coca-Cola between 1966 and 1980, once described his management style as liking “to pull all the legs off the centipede and see what he’s really like”, adding that “a certain degree of anxiety and tension has to exist for people to function at the highest level of their potential”.

Fortunately, between our increased understanding of people and the re-balancing of power between leaders and followers – the growing ability, and acceptability, of moving between employers; the reduction in severe poverty in states with developed welfare systems; and the increased accessibility of knowledge – the majority of organisations have moved away from a fear-based model.

Now, fear can be a short-term motivator, but, as we saw with the impact of money on motivation, extrinsic motivators tend to lead to both long-term and short-term problems – and the impact of fear is much more extreme. Fear has a broad range of negative impacts, from reduced quality to distorted organisational learning and from a fear of attempting anything new (as Steve Ross noted above) to “disastrous financial performance”.

Given the above, and our own common sense, hopefully we all agree that fear is not a long-term management strategy. In this article, however, I want to highlight a few trends in fear in the workplace, starting with how prevalent it is, then focusing on people’s fear of redundancy.

Seventy Percent Believe That Raising Concerns Will Have Adverse Consequences

Fear has a number of definitions, but here I’ll keep it as broad as possible – “an unpleasant emotion caused by a threat”. A number of studies fall within this definition, and together they show how remarkably common fear is in the workplace.

In one study, which involved interviewing 260 people from 22 organisations, the researchers sought to identify what percentage of individuals feared that raising on-the-job concerns might result in adverse repercussions. Amazingly, 70% of those interviewed confessed to having this anxiety. The majority of these highlighted sensitive issues that needed resolution, but which everyone was too scared to talk about. The feedback mechanism between staff and ‘management’ is essential to continuous improvement, so if 70% of staff are afraid to raise concerns then there is substantial untapped potential for both employers and employees.

To add to this, findings from the 2012 UK Skills and Employment Survey showed that 31% of employees were fearful of unfair treatment at work, while 52% felt anxious about losing their job.

It’s clear that fear, in some form or other, is prevalent at work. Given the negative consequences of fear and the extent of fear in the workplace, this becomes an area that we should invest in improving.

Public Sector Employees Feel as Insecure as Those in the Private Sector

The survey mentioned above showed another trend, which, while understandable, reflects a new challenge for management within the public sector. This study took data from five different years – 1986, 1997, 2001, 2006 and 2012 – and compared public sector and private sector responses to a number of questions. One of these questions asked whether people feared losing their job and becoming unemployed. In 1986, 2001 and 2006, those in the private sector clearly feared redundancy more than those in the public sector. In 1997 (potentially in line with uncertainty due to the general election) both sectors felt equally secure/insecure. In 2012, however, public sector insecurity was significantly greater than that in the private sector. It will be interesting to see whether this trend continues, or the gap even broadens further, with increased austerity.

Some might argue that this has been too long coming, and that the public sector has been far too comfortable, but, irrespective of your personal views, it represents a new challenge for those managing within government. There is an opportunity to harness this shift, particularly for people who have become too relaxed in their role and need a jolt, but it’s important to provide clarity and confidence to those who are performing well. As discussed in this post, it’s about the right approach for the right people, but there may be substantial opportunities for the public sector to learn from private sector colleagues, given their experience of handling a workforce concerned about this uncertainty.

In contrast to this, fear of unfair treatment (formed of anxiety about arbitrary dismissal, discrimination or victimisation by management) was significantly higher in the private sector than the public sector throughout, suggesting learning opportunities in the other direction.

Women are Less Worried About Losing Their Job

Alongside the Skills and Employment Survey, Glassdoor run a quarterly employment confidence survey in the UK and US, giving us a much more frequent and up-to-date picture. Please accept my apologies for being a tad parochial, but I’m going to concentrate on the UK stats.

One of the consistent trends in both these surveys is that women are less worried about losing their job than men. Stereotypically, men are over-confident and cocky, while women often miss out on career opportunities by being overly anxious – so what’s going on?

There are a range of possible explanations, but one piece of evidence shows a clear example of a third factor being in play. Those in part-time work are significantly less worried about being made redundant – and that applies equally to men and women. However, a higher proportion of women than men are in part-time positions, so women’s overall average anxiety level is pulled down by this higher proportion of part-time workers. Unfortunately, I’ve not seen results for men and women in similar jobs, so I don’t know what the like-for-like comparison shows.

It’s worth mentioning, however, that the gap between men and women appears to be closing – while men consistently express higher anxiety about losing their job, the Skills and Employment Survey showed a much bigger increase in anxiety for women than for men between 2006 and 2012. Again, there are open questions about whether changes in the type of employment played a role.

As a brief aside, while men displayed less confidence in keeping their job, they consistently displayed more confidence across a range of other questions, including the positivity of their business’ outlook and the chance of them getting a pay rise.

Our Fears Fluctuate

The Glassdoor survey suggests that the percentage of people fearing for their job varies hugely quarter to quarter – in 2014 the results were 21% in Q1, 29% in Q2, 19% in Q3 and 35% in Q4. 2014 was, unfortunately, the first year in which Glassdoor separated out UK results, so we don’t know whether this pattern of rise, drop, rise repeats year on year (e.g. could our anxiety be seasonal?) or whether concern is very sensitive to external factors.
Whatever the reason, it raises some questions about snapshot surveys and how useful they are for judging employee anxiety, particularly trends. It suggests we need a broader range of data (both number of people surveyed and number of surveys) than the survey above has so far, if we want to draw strong conclusions about trends. Organisations that compare year-on-year employee survey results should be aware of this high level of variation.
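Part of that quarter-to-quarter variation could be nothing more than sampling noise. As a minimal sketch – the ‘true’ rate and sample size below are invented assumptions, not Glassdoor’s actual methodology – even a world where nothing changes produces bouncing quarterly figures:

```python
import random

random.seed(42)

TRUE_ANXIOUS_RATE = 0.26   # hypothetical fixed underlying proportion
SAMPLE_SIZE = 500          # hypothetical respondents per quarterly snapshot

def quarterly_snapshot():
    """One survey: each respondent reports anxiety with a fixed probability."""
    anxious = sum(random.random() < TRUE_ANXIOUS_RATE for _ in range(SAMPLE_SIZE))
    return anxious / SAMPLE_SIZE

# Four 'quarters' drawn from an unchanging population still wobble
snapshots = [quarterly_snapshot() for _ in range(4)]
print([f"{s:.0%}" for s in snapshots])
```

With a few hundred respondents, swings of several percentage points appear even when the underlying anxiety level is constant – another reason to treat single snapshots as a weak basis for trend claims.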

You’re More Likely to be Fired Than I Am

While it’s not surprising (we tend to think we’re the best at everything), it’s good to see the data – we believe that other people are more likely to be fired than we are. The Glassdoor survey shows that, despite the variation in anxiety, a consistently higher percentage of people are concerned about others being fired than are concerned about losing their own job. Over 2014, the gap was at least 7 percentage points every quarter (i.e. in relative terms, at least 20% more people were concerned about others’ jobs than about their own).

None of These Fears Are Universal

To reiterate a focus from my previous post, there are different groups of people with different points of view. For example, we’re not all worried about losing our job – in Q4 2014, 35% feared redundancy, leaving 65% who did not. At the same time, 39% said they would leave their job if they did not receive a pay rise in the next 12 months – presumably the majority of these weren’t feeling too negative about how their employer sees them! And we’re even optimistic about how our organisations will perform – 34% thought things would improve for their business, compared to only 11% expecting things to get worse.

What Does This All Mean? 

We’ve seen that fear negatively impacts on your business and we’ve had a look at some of the trends in fear. The statistics show that fear matters – with such a high proportion of the workforce worried about something, it’s an area where we can make a difference. We need to think about who’s afraid, how we want them to feel and how we can help staff feel more productive emotions (as well as better emotions for their own well-being).

What Can We Do?

The focus of this post has been on trends – specifically in relation to the fear of losing a job, but also, hopefully, some broader lessons about how we can look at phenomena like this.

It wouldn’t feel complete, however, without at least giving a few pointers on how to make things better (and a few links with some more detail):

  • Make self-awareness essential
  • Promote emotional intelligence
  • Encourage honest and constructive confrontation
  • Be more transparent and provide clarity where possible
  • Give up unnecessary restrictions and controls on your staff
  • Practise facing fear
  • Train on the basics – and rely on them when the going gets tough
  • Develop an open environment at work – encourage conversation, laughter and positive emotional expression.

Feel free to add more – particularly any practical examples!

The Pygmalion Effect, the Impact of Expectations and the Importance of Context

Pygmalion by Jean-Baptiste Regnault, 1786, Musée National du Château et des Trianons

The Pygmalion of Greek myth was a sculptor. He carved an ivory statue – Galatea – so beautiful and realistic that he fell in love with it. At the altar of Aphrodite, the goddess of love, he wished that she would come to life, as he had no interest in other women. When he returned to the statue, she came to life and they went on to marry and have children. His name has been lent to the Pygmalion Effect – the more positive the expectation placed upon someone, the better they perform.

The Pygmalion Effect

This is also known as the Rosenthal Effect, after Robert Rosenthal, who conducted a number of experiments into self-fulfilling prophecies. In 1968 he, alongside Lenore Jacobson, published “Pygmalion in the Classroom”, incorporating a study at Oak School in California. In the experiment they randomly identified around 20% of the students as having particularly high potential for the coming year and then informed the teachers about these particularly talented individuals. The results showed that, for younger pupils, being labelled as high potential delivered a statistically significant improvement in IQ increase above and beyond that achieved by ‘normal’ students. Thus the first evidence for the impact of expectations on performance was found, and a new area of research was born.

This study, however, created just as many questions as answers – why were only the younger children affected? Does that mean expectations play no role for adults? Could it be due to teachers giving more support to those students (and thus less to the rest), meaning the effect can’t be used to benefit everyone in a group? Was any effect due to the teacher’s expectations or was it due to the students identifying themselves as a strong performer?

Research at around the same time (1968 and 1970) by Major Wilburn Schrank looked into these questions, to varying extents. He used similar treatments to the experiment above, but Schrank’s experiment was conducted at the US Air Force Academy Preparatory School (ages 17 and over) and labelled whole ‘sections’ (i.e. classes) as more or less talented. In the 1968 study the teachers were told the labels were authentic, whereas in the 1970 study they knew the labels were random, while everything else was kept the same. In the 1968 study, the same effect was seen as for the younger pupils at Oak School, showing that the impact of expectations continued beyond youth and that whole groups could benefit (one major caveat: there was no control for how much effort the teachers invested in each group outside of the classroom, such as lesson planning). In the 1970 version there was no effect, revealing that teachers’ expectations were the key variable, rather than the students’ beliefs. This still only looked, however, at a narrow age range and a very specific work environment.

In 2000, both Nicole Kierein & Michael Gold and Brian McNatt conducted meta-analyses of the Pygmalion Effect within work organisations, providing us with some more thorough answers. Kierein & Gold looked at 13 different studies within organisations and found that there was a significant effect (for the statisticians out there, the overall effect size was a d value of 0.81 – for the non-statisticians, Cohen said anything over 0.8 represented a “large” effect), while McNatt looked at 17 studies, finding an effect size of 1.13.
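For readers who want to see what that d value represents, here is a minimal sketch of Cohen’s d – the difference between group means divided by their pooled standard deviation. The two sets of scores are invented purely for illustration:

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d: difference in means over the pooled standard deviation."""
    n1, n2 = len(group_a), len(group_b)
    pooled_var = ((n1 - 1) * stdev(group_a) ** 2 +
                  (n2 - 1) * stdev(group_b) ** 2) / (n1 + n2 - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Invented performance scores for a 'high expectation' group vs a control group
high_expectation = [78, 82, 75, 88, 84, 79]
control = [70, 74, 68, 77, 73, 71]

d = cohens_d(high_expectation, control)
print(round(d, 2))  # Cohen's rule of thumb: 0.2 small, 0.5 medium, 0.8+ large
```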

So overall we can be confident that, when looking across a whole population, there really is a Pygmalion Effect. It is not, however – and as ever – that simple…

The Opposite – Golem’s Emergence

Every silver lining has its cloud, and the Pygmalion Effect’s counterpoint is the Golem Effect – if we hold negative expectations of someone, they are dragged down by them. In fact, both meta-analyses found that the Golem Effect exceeded the effect size of its Pygmalion brother. (To reassure readers that psychologists aren’t too evil: the Golem Effect isn’t tested by randomly designating people as incompetent. Researchers ask managers what their expectations are for their staff, then actively tell them that their expectations should be average, rather than low, for a randomly selected portion of those designated ‘low expectation’. The ‘control’ group is therefore the one that experiences the Golem Effect, and the ‘treatment’ group is de-Golem’ed.)

A representation of a Golem

Therefore restraining the negative is at least as important as maximising the positive. This is particularly important if expectations are part of a zero sum game (i.e. if designating some as ‘high expectation’ means others become ‘low expectation’). This can happen naturally; we baseline our expectations based on the people around us and then judge people relative to the baseline. If you suddenly start working with an exceptional employee, then you might reasonably wonder why the others aren’t as good – even though before working with the exceptional employee you thought they were all fine.

That’s one reason why holding expectations for a group can be useful, particularly if it’s the group whose performance is most important to you (and, equally, the group which is most influenced by your expectations). By placing expectations on a group you avoid the need to compare members within that group to define your baseline.

Differences Matter

It turns out that the Pygmalion Effect, while prevalent, varies in size depending on who you are and the environment you’re in. The meta-analyses found three specific moderators of effect size (I really can’t emphasise enough that, by definition, it is only possible to identify moderators that we record, so we’re unlikely to see anything too subtle. Only the very basics about participants are normally recorded – there are a number of other potential, but unproven, moderators).

The first moderator they identified is gender. Both analyses found that men were more strongly influenced by expectations than women. It seems likely that this is a societal and cultural effect – and these analyses are now 15 years old and relate to research older than that. Senior levels within organisations were more male dominated then and men had increased opportunities; this may have motivated men towards trying to fulfil expectations or the men in leadership positions may have invested disproportionately more in ‘high expectation’ men – the reasons aren’t clear.

More recent research (2008) by Gloria Natanovich and Dov Eden showed that the gender of the ‘expecter’ was not, at least in a specific environment, an important variable, while also suggesting that ‘expectee’ gender didn’t matter either. As workplace culture becomes increasingly balanced, these results suggest that gender biases in the Pygmalion Effect will gradually fade away. In the interim, however, we need to be aware of how workplace biases can play a role.

Secondly, effect size is influenced by initial expectations and/or performance. The lower the initial expectations, the larger the impact of the Pygmalion Effect. This follows logically – the greater the scope for change in a variable, the larger any effect should be.

Thirdly, workplace environment has a significant influence. In particular, both analyses found a stronger impact in military environments than in other workplaces. Perhaps this is due to the hierarchical environment in the military leading to people investing more in living up to their manager’s expectations, in comparison to the less ingrained and less linear (you’re not as strongly ‘owned’ by any one person) relationship in non-military organisations.

There are individual differences everywhere

This reflects a broader concept – which stretches far beyond the Pygmalion Effect – that I believe could use a lot more attention: analysis of the differences in the impact of psychological phenomena on different groups (here I am simply defining a group as a number of people who share any specific trait). It is both easy and useful to have a global analysis; it provides more straightforward actions, while only needing random samples from the total population to identify an effect. It’s also controversial and complicated to start to look at actions that would be targeted at specific segments. It could, however, help both employees and employers and it is, in my opinion, something we should explore.

Maintaining Awareness

As a final twist in the Pygmalion tale, we need to think about assessment of the quality of someone’s performance. In general terms, our perception of performance follows this equation (this applies equally to products as it does to people):

  • Perceived Performance = Actual Performance – Expectations

Therefore the higher our expectations, the lower our perception of the quality of the same actual performance (for example, if I bought a watch for £10 and it broke after a couple of years then I’d think that was fair enough. If I bought a watch for £500 and it lasted a couple of years, then I’d feel ripped off).

The Pygmalion Effect means that raising expectations of employees (or students) can lead to increased actual performance, as we know. But if the increase in expectations is larger than the increase in performance, then we can still end up perceiving a decrease in performance. Therefore we are disappointed in delivery, probably express this to others and suppress future performance.
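A toy sketch of that trap, using the equation above (the numbers are arbitrary, on an invented 0–100 scale):

```python
def perceived(actual, expected):
    # Perceived Performance = Actual Performance - Expectations
    return actual - expected

# Before any intervention: a solid performer, modest expectations
before = perceived(actual=70, expected=60)

# After a Pygmalion boost: actual performance improves,
# but expectations have risen even faster
after = perceived(actual=80, expected=95)

print(before, after)  # a real improvement, yet perception worsens
```

Actual performance rose from 70 to 80, yet perceived performance fell from +10 to −15 – a genuinely improving employee can still look like a disappointment.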

We need to be aware of this and do what we can to correct for it. Using objective metrics enables us to make direct comparisons, but mostly we just need to reflect on this when thinking about how individuals (and groups) are performing.

By understanding our expectations, and how to use them, we can improve performance, increase individual motivation and appropriately recognise people for their successes, making this a topic worth exploring – after all, Pygmalion did, eventually, get the girl.

Reaching “Final Placement” – The Peter Principle and Promotion

In 1969, Laurence Peter and Raymond Hull wrote the satirical management book “The Peter Principle: Why Things Always Go Wrong”, and its key principle is only becoming more relevant. That overarching concept is “every employee tends to rise to his own level of incompetence” – i.e. we each get promoted until we’re no longer good at our job and then we get stuck in that role forever more. This leads not only to organisational ineffectiveness, but also to employee unhappiness and anxiety. There are multiple plausible explanations for this and, helpfully, some research which explores the reality of the Peter Principle.

The Peter Principle

The greatness of Peter and Hull’s book was the balance between genuine insight and off-the-wall comedy. Its theories were intuitive and the book hugely readable. My training, however, is as a sceptical scientist, so I wanted to find a bit of evidence to support it.

In theoretical terms, there are a number of models, from Faria (2000) to Lazear (2001). Pluchino, Rapisarda and Garofalo (2009) even ran an agent-based simulation showing that promoting people at random was better than promoting the best current performers – though it’s worth noting they won an Ig Nobel award (the simulation did assume that: 1) you promote the most competent from their current roles; and 2) performance in the current role doesn’t predict performance in the job above). These only show, however, that the Peter Principle is possible, not that it actually happens.
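The core mechanism is easy to reproduce in miniature. The sketch below is a heavily simplified stand-in for Pluchino et al.’s model, not their actual code – the hierarchy size, uniform competence scores and the competence ‘re-roll’ are all illustrative assumptions:

```python
import random

random.seed(1)
LEVELS, PER_LEVEL, ROUNDS = 4, 20, 200

def new_agent():
    return random.uniform(0, 10)  # a competence score

def simulate(pick_best, peter_hypothesis=True):
    """Fill vacancies by promoting from the level below; return mean competence,
    weighted towards senior levels, as a crude proxy for efficiency."""
    org = [[new_agent() for _ in range(PER_LEVEL)] for _ in range(LEVELS)]
    for _ in range(ROUNDS):
        level = random.randrange(1, LEVELS)              # a vacancy opens
        pool = org[level - 1]                            # candidates: level below
        idx = (max(range(PER_LEVEL), key=pool.__getitem__)
               if pick_best else random.randrange(PER_LEVEL))
        promoted = pool.pop(idx)
        if peter_hypothesis:
            promoted = new_agent()                       # competence doesn't transfer
        org[level][random.randrange(PER_LEVEL)] = promoted  # someone retires
        pool.append(new_agent())                         # a new hire backfills
    weights = range(1, LEVELS + 1)
    return sum(w * sum(lvl) / PER_LEVEL for w, lvl in zip(weights, org)) / sum(weights)

print(round(simulate(pick_best=True), 2), round(simulate(pick_best=False), 2))
```

Under the Peter hypothesis (competence in the new role is unrelated to competence in the old one), promoting the best drains the lower levels of their strongest people without improving the upper ones, so random promotion tends to score higher – the counter-intuitive result the Ig Nobel committee enjoyed.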

Dickinson and Villeval (2007) showed that the Peter Principle exists in a lab setting, as long as performance has both random and skill-based elements to it. This not only showed the Peter Principle at work, but also that it became stronger as the importance of the random element increased. Further, it showed that using a promotion rule (i.e. people got promoted when they hit certain performance criteria) was still better than self-selection – as we’ve seen in other posts, people are bad at making judgements, particularly those relating to their own ability (in this case, a particular problem was attributing chance-related performance to their own ability).

Finally, alongside the many anecdotal examples, there has been some research into the real-world presence of the Principle. Barmby, Eberth and Ba (2006) conducted a review of over a decade’s worth of data from a large financial firm – this indicated that there was a drop in performance after promotion and that two-thirds of this could be accounted for by the Peter Principle. Earlier, Abraham and Medoff (1980) found that subjective performance was lower the longer people had been in a job, while Gibbs and Hendricks (2001) found that raises and bonuses fall with tenure – both findings reflect either that the people who performed worse in their role stayed there for longer, or that staying in the role too long decreases performance. Together these pieces of research suggest that people in a role for a long time are worse at their job, while those who get promoted perform worse in their next job.

The Many Theories Behind the Peter Principle

There seems to be, therefore, a consensus that the Peter Principle is a real world occurrence. There is not, however, agreement over what creates it. Explanations are many and varied, including: the change in skills required between different roles; regression to the mean; a reduction in motivation after a promotion; promoting people out of the way; strategic incompetence; and even that super-competence is less desired within an organisation than incompetence.

Different Jobs, Different Skills

The most immediately logical reason for a person to perform worse after a promotion is where the new job demands different skills to the last one. A common example is where those who are great at actually doing work are then promoted into a management role, but this applies each and every time someone moves into a new role.

This is particularly relevant in competency-based promotion systems, which are designed to emphasise cross-cutting skills, increase the available candidate pool and generate common standards across an organisation – clearly worthwhile goals. For organisations that have committed, however, to a largely competency-based recruitment and promotion system, such as the UK Civil Service, this presents a real challenge. How can you really assess a candidate’s ability to use very different skills, relying only on their past experience? Either you have to accept that you recruit people who already have experience in the same kind of role (defeating a key objective of the system) or you have to accept that you’ll sometimes promote people who don’t have the skills you’re after.

Our Culture

Our society constantly informs us that it is a positive to climb the career ladder, meaning promotion becomes an aspirational thing. This can lead to people either: a) being unhappy in roles that they’d really enjoy, if they didn’t have the weight of aiming for promotion hanging over them; or b) moving from roles that they are happy in and into a job they hate, because it was a promotion.

To compound this, we’re hugely uncomfortable with the idea of either an organisation demoting someone or someone choosing to move back to an old role. That can lead to people being fired, when we know they can be highly productive within the organisation, or staying in a job they hate until they retire, when they know there’s a job within the organisation they enjoy. One of the bravest people I know chose to take a demotion out of a management position back into their previous role – as a result they’re also one of the happiest.

This is too broad a topic to digest here, but tools that can help include the recent focus on mindfulness, openness in the workplace and emphasising any successful examples.

Not Incompetence, but Stress

The Harvard Business Review wrote in 1976 (“The Real Peter Principle: Promotion to Pain”) that the problem didn’t arise from people failing to have the right technical, academic or interpersonal skills to succeed, but from them moving into a role where stress and anxiety suppress them. This drives both an organisational decrease in productivity and an individual loss of well-being. Be aware that this doesn’t always become apparent through stereotypically emotional behaviour (e.g. temperamental outbursts, crying in the office), but can also be expressed through increasing passivity and detachment.

Returning to Normal Performance

Lazear argues that the Peter Principle is simply due to regression to the mean, and to those involved in promoting failing to recognise that there is variation in performance (i.e. the regression fallacy). He argues that, as a promotion suggests a performance standard has been met, performance is likely to be lower in your new role. You are promoted because of your exceptional performance, which means either that your ‘normal’ performance is exceptional (or better) and you were performing normally, or that your ‘normal’ performance is less than exceptional but, for a period of time, your performance deviated from the mean by enough to reach that exceptional hurdle.

If you’re in the second group, then after promotion your performance is likely to regress back to the mean, so your performance is likely to be below the expectations when you were promoted. This is particularly apparent in the sporting world because you can measure someone’s performance – as an example, between the start of the English Premier League in 1992 and the end of the 2012/13 season, 33 players scored 20 goals in a season (the unwritten benchmark for a top quality striker). Only 12 of those were able to do it more than once; the vast majority regressed back to the mean.
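Regression to the mean is easy to demonstrate with simulated strikers. The skill and luck distributions below are invented for illustration – the point is only the direction of the effect:

```python
import random

random.seed(7)
PLAYERS = 1000

# Each player has stable underlying skill; a season's tally is skill plus luck
skills = [random.gauss(14, 4) for _ in range(PLAYERS)]

def season(skill):
    return skill + random.gauss(0, 6)

season1 = [season(s) for s in skills]
season2 = [season(s) for s in skills]

# Select everyone who hit the 20-goal benchmark in season one...
stars = [i for i in range(PLAYERS) if season1[i] >= 20]

avg_s1 = sum(season1[i] for i in stars) / len(stars)
avg_s2 = sum(season2[i] for i in stars) / len(stars)
print(round(avg_s1, 1), round(avg_s2, 1))  # season two falls back towards the mean
```

Nobody’s underlying skill changed between seasons, but selecting on a lucky season guarantees the group’s next season looks worse on average – exactly the pattern in the 20-goal strikers.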


This is a subset of the underestimation of variance in people’s performance, but here I’m only referring to variation due to luck (rather than our own performance). Given that this is a subset of the above, it again has a particular impact on competency-based systems.

As a purely anecdotal example, about two years ago I was working on six outline business cases with one other person. We shared the workload and both contributed to all of the proposals, but eventually had to (for process reasons) decide who was in the ‘lead’ for each option, so we split them down the middle – three each. I would have judged our work to be of near identical quality. Yet, due to circumstances beyond my (or his) control, ‘my’ three business cases were accepted, while none of his were. For my ‘success’ I received both praise and the highest performance marking, but due to my colleague’s ‘failure’ he was largely forgotten.

It was only luck that separated us, but people judged us only on the outcome, not the quality of work in getting there. This bias towards outcome is prevalent and it’s easy to get blinded by big numbers and impressive endings, but it’s important to take a step back. We neglect the role luck played in candidates’ successes and failures – and, as Dickinson and Villeval showed earlier on, the bigger the role luck plays, the larger the effect of the Peter Principle.

Lost Interest After Promotion

This is a fairly obvious one – people work as hard as possible to get a promotion and once they’ve achieved that they no longer have a clear goal so lose motivation. I won’t spend too long on this, as I wrote about motivation in this post, but it makes particular sense in the context which that article set – not only is the motivation of promotion removed, but the focus on achieving promotion is also likely to reduce your intrinsic motivation in doing the job itself.

You’re Too Good; It’s Showing Us Up!

This one takes us back to Peter himself – he stated that “in most hierarchies, super-competence is more objectionable than incompetence.” This one is impossible to find evidence for, beyond the anecdotal, but the argument is this – people who perform too well (or would clearly be better at their seniors’ jobs) disrupt the hierarchy of an organisation. The organisation fights to maintain that hierarchy by acting against those who are “super-competent”. Therefore managers find spurious reasons not to promote staff or to avoid giving them strong performance reviews (this circular nature is part of what makes it difficult to find evidence – the response to super-competence is to find a way to deny any super-competence).

Further, peers often become suspicious of over-performers and can leave them isolated, which can result in organisations losing some of their most talented staff to places where they feel more ‘normal’.

Move Someone Up – and Out of the Way

Another slightly cynical one, but one lots of us have seen – people get promoted because their current business area wants to get rid of them. There are arguments that within certain types of firm (such as technology), it’s an organisational theme rather than just a series of one-offs; people are moved into management because they lack the specific skills to do the skilled front-line work. Putt’s Law and the Successful Technocrat, published under the pseudonym Archibald Putt, pursues exactly this theory – incompetence is “flushed out of the lower levels”, leaving two kinds of people in technology: “those who understand what they do not manage and those who manage what they do not understand”.

As a slight aside, it’s worth noting some of the more innovative thinking about organisational structures – Joel Spolsky proposes that we should move away from the concept of a central management team being the top of an organisation and move towards it being ‘led’ by those doing front-line activity. In his world view (tinted towards technology, clearly) the ‘management team’ should not believe itself to be an executive, decision-making function, but a support function. Therefore it should be seen as “administration” – they do the things that help work get done. I’ll have a look at some organisational structure concepts in later posts.

“Strategic Incompetence”

Jared Sandberg came up with this term, while writing in The Wall Street Journal (sorry, can't find a link). His description was this – "Strategic incompetence isn't about having a strategy that fails, but a failure that succeeds. It almost always works to deflect work one doesn't want to do—without ever having to admit it." He meant this in a range of situations, including people simply choosing to be terrible at a specific task (say, doing the ironing) so they don't have to do it anymore. In this context, however, we're talking about when the most suitable people for promotion choose to perform badly enough to avoid it. This might not be by failing in their core responsibilities – presumably a major driver for doing this is that you enjoy your job more than the one you'd be promoted into – but can be by purposefully showing themselves to be unsuitable for the next step up, such as refusing to engage in the expected 'networking' activity or acting out at team events.

This behaviour results in less suitable people being promoted, but does mean that individuals using strategic incompetence avoid the Peter Principle – by purposefully avoiding promotion they avoid being moved up into their maximally incompetent role (or "final placement" as Peter called it). It opens up an interesting topic, e.g. is this an effective personal strategy (as you avoid the stress noted above)? Or could it even be good for the organisation as a whole (as you have people in roles where they're still highly competent)? This leads on to my final section: how can we try to avoid, or at least dampen the effect of, the Peter Principle?

Is There Anything We Can Do?

Here are a few thoughts, to provide a starting point, on how we can counter the Principle:

Focus on Proof of Performance in the New Role – Sorry for stating the blindingly obvious, but this is still the main area where organisations go wrong. The focus should not be on how good the employee has been, but on how good they’re going to be in the new role (which can clearly include consideration of previous performance). One way to do this is to complement competencies with tailored tests (whether specific skill, situational judgement, personality, or scenario-based testing). Another way is to really narrow down the focus on competencies to the new role. This often means trying to forget about the outcomes and concentrating on the elements of the process that are relevant.

Finally, it’s often possible (though you need to consider whether the role merits the cost) to run some sort of “real life interview” over a couple of days, by assessing performance in a more life-like scenario. This can include the obvious – like asking candidates to research and deliver a presentation or analyse some part of your business – but can also involve something a bit different, like allowing them to genuinely perform a business operation for a short period or giving them some funding and letting them show what they can deliver with it.

Adjusting for Luck and Regression to the Mean – We need to avoid focusing purely on outcomes, particularly where luck is a major determinant. By asking people with knowledge of the area where the candidate's competencies come from, you can start to assess the impact luck could have had. To adjust for regression to the mean, we need to avoid relying on just a few one-off examples or merely someone's performance over the last period – find ways to take a range of information on board (and simulate their real-world performance, as suggested above). Allowing someone to merely present a few of their highlights is bound to give a very unbalanced picture (and is likely to be biased against those who deliver at a high level consistently – they'll appear worse than those with occasional exceptional work, even if those people deliver indifferently most of the time. On a different note, it's also biased towards those who are comfortable lying).
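To illustrate the highlight bias, here's a minimal Python sketch using deliberately invented numbers: a consistent performer and a streaky one with exactly the same average.

```python
# Hypothetical monthly scores (out of 100) for two performers.
# The numbers are made up, constructed so both averages are identical.
steady = [72, 70, 71, 69, 70, 73, 70, 68, 71, 70]
streaky = [95, 45, 50, 92, 55, 88, 48, 90, 52, 89]

def mean(scores):
    return sum(scores) / len(scores)

# Judged on a few cherry-picked highlights, the streaky performer looks far better...
print(sorted(streaky, reverse=True)[:3])  # [95, 92, 90]
print(sorted(steady, reverse=True)[:3])   # [73, 72, 71]

# ...but on overall delivery there's nothing between them.
print(mean(steady), mean(streaky))  # 70.4 70.4
```

Any assessment built on "show me your best three pieces of work" would pick the streaky performer every time, even though overall delivery is identical – which is exactly why a broader evidence base matters.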

Have Genuine Skill-Specific Career Paths – This is a fairly well-trodden road, but still a worthwhile one and still one that is poorly executed. Organisations need to recognise the value of specific skills to them – and accept that sometimes people can still be worth the big bucks without having to manage people. By opening up different career paths (rather than the typical process of starting as a worker, becoming a skilled worker, then managing workers), organisations can make the most of their people, while rewarding them appropriately. One of the skills that is often forgotten about is the ability to manage well – this is a skill in itself and should be recognised individually. Just because someone is a great manager, it doesn’t mean they’ll be a great strategic thinker. This is where I think Joel Spolsky’s approach can help managers; acknowledging a distinct “administration” function of a business leads towards the acceptance that specific skills are needed to keep everything else operating.

Typically organisations start this process in good faith, but let it become contaminated as other areas of the business want to have roles at the newly created level – until eventually it just becomes another level across the whole organisation (which is a worse-than-neutral result, as it simply adds an extra layer into the hierarchy). The purpose behind creating a specific career path must be clearly defined and that purpose must be maintained for the organisation to get a payoff.

Probationary Promotions – This is a cultural (maybe even societal?) challenge, as it would remove the big-deal nature of promotion and make movement both up and down an organisation more common. It is, however, clearly the best way to know whether someone can really do the job; and lets them know whether they want to!

Generate Movement – There's a simple solution to people getting stuck in jobs they're ill-suited to: create a system that forces movement onto people. This has downsides – you're also moving people out of jobs they're very well suited to, some people like feeling settled and you break up teams that work well together – but it does avoid people staying in their "maximum incompetence" role. A number of firms do this in a couple of ways: some companies fire and promote fixed percentages of their workforce on a regular basis, whilst others just have mandated 'rotation' periods (e.g. everyone has to move role every two years).

Taking Responsibility for Yourself – We all need to look after ourselves a bit more and really think about our career moves and what we want from them. In reality, I get that it's very difficult not to get swept up in the excitement of being offered a new position and the societal belief that moving up an organisational ladder is a good thing (never mind the money!). We need, however, to go into new jobs with our eyes open and as prepared as possible – we should think about the possibility of promotion early and consider what we want from work. We can't just lay the blame at our employer's feet if we find ourselves in a job we don't enjoy – we have to take some responsibility.

So what do you think? Are there any other reasons the Peter Principle exists? Is it, to some extent, inevitable? And, most importantly, is there anything else we can do to mitigate it?

The Fallacy of Precision And Why We Shouldn’t Worry About It

Last week I was involved in some forecasting work – both looking at projections for the outside world and the delivery we'd expect within the organisation. As discussions persisted in pursuit of spurious accuracy, I was reminded of a few things: 1) humans crave certainty (or the illusion thereof); 2) we are consistently pretty bad at estimating both the impact of events and their likelihood; and 3) that doesn't matter, because the real value of planning and forecasting isn't in creating something perfect, but in the act of planning itself.

A Craving for Certainty

One of the basic drives of mankind is the desire to have an explanation for what's going on in the world (and pretty much any explanation is better than none). In an evolutionary sense, it's easy to see why – you'd be paralysed into total inaction if you didn't quickly abstract patterns from the environment to inform your future actions. Without drifting too far into philosophy, inductive reasoning can never provide absolute proof, but someone who chose not to use it would be entirely incapacitated.

Arie Kruglanski described this as a desire for 'cognitive closure' and defined that as "individuals' desire for a firm answer to a question and an aversion toward ambiguity". Kruglanski conceptualises this in two stages: 1) "seizing", where we grab information in order to come up with an explanation ('urgency tendency'); and 2) "freezing", where we try to hold onto our explanation for as long as possible ('permanence tendency'). Alongside this conceptual work, research by Tania Lombrozo has shown that we react to uncertainty by spontaneously generating possible explanations, and, more interestingly, we let this bias our decision making – once we have an explanation we start to assign value to it, as if it were evidence itself.

There’s also neurological evidence that shows the impact ambiguity has on us. A 2005 study showed that levels of ambiguity correlated positively with amygdala and orbitofrontal cortex activation, and negatively with the ventral striatum. This reflected ambiguity creating a burden on our decision making faculties, leading to reduced reward sensation and even causing a fear response.

This leaves us desperate to find a precise answer when faced with uncertainty, and when we come up with one we really don't want to let go. To counter this, we need first to be aware that we're craving an answer, even when it's impossible to have a definitive one. Second, we have to try to accept a more open-ended solution – for example, using a range when calculating benefits or presenting options based on different scenarios. Third, when you're coming up with possible hypotheses, note them all down; it helps keep you aware that there are other possibilities, enables you to moderate your forecasts (by comparison with what the other possibilities would suggest) and forces you to think about the evidence that drove you to decide upon 'the' explanation (so that you can review whether that evidence still holds up as time passes). Fourth, monitor real-world outcomes against the world your 'explanation' would lead to – this isn't meant to get us down, but to force reality upon us; when we come to making our next explanation, we often forget how accurately (or not) we've forecast in the past.

An Inaccuracy in Estimation

This is a huge topic – much too large to cover in any detail here – so I'll only highlight a few of the ways in which we make mistakes. It's worth noting that I'm not denying the utility of some of these biases; they can enable us to take action when inaction might prove fatal, make us more optimistic (and hence driven to take action) and help us make decisions quickly.

Illusion of Control – We believe that we have more control over events than we really do. Langer showed that, even where we know – rationally – that outcomes are random, we still feel we have control. One experiment either allowed participants to choose their own lottery ticket or gave them a ticket that had been selected for them. The two groups were then offered the opportunity to switch their ticket and enter a different lottery, which offered better odds of winning. Those who chose their own ticket at the start were far less likely to switch into the new lottery, despite the increased chance of winning – they appeared to think that they were "good" at choosing. This effect is seen in a range of scenarios and differs from general over-optimism – it is a belief that we have control over things and that this improves the likelihood of positive events.

Overconfidence Effect – We are more confident in our own judgement than we should be. There are three elements to this: 1) overestimation of one's own performance; 2) overestimation of one's ability relative to others; and 3) overestimation of how precise our estimates are.

Confirmation Bias – We look for evidence that supports our hypothesis. This means we constantly build support for our theories and ignore anything that would disprove them.

Availability Heuristic – The reliance upon what comes to mind in making a judgement. This makes us biased towards things that spring to mind, such as those that are particularly salient, have happened recently etc.

Overall, there’s not too much we can do to counter these biases, apart from being aware of them and trying to mitigate them through our awareness. For example, to counter the availability bias you can try to separate out the assessment of relevant topics from the forecasting itself – by drawing out all those topics you can make them all available. Or to counter the overconfidence bias, you can bring in others (outside your normal working area) to assess those same tasks or events (although maintaining awareness that they’ll also be overconfident in their judgement = and potentially it’ll be even more extreme).

The Real Value of Planning (or Forecasting)

One of the biases that I left out above was the "planning fallacy" – the consistent underestimation of the amount of time it takes to deliver a project. There are a number of possible explanations for this, including some of the biases above (illusion of control, overconfidence and availability). Not only do we show the planning fallacy all by ourselves, but organisations often encourage it even more. We tend to underestimate delivery time because we don't put enough 'slack' in our plans, yet managers, customers and executives want to drive delivery plans to be as short as possible – they want a justification for every block of time, and "because something is likely to go wrong" doesn't normally cut it. Further, we're normally asked to, seemingly sensibly, build our plans on the set of outcomes that seems most likely. So we take each individual action and judge whether it's more likely to be delayed or not – and most individual events are more likely to go smoothly than not. The problem is that, in aggregate, it's likely one of the things will go wrong – we just have no idea which one. That means the most likely single set of events may well be everything going right, but the chance of that could be, let's say, 20% – every other individual set of events is less likely, but the chance of something, somewhere going wrong is still much higher than the chance of nothing going wrong.
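To make the aggregate arithmetic concrete, here's a small sketch (with assumed, illustrative numbers) of how tasks that are individually very likely to go smoothly combine into a plan that probably won't:

```python
# Assumed, illustrative numbers: each task independently goes smoothly
# with probability 0.92, and the plan contains 20 such tasks.
p_smooth = 0.92
n_tasks = 20

# "Everything goes right" is the single most likely outcome...
p_everything_right = p_smooth ** n_tasks
# ...yet its probability is dwarfed by the combined chance of all
# the ways at least one task can slip.
p_something_wrong = 1 - p_everything_right

print(f"P(every task goes right): {p_everything_right:.0%}")  # ~19%
print(f"P(at least one delay):    {p_something_wrong:.0%}")   # ~81%
```

Each individual task justifies an optimistic estimate, yet a plan with no slack at all succeeds only about one time in five under these assumptions.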

The rest of this post might seem to be a bit of a downer, but here's the positive – it's not as important to be accurate as we feel it is. The process is more important than the answer you come to. At a superficial level, it's not worth worrying about the last few percentage points when developing estimates – we tend to be building on top of so many assumptions that it's a false economy (it's a good example of Pareto's Law – most time is often spent on the fine, and low-value, pseudo-accuracy). Thinking in terms of a realistic range is much more helpful than spending hours generating a falsely precise figure. If the world progresses in the way you expect then you'll be in the right ballpark – if it doesn't you'll be off massively anyway.

At a deeper level, there is value in planning because you mentally simulate whatever series of events or plans you're looking at – as Dwight D. Eisenhower said, "No battle was ever won according to plan, but no battle was ever won without one… Plans are useless, but planning is indispensable". The process of planning (or forecasting) forces you to think about the factors in play – you have to think about the requirements, the dependencies, the risks and how you might mitigate them in more detail than you would otherwise.

There's a skill to doing this properly – your ability to mentally simulate a scenario is limited by two key factors: 1) your imagination and 2) your knowledge. Both of these can be helped by bringing other people into your planning process (you can use the time saved by being less concerned about the fine detail of your benefit or forecast figures). You need diverse groups of people to get the most out of planning – to broaden imagination it's important to bring in people with very different experiences to your own (we tend to think within our own paradigm, which is set by our experiences) and to broaden knowledge we require subject matter experts across the plan's elements (e.g. if you were building a football stadium, you don't only want people who've built football stadia – they're useful, but you also want people with knowledge of delivering large construction projects, of the leisure/entertainment industry, of turf and the conditions that affect it etc.).

The more you’ve simulated both your preferred option and your range of options, the better prepared you are to deliver the project – whether it goes to plan or (infinitely more likely) not.

When things go wrong we often worry, unsurprisingly, about things having gone wrong. But that doesn’t generate progress for the project (although it might deliver some learning). Mental simulation leaves you more prepared to handle events when they go off the expected path because you don’t have to rely on impulsive decision making – you already at least have a rough idea of what to do.

Worry Less About the Output and More About the Process

Planning is a hugely useful thing to do, but only when time and effort are spent in the right way. Desperately hoping to get your delivery schedule right to the exact day or your benefits down to a precise figure leaves you on a hiding to nothing – there are too many unknowns in the world and we have all sorts of biases in our reasoning. We just have to be more laid back about that (as well as relaxing about whether people meet our own projections – if we consistently over-deliver, then it's because we're consistently under-promising).

When Getting Paid Makes You Worse at Things

We’re always looking for our dream job and whatever that consists of – money, power, pleasure or any other motivator. For lots of us, it’s trying to make enough money doing whatever we enjoy most in the world. Research into motivation, however, suggests this might be even more difficult than it sounds.

For many centuries, science (at least Western Science) only recognised two forms of motivation: biological (i.e. the need to drink, eat and reproduce) and external motivation (i.e. the rewards and punishments delivered by the environment you find yourself in). Logically that felt sensible and, as a paradigm, could be used to explain the vast majority of behaviour.

The arts, as is often the case, were ahead of the game. In 1876, Mark Twain wrote "The Adventures of Tom Sawyer", in which Tom tricks his friend into doing a task he finds particularly dull (whitewashing a large fence) by suggesting that it is actually the most exciting thing anyone could possibly do. Following on from this, Twain wrote that "Work consists of whatever a body is obliged to do, and that Play consists of whatever a body is not obliged to do". Further, Twain observed that the wealthy choose to do things for fun (e.g. driving horse-drawn passenger coaches) that others are paid for, "but if they were offered wages for the service, that would turn it into work and then they would resign". Daniel Pink – whose book "Drive" I would highly recommend and which inspired various parts of this post – even decided to name this change in motivation, in response to whether something is mentally classified as work or play, the "Sawyer Effect".

Twain had noticed something that it took science a fair few more decades to realise – there was a third form of motivation. There is an intrinsic pleasure in doing certain tasks, which can’t be explained by either biological need or external stimuli (although it can be influenced by someone convincing you that a task is really exciting!). In fact, I was prompted to write this post because it is that intrinsic motivation that makes me write; I enjoy the process of writing and I find it a rewarding experience in itself.

Firstly I’ll have a look at some of the science behind this form of motivation, before offering a few practical tips on how we can cultivate this motivation in the workplace.

The Science

Anyway, when science got round to catching up, it did so by chance – another repeating trend. In 1949, Harry Harlow (alongside Margaret Harlow and Donald Meyer) was studying how different organisms learn, on this occasion by studying rhesus monkeys (before launching into his accidental discovery, it's worth noting that Harlow did make huge progress in the studies of affection and learning). These monkeys were to solve a simple mechanical problem and, in order to get the monkeys comfortable with the equipment, the puzzle was placed in their cages. To the surprise of the experimenters, the monkeys immediately focused on solving the puzzle. They seemed to be showing both determination and joy. By the time the experimenters wanted to test them, they were already highly capable of solving the problem, due to their unexpected practice habits. These monkeys had not been shown how to solve the puzzle, they'd not been encouraged to do so, they'd received no reward and yet they still did it. It was in response to this that Harlow raised this intrinsic form of motivation – the "performance of the task provided intrinsic reward".

Harlow wanted to test how strong this motivation was – could it beat out one of the two conventional motivations? He ran the same experiment again, using the same test, but providing a food reward to the trial monkeys. To his surprise the monkeys with a food reward performed worse than those without – they made more mistakes and were slower. The introduction of a reward actually reduced performance, a potentially revolutionary finding. But it was a bit too anti-establishment and no progress was made for another 20 years, when a scientist called Edward Deci decided to push at the boundaries of motivation again – this time with humans.

Intrinsic Motivation in Humans?

Deci wanted to test whether paying someone to perform a task made it intrinsically less interesting. To do this he used a Soma cube – a set of pieces that can be arranged into different shapes – and a group of university students, whom he divided into two. All the students were presented with three drawings of Soma configurations and were asked to assemble them. When they had configured two, Deci left the room to, allegedly, get a fourth drawing. In reality, this was the key stage; he wanted to know what the students did when he left the room. Would they keep playing with the cube or would they do something else (various papers and magazines were left around as possible distractions)? Both groups of students had to do this every day for three days.

A Soma cube sofa

The difference between the two groups of students was that one would never be paid based on their performance – they received no performance related pay on any of the three days – while the other received no payment on day 1, did receive payment on day 2 and were then told that the money had run out and they would not be paid on day 3.

During the period Deci was out of the room (8 minutes), the unpaid group played with the Soma cube for about 4 minutes on each of the three days. The other group spent about 4 minutes on the Soma cube on Day 1, behaving similarly to the unpaid group. On Day 2 (when they were paid) they got, understandably, a lot more interested in the cube and played with the puzzle for over 5 minutes – they were trying to get a head start on the third and fourth puzzles. This was as you'd expect. The big test, however, was Day 3. This group, not paid for Day 3, lost interest in the cube, playing with it for less than 3 minutes – less than when they were paid, but also substantially less than the group who'd been unpaid all three days.

The payment of money had reduced the intrinsic interest in solving the puzzle. It delivered short term motivation and enhanced focus on Day 2, but reduced interest in it afterwards. Money was like having a sugar boost; it made things better for a while, but worse afterwards.

What can we do?

Alongside other research (such as Glucksberg, 1962 and 1964 – which showed that offering rewards reduces creativity, as people narrow their focus and reduce their ability to think laterally), this suggests that the typical offering of most employers – if you do this, then you'll get paid that – isn't the best, or most consistent, form of motivation (particularly as you have to keep handing over more and more money). It is, however, the easiest and most visible form of reward, so organisations have stuck with it.

There are other options though. Given that we now know that making a task into 'work' reduces both productivity and creativity, we need to find ways either to make tasks less work-like or to specify less of what has to be completed at work.

Let People Do More of What They Want

Google’s famous 20% time – where it allows its employees to use 20% of their time on their own projects – makes sense when combined with our learning above. Valve, the game developer, might well argue that 100% of employee time is spent on work that has some intrinsic motivation; it, theoretically at least, has a flat structure where people are free to work on whatever they want (the idea being that the best ideas draw the most people together).

By giving people this freedom you're allowing them to spend time on what intrinsically motivates them, so they can deliver more productively and creatively. Now 20% may well be too much – it might not be industry-appropriate or it might be too big a first step – but trying out a smaller percentage makes sense. It's also worth testing, so run a trial; you don't have to commit forever. Finding a way to give employees time to work on things that both help the organisation and are their own choice is a real win-win.

Give People Purpose, Not Tasks

General George Patton, a US commander during the Second World War, said of his experiences: "Never tell people how to do things. Tell them what to do and they will surprise you with their ingenuity." By giving people direction, but not specifics for how to do their work, you can prevent tasks becoming 'work'. This also lets people play to their strengths and bring in other people to help them reach their goal. This additional freedom, or even sense of freedom, helps maintain stable intrinsic motivation. Further, it means people have a clearer understanding of why they're doing their work – and we're predisposed to wanting to know "why?".

Create Diverse Teams

By creating teams with a wide range of skills and experiences, you generate an environment where interaction is encouraged (primarily because it’s the only way not to get left behind). This means people are often brought into the work of others, so everyone spends time working on topics outside their normal remit. This serves two purposes: you develop a greater sense of buy-in to the team’s purpose, enabling more understanding of, and motivation towards, that goal; and employees get to use previous experiences to solve current “non-work” (for them) problems. Together this means people are more likely to maintain their intrinsic motivation at work, amongst many other non-motivation related benefits.

Create a Peer-to-Peer Benefit System

As we read earlier, tasks become work when you're rewarded for doing them. For humans in an organisational environment it's a bit more complicated – it depends where the reward is coming from. It also depends on why you feel you're being rewarded. If your boss gives you a pay rise because you met all the targets you were set, the tasks you were doing were most likely 'work'. If your peer gives you a bonus because they were really impressed with something you did, then that's very different – you feel like you're being recognised for your own skills, abilities and personality rather than for doing specific tasks. By creating a peer-to-peer system, you remove the hierarchical element of the reward and avoid the removal of intrinsic motivation.

Involve People in Goal Setting

If you can involve people in setting their own goals, while keeping them ambitious and aligned with the organisation's objectives, then those goals become internalised. By changing targets from something that is thrust upon employees to something they have had a role in deciding, you make the objective meaningful to the individual – they were involved, so must think it is a 'good' thing (and due to cognitive dissonance, if they believe that they were involved, they will come to think that the target is worthwhile, even if it isn't what they would have chosen beforehand). This can make financial and intrinsic motivation align, leading to great performance.

The Dark Triad – What Can We Learn from Criminals and CEOs?

To some people, given the financial crisis that started in 2007, there isn't a huge difference between those involved in organised criminality and those leading the largest commercial organisations in the world. There's a growing body of research suggesting they may be more right than they could have anticipated – there is a group of personality traits, known as "the dark triad", that links those with the greatest business success to those involved in the most calculated criminality. This research also suggests there's something we can learn about how people become senior within an organisation.

First, though, an explanation of the dark triad. This model was established by Paulhus and Williams in 2002, when they showed that Narcissism, Psychopathy and Machiavellianism were distinct traits. These traits are:

  • Narcissism – an excessive admiration for oneself
  • Psychopathy – a lack of feelings of guilt and empathy, alongside excessively bold behaviour
  • Machiavellianism – the belief that the end justifies the means and that morality doesn't come into decision making.

It’s fairly unsurprising that these traits are related to propensity to commit certain types of crime (e.g. Mathieu et al, 2013) – if you believe that you’re worth more than other people, don’t feel guilt, don’t have the normal inhibitions that sub-pathological (i.e. “normal”) people have and are prepared to do anything to achieve what you want, then it becomes logical to commit crime. So it seems like we should be keeping a close eye on these kinds of people, keeping them out of positions of power and doing all we can to eliminate these traits.

Given the above, it's interesting (though, given natural selection, not that uncommon for pathological conditions) to hear that having higher than average levels of these traits leads to a wide number of positives – for the individual, rather than the group; as Hogan, 2007, states, the dark triad traits don't help people to "get along", but they do help people to "get ahead". They:

  • are perceived more favourably in the first hour after meeting someone (Paulhus, 1998)
  • have more sexual success (as defined by Linton and Wiener, 2001 – that definition was number of partners and/or children) – though there are some other elements to this, such as being more promiscuous and being less concerned about leaving a child with a single parent, which make me slightly hesitant about calling it "success"
  • are perceived as more attractive (Dufner, Rauthmann, Czarna and Denissen, 2013)
  • obtain more resources than others, leading to competitive advantage (Campbell, Bush, Brunell and Shelton, 2005)
  • have a generally happier and more positive disposition (Morf and Rhodewalt, 2001)
  • have, if highly educated, a higher income (Turner and Martinez, 1977) – although for less educated people the correlation was reversed.

So why do these things help? I’ll try to use some of this evidence to give a quick breakdown of each trait, a summary of how they combine and some things we can take from each trait.


Narcissism

Remember all those times you wished you’d done something, but didn’t take the risk? With a bit more narcissism, you probably would have. In an organisational environment this has a number of advantages: a comfort with networking and getting yourself noticed; the confidence that helps in sales and presentations; the self-belief to make decisions when surrounded by uncertainty; and the ego to shift paradigms and innovate (most people stay within the existing framework for fear of being criticised for standing out from the crowd – as seen in the bystander effect). Perhaps most importantly, a healthy dose of narcissism gives people the confidence to seek out and apply for new opportunities. The biggest determinant of moving up an organisation or industry is putting in that application – by definition, those who don’t apply can’t get the job. And if you believe that you’re the best, it doesn’t matter if you get rejected – it must be someone else’s fault, because you know that you’re amazing.

This is something we could all learn from – the only way to get to wherever you want to be is to take a chance and not worry about the outcome. When you put your application in, your chances might be 1%, but that’s 1% more than if you don’t apply. If you get rejected, you’ll probably get some feedback on how to get a similar job next time as well, so it’s really a win-win. Naturally, however, we’re scared of rejection – so next time you see something that you really want, take a page from the narcissist’s playbook and realise that the positives almost always massively outweigh the negatives.


Psychopathy

There’s a wealth of evidence about how emotion drives irrational behaviour, so it follows that psychopaths make better decisions (in a rational, rather than social, scenario). Research by Osumi and Ohira (2010) demonstrated this through the ultimatum game. This game hinges on putting the initial power in one player’s hands (whether a genuine participant or a scientist) and testing the rationality of the second player’s reaction. The first participant is told there’s a pot of money (let’s say £10) and can propose how it’s split between the two of them (e.g. £7 for them, £3 for the other person). If the second person accepts the offer, the money is handed out in that split. If they reject it, both people get nothing. Given that this is a one-off with someone you don’t know, it’s rational for the second person to accept any offer (because otherwise they’ll get no money at all). Most people, however, reject offers below a certain level – they see them as unfair, so would rather both got nothing. Alongside this they display an increased “electrodermal response” (they sweat more), so there’s a reaction at both a sub-conscious and a conscious level. Osumi and Ohira, on the other hand, found that people with higher levels of psychopathy showed both a lower electrodermal response and more rational decision making – they accepted offers that non-psychopathic people rejected.
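The responder’s choice in the ultimatum game can be sketched as a simple decision rule. This is a minimal illustration, not a reproduction of the study: the fairness threshold of £3 is an assumption for the sake of the example, not a figure from Osumi and Ohira.

```python
# Ultimatum game: the proposer offers a split of a fixed pot; the
# responder either accepts (both get the proposed amounts) or
# rejects (both get nothing).

POT = 10  # pounds in the pot


def responder_payoff(offer: int, accept: bool) -> int:
    """Responder's payoff: the offer if accepted, nothing if rejected."""
    return offer if accept else 0


def rational_responder(offer: int) -> bool:
    """A purely rational one-shot player accepts any positive offer,
    because rejecting guarantees a payoff of zero."""
    return offer > 0


def fairness_driven_responder(offer: int, threshold: int = 3) -> bool:
    """A typical participant rejects offers below a fairness threshold,
    sacrificing their own payoff to punish the proposer.
    (The threshold of 3 is an illustrative assumption.)"""
    return offer >= threshold


for offer in range(POT + 1):
    rational = responder_payoff(offer, rational_responder(offer))
    typical = responder_payoff(offer, fairness_driven_responder(offer))
    print(f"offer £{offer}: rational player gets £{rational}, "
          f"fairness-driven player gets £{typical}")
```

For low but positive offers (£1 or £2 here) the rational rule always earns more than the fairness-driven one – the pattern the study associated with higher levels of psychopathy.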

Those findings transfer to everyday life – we make decisions based on emotion and impulse all the time, and often regret them afterwards. The bigger the emotion behind a decision, the bigger the advantage someone with psychopathic traits has – if I were deciding whether to fire 1,000 people I’d think about how that would affect them and their families, obscuring my ability to decide what’s best for the organisation. A higher level of psychopathy would help me make a better decision (because, ultimately, my company going bust helps no-one). Further, psychopathy makes people bolder, as they are less worried about what other people think, delivering a similar stream of advantages to those noted for narcissism (e.g. not having nerves when pitching to others) as well as a more exploratory approach. For a much more detailed view, see Kevin Dutton’s “The Wisdom of Psychopaths”. Learning to make those cool-headed decisions would be a boon to us all, and the experiment above highlights one way to do it – take a step back and let the unconscious response (and any somatic markers) die down before making your decision.


Machiavellianism

Given Machiavellianism’s (Mach) association with doing everything necessary to achieve goals, it’s unsurprising that those displaying high Mach also display higher motivation than others, are more focused on their goals and work harder to get there (Jones and Paulhus, 2009). The same research showed that high Machs enjoy situations that offer the opportunity for manipulation – situations that other people often find awkward, such as negotiations and confrontations. Their focus also makes them decisive in delivering what they see as the right result, even when there are difficult decisions to be made or problems are encountered – the mental construct is that it’s worth doing anything to achieve the end result, so everything else is only a bump in the road.

The obsession with their goal also enables high Machs to improve their skills; by clearly understanding their goals they can quickly assess where they’ve fallen down and improve next time. This may well explain why high Machs often deliver successfully: they have always thought about life as a series of goals, while the rest of us tend to meander in a much less directed way. It also partly explains high Machs’ ability to charm others quickly – they understand that people enable them to achieve what they want, and they’ve honed how to make that happen. In terms of success (removing any moral element from that definition), manipulating the truth is also often to an individual’s advantage – for example, when applying for a job. The reliance on interviews and written examples leaves organisations vulnerable to those who embellish their achievements, particularly as organisations often fail to check the facts with the candidate’s current employer. Additionally, the more people you tell these stories to, the more they become established as fact, by virtue of everyone knowing them (in an attempt to avoid libel charges I won’t name names, but there are a number of famous examples of people making a career off the back of these kinds of lies).

Combined, these traits can leave high Machs ahead of the pack, but often to the detriment of the whole (as Campbell, Bush, Brunell and Shelton found in the research noted above). So what can we take from high Machs? I think we can take their utilitarian view of the world – sometimes we have to be prepared to do “bad” things in order to achieve the truly “great” things. It’s very difficult to keep perspective here and not become obsessive about your goal, but we naturally don’t like making decisions that cause something bad to happen and, on top of this, we show hyperbolic discounting (we disproportionately value things that happen sooner over things that happen later). There is also something to admire in high Machs’ approach to networking – they don’t feel the same timidity that others do and don’t hold the same pretence about its purpose (an event created with the intent of bringing lots of executives together is obviously meant as an opportunity to network, rather than for making friends).
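Hyperbolic discounting is usually modelled with a simple one-parameter formula, V = A / (1 + kD), where A is the reward, D the delay and k an individual’s discount rate. A minimal sketch (the rate k = 0.05 per day is an illustrative assumption; measured values vary widely between individuals):

```python
# Hyperbolic discounting: the subjective value V of a reward of
# amount A delayed by D days, using the standard one-parameter
# model V = A / (1 + k * D).

def discounted_value(amount: float, delay_days: float, k: float = 0.05) -> float:
    """Subjective present value of a delayed reward."""
    return amount / (1 + k * delay_days)


# £100 today keeps its face value; the same £100 a year away feels
# worth only a few pounds – which is why distant goals struggle to
# compete with immediate comfort.
print(discounted_value(100, 0))                 # 100.0
print(round(discounted_value(100, 365), 1))     # 5.2
```

The steep early drop-off (rather than the gentle, constant-rate decay of exponential discounting) is what makes "sooner" feel disproportionately more valuable than "later".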

The Whole Picture

Having run through each trait, it’s easy to see how these can combine, in the right quantities, into high business performers (they often have a negative impact on non-working life – for example, those displaying dark triad traits have higher rates of relationship breakdown (Morf and Rhodewalt, 2001)). A person who believes in themselves, can make big decisions, understands how to work through others, seeks out opportunities to develop their career and thrives in tense situations sounds like an ideal leader. It’s no surprise that a high number of executives display these traits (see a range of work by Oliver James). It also helps to explain why many senior executives have trouble outside their working life – their “working personality” strengths are their social life flaws.

But before we get carried away with the organisational performance of those with dark triad personalities, it’s important to note that their success is often short term and often comes to the detriment of the company as a whole (Furnham, 2010). All we can do is try to take the bits that work, discard the bits that don’t and try to watch out for the snake dressed in a suit…

Why can’t Metrics just be Indicators?

Humans like to measure things. It gives people a sense of understanding, and a related feeling of control. On the topic of measuring things, I’ve been through a range of opinions: I started believing that we just needed to improve performance metrics, so they better fit with their aims; moved to a nihilistic view that the focus on targets was negative, so removing measurements would improve performance; but I now believe that what’s most important is a level beyond that – understanding what the measurement is really measuring, what any results mean and why we should care.

Firstly, though, why the spread of the culture of measurement? Organisations do this for a number of reasons: it allows them to use targets to motivate staff; as entities get bigger, it’s more difficult to see what’s going on in your business; automation allows us to measure more than ever; they are easy to show to shareholders and other stakeholders; and, if I dare to be so bold, the more you measure, the more chance of finding a metric showing a positive trend.

So what’s the problem? Primarily that these measurements can lead to more and more negative behaviours – they can drive different parts of organisations to act against each other, actions that don’t fit with overall aims, time consuming analysis and unnecessary stress.

These issues arise for reasons such as: the desire to measure the performance of the smallest possible organisational unit clashing with the need to maximise the organisation’s overall delivery (which can lead to a lack of co-ordination between areas of the business, creating messy hand-offs and reducing overall efficiency); metrics not measuring what they’re meant to (or people thinking they measure something different); the clouding of a metric’s meaning by uncontrollable or unforeseen variables; a misdirection of incentives and/or focus; unnecessary proliferation of data; and the creation of too many, or inappropriate, targets.

Those kinds of considerations led me to my first standpoint – we just need to invest more time and money in measuring the right things, and that will lead to better results. It’s true that most organisations massively under-invest in their approach to performance metrics. It’s simple to keep using the same ones: it allows easy comparison with other years (and, potentially, across organisations) and it’s easy for everyone to understand. But often the longer a metric has been in existence, the less we remember about the subtleties of what’s really being measured, and the more we try to flex how we use it. Ultimately, however, I could see no way to reconcile all the different motives for metrics into one cohesive model.

Bill Walsh – the 3-time Super Bowl Champion

This led me to move towards the Bill Walsh and John Wooden philosophy that the “score will take care of itself”. This supposes that focusing on any figure will lead to negative results, even if that figure is the bottom line (in their case, the final score). They suggest that the focus should be on processes rather than outputs: if all the processes are performed as well as possible, then the final result will take care of itself. It’s a powerful and empowering approach. It brings activity right down to its basic components and puts the emphasis on individuals, rather than numbers. It defines success as each individual getting as near to their potential as possible, removing ability biases that can demoralise the less talented (because they can never “win”) and make the more talented lazy (because they can “win” without delivering to their potential). It’s also the approach that I would take (and do take) when coaching sport. It makes it easy to be fair, helps push people and removes pressure.

However, it can’t work everywhere – in a large organisation you can’t see every employee, so you have to rely on other people’s judgements. But that then leaves you unable to compare performance across areas because people’s standards are all different. It also works in sport because, however much you try to ignore it, there is ultimately a way to quantify how you’re doing – just look at the score. There is a clear motivation and there is quick feedback on how the team is performing. This isn’t true of a team within a larger business. Finally, it leaves you with only the most basic outputs to communicate to others – if you only have your balance sheet then it’s very difficult to explain what’s really happening.

Thinking about metrics didn’t get me too far, so I started thinking about thinking about metrics. Our workplace culture has led us to view every metric as a key performance indicator and every key performance indicator (KPI) as an output in its own right. The easiest way to justify success is to show a number that’s getting better and the clearest way to be seen as failing is to oversee a range of figures all getting worse. The more we measure the more figures that become targets and the more time we have to spend focusing on how to make them better, rather than trying to achieve real organisational goals.

We’ve forgotten that KPIs are only indicators – they show us what is happening within an organisation and help us to identify areas where we’re performing well or poorly. A change in any metric is an insight into what’s happening, but only as much as any other source of information, such as the organisation’s overall delivery or the reports you get from your staff (numbers can be just as biased as any other sort of information). And, even worse, we increasingly believe that anything we can measure must be important.

This is what ultimately leads to the negatives that I noted earlier – an over-reliance on numbers, a misunderstanding of what they reflect and a failure to explore what the numbers are showing us. To shift back to American sports, an NBA team might have their interest piqued by someone who can jump 45 inches high and run the three quarter sprint in under 3 seconds. But they’d then go and watch the player actually play.

This limits what people can achieve – they are confined to acting in ways that increase “their figures”. Not only does this stop people doing what’s right, it disengages them as well – they begin to feel a lack of trust and an inability to do what they think is best. It can also lead to us rewarding people who aren’t the most deserving – if two salespeople both pitch perfectly, but one pulls in a million-pound deal while the other can’t make a sale, we’d see the first rewarded when really they’ve both performed identically – leading to further disengagement and encouraging inappropriate behaviours (like trying to close other people’s sales).

Some organisations are better at this than others, but I believe the vast majority can improve. That can only happen by seriously examining the figures being generated and reviewing how they’re used. This takes time and is an investment, but it enables a workforce to deliver more and leaders to make better, more informed decisions. The change has to be top-down – only once you stop being assessed on non-essential metrics can you free others from them.

Do I believe this is an easy thing to do? No – it’s a big cultural shift and removes the illusion of having sight of all parts of a business, but I do believe it’s possible. And we still need to look for better metrics, which I’ll talk about in later posts, and maintain a focus on the process. Overall it reflects where a lot of organisational issues come from; in line with Nietzsche’s quote above, everything else starts to become more important than what we actually set out to achieve.

Why Organisational Productivity Matters

This blog will look at factors that affect the success of an organisation; not the basics of how to run a business, such as pricing models, cashflow cycles or accounting principles, but how organisations can make the most of their people, the environments that cultivate that and how we can improve ourselves in the workplace too.

I’ve always had an interest in why people do the things they do and, since spending time working in a large organisation and increasing my academic knowledge, I’ve become increasingly interested in how workplace environments affect employees. I hope that discussions arising from my posts will help me explore and deepen my understanding of these issues, as well as yours.

With the economic climate still fragile, business has an important role to play in global prosperity, while the public and voluntary sectors continue to keep the state running, despite cuts in funding. While senior executives are often looking for paradigm-shifting board-level decisions that lead to immediately visible change (and easy to deliver, impressive-sounding announcements), I believe that creating an environment that helps people deliver gives organisations the greatest chance of success.

I’ll cover a wide range of topics here from the conventional HR areas – like managing performance, recruitment and leadership – to those more normally covered in business academia – such as metrics, business structures and strategies, and how new trends are impacting on organisations.

I look forward to hearing from you and appreciate thoughts on my posts and any areas where I should focus in future – as Dan Akerson, former CEO of General Motors, espoused, there’s no progress if everyone just keeps saying the same thing.