Management Tool – Performance Quantification Framework
Quantification: the act of counting and measuring that maps human sense observations and experiences into members of some set of numbers
The problem with measuring performance
A common challenge of performance assessment is measuring both the magnitude of change in a person's performance and the direction of that change.
For the purpose of our discussion, here are some quick definitions:
- Direction: Whether a person is getting better or worse
- Magnitude: How much better or worse they are getting
As the old saying goes, you can’t manage what you can’t measure. If you’re trying to get someone to improve at something, you need to be able to not only determine whether they are moving forward (or backwards), but also – and equally as importantly – by how much.
Someone running up your Slope of Expectations needs to be managed very differently than someone who is strolling up … or sliding down.
This is relatively easy to solve for objective criteria. You can measure how many times a person broke an integration build, the number of times the infrastructure under management went down unexpectedly, or whether a project was delivered within acceptable time, budget and quality thresholds.
The problem becomes quite tricky, however, for measuring behavioural criteria, which are generally subjective to begin with. For example, performance plans usually contain behavioural goals, such as demonstrating leadership, or one of my personal favourites: tenacity. I mean, how do you measure leadership or tenacity?
A Performance Quantification Framework
One solution to this problem is to put a framework around how to quantify, measure and record performance indicators, both outcome-based and behavioural. The foundational attributes of such a framework are:
- Relevance
- Definition
- Frequency
- Score
- Reasoning
Using these attributes in conjunction with other frameworks (such as the Craftsman Model), you can create a valuable mechanism that provides real insight into a person's performance. You basically take each performance indicator for a given role at a career stage and apply the attributes against it. While this might take a little bit of time up-front to set up, it will yield great benefits when you sit down to do performance appraisals later.
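To make this concrete, here's one way you might model an indicator with the five attributes attached to it. This is a minimal sketch, not prescribed tooling; the class and field names are my own, and the example values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class PerformanceIndicator:
    """One performance indicator for a role at a career stage,
    carrying the framework's set-up attributes. Score and Reasoning
    are recorded per observation rather than per indicator."""
    name: str         # the indicator itself, e.g. "Leadership"
    relevant: bool    # Relevance: does this metric make sense here?
    definition: str   # Definition: what the behaviour looks like in this role
    frequency: str    # Frequency: how often you expect to observe and record it

# Hypothetical example for a journeyman-level role
leadership = PerformanceIndicator(
    name="Leadership",
    relevant=True,
    definition="Coordinates the team through a design decision "
               "and takes ownership of the outcome",
    frequency="fortnightly",
)
```

Setting these up once per role and career stage gives you a checklist to walk when an appraisal period starts.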
Let's look at each of the attributes in detail.
Relevance
Carefully assess whether the metrics being measured make sense, both in general, and in your environment.
A lot of times I’ve found that these metrics exist because they’re a remnant of the past. Someone copied them from a self-help book they were inspired by, and the metric became embedded in the performance measurement process. That, or someone in management needed labels for check-boxes on the performance appraisal form and they thought tenacity was a good candidate.
If you find this idea amusing – it's actually not. I've worked in places where I've had to provide examples of, and justify, my tenacity in annual performance reviews. I can tell you it's a mind-numbing task.
Definitions and guidelines
To be able to determine the magnitude of change, first you need to agree on the definition. This can be done by establishing clear guidelines and examples of what such behaviour looks like. For example, you could articulate what sort of things qualify as leadership for a given role, or what tenacity would look like.
It's useful to get these examples publicised and validated by at least the senior members of your team. This doesn't have to be a long, drawn-out democratic process – part of being a good leader is understanding when autocratic decisions are appropriate. However, getting others involved allows the standards you're setting to be externally validated and ratified.
Frequency
Figure out how often the desired behaviours need to be demonstrated in the appraisal period. This will partly depend on the role and an individual’s stage on their career path.
For example, you can decide that you don’t expect apprentices to show a lot of leadership. This doesn’t mean they may not do so – just that you recognise that they’re young in their career and will generally follow rather than lead. If they show leadership behaviour, then by all means it should be recorded and rewarded – that’s one of the ways to identify high performers for leadership grooming and succession.
Journeymen, however, are expected to demonstrate leadership as a sign of progression. For them, you can decide what leadership means, which behaviours exemplify it, and how often you would like to record it.
The other aspect is how well someone is doing at a particular point. If someone is under-performing, then obviously you have a problem and weekly (or sometimes even daily) measurements are relevant. For high performers who only need to be appraised quarterly, it might be a fortnightly or monthly measurement.
Also remember that making and recording observations carries an overhead in time and effort. This is true in all systems – human or machine – and you'll need to make a call about how much of your and other people's time you're prepared to spend on it.
Score
This can be fairly straightforward or as complex as you want to make it. My advice is to use a simple linear scale from +2 to -2. The negative numbers are required to record instances where the opposite of the desired behaviour was observed. An example of such a scale would be:
| Score | Description |
| --- | --- |
| +2 | Exceeded expectations, went above and beyond |
| +1 | Met expectations |
| 0 | Did not demonstrate expected behaviour |
| -1 | Showed signs of behaviour contrary to expectations |
| -2 | Clearly behaved against expectations |
The descriptions of the scale gradations aren't that important, as long as the general idea is understood.
Also, remember to keep it simple. It is quite appealing to extend the scale out both ways, or to do fancy things like make it non-linear. Don’t. Keep it simple, and focus on why the framework is being used rather than the framework itself.
Reasoning
When you record a score, always also record the reason that score was given. Again, this can be as short or as detailed as required. Depending on the frequency and the significance of the observation, I usually add enough detail to give me context and refresh my memory when I come back to it later.
One of the reasons for doing this is that it makes you think about the score you just gave. Sometimes I’ll record a score, write out the reasoning, and then realise that the score wasn’t really an accurate reflection of the commentary I’ve written.
Having comments is also useful when sitting down with someone to discuss their performance. You can have a more meaningful conversation when you are able to provide concrete examples of instances where you observed a behaviour that you want to encourage or discourage.
Similarly, if your environment requires it, these records are also helpful for formal human resource processes. For example, you might need to include evidence for justifying an extra performance bonus, or alternatively, to let someone go for consistent under-performance.
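The score-plus-reasoning discipline can be sketched as a simple record: each observation pairs a bounded score with the commentary behind it, and direction and magnitude fall out of the accumulated scores. This is an illustrative sketch only – the names and the averaging approach are my own assumptions, not a prescribed implementation.

```python
from dataclasses import dataclass, field
from datetime import date

VALID_SCORES = {-2, -1, 0, 1, 2}  # the simple linear scale from the table

@dataclass
class Observation:
    """One data point: a score is never stored without its reasoning."""
    when: date
    score: int
    reasoning: str

    def __post_init__(self):
        if self.score not in VALID_SCORES:
            raise ValueError(f"score must be one of {sorted(VALID_SCORES)}")
        if not self.reasoning.strip():
            raise ValueError("a score must always come with its reasoning")

@dataclass
class IndicatorLog:
    """All observations recorded against one indicator for one person."""
    indicator: str
    observations: list = field(default_factory=list)

    def record(self, when: date, score: int, reasoning: str) -> None:
        self.observations.append(Observation(when, score, reasoning))

    def trend(self) -> float:
        # Direction (sign) and magnitude (size): mean score over the period.
        if not self.observations:
            return 0.0
        return sum(o.score for o in self.observations) / len(self.observations)

# Hypothetical usage
log = IndicatorLog("Tenacity")
log.record(date(2014, 3, 1), 1, "Kept chasing the flaky build to root cause")
log.record(date(2014, 3, 15), 2, "Drove the vendor escalation to resolution")
print(log.trend())  # → 1.5
```

Because the reasoning is mandatory, the record doubles as the concrete-example bank you need for appraisal conversations and formal HR processes.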
Add a pinch (or two) of discipline
A certain amount of discipline is required to consistently use this framework. There’s no point setting up the attributes for each performance indicator across your teams for each person if you’re not going to stick to using it.
Depending on how many people you need to do this for and how frequently, the best way to do this is to add timeslots and reminders to your calendar. All it then takes is perhaps two or three minutes – sometimes even less – to record a score and the reasoning for it.
Because your team is worth it.
Whoa! I hear you say. That sounds like a lot of work.
That’s right. No one said people management was easy. It’s a responsibility, and if you’re taking it seriously, you need to put a lot of hard work into it. Creating cohesive, engaged and high performing teams and a culture to support them requires a lot of effort. But that’s why you do it. That’s why it’s part of your craft.
I’ll write some more posts soon with examples of usage and the sorts of insights that can be extracted from using such a framework. In the meantime, I’d love to hear about any other tools and frameworks being used for quantifying performance.