September 30, 2009
Performance Metrics Often Hide The Reality From Managers

Over the years I've taken a special interest in the design and application of metrics. My interest was piqued way back in 1998 when I consulted on a project at The Post Office to develop tools for capturing, consolidating and analysing performance data from mail sorting operations. I read voraciously on the subject of performance measurement, and - even if I say so myself - became something of an armchair expert on it.
The only wisdom I tend to spout about designing metrics that might influence behaviour is "be careful what you wish for".
If someone's bonus, or even their whole job, depends upon showing good performance, as measured by a metric of our design, then we should fully expect them to steer their efforts directly towards achieving the best numbers, and - people being people - to do so with the minimal effort and application possible. In that sense, when performance metrics are applied seriously, the metric itself tells people exactly what it is you want them to achieve. Metrics are to performance what tests are to TDD. They are precise specifications.
It's ironic, then, that recently I've been witnessing some pretty blatant gaming of performance metrics from the very same organisation that inspired my interest in them. On two occasions I've ordered items online and paid a premium for Royal Mail's "special delivery" service. Special delivery items are "guaranteed" to be delivered before an agreed time the following working day. On the first occasion, the item didn't show up for several days - probably because of strike action. I got a tracking reference from the company I ordered the item from, and used Royal Mail's online Track & Trace feature, which showed that the item had been delivered the previous day. Literally as I was on the phone to the item's supplier, a Royal Mail driver appeared at my door with the item that had supposedly already been delivered. As I signed for it, I asked how their own tracking system could record this item as delivered the previous morning. She admitted that they often recorded delayed items as having been delivered to keep their performance numbers looking good.
Two weeks later, I'm expecting another item by special delivery that is currently a day late against the "guaranteed" time (again, probably due to strike action), and this time the Track & Trace entry shows it as having been "returned to sender". I've been working at home both yesterday and today: nobody has called at the house, and no little card has been popped through my letterbox to tell me anyone tried to deliver it, which is what would happen if someone found nobody in to sign for the item.
Again, I suspect the item will turn up in a day or two (two or three days later than they "guaranteed"), and their records will show the usual exemplary service.
It's highly likely that nobody in senior management is any the wiser about this metrics scam, or suspects that the excellent service their measures tell them they're achieving might be the result of gaming rather than actual good service.
Indeed, I don't believe Royal Mail are alone in this. I suspect there are thousands upon thousands of organisations who - when you get above the level of the people being measured - have no clue that the quality of their products and services, in reality, is anything less than what their reports tell them. So it doesn't get addressed. And if you dare to complain, they will point to the 99% of satisfied customers who prove that it is you who is probably at fault here.
Deliveries are a big deal for businesses. And when customers pay a premium and their items don't show up on time, the supplier - and not the delivery firm - suffers, as does the customer. To sort out these problems, the first step might be to lift the veil on what's actually happening with these deliveries.
The design and application of metrics needs to go beyond simple, trite indicators like "the guy who drives the van says it's been delivered". You could drive a bus through the potential loopholes. A good way of tightening up the metric would be to apply a set of simple secondary tests to help identify this sort of gaming. Mystery customers are a great tool in these kinds of cases. If a small representative sample - e.g., a few hundred - of the millions of deliveries Royal Mail handle were tracked not by computer but by actual human beings who can tell you whether the items really were delivered when the record shows they were, then delivery drivers would soon realise that fabricating delivery times is too risky an option.
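To make the idea concrete, the secondary check above could be sketched as a simple audit: draw a random sample of deliveries for which an independent observer (the mystery customer) has recorded the true delivery date, and flag every item where the tracking system disagrees. Everything here - the tracking IDs, the dates, and the function name - is hypothetical illustration, not any real Royal Mail system or data.

```python
import random

def audit_sample(recorded, verified, sample_size, seed=None):
    """Compare a random sample of tracked deliveries against
    independently verified delivery dates (e.g. from mystery
    customers). Returns the tracking IDs where the official
    record disagrees with what was actually observed."""
    rng = random.Random(seed)
    # Sample only from deliveries we have ground truth for.
    ids = sorted(verified)
    sample = rng.sample(ids, min(sample_size, len(ids)))
    return [item_id for item_id in sample
            if recorded.get(item_id) != verified[item_id]]

# Hypothetical data: what the tracking system claims vs what
# the mystery customer actually observed at the door.
recorded = {"RM001": "2009-09-28", "RM002": "2009-09-28", "RM003": "2009-09-29"}
verified = {"RM001": "2009-09-28", "RM002": "2009-09-30", "RM003": "2009-09-29"}

print(audit_sample(recorded, verified, sample_size=3))  # flags RM002
```

Even a small discrepancy rate in such a sample would be a strong signal, since the official record and an honest observer should almost never disagree; the point is that the audit is independent of the people being measured.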
One wonders if they do something like that. From these experiences, I suspect not.