As project managers, we're all into measuring things. That's not a bad thing - if we didn't measure, how would we know whether we're getting better? Metrics are a way of life, for project management and for business in general.


Some of the conventional wisdom tells us to measure something - anything - and work from there. To some extent that's true: you can, of course, always change what you're measuring. In fact, it's often useful to change metrics as business focus and processes change, and as we improve dramatically in an area we're already measuring.


We measure for any number of reasons. Sometimes there's a systemic issue that we want to understand and improve. Sometimes we want to measure people on an important aspect of performance. Sometimes there's a shiny new quality program that requires every group to have measurements. In agile projects some measurements - like velocity - are built right into the system as much to help with planning as to measure performance.


Nope, there's nothing wrong with any of that. What can go wrong is that people's response to a measurement is, of course, to work to improve it - and sometimes that focus has unexpected (and unwanted) results.


Here's an example. In the 80s (that's right, the decade of big hair, shoulder pads, and looking to Japan for all things business) there was a case study of applying House of Quality concepts (as they existed at the time) to software development in Japanese firms. The crux of the process was that a track record of defects per line of code (or, more often, defects per function point - really, look it up :-) ) was established first. For subsequent development efforts, the total number of defects was predicted from that track record and the function points being developed. Now here's the tricky part - no product or code was released until the predicted number of defects had been found. Here in the U.S. many of us were skeptical. Sure enough, it wasn't long before the whole approach was abandoned. Why? Because programmers at the companies using the approach were deliberately coding small defects in - so if their code was holding up a release because the requisite number of bugs hadn't been found, they could 'find' the planted defects and declare the product complete.
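
To make the arithmetic behind that release gate concrete, here's a minimal sketch of the prediction step. The defect rate and function-point counts below are made-up illustrative numbers, not figures from the case study.

    # Illustrative sketch of defect prediction from a historical track record.
    # All numbers here are hypothetical, not from the original case study.
    historical_defects = 480           # defects found across past projects
    historical_function_points = 1200  # function points delivered in those projects

    # Baseline rate: defects per function point
    defect_rate = historical_defects / historical_function_points  # 0.4

    # Predicted defects for a new effort of 300 function points
    new_function_points = 300
    predicted_defects = defect_rate * new_function_points  # 120

    print(f"Release gate: keep testing until ~{predicted_defects:.0f} defects are found")

The perverse incentive falls out directly: if testing stalls at, say, 110 defects, the fastest way to "finish" is to make sure ten more easy ones exist to be found.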


This effect doesn't mean we should abandon metrics entirely. What we need to do is think carefully about the possible reactions to (and fallout from) efforts to improve a metric, and be sure to measure those areas as well - knowing that everyone wants to improve and will usually focus on whatever measurements are being taken.


Suppose you run a call center. Your primary metric may be call duration - how quickly a support representative resolves a caller's problem. Of course this is a very important aspect of running a call center: you want customers' issues resolved quickly, you want reasonable wait times, and you don't want to overstaff (that's expensive!). But if this is your only measurement, your support reps will be working to get people off the phones quickly - they're being measured on their call times. That doesn't mean they're getting to the bottom of underlying issues; it doesn't mean they're letting the customer fully explain what they're encountering; it doesn't mean they're staying on the line to see whether their suggested actions work; it doesn't mean they won't be making assumptions; and it doesn't mean the customer won't be calling back. Any and all of these things might happen as the reps work to reduce their call time, and it's definitely not what you had in mind. Call duration isn't a bad measurement, but to make it serve the business you'll want accompanying measurements of customer satisfaction and/or repeat calls per customer. The message to your support reps then becomes clearer: resolve the real issue, in the shortest time possible, to the satisfaction of the customer.
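
As a rough illustration of the balancing idea, here's a minimal sketch of a per-rep scorecard that weighs call duration against customer satisfaction and repeat calls. The field names, weights, and sample numbers are all hypothetical, not drawn from any real call-center system.

    from dataclasses import dataclass

    @dataclass
    class RepStats:
        """Hypothetical per-representative stats for one review period."""
        avg_call_minutes: float   # average handle time
        csat: float               # customer satisfaction, 0.0-1.0
        repeat_call_rate: float   # fraction of customers who called back, 0.0-1.0

    def balanced_score(stats: RepStats,
                       target_minutes: float = 8.0,
                       weights: tuple[float, float, float] = (0.4, 0.4, 0.2)) -> float:
        """Combine speed, satisfaction, and repeat calls into one 0-1 score.

        Speed alone can be gamed by rushing callers off the phone, so it only
        counts for part of the score; the weights here are illustrative.
        """
        w_speed, w_csat, w_repeat = weights
        speed = min(target_minutes / stats.avg_call_minutes, 1.0)  # 1.0 = at or under target
        return w_speed * speed + w_csat * stats.csat + w_repeat * (1.0 - stats.repeat_call_rate)

    # A rep who rushes calls scores worse than one who resolves the real issue.
    rusher = RepStats(avg_call_minutes=5.0, csat=0.60, repeat_call_rate=0.35)
    resolver = RepStats(avg_call_minutes=9.0, csat=0.92, repeat_call_rate=0.10)
    print(round(balanced_score(rusher), 2), round(balanced_score(resolver), 2))

The point isn't the particular weights - it's that a rep can no longer improve their standing just by getting callers off the phone.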


In agile software development I've seen an emphasis on velocity produce a high defect rate unless QA is explicitly involved in the velocity calculation - engineers rush code out without unit testing in order to make the numbers. In all kinds of project management I've seen actual time spent mysteriously align perfectly with estimates (when estimates vs. actuals are being measured) - which means there's no way to know whether the estimates are valid, or to plan accurately, because people are too focused on making actuals match estimates to report honestly.
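
One way to make QA explicitly part of the velocity number is to count only stories that have cleared acceptance testing by the end of the sprint. Here's a minimal sketch of that idea; the story data and field names are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Story:
        """Hypothetical sprint story; 'qa_accepted' means it passed acceptance testing."""
        name: str
        points: int
        code_complete: bool
        qa_accepted: bool

    def naive_velocity(stories: list[Story]) -> int:
        """Counts anything engineering calls 'done' - easy to inflate by skipping testing."""
        return sum(s.points for s in stories if s.code_complete)

    def qa_gated_velocity(stories: list[Story]) -> int:
        """Counts only stories QA has accepted, so rushing untested code doesn't pay."""
        return sum(s.points for s in stories if s.code_complete and s.qa_accepted)

    sprint = [
        Story("login flow", 5, code_complete=True, qa_accepted=True),
        Story("report export", 8, code_complete=True, qa_accepted=False),  # shipped buggy
        Story("search filters", 3, code_complete=True, qa_accepted=True),
    ]
    print(naive_velocity(sprint), qa_gated_velocity(sprint))  # 16 vs 8

With the gated version, pushing out untested code no longer moves the number, so the metric stops rewarding the behavior it was never meant to encourage.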


The bottom line? Definitely measure, but don't pick a metric randomly. Decide what's important to your business, figure out how to measure it, and give some thought to what other behaviors improving that metric might drive. Once you've done that, develop an accompanying measurement to help balance the behavior, and you're on the right track.