The point of this post isn't my weight though. It is about the saying: "If you can't measure it, you can't manage it." I have had an interesting history with measurement. I have always liked numbers. In fact, I used to calculate (in my head) the estimated time of arrival (in minutes) to home every time I passed a sign with miles to home. Yes, I know I have issues. When I worked at Motorola they used to publish our software release sigma number. I found out that this number was the number of defects found in the last release divided by the number of lines of code. This was obviously wrong, as I could have added NOPs and gotten that number lower without improving the quality at all. When I consulted with government contractors (almost all were SEI level 3 or higher) I found several measures that were more complicated but in effect similar to Motorola's sigma. When I spoke up about these measures being easily gamed and misleading, the response was: what do you suggest we measure instead? Good question...I didn't have an answer.
How do you manage if you have no quantitative way of saying you are getting better or worse? The key, I think, is something I was taught by a measurement guru I worked with once (Chris Miller, please forgive me if I butcher this :-).
First, start with the goal (e.g. improve productivity, quality, waist line). Next, find some measures that are good predictors of those goals. Examples:
- Goal: medium rare meat - measure: internal temperature as measured by a meat thermometer.
- Goal: smaller waist line - measure: same scale at the same time of day
- Goal: software productivity - measure: lines of code (this is worth a whole post, but not this one)
Be careful not to believe the measures are the goal, as this trap will lead to manipulation and a false sense of where you are. Now, look at your process (lots of people use an idealized process...another trap) and see what measures you can capture cheaply. Start capturing (sometimes the historical data is lying around and you can go back in time) and do some simple statistical analysis to see if any of these measures correlate with the goal measures and thus can be your predictive measure.
Let me give an example that most can empathize with. I have a goal of a smaller waist line. My goal measure is my weight on the gym scale in the morning (experience has taught me my weight fluctuates widely throughout the day). My predictive measures eluded me until my wife introduced me to SparkPeople (imagine an intersection between Facebook and Weight Watchers). The web site is fairly painful to use, but the Android app is quite good and lets me enter all the food I eat fairly easily. Now I have a predictive measure that is fairly cheap. Now for the simple statistical analysis.
We are after the correlation coefficient (CORREL in a Google Docs spreadsheet). Next we square it and display it as a percentage ("The square of the coefficient (or r square) is equal to the percent of the variation in one variable that is related to the variation in the other." from http://www.surveysystem.com/correlation.htm). The smaller the better. In my case it was .1%, so I think it's a keeper.
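For the spreadsheet-averse, here is a minimal sketch of that step in Python (it assumes Python 3.10 or later for statistics.correlation); the calorie and weight numbers are made up for illustration, not my actual SparkPeople data.

```python
# Minimal sketch of the spreadsheet step: Pearson's r (same idea as CORREL),
# then r squared expressed as a percentage. Data below is invented.
from statistics import correlation  # Python 3.10+

calories = [2150, 1980, 2400, 2200, 1850, 2300, 2050]           # predictive measure (per day)
weights  = [181.2, 180.8, 181.6, 181.4, 180.5, 181.5, 181.0]    # goal measure (next morning, lbs)

r = correlation(calories, weights)
print(f"r = {r:.3f}, r^2 = {r ** 2:.1%}")   # r^2 shown as a percentage
```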
One trap I fell into, and still do at times, is the old accuracy/precision trap. I have already described this:
I am sure there is a quote out there, but here is my general rule: let the level of precision of a number be based on its accuracy. The accuracy of an estimate is always low, so please don't use decimals (precision). If you don't understand, please read this explanation of the difference between precision and accuracy.

In my case it isn't estimation so much as it is math. If you spend hours getting your predictive measure (i.e. calories consumed), you are wasting precious time. Let me give you an example. SparkPeople keeps my calories to 1/10 of a calorie, but what if I round my calories to the nearest 100? My correlation changes to .02%...yes, it actually gets better (it's a math trick...given different sets of data it could have gone .08% the other way), but the point is that .08% or even .8% isn't worth the time it took to write it. If I round my correlation to the nearest whole percentage, they both are 0%, or practically perfectly correlated. So don't sweat the small stuff and keep your mind on the goal, whether it be fitting into old pants or productivity improvement.
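Here is the same sketch extended to the rounding experiment, again with invented numbers (and again assuming Python 3.10+), just to show the mechanics of comparing exact versus rounded calories:

```python
# Round the same invented calorie numbers to the nearest 100 and re-run
# the correlation to see how much (or little) the extra precision buys.
from statistics import correlation  # Python 3.10+

calories = [2150, 1980, 2400, 2200, 1850, 2300, 2050]
weights  = [181.2, 180.8, 181.6, 181.4, 180.5, 181.5, 181.0]
rounded  = [round(c, -2) for c in calories]   # e.g. 1980 -> 2000

print(f"exact:   r^2 = {correlation(calories, weights) ** 2:.1%}")
print(f"rounded: r^2 = {correlation(rounded, weights) ** 2:.1%}")
```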
4 comments:
It's hard to argue with the statement that "unless you know where you're going, you won't know when you get there". So yes, I do agree that measurement is an important part of the process. However, you have to be very careful that:
a> The metric selected is a simple, widely understood metric (such as Mike's waist size)
b> The measurement process is accurate and actually represents something meaningful (as opposed to something like the federal deficit reports or the monthly jobs creation reports)
c> The metric doesn't become the goal itself - it stays a measure of progress towards the goal.
-Poorav
I wonder if you read the whole post ;-)
Point A is a good point, as goal measures can be tricky. Simply keeping track and asking the team a simple question in the retrospective (all healthy teams use this :-) helps: "do we believe the goal is as much better/worse as the measure states?"
Point B: not sure what the "measurement process" refers to. Do you work for the federal government?
Point C was addressed "Be careful not to believe the measures are the goal as this trap will lead to manipulation and a false sense of where you are."
And here I thought you tuned me out during all of those mind-numbing discussions.
It is very true that when people focus on the metric and lose sight of the goal, almost certainly the metric will become biased (either intentionally or unintentionally).
In your example the measurement analyst and the metrics consumer/decision maker are one and the same. In this instance there is very little to be gained by biasing the data.
I have found (and very recently reconfirmed) that when the data collector, analyst, and decision maker involve more than one person (and many times several people), integrating the metrics into a single information product, including situational awareness, is essential to the metric's value and overall perceived credibility.
I enjoyed your post!
Chris Miller
Interesting post Mike,
In finance, we measure against a benchmark, which could be an index or a composition of a set of indices. However, beating a benchmark does not mean you reached your goal if you took extra risk. Therefore, tolerance for risk must be added into the goal, which in turn needs to be measured. For example, goal: beat the S&P index with portfolio beta < 1. Or reduce waist size without a 10% reduction in calorie intake. The point here is that a goal for measurement needs to have constraints to properly assess the validity or fairness of reaching the goal.
-Will