Sunday, September 18, 2016

Measuring anything

Most projects require some sort of measurement to obtain approval, determine viability, estimate return on investment, etc. It can appear challenging to think of how to measure risk, productivity, profit, etc.; however, Douglas W. Hubbard's book, How to Measure Anything, demonstrates that anything can be measured, and in more practical ways than you might think.

A reduction of uncertainty
It is important to note how Hubbard defines measurement: observations that quantitatively reduce uncertainty. This is key because it takes the pressure off individuals to be exactly precise in their answers. Especially when just starting out, even a small reduction in uncertainty can be a large step toward a particular outcome. Hubbard points out that even sophisticated scientific experiments have margins of error; measurements for business are no different.

Really, anything?
Yes, anything can be measured. (Although not everything necessarily should be measured.) Hubbard suggests the following to help demonstrate this:
If it matters at all, it is detectable, observable.
If it is detectable, it can be detected as an amount or range of possible amounts.
If it can be detected as a range of possible amounts, it can be measured.
Determining the "what"
Understanding why you want to measure something helps guide the scope of what can be measured. For example, someone may say, "We want to measure IT security." The first question to ask is: what is IT security? From there, you should be able to identify particular objects of measurement within each part of your answer. Once you have your object of measurement and understand what it means, you are halfway there.

It is easier than you think
When we are struggling with measurements, Hubbard reminds us of the following:
  1. Your problem is not as unique as you think. Recognizing that others may have solved similar types of problems in the past can help put things in perspective.
  2. You have more data than you think. Some data is better than none. 
  3. You need less data than you think. Again, we are not looking for 100% certainty.
  4. An adequate amount of new data is more accessible than you think.
Obtaining measurements
Hubbard's "Applied Information Economics" has five steps to help obtain measurements. I summarize them below:
  1. Define a decision problem and the relevant variables. Asking "why?" helps here. Start with the decisions you need to make, then identify the variables which would make your decision easier if you had better estimates of their values. What is the decision this measurement is supposed to support?
  2. Determine what you know. Quantify your uncertainty about those variables in terms of ranges and probabilities. Hubbard uses the term confidence interval (CI) to express uncertainty as a range. A 90% CI is one in which there is a 90% chance the true value falls within the interval you provide. For example, my 90% CI for the average commute time in my office is 30-70 minutes. It is important to be "well-calibrated" when giving your 90% CI. Hubbard suggests the equivalent bet test as a way to gauge how calibrated you are.
  3. Pick a variable, and compute the value of information for that variable. Some variables' measurements will be more valuable than others. The goal is to find the variable with a reasonably high information value. (If you do not find one, then skip to step 5.)
  4. Apply the relevant measurement instruments to the high-information-value variable. Go back to step 3 to repeat this process with any remaining high-value variables.
  5. Make a decision and act on it. 
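The ranges from step 2 are what feed simulation-based instruments like the Monte Carlo models Hubbard describes. Here is a minimal sketch in Python, assuming two hypothetical variables (hours saved per week and an hourly rate, my own examples rather than the book's) whose 90% CIs are treated as normal distributions; for a normal distribution, the 90% interval spans roughly 3.29 standard deviations:

```python
import random

def ci_to_normal_params(lower, upper):
    # For a normal distribution, the 90% interval spans about
    # 3.29 standard deviations; the midpoint is the mean.
    mean = (lower + upper) / 2
    std = (upper - lower) / 3.29
    return mean, std

def simulate_annual_savings(trials=100_000, seed=1):
    # Hypothetical 90% CIs for two uncertain variables.
    random.seed(seed)
    hours_mean, hours_std = ci_to_normal_params(2, 6)   # hours saved per week
    rate_mean, rate_std = ci_to_normal_params(40, 80)   # dollars per hour
    outcomes = sorted(
        random.gauss(hours_mean, hours_std) * random.gauss(rate_mean, rate_std) * 52
        for _ in range(trials)
    )
    # Read off a 90% interval on the simulated annual savings.
    return outcomes[int(trials * 0.05)], outcomes[int(trials * 0.95)]

low, high = simulate_annual_savings()
```

The point is that you never needed a single "exact" number: two calibrated ranges in, one range out, and that output range is what the decision in step 5 gets compared against.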
For large efforts, Hubbard suggests spending at least 10% of the project budget on the measurements that justify the investment in the first place.

Note: Beware the "measurement inversion." Hubbard warns that most managers tend to measure the data that are easiest to obtain but provide the least economic value. This is why step 3 above is critical.

Measurement instruments
Hubbard outlines the following to help start us toward our measurements:
  • Decomposition: Which parts of the thing are we uncertain about?
  • Secondary research: How has it (or its parts) been measured by others?
  • Observation: How do the identified observables lend themselves to measurement? Can you create a way to observe it indirectly?
  • Measure just enough: How much do we need to measure it?
  • Consider the error: How might our observations be misleading? Consider things like confirmation, observer, and selection bias. 

Hubbard describes at length many different types of measurement instruments, like controlled experiments, regression modeling, and Monte Carlo simulations. I will highlight just a few that do not involve too much (or any) math, because I think it is important to have a few straightforward methods "in your pocket":
  • Rule of 5. There is a 93.75% chance that the median of a population lies between the smallest and largest values in any random sample of five from that population. This rule lets us obtain a better-than-90% CI by sampling only five members of the population.
  • Spot sampling. Determining how many fish are in a lake can seem impossible (unless you drain the lake!), but spot sampling can help. A biologist might catch 1,000 fish, tag them, and release them back into the lake. A few days later she catches another 1,000 fish and finds that only 50 of them (5%) have a tag. If the 1,000 tagged fish are about 5% of the population, there are approximately 20,000 fish in the lake.
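Both of these can be sanity-checked in a few lines of Python. The commute-time population below is a hypothetical stand-in; the fish numbers are the ones from the example above:

```python
import random

def rule_of_five_check(population, trials=10_000, seed=2):
    # Empirically check the Rule of 5: the chance that the population
    # median lies between the min and max of a random sample of five
    # is 1 - 2 * (1/2)**5 = 93.75%.
    random.seed(seed)
    ordered = sorted(population)
    median = ordered[len(ordered) // 2]
    hits = sum(
        min(sample) <= median <= max(sample)
        for sample in (random.sample(ordered, 5) for _ in range(trials))
    )
    return hits / trials

def lincoln_petersen(tagged, second_sample, recaptured):
    # Mark-and-recapture estimate of population size: if 5% of the
    # second sample is tagged, the tagged fish are ~5% of the lake.
    return tagged * second_sample / recaptured

commutes = list(range(10, 121))          # hypothetical commute times, minutes
p = rule_of_five_check(commutes)         # ~0.94
fish = lincoln_petersen(1000, 1000, 50)  # 20000.0
```

The Rule of 5 works because each independent draw has a 50% chance of falling above the median, so the only failures are "all five above" or "all five below."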

Simple personal example
I'll conclude with a personal example of how understanding that anything can be measured can help expand possibilities. I was in a senior management meeting about improving our company's leadership, and someone said, "It is almost impossible to measure the performance of managers."

So I suggested the following:
  1. What traits do we consider make a good manager or leader?
  2. Of those traits, which can we observe, and how can we measure them?
The group discussed many areas like team performance (which itself needed to be broken down further to define measurements), as well as retention/attrition rates, referrals (i.e. employees referring friends for open positions under that manager), promotions, etc.

The group was able to identify why manager performance was worth measuring, and as a team identified possible measurements. The next step would be to put a value on each variable and decide which decisions the measurements would support.