Say your goal is to increase the number of customers you serve each day. Perhaps you run a city office processing food stamp applications, or maybe you’re offering technical support for your company’s product. How many customers do you serve online, in person, and over the phone? What’s the average time to resolve a problem in each of these channels? Which types of customer requests take the longest, and which can be handled expediently?
If you can’t answer these questions, you’re setting yourself up for failure before you even begin to try.
Data-driven decision making is a way of life these days, from city hall to the corporate boardroom. If the numbers can dictate a course of action, the thinking goes, why leave anything to gut feeling? But in the quest to back up every move with cold, hard data, it’s easy to mistake any old numbers for useful numbers. Not all data is created equal, and the best way to ensure you’re collecting the right data is to develop the right set of performance metrics.
So how do you decide which metrics will help you and which will just distract you from the central issues? Here are five common mistakes people make when dealing with data, and some tips to avoid them.
Mistake #1: Just Having Metrics Is Enough
It’s true that measuring a little is better than measuring nothing. But too many people are satisfied merely being able to utter the word “metrics” to a supervisor, and too many supervisors assume that if their team is counting anything at all, it must be doing something right.
Data is only useful if it allows you to measure and manage performance quality. This means it’s less important for, say, the Buildings Department to count how many buildings passed inspection than it is to know the types of citations that caused failures, the number of inspections each inspector completed in a day, and how many buildings corrected their violations within one or two months of the initial inspection. This richer set of data will reveal inefficiencies in the inspection process and allow the department to work toward better safety standards.
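To make that concrete, here’s a rough sketch in Python of what richer inspection records might look like; the field names and figures are invented for illustration, not drawn from any real department:

```python
from collections import Counter
from datetime import date

# Hypothetical inspection records: richer than a bare pass/fail tally.
inspections = [
    {"building": "12 Oak St", "inspector": "A", "date": date(2024, 3, 1),
     "citations": ["blocked exit"], "corrected_by": date(2024, 3, 20)},
    {"building": "48 Elm St", "inspector": "A", "date": date(2024, 3, 1),
     "citations": [], "corrected_by": None},
    {"building": "7 Pine St", "inspector": "B", "date": date(2024, 3, 1),
     "citations": ["faulty wiring", "blocked exit"], "corrected_by": None},
]

# Which citation types are causing failures?
citation_counts = Counter(c for r in inspections for c in r["citations"])

# How many inspections did each inspector complete that day?
per_inspector = Counter(r["inspector"] for r in inspections)

# How many failed buildings corrected their violations within 60 days?
failed = [r for r in inspections if r["citations"]]
corrected = [r for r in failed
             if r["corrected_by"] and (r["corrected_by"] - r["date"]).days <= 60]

print(citation_counts)
print(per_inspector)
print(f"{len(corrected)} of {len(failed)} corrected within 60 days")
```

Each of the three questions maps to a one-line aggregation, which is the point: the richer the record, the more management questions the same dataset can answer.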
Mistake #2: The More Metrics, the Better
A common misconception is that if something can be counted, it should be counted. I’ve made the mistake of laying out tabs and tabs of metrics on a spreadsheet, only to find that the effort required to collect the data is a drain not only on my time, but on the time of the people assigned to carry out the very work we’re trying to measure.
You never want your performance monitoring to be so onerous that it actually hinders performance itself. When coming up with a set of metrics, it helps to start by brainstorming everything you could possibly measure, then prioritizing the top 10 indicators that will yield the most critical information about your program. Start with a manageable load, and gradually add more—as long as the effort required to collect the data will pay for itself in useful observations and opportunities for improvement.
Mistake #3: Value Judgments Should Be Assigned to Volumes
On the surface, it may seem intuitive that more calls answered is better than fewer calls answered. But imagine that squeezing in an extra five calls an hour compromises the quality of each call. Less information is gathered, and fewer issues are addressed. Callers aren’t satisfied after the first call, so they call a second or third time, further inflating your call numbers while taking up extra time and failing to address the reasons the calls are coming in to begin with. Calls that last a minute longer but more adequately address the caller’s questions may end up preventing repeat calls, rendering the more-equals-better line of thinking not just mistaken, but backwards.
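A quick back-of-the-envelope calculation makes this concrete. Here’s a minimal sketch in Python, using entirely hypothetical figures, that compares raw call volume with issues actually resolved:

```python
# Hypothetical figures: raw call volume vs. issues actually resolved.

def issues_resolved_per_hour(calls_per_hour: float, first_call_resolution: float) -> float:
    """Unique issues resolved per hour: volume times the share resolved on the first call."""
    return calls_per_hour * first_call_resolution

# Rushed team: more calls per hour, but fewer resolved on the first try,
# so unresolved callers call back and inflate the volume further.
rushed = issues_resolved_per_hour(calls_per_hour=15, first_call_resolution=0.55)

# Thorough team: fewer, slightly longer calls, most resolved the first time.
thorough = issues_resolved_per_hour(calls_per_hour=10, first_call_resolution=0.90)

print(f"Rushed team:   {rushed:.2f} issues resolved per hour")    # 8.25
print(f"Thorough team: {thorough:.2f} issues resolved per hour")  # 9.00
```

By this (hypothetical) measure, the team answering fewer calls solves more problems per hour.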
It’s also important to realize that many metrics, when counted as absolute numbers, aren’t particularly helpful. Without context, a number is more or less meaningless. Any numerator deserves a denominator, and raw counts should be expressed as a percentage of the total. For example, moving 1,000 homeless individuals off the street and into temporary housing is laudable. But if the goal is to house 20,000 people, it’s important to recognize that you’re only 5% of the way there.
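As a minimal sketch of putting a denominator under that numerator (in Python, reusing the figures above):

```python
# Report progress as a share of the goal, not a raw count.
housed = 1_000    # individuals moved into temporary housing so far
goal = 20_000     # total housing goal

progress = housed / goal
print(f"{housed:,} of {goal:,} housed ({progress:.0%} of the goal)")
# -> 1,000 of 20,000 housed (5% of the goal)
```

The same one-line habit applies to any count you report: divide by the relevant total before drawing conclusions.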
Mistake #4: Let the Numbers Speak for Themselves
It’s dangerous to assume that numbers tell the whole story. It’s better to think of data not as a smoking gun, but as a trail of breadcrumbs. Metrics can point you toward problem areas or alert you to a potential issue that you might not have otherwise noticed. But until you dig in with your bare hands, the numbers are just that—numbers. Uncovering the root of a problem often involves interviewing the people who work close to the matter at hand, observing, and making sense of qualitative data. The metrics reflect a result, but not a root cause.
You may find that the time to complete a filing process has increased by five days. But don’t automatically assume that clerks are spending all day procrastinating on BuzzFeed. A few simple questions may reveal that a recent marketing effort yielded a 20% increase in applications, or that newly legislated changes added a step to the process. Let your numbers point you toward areas worth questioning, rather than taking them as the answers themselves.
Mistake #5: If It’s a Good Metric Now, It’ll Be a Good Metric Later
Problems shift and change, as do objectives. Perhaps an initial set of metrics allowed you to address lagging turnaround times on contracting paperwork. Once that problem is solved, however, it’s important not to rest on your laurels. Chances are, that metric can be improved upon further, or there’s an entirely different problem area begging for attention.
Make a point of revisiting your metrics every three to six months to make sure they still make sense in the current context. You’ll likely find that some have become obsolete, and others require tweaking. But take care when deciding to change a metric. Changing the way you measure a particular piece of data may render historical data less useful and interrupt the continuity of the data you’re collecting. This is not to say that metrics should not be adapted as time goes on, only that the decision shouldn’t be made lightly.
Measurement is a science and deserves to be treated as such. When you take the time to approach metrics thoughtfully, you’ll be in a position to continually evaluate your efforts and implement meaningful improvements.