Thursday, May 17, 2007

Measure Twice, Cut Once

It's been too long since my last post. I've been doing some thinking about measuring stuff lately and pulled out an "article" I wrote for another blog that never got published.

Measure Twice, Cut Once

The art of benchmarking and establishing numbers-based best practices is easier said than done. Using metrics to guide decision making ("Business Intelligence") requires a commitment to build systems that produce reliable, timely numbers, as well as staff who understand what the numbers mean and can build action plans to improve them.

Over the past few years, I have been involved in several projects designed to help non-profit organizations become more efficient, develop better practices, and market themselves more effectively. Establishing benchmarks has been difficult for two key reasons.

First, it's proven very hard to build consensus on what to measure and how much weight to put behind each measurement. For example, should individual participants' renewal rates count more than, less than, or the same as corporate participation when evaluating the overall event?
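
To make the weighting question concrete, here is a minimal sketch in Python, with made-up numbers, of how a blended renewal rate could be computed. The weights are exactly the kind of thing the team has to agree on:

    # Hypothetical renewal counts for one event. The figures are invented
    # purely to illustrate the weighting question; they are not real data.
    individual = {"renewed": 420, "eligible": 700}  # returning individual walkers
    corporate = {"renewed": 18, "eligible": 25}     # returning corporate teams

    def rate(group):
        """Share of last year's participants who signed up again."""
        return group["renewed"] / group["eligible"]

    # The open question: how much weight does each group get?
    # A 0.5/0.5 split treats them as equals; other splits favor one side.
    weights = {"individual": 0.5, "corporate": 0.5}

    blended = (weights["individual"] * rate(individual)
               + weights["corporate"] * rate(corporate))

    print(f"Individual renewal: {rate(individual):.0%}")  # 60%
    print(f"Corporate renewal:  {rate(corporate):.0%}")   # 72%
    print(f"Blended score:      {blended:.0%}")           # 66%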

Second, understanding the data model and the technology needed to generate the numbers has also proven quite difficult without some IT spending on business intelligence tools. For example, it's been very hard or impossible to quantify the value of a web lead versus a telephone lead, because the systems that collect the data have no way to exchange it or to track users from one system to the other. Putting better technology in place takes time, but it is crucial to building a complete metrics system. In the past, staff had generated information manually and used gut instinct to manage the ebb and flow of the campaign, so learning to trust data generated by the systems was a mind-shift that required some time and learning.
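
To illustrate the integration gap (the records and field names below are invented, not the actual systems involved): imagine the web form and the phone bank each export their own lead list, with an email address as the only field the two have in common:

    # Illustrative only: two lead sources with no shared tracking ID.
    # Records and field names are hypothetical.
    web_leads = [
        {"email": "pat@example.org", "source": "web", "gift": 50},
        {"email": "sam@example.org", "source": "web", "gift": 0},
    ]
    phone_leads = [
        {"email": "pat@example.org", "source": "phone", "gift": 100},
    ]

    # Best effort: join on email, the only field both systems capture.
    # Anyone who gave a different email on the phone is silently counted
    # twice -- exactly the gap a shared tracking ID would close.
    by_email = {}
    for lead in web_leads + phone_leads:
        by_email.setdefault(lead["email"], []).append(lead)

    for email, touches in by_email.items():
        channels = sorted({t["source"] for t in touches})
        total = sum(t["gift"] for t in touches)
        print(f"{email}: channels={channels}, total gifts=${total}")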

As part of a team developing a metrics system for a charity walk program, I saw this play out first hand. Led by a COO who was demanding accountability, we held several meetings to establish both the philosophy of a metrics system and to flesh out exactly what we intended to measure. The initial meetings were frustrating. The lack of common terminology between departments and a misunderstanding of what a solid metrics platform meant had to be dealt with first. Some key team members didn't understand the value of transparency or how to manage by the numbers in a collaborative fashion. After many weeks of massaging how we collected and reported on the different data points, consensus was reached and we started generating weekly reports. Once the data was freely available, feedback was immediate: local staff loved to see how they stacked up against other local offices, and national staff finally felt as if they understood how the campaign was progressing at a macro level.
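
The weekly report itself was simple once the definitions were settled: a ranked, office-by-office comparison. A rough sketch of that kind of report, using hypothetical offices, numbers, and field names:

    # Hypothetical weekly registration snapshot, one row per local office.
    rows = [
        {"office": "Boston", "registered": 340, "goal": 500},
        {"office": "Chicago", "registered": 510, "goal": 600},
        {"office": "Denver", "registered": 120, "goal": 300},
    ]

    # Rank offices by percent-to-goal so staff can see how they stack up.
    for row in sorted(rows, key=lambda r: r["registered"] / r["goal"],
                      reverse=True):
        pct = row["registered"] / row["goal"]
        print(f"{row['office']:<8} {row['registered']:>4} of {row['goal']} "
              f"({pct:.0%} of goal)")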

Over time, the project will evolve and start to generate actionable information that helps the campaign staff drive up acquisition and retention rates, leading to a more robust and profitable event with more oversight and accountability for everyone.

At the International Rescue Committee, we've just started producing a monthly web metrics report and are now trying to pull in additional data points from around the organization so we get a better, 360-degree view of what's happening. It's a long row to hoe - but a required one!
