Financial institutions are fixated on hard data. But sometimes we must look beyond the numbers alone to measure success.
My favorite courses in graduate school focused on statistics. Not the math-major variety, where formulas were simply formulas. These statistics came from psychology, and they were essentially numerical analyses of human behavior. In psychology, stats help you determine whether a test result is a trend or a coincidence, whether a test has legs, and whether you can confidently predict the future. (Well, future human behavior, anyway.)
On one hand, my love for statistics made perfect sense. The certainty, clarity and finality of statistics offered a welcome departure from the rest of my theory-driven work, where “it depends” was the most popular answer, and change-a-variable-change-the-result studies dominated the discussion.
On the other hand, my love for statistics made no sense. It was, in fact, diametrically opposed to my totally-in-control and data-free intuition, which fights aggressively to drive the important decisions in my life.
Given this tension, I am uniquely positioned to talk about why measurement gone wrong is causing a barrage of problems for financial institutions large and small. Here it is, in a nutshell: careless measurement favors short-term gains at the expense of long-term progress.
To explore the implications of my argument, let’s start with the current popularity of integrated, interactive marketing campaigns. Typically, the “key metrics” that tell you whether a campaign was successful fall into the “hard data” camp: response rates, sales, ROI. All of these are legitimate ways to measure a campaign.
But there are “soft” results that may ultimately prove far more important. The campaign, successful or not, may improve the knowledge of the team that designed, executed and monitored it. This new knowledge and experience can help the team make better decisions, design better campaigns and improve the organization’s overall performance. Measuring this kind of campaign, however, tends to highlight hard data and mask everything else, including the cumulative positive effect of campaigns.
Let’s assume that a hypothetical campaign does not meet a single hard-data goal. Response rates fall short of expectations. Sales numbers are subpar. ROI is negative. But while all of these discouraging numbers stream in, the team responsible for the campaign is learning critical lessons — not just about the campaign, but about the organization’s entire strategy for building customer relationships.
Did the campaign fail? Measurement says yes, without a doubt. But this short-sighted assessment completely ignores — and ultimately undermines — the highly strategic, long-term benefits.
Data in context
So what happens to those big, valuable and potentially difference-making lessons when the campaign is deemed a (short-term) failure? Well, they may not disappear, but they are undoubtedly undermined. The campaign is written off and the team might even lose credibility.
Don’t get me wrong. Rigorous measurement of any effort, including integrated marketing campaigns, is important. To fully understand the success of the effort in question, however, measurements must be placed in context. It’s important to differentiate today’s numbers from tomorrow’s potential success.
Organizations are living, breathing, complex entities that change only as a result of the cumulative efforts of many individuals. This is a critical point that often gets lost when it comes time to measure results. No single effort is capable of producing a sustained, dramatic change in behavior — whether in customers, prospects or employees. Our ideas about measurement must account for this reality.
© 2010 Martie Woods