Sunday, August 30, 2009

When Testing Doesn't Work, by David Baker


There are probably thousands of articles on the Web about how to test marketing campaigns effectively. You have a hypothesis, you have test variables, you have test segments, and you have some view of statistical probabilities associated with the outcome. That's a perfect world. Let's first assume you actually have the time to build a proper test and the resources to create the many versions -- and you can actually deliver on the test program and keep the program sterile. What do you do when the tests don't tell you what you want to hear?

A consumer marketer I know with a really large database decided to test the cadence of their email programs. They developed several test cells: a high-frequency cell, a low-frequency cell, a control group that was exposed to the same cadence they'd run for a year, and then a hold-out group. Seemed like a pretty straightforward test. The group with the highest revenue would lead to some conclusions on how often they should send promotional email. Well, what they found was that the hold-out group actually performed better in terms of revenue than any of the other test cells. At first glance, you'd think, "Our email program stinks and all this work we put in is wasted energy. We can generate these sales without email."
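As a rough illustration of how a cadence test like that might be read out, here is a minimal sketch that compares average per-customer revenue in each cell against the hold-out group using Welch's t-test. The cell names, sample sizes, and revenue distributions are stand-ins, not the marketer's actual data.

```python
# Hypothetical read-out of a cadence test: four cells (high frequency,
# low frequency, existing-cadence control, hold-out) compared on
# per-customer revenue. All figures below are illustrative stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Stand-in revenue samples per customer in each cell (replace with real data).
cells = {
    "high_frequency": rng.gamma(2.0, 14.0, size=5000),
    "low_frequency":  rng.gamma(2.0, 15.0, size=5000),
    "control":        rng.gamma(2.0, 15.0, size=5000),
    "hold_out":       rng.gamma(2.0, 16.0, size=5000),
}

baseline = cells["hold_out"]
for name, revenue in cells.items():
    if name == "hold_out":
        continue
    # Welch's t-test: does this cell's mean revenue differ from the hold-out's?
    t_stat, p_value = stats.ttest_ind(revenue, baseline, equal_var=False)
    print(f"{name:15s} mean=${revenue.mean():6.2f} "
          f"vs hold-out ${baseline.mean():6.2f} (p={p_value:.3f})")
```

The point of the comparison against the hold-out is exactly the uncomfortable one in the story: if the "no email" cell wins, the test is telling you something about the program itself, not just about cadence.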

This test somewhat backfired on the CRM team; now they had to justify what they do day in and day out. But they realized that the frequency of communications wasn't an indicator of their program's success. The problem was at the root of their program: early-lifestage messaging. They had a very aggressive offer strategy for new customers and people who signed up for their product/service, and they were effectively numbing their audience to their email promotions. It wasn't that revenue from their retention audience was declining; rather, email was no longer as effective at driving conversion on its own terms, and richer offers didn't lift sales at the same rate as the increase in offer value.

What do you do in a circumstance like this? Would the senior team actually believe this new hypothesis and be able to deal with the subsequent costs of ripping apart the company's early-stage promotional strategies? Could they migrate away from "crack marketing," where you're so addicted to doing the same things that you are afraid to make wholesale changes?

The challenge with a scenario like this is that the CRM team didn't think through all the possible outcomes and dynamics that might influence the results -- and so they weren't ready to react to them.

Another story involves a company that decided to test the effects of direct mail and email through different scenarios: email only, direct mail only, and a combination of the two. Seems like a fruitful exercise. If you can prove that email has a better influence as a standalone channel, you have a winner from a cost-savings perspective. If you can show that the combination of email and direct mail drives incremental revenue, you can potentially leverage that to be very timely in your launches and communications. But what if every scenario pays off? What happens if revenue is down all around? What happens if you find that a large portion of your loyal customers are no longer buying through the email channel, but your paid search numbers have increased dramatically?
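For the channel-mix version of the question, the cost-savings argument only holds up if you weigh incremental revenue against the cost of the contacts in each scenario. The sketch below shows one way to frame that trade-off; the scenario names, revenue, and cost figures are hypothetical, not results from the company in the story.

```python
# Hypothetical framing of a channel-mix test (email only, direct mail only,
# both), weighing incremental revenue over a hold-out against contact cost.
# All figures are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    revenue_per_customer: float   # observed average revenue in the cell
    cost_per_customer: float      # cost of the contacts sent to the cell

HOLDOUT_REVENUE = 9.40            # average revenue with no contact at all

scenarios = [
    Scenario("email_only",       10.10, 0.02),
    Scenario("direct_mail_only", 10.90, 0.55),
    Scenario("email_plus_dm",    11.30, 0.57),
]

for s in scenarios:
    incremental = s.revenue_per_customer - HOLDOUT_REVENUE
    net = incremental - s.cost_per_customer
    print(f"{s.name:18s} incremental=${incremental:5.2f} "
          f"net of cost=${net:5.2f}")
```

Even a simple table like this forces the awkward questions the article raises: what you do when every scenario nets out positive, or when the revenue simply shifted to another channel such as paid search.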

Testing is a funny exercise. It's hard to contemplate all the potential outcomes, and it takes a lot of energy to put together good tests. Is the trade-off to do it poorly and risk outcomes that you can't justify? That is almost worse than not testing at all.

I won't belabor the point. The only time tests don't work is when you aren't prepared to react, haven't thought about all the potential outcomes, or take no action at all on the results.

