Email Marketing: Taking the mystery out of customer motivation

It’s a little over-simplified, but an email marketer’s job is to get the right message to the right person at the right time to achieve a specific goal. Doing that means understanding what motivates subscribers to open a message and engage with your offer – and that’s where the process gets tricky.

Like our colleagues at MarketingExperiments, we at MarketingSherpa believe that nothing provides better insight into the “right” approach than a good test. A marketer’s personal bias, best guess, gut instinct or assumptions aren’t enough. In fact, they’re often wrong. You have to be willing to let your audience SHOW you what motivates them.

Today in Munich, MarketingSherpa is hosting its second Germany Email Marketing Summit, which features a Case Study that demonstrates the power of testing to determine customer motivation. VNR.de, a publisher of lifestyle and professional advice from experts in their fields, is sharing the results of a list-cleansing/subscriber reactivation campaign they recently conducted.

Winning back “inactive” subscribers

The campaign targeted “inactive” members of their list, defined as subscribers who had not opened or clicked an email in 120 days. The goal was either to reactivate those subscribers or to confirm they were truly inactive and remove them from the list. So the team set up a four-message reactivation campaign to encourage a response.
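
As an aside, a 120-day inactivity rule like this is simple to turn into a working segment definition. Here is a minimal sketch in Python; the subscriber fields (last_open, last_click) are hypothetical stand-ins, not VNR.de’s actual data model:

    from datetime import datetime, timedelta

    INACTIVITY_WINDOW = timedelta(days=120)  # no open or click in 120 days

    def is_inactive(subscriber, now=None):
        """Return True if the subscriber's latest open or click (hypothetical
        'last_open'/'last_click' fields) is older than the inactivity window."""
        now = now or datetime.utcnow()
        last_activity = max(
            subscriber.get("last_open") or datetime.min,
            subscriber.get("last_click") or datetime.min,
        )
        return now - last_activity > INACTIVITY_WINDOW

    # Build the reactivation segment from a list of subscriber records
    subscribers = [
        {"email": "a@example.com", "last_open": datetime(2009, 1, 5), "last_click": None},
        {"email": "b@example.com", "last_open": datetime(2009, 9, 1), "last_click": None},
    ]
    reactivation_segment = [s for s in subscribers if is_inactive(s)]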

Each message took a different approach to the reactivation effort:

– The first was a survey about email preferences
– The second was a request for subscribers to update their personal information
– The third was a contest to win a book
– The fourth repeated the request to update personal information

What is more appealing than FREE?

Going into the campaign, the team believed the contest offer would have the best response. After all, people like getting free stuff, right?

Maybe not: The contest offer had the weakest open and clickthrough rates of the four messages. Its open rate was 60% lower than that of the best-performing email – the survey about email preferences – and its clickthrough rate was 82% lower. (Those are relative differences: if the survey email’s open rate had been, say, 20%, the contest’s would have been around 8%.)

The good news is that the reactivation campaign was a success overall. They reactivated 9% of the inactive subscribers they targeted – and they won a MarketingSherpa Email Marketing Award for it.

They also learned important lessons about what motivates their subscribers. Their conclusion: “People seem to be most interested when we are interested in them.”

Final lesson: Assumptions are no match for results data. So get testing!

Sean Donahue is the editor of MarketingSherpa, a research firm publishing Case Studies, benchmark data, and how-to information read by hundreds of thousands of advertising, marketing, and PR professionals every week.

4 Comments
  1. Flamex Online Marketing says

    Hi, interesting, but I have one problem with what you are trying to say – or should I say, test.

    First of all, something that is truly FREE is different from a contest. Free means you actually get it, without doubt; a contest is chance-based, which equals doubt. Many people totally ignore contests and competitions because they know the chances of winning – and on top of that, contests are generally perceived as fixed.

    So you are not testing against something FREE; you are testing against a contest only. Testing can be a waste of time if you don’t properly know the best things to test against in the first place, don’t you think?

  2. Daniel Burstein says

    Thanks for the comment.

    I believe the point Sean was making was that the third email offered an incentive: readers would receive something at no cost (even if what they received was only a chance at the prize, not the prize itself).

    In testing, we seek to continually learn and then test further based on that new knowledge. In this case, if the incentive had won, it would have indicated further testing to determine the best incentive (perhaps something free).

    However, this list seems to prefer the sense that the sender is interested in, and listening to, its audience. That would suggest further testing of two-way communication, possibly including branching out into social media to really solicit and act on feedback.

    In a perfect world, marketers would be able to test every conceivable permutation of every possible treatment (e.g., several different incentives against each other, several different incentives against several ways of listening to customers, etc.). In reality, we all must deal with certain restrictions (budget, resources, traffic, time, etc.) and therefore occasionally must decide on one treatment that would represent a broader idea (in this case, using a contest to represent all incentives).

    Deciding on which of these trade-offs to make is one of the most challenging aspects of testing.

    That being said, you bring up two good points. First, just because one incentive underperformed (a free chance to enter a contest) does not mean that every incentive will underperform (such as a free, no-strings-attached gift). And second, when developing and evaluating tests, make sure you have a firm understanding of why you are testing each treatment and what knowledge you hope to gain from including it.

  3. Sean Donahue says

    Great point about the importance of knowing what you’re testing against. But the confusion here was caused by my awkward phrasing; it’s not indicative of a failing by the VNR.de team. They understood they were testing the value of a contest incentive – not a pure “free gift” incentive – against other motivators. I should have been more precise by saying something like, “Who doesn’t like the chance to win a free gift?”

    And I agree with Daniel’s thoughts: Incentives can, and do, work – and are definitely worth further testing. In fact, the guarantee of a free gift, however small, is often more compelling than the chance to win a big prize. For example, we featured a Case Study at our 2009 B2B Marketing Summit from a marketer who wanted to get B2B leads to respond to a quick email survey.

    They tested a contest incentive – the chance to win one of 50 gift cards worth $250 each – against a guaranteed $5 gift card for every survey participant. The $5 gift card won by a mile: That guaranteed incentive attracted nearly five times more survey responses and lowered the campaign’s cost per lead by 97.8%.
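
    (For the curious, the arithmetic behind that drop is easy to sketch. The response counts below are hypothetical – the actual campaign numbers aren’t cited here – and are chosen only to be consistent with the incentive costs and the roughly five-times lift:)

        # Hypothetical response counts, for illustration only; the real
        # campaign figures are not published above. Only the cost of the
        # incentives themselves is counted (no send or creative costs).
        contest_cost = 50 * 250                # 50 gift cards at $250 each
        contest_responses = 55                 # assumed

        guaranteed_responses = contest_responses * 5   # "nearly five times more"
        guaranteed_cost = 5 * guaranteed_responses     # one $5 card per respondent

        cpl_contest = contest_cost / contest_responses            # ~$227 per lead
        cpl_guaranteed = guaranteed_cost / guaranteed_responses   # $5 per lead

        reduction = 1 - cpl_guaranteed / cpl_contest
        print(f"${cpl_contest:.2f} -> ${cpl_guaranteed:.2f} per lead "
              f"({reduction:.1%} lower)")      # about 97.8% lower

    Note the asymmetry: the guaranteed incentive’s cost per lead is fixed at $5 by construction, while the contest’s $12,500 prize pool is spread over however few responses it happens to draw.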

    So as Daniel says, just because one incentive underperforms doesn’t mean your audience won’t respond to another incentive offer. Keep testing.

  4. Tim Watson says

    I read a lot of the MarketingExperiments material and it’s great stuff. Always good insight and stimulating points.

    I’ve an observation on this one, and some experience that could change their conclusion of:

    “People seem to be most interested when we are interested in them.”

    I’ve read the full case study and reviewed the emails used by VNR.de.

    The first and most significant point is that the book being offered was one on gardening. The VNR.de audience is not biased towards gardening, so it’s likely the offer appealed to only a fraction of the audience. No wonder the response was lower.

    From tests I’ve run with incentives, it’s not hard to get a 400% difference in response by using different incentives of the same monetary value. A change of incentive could well have made the contest outperform the other emails.

    Also of note: the copy (I read German too) and the size and placement of the call to action are much better in the survey email than in the request to update personal information.

    Testing is vital, but some of the challengers were put at a big disadvantage, and it’s little wonder they didn’t work.

    I would have taken a win-back approach of a survey combined with an incentive, and tested around that theme to find the best incentive, copy, etc.
