Lessons Learned

Our analysts reveal the top takeaways from our 2008 optimization research


To discover what really works in optimization, our team is constantly
running tests and conducting experiments.

After hundreds of these tests, with their share of wins, losses, and surprises, we asked our analysts to identify the most important lessons of 2008.

Then we looked for common themes that appeared across our research with B2B, B2C, lead-gen, ecommerce, and other areas, and distilled the list. The result?

Our December 3 clinic featured case studies and takeaways linked to three of the top challenges marketers grappled with this year.

Our Analysts’ Round-Up:

When our analysts reviewed case studies that worked, and ones that didn’t, their wide-ranging research experience helped them pinpoint some common challenges shared by marketers in every field.

The top three:

  • Optimizing for the entire sales funnel
  • Developing tests that produced significant gains
  • Improving their cycle of testing and optimization

Marketers can combat these challenges by developing an informed testing strategy, one that incorporates preparation, analysis, reflection, and revision.

Clogged Funnel: The Alice-in-Wonderland Effect

Your customers’ experience should not be as disorienting as Alice’s tumble down the rabbit hole.  Yet marketers often invest more time and energy in the beginning of the funnel than in the end, or in the steps in between.

If the primary difficulty in optimization is deciding where to begin, one suggestion is to take a fresh look at your funnel.  A conventional, top-to-bottom (or entrance-to-exit) review of the funnel you’ve built may only retrace familiar territory.  Beginning your review with your shopping cart or sign-up process instead can bring to light flaws in those pages.

Within the funnel, the most common flaw is offering too many exits before conversion.  A complex purchase or sign-up process, even while it generates leads, may discourage prospects from completing it, or, if they do finish, discourage them from returning because they anticipate further “hassle.”

Hurdles in the funnel process, such as pop-ups, registrations, or verifications, can interrupt your prospect’s momentum, making it more likely that they will abandon their goal.

The end of the funnel, especially for B2C or ecommerce sites, can easily get overburdened with cross-sell and up-sell options.  In addition, marketers rely on customer motivation to gloss over flaws in the shopping cart.  But a complex checkout process with hard-to-find, hard-to-manage carts and purchase pages discourages even motivated prospects from completing their purchase.

The Funnel from the Prospect’s Perspective

Attracting prospects depends on a good first impression.  So, it’s easy to see why the beginning of the funnel gets more attention from marketers.

However, retaining prospects depends on maintaining that positive first impression throughout the conversion process:

[Chart: The Conversion Process]

Case Study 1: Fixing the Funnel

This B2C partner maintains an online community for artists.  The goal of this test was to increase subscription sales.

  • Primary Research Question: Which subscription path will produce a higher conversion rate?
  • The Approach: The test ran three versions of the subscription path.  The control was a 4-step process with an email validation.  The treatment used a 3-step path offering an upgrade on the sign-up page.
  • Additional Observations: The initial subscription path included an email verification requiring users to open their email, click on a link in that email, and launch a new browser window to complete sign up.

Control

The original order path for subscription:

For this test, we were unable to change the number of fields.  In addition, the partner’s style guide made it difficult to change any aspect of the logo.  So, we focused on optimizing how the product offering was displayed.

Other optimization challenges included:

  • The headline didn’t mention anything about free membership.
  • People who had already filled out all the user information were faced with a fairly complex choice between free and paid membership.  If they chose a paid membership, they were then asked to make a second buying decision: which type of membership.
  • Although this is a subscription sign-up, visitors pay through a shopping cart format rather than a simple payment page.

Treatment

The optimized treatment:

Optimizing the product offering focused on simplifying the funnel.  Changes included:

  • Changing the headline from “Join now” to “Get Your Free Access.”  Previous tests have shown that underscoring “free” in the headline often has positive results.
  • Simplifying the product offering into one easy-to-understand dropdown.  The choices became either a subscription offer or “no thanks,” so visitors must actively choose “no thanks” if they don’t want the subscription.
  • Improving the eye-path on the payment page by changing the tiles to tabs.  Instead of presenting multiple offers on one page, the page stays focused on membership.  Casual browsers or experienced visitors interested in viewing other aspects of the site still have the option to click on the tabs.

Eliminating the interruption in the funnel process was the most significant change:

  • Removing the email validation step made the subscription process one seamless decision.  In this way, we capitalized on the users’ motivation and kept them moving swiftly through the funnel.

Results

The results of the test suggest the effectiveness of a streamlined funnel.

Case Study #1
Order Path            Conv. Rate
Control               0.16%
Treatment             0.31%
Relative Difference:  89.23%

What you need to understand: The treatment outperformed the control by 89.23%.
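If you want to reproduce this arithmetic, relative difference is simply the treatment’s lift over the control: (treatment rate - control rate) / control rate.  Note that the rounded rates shown above work out to roughly 94%, so the published 89.23% presumably reflects the unrounded figures.  The sketch below, in Python, also includes a two-proportion z-test using hypothetical visitor counts (the actual sample sizes were not published) to illustrate how you might check whether a lift like this is statistically significant.

    from math import sqrt

    def relative_difference(control_rate, treatment_rate):
        """Lift of the treatment over the control, expressed as a fraction."""
        return (treatment_rate - control_rate) / control_rate

    def two_proportion_z(control_conversions, control_visitors,
                         treatment_conversions, treatment_visitors):
        """Two-proportion z-statistic for comparing two conversion rates."""
        p1 = control_conversions / control_visitors
        p2 = treatment_conversions / treatment_visitors
        pooled = (control_conversions + treatment_conversions) / (control_visitors + treatment_visitors)
        se = sqrt(pooled * (1 - pooled) * (1 / control_visitors + 1 / treatment_visitors))
        return (p2 - p1) / se

    # Rounded rates from the table above: roughly a 94% lift.
    print(relative_difference(0.0016, 0.0031))

    # Hypothetical sample sizes, for illustration only.
    # |z| greater than 1.96 indicates significance at the 95% level.
    print(two_proportion_z(160, 100000, 310, 100000))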

Much as in architecture, where a simple structure often makes the strongest foundation, the simplest funnel frequently provides the strongest support for conversion.

Conclusions

An informal formula for optimization through simplicity is to “Take stuff away, improve the stuff you’ve got, add new stuff.”  In other words, strip your pages down to the bare essentials, and then add only those elements that improve the overall performance.

Our analysts recognize that the gains from a successful test can be furthered by additional testing.  Some potential follow-up tests for continuing funnel optimization include:

  • Testing new headlines that communicate the value proposition
  • Testing a radically short path that starts on the homepage call-to-action
  • Testing an even more simplified offering.  The five choices currently available may complicate the decision-making process.

Transferable Principles

This case study offers several principles that apply to both B2B and B2C conversion sequences.

One obstacle is a lack of awareness of how newcomers experience your processes: you’ve seen your funnel so often that every part seems familiar and necessary.  Review your funnel as a customer would, and consider the most appropriate time for cross-sell or up-sell from the prospect’s point of view.

Another obstacle to optimization is an attachment to current practices.  People become attached to established process, either because they take pride in their creation or because they’ve become mired in tradition.  Businesses eager to generate leads often see extra steps—like non-essential fields or forms—as vital logistical processes, not as inconvenient or annoying additions that can harm conversion.

When working to optimize your funnel, look first at the things you think you could never change and evaluate how truly necessary they are.

Testing:  The Marketer’s “Frienemy”

In their 2008 reflections, our analysts repeatedly observed the need to take a more informed, more consistent approach to testing.

It is not overstating the case to say that marketers have a love-hate relationship with testing.

Testing pinpoints problems, provides hard data, and supports your plans in discussions with sales and management.  Yet testing can also be complex and time-consuming, and positive results are never guaranteed.

As the old song says, “You can’t live with it, but you can’t live without it.”

So, which tests are marketers conducting most?  During the clinic, we polled our audience to find out.  Out of 396 responses, here’s the breakdown:

  1. A/B split tests—34%
  2. Multivariate tests involving multiple A/B test cells at the same time—16%
  3. Taguchi-style multivariate tests—10%
  4. We read our analytics but we don’t test—33%
  5. “C/G” Split tests (Coin toss/Go with Gut)—8%

While 34% of our respondents often use A/B Split tests, 41% do not test at all, relying on analytics, chance, or instinct to make their decisions.

A similar survey from MarketingSherpa indicates that this trend is not unique.  When asked, “Which tests are you conducting the most?” their respondents heavily favored A/B split tests.

[Chart: MarketingSherpa A/B Split Tests]

A/B testing remains the most popular option because many marketers find it the easiest format to use.

The Pitfalls of Testing

According to our analysts, the most common errors in testing and analysis include:

  • Bias: Approaching a test with a clear predisposition toward a particular outcome
  • Impatience: Stopping a test too soon based on early results
  • Extrapolation: Drawing overreaching conclusions from limited test data—for example, interpreting high time-on-page as engagement when visitors are actually confused
  • Follow-through: Failing to prepare follow-up tests

In addition, marketers face validity threats from the information they cull from tests.

  • Data Problems:
    • Collecting the wrong data – point-of-capture, level of detail (granularity)
    • Collecting the data wrong – instrumentation problems, sampling problems

In truth, sometimes the pitfalls of testing start before the test has even begun.

Testing Cycles: Aim for Normal

Ideally, a test should be conducted in conditions that are as close to daily market conditions as possible.

As the chart above shows, this ecommerce site ran a four-week test, but some very substantial variables leave its results in doubt.

The test validated in the first two weeks, but the results in the third and fourth weeks were compromised by the merchant’s Election Day special.  When, by the end of the four weeks, the results were not conclusive, there were two choices: continue the test and see whether it would validate again once more regular conditions resumed, or end the test and draw conclusions from compromised data.  The first option wastes time; the second wastes data.

Testing is most accurate when nothing unusual is happening (holidays, seasonal spikes, changes to channels, special sales, etc.).  Plan when you will execute your tests just as carefully as you plan the tests themselves.

A Testing Review Checklist

To help you plan more successful tests, our analysts composed the following review checklist:

  1. What is the main objective of this test?
  2. Am I certain that this test took place in a controlled environment?
  3. Were there any external factors, not accounted for in the test design, that could have contributed to the result?
    1. Did I launch any ad campaigns during the execution of this test?
    2. Did I make any changes to PPC spend during this test?
  4. Was traffic evenly and randomly split between all treatments?
  5. Is my tracking system accurate?
  6. What were my optimization objectives?
  7. Is there more than one possible interpretation of my test results?

Regardless of whether you follow every step, this checklist can help bring structure to the testing process.  While most of the steps focus on logistical considerations, steps 1 and 6 allow you to set and review your goals.  These steps will help you keep your testing aligned with your ultimate goal of optimization.
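Step 4, the even and random traffic split, is one of the easiest items on this list to get wrong.  Most testing tools handle assignment for you, but if you ever have to split traffic yourself, one common approach, sketched below in Python with illustrative names, is to hash a stable visitor identifier so that the split is random across visitors yet consistent for any single visitor.

    import hashlib

    def assign_variation(visitor_id, variations=("control", "treatment"), test_name="funnel-test"):
        """Deterministically bucket a visitor into one test variation.

        Hashing a stable identifier (such as a cookie value) keeps the split
        roughly even and random across visitors, while guaranteeing that the
        same visitor always sees the same variation.
        """
        digest = hashlib.sha256((test_name + ":" + visitor_id).encode("utf-8")).hexdigest()
        return variations[int(digest, 16) % len(variations)]

    # Example usage with a made-up cookie value.
    print(assign_variation("a1b2c3d4"))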

Even modest testing, done consistently, will give you an advantage over your competition.

Room for Growth

Our analysts concur: Testing is the area where marketers stand to make the greatest gains in understanding and skill.

Marketers need to be clear about the objectives of a test before they begin.

After clarifying what knowledge or improvement you hope to gain from a test, you need a structured and proven approach for deciding what to test, and how to test it.  The MarketingExperiments Conversion Sequence provides a common language by which page elements and performance can be analyzed.  It also allows for aspects of a page to be identified and isolated for very granular optimization.
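For reference, MarketingExperiments typically summarizes this Conversion Sequence as a heuristic, where C is the probability of conversion, m is the prospect’s motivation, v is the clarity of the value proposition, i is incentive, f is friction, and a is anxiety:

    C = 4m + 3v + 2(i - f) - 2a

The coefficients are relative weightings rather than literal multipliers; the point of the heuristic is to give your team a shared vocabulary for deciding which element of a page to isolate and test.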

Even after pages have been analyzed according to a scientific structure, many marketers have difficulty conducting a valid test.  For example, valid A/B or multivariate tests require discipline and precision: each variation must differ enough from the control to matter, while everything else stays consistent.

Finally, marketers must develop their post-test analysis and follow-up.  This is where many of the testing pitfalls mentioned earlier (bias, impatience, etc.) come into play, to the detriment of accuracy and future optimization plans.

Multi-level Hypotheses

No single test provides a complete answer to optimization.  Whether tests validate or remain inconclusive, the results should inspire reflection and further testing.

This four-step process illustrates an ideal testing cycle, one that includes follow-up tests to the primary test.

[Diagram: Four-Step Process]

Follow-up tests often answer the questions raised by the initial test.

After the first test is complete, you need to keep asking “whys”—because you’ll often learn more from tests that did not perform as expected.

Our next case study offers an example of how a successful test can be bolstered by follow-up tests.

Case Study 2: Increasing Requests for Quotes

This B2B partner works with high-tech industrial equipment.  The goal of this test was to increase the conversion rate for the request for quotes (RFQs) form.

  • Primary Research Question: Which conversion path will yield a higher RFQ Conversion Rate?
  • The Approach: The control is a detailed form that must be completed in order to generate an RFQ (lead).  The treatment uses a “Configurator” that captures leads prior to completion of a streamlined RFQ process.
  • Other Considerations:  Which conversion path will yield a higher Lead Generation Rate?

Control

Note that the call-to-action is essentially hidden in the top-nav bar.

This three-step path moves from a page listing nine reasons to choose the supplier, with that hidden call-to-action, to a detailed form.  The form had 15 fields, none of which used a drop-down menu; users had to enter all of their information manually.

In this version, 80% of people who began the process did not complete it.

Treatment

In the treatment, the form has more steps, but the steps have been greatly simplified.

The treatment made RFQs the main focus of the landing page and included an email capture at the beginning to facilitate the recovery process.

In addition, our research determined that five of the questions on the original form were needed for only 5% of the RFQs.  So, those questions were grouped together and paired with an option to speak to a representative.

The remaining 10 questions incorporated drop-down menus.

Results

Case Study #2
RFQ Conversion        Conv. Rate
Control               4.16%
Treatment             8.71%
Relative Difference:  109.58%

What you need to understand: The treatment outperformed the control in generating RFQs by 109.58%.
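As a quick check of the arithmetic, the relative difference is again the lift over the control: (8.71 - 4.16) / 4.16 ≈ 1.09, or roughly 109%, which lines up with the reported 109.58% once rounding of the displayed rates is accounted for.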

After optimization, 90% of the prospects who started the process completed it.

Even though this was a highly successful test, it is important to consider future tests that will build on these findings.

Conclusions

The main goal was accomplished: The configurator acquired more RFQs per unique visitor, and almost everyone who started the RFQ process finished it.

This would indicate that reducing friction was the main reason for improvement:

  • Limiting the number of questions and making all questions easily answered with drop-down boxes reduced friction dramatically.
  • Submitting an RFQ became the primary focus of the site.
  • Because starting the Configurator required an e-mail address, the number of non-qualified “starts” was minimized. Curious clicks on the RFQ form were virtually eliminated.

What Next?

The following chart demonstrates an actual testing cycle, moving from initial tests to follow-up tests.

[Chart: Testing Cycle]

While more people completed the RFQ form, it is important to determine whether the leads that come from this new process are of higher quality than the leads from the original process.

Also, consider that not every valid prospect is sales-ready at the same time.  Some people who are in the initial stages of researching before they buy may be turned away by the commitment of requesting a quote.  Consider testing landing pages or calls-to-action that provide information to early stage buyers without requiring them to go through the RFQ process.

Transferable Principles:

Whether your market is B2B, B2C, or both, the following principles can guide the testing phase of your optimization cycle:

  • Test with a specific goal or hypothesis in mind – the improvement you want and how you hope to achieve it. Identifying what to change and how to change it begins with knowing what you want to achieve.
  • Predict the effect on as many secondary metrics as you can imagine.  For example, consider how adding images may affect page load time, a prospect’s perception of value, and eye-path.
  • Check and recheck to make sure you are using the correct metric for your goals.  After performing a test, consider whether there is another way to measure results.  For instance, decide whether measuring user engagement is best done by looking at time-on-page or clickthrough, by tracking return visitors, or by using heat maps.
  • Take the time to execute follow-up tests. They will help you evaluate if your optimization translates into long-term gains.
  • Be discriminating with your metrics and analytics. Focus on relevance and insights rather than data overload.  Prioritize the data relevant to your immediate optimization goal.

Summary:

Focus on the before-and-after of your testing cycle.  The following points will help you identify testing pitfalls to avoid and protect the integrity of the testing process.

  • Recognize whether the barriers to your optimization process are internal or external.
  • Cast yourself as a customer advocate and walk through your own conversion funnel regularly.
  • Avoid interruptions in the conversion path—elements that stop the momentum.
  • Beware of factors like bias, impatience, and extrapolation in your interpretations of data.
  • Strive for environmental stability during your tests.
  • Use the testing checklist to prepare for and review your testing process.

As wise men (and seasoned marketers) say, “Experience is the only teacher that gives the test before it teaches the lesson.”


Credits:

Managing Editor — Hunter Boyle

Writer(s) — Anna Jacobson

Contributor(s) — Flint McGlaughlin
Jimmy Ellis
Bob Kemper
Andy Mott
Adam Lapp
Paul Clowe

Production — Austin McCraw
Cliff Rainer
Amanda Mehlhoff

Testing Protocol(s) — TP1115
TP1135
