NOTES:
These are the notes for our interactive May 10 clinic on Landing Pages. The recording of the event will be posted here in a few days.
If you are participating in the live teleclinic, we will ask you to refresh your page several times through the call as we add data and other notes. Data will appear below this sentence.
| Test 1: Landing Page A/B Split Test | |
| --- | --- |
| Overall Improvement in Conversion Rate | 40.7% |
Test 2 – Old Page (with photo)
Test 2 – New Page (without photo)
| Test 2: Landing Page A/B Split Test | |
| --- | --- |
| Overall Improvement in Conversion Rate | 39.0% |
Notice:
We just released part 1 of our Optimizing Landing Pages brief. Due to the overwhelming response to our last clinic, we have decided to evaluate more submitted landing pages in today’s call.
Send your landing pages to jimmy.e@marketingexperiments.com. While we will not have time to evaluate every page, we will attempt to look at as many as we can during the call this afternoon.
Learn from the MEC Research Team How to Test and Optimize Your Website
Become Professionally Certified in Online Testing! We have just 12 spots left for the upcoming certification course starting on June 15. If you have not yet enrolled and are planning to, you may do so here.
We have extended the early registration deadline to May 30.
URLs:
- Optimizing Landing Pages, Part 1
- Test 1 – Old Page
- Test 1 – New Page
- Test 2 – Old Page (with photo)
- Test 2 – New Page (without photo)
- Landing Pages Brief
- A/B Split Testing
- Multivariable Testing
- Long Copy vs. Short Copy
- Shopping Cart Recovery Tested
- Abandoned Order Recovery Tested
- Live Review – Page 1
- Live Review – Page 2
- Live Review – Page 3
- Live Review – Page 4
Free Information on Test Data Validity
When testing, the validity of your data is a function of how large the difference is between your results and how large your sample is. In simple terms, validity can be described as a function of the size of the data sample and the variance between two or more sets of results.
There are other factors that affect validity, but this framing helps us understand what determines whether a data set is valid or not. There are also obvious technical issues that affect test results when testing online, and reporting can often be crude and inaccurate.
Still, we can learn a lot from testing, and it should be a strong part of any marketer's daily practice.
Testing is not the goal of marketing; rather, it is a tactic that can be used to save money and improve results.
In a practical world of revenue targets, it can be difficult to slow down long enough to test and then carefully analyze your results.
Understanding the validity of your data can help you to quickly make decisions and truly understand what a test is telling you.
Simply put, if you have a larger variance between two results, then you will need a smaller sample size to achieve a strong degree of confidence.
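To make that relationship concrete, here is a minimal sketch in Python (using the standard two-proportion sample-size approximation at 95% confidence and 80% power; the formula is our illustration, not one prescribed by MEC) showing that a larger gap between two conversion rates needs far less traffic to confirm:

```python
import math

def visitors_needed(p_a, p_b):
    """Approximate unique visitors needed per page to reliably detect the
    difference between conversion rates p_a and p_b (standard two-proportion
    sample-size formula at 95% confidence and 80% power)."""
    z_alpha = 1.96  # two-sided z-score for 95% confidence
    z_beta = 0.84   # z-score for 80% power
    variance = p_a * (1 - p_a) + p_b * (1 - p_b)
    diff = abs(p_b - p_a)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / diff ** 2)

# A small lift (2.0% -> 2.5%) needs far more traffic than a large one (2.0% -> 4.0%).
print(visitors_needed(0.020, 0.025))  # ~13,800 visitors per page
print(visitors_needed(0.020, 0.040))  # ~1,140 visitors per page
```

Roughly speaking, halving the size of the lift quadruples the traffic you need, which is why small improvements take so long to validate.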
Imagine these are the results of a fictitious landing page optimization test:
| | Landing Page A | Landing Page B |
| --- | --- | --- |
| Unique Visits | 4,203 | 3,454 |
| Leads | 32 | 534 |
| Conversion | 0.76% | 15.46% |
In this particular example, the difference in the number of leads is dramatic. Even on intuition alone, we can see that Landing Page B outperformed Landing Page A.
However, the number of leads for Landing Page A is still relatively small, so there is considerable room for error caused by sampling.
There are, of course, very complex algorithms for calculating the statistical significance of a given data sample.
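You can, however, run a rough confidence check yourself. Here is a minimal sketch (a standard two-proportion z-test in Python, offered as an illustration rather than the calculation used in our brief) applied to the fictitious numbers above:

```python
import math

def ab_confidence(visits_a, leads_a, visits_b, leads_b):
    """Two-proportion z-test: returns the z-score and the two-sided
    probability that a gap this large would appear by chance alone."""
    p_a = leads_a / visits_a
    p_b = leads_b / visits_b
    pooled = (leads_a + leads_b) / (visits_a + visits_b)
    std_err = math.sqrt(pooled * (1 - pooled) * (1 / visits_a + 1 / visits_b))
    z = (p_b - p_a) / std_err
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# The fictitious test above: Page A converted 32 of 4,203; Page B converted 534 of 3,454.
z, p = ab_confidence(4203, 32, 3454, 534)
print(f"z = {z:.1f}, chance of a fluke = {p:.1g}")
```

With a z-score above 24, the chance that this gap is a sampling fluke is effectively zero; a much smaller lift on the same traffic would tell a far murkier story.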
Where would you like us to send you information on how to calculate validity…
I think your site is an amazing resource. But I want to nitpick one item about statistical validity.
What about the much simpler approach of just doing a random divide of the data in half to see if they match? If they don’t, your sample’s too small (or your division is not random).
I read this and your follow-up article.
I remember from my statistics courses (many decades ago) a simple rule to test sample size adequacy without the math. It was based on randomly dividing the data in half and then comparing the two results.
Do you think this is valid?
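For anyone who wants to try the commenter's rule of thumb, here is a minimal sketch of what that split-half check could look like in Python (our illustration of the idea as described, not an endorsed validity test):

```python
import random

def split_half_check(visits, leads, trials=5):
    """Randomly split one page's traffic in half and compare the conversion
    rates of the two halves; wildly different halves suggest the sample is
    too small to trust (the rule of thumb described above)."""
    outcomes = [1] * leads + [0] * (visits - leads)  # 1 = visitor converted
    for _ in range(trials):
        random.shuffle(outcomes)
        half = visits // 2
        first, second = outcomes[:half], outcomes[half:]
        print(f"half 1: {sum(first) / len(first):.2%}   "
              f"half 2: {sum(second) / len(second):.2%}")

# Page A above: 32 leads from 4,203 unique visits.
split_half_check(4203, 32)
```

Run it a few times on Page A's 32 leads and the two halves will often disagree noticeably, which is exactly the warning sign the rule is meant to surface.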