I’m writing this article on a laptop computer at my desk. And in your marketing department or agency, you likely do most of your work on a computer as well.
This can cause a serious disconnect with your customers as you design A/B tests.
Because more than half of global internet traffic (52.4%, according to Statista) comes from mobile devices.
So, I interviewed Rebecca Strally, Director of Optimization and Design, and Todd Barrow, Director of Application Development, for tips on what to consider for mobile devices when you’re planning and rolling out your tests. Rebecca and Todd are my colleagues here at MECLABS Institute (parent research organization of MarketingExperiments).
Consideration #1: Amount of mobile traffic and conversions
Just because half of global traffic is from mobile devices doesn’t mean half of your site’s traffic is from mobile devices. It could be considerably less. Or more.
Not to mention, traffic is far from the only consideration. “You might get only 30% of traffic from mobile but 60% of conversions, for example. Don’t just look at traffic. Understand the true impact of mobile on your KPIs,” Rebecca said.
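To make that math concrete, here’s a quick illustrative sketch in TypeScript. The numbers are made up (echoing Rebecca’s 30%/60% example), not real data:

```ts
// Illustrative numbers only: mobile can be a minority of traffic
// but a majority of conversions, which changes how much a mobile miss hurts your KPIs.
const sessions = { desktop: 70_000, mobile: 30_000 };
const conversions = { desktop: 800, mobile: 1_200 };

const share = (part: number, whole: number) => Math.round((part / whole) * 100);

const totalSessions = sessions.desktop + sessions.mobile;
const totalConversions = conversions.desktop + conversions.mobile;

console.log(`Mobile traffic share: ${share(sessions.mobile, totalSessions)}%`);          // 30%
console.log(`Mobile conversion share: ${share(conversions.mobile, totalConversions)}%`); // 60%
```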
Consideration #2: Mobile first when designing responsive
Even if mobile is a minority of your traffic and/or conversions, Rebecca recommends you think mobile first. For two reasons.
First, many companies measure KPIs (key performance indicators) in the aggregate, so underperformance on mobile could torpedo your whole test if you’re not careful. Not because the hypothesis didn’t work, but because you didn’t translate it well for mobile.
Second, it’s easier to go from simpler to more complex with your treatments. And mobile’s smaller form factor necessitates simplicity.
“Desktop is wide and shallow. Mobile is tall and thin. For some treatments, that can really affect how value is communicated,” she said.
Rebecca gave an example of a test that was planned desktop-first for a travel website. There were three boxes with value claims, and a wizard below them. On desktop, visitors could quickly see and use the wizard. The boxes offered supporting value.
But on mobile, the responsive design stacked the boxes, pushing the wizard far down the page. “We had to go back to the drawing board. We didn’t have to change the hypothesis, but we had to change how it was executed on mobile,” Rebecca said.
Consideration #3: Unique impacts of mobile on what you’re testing
A smartphone isn’t just a smaller computer. It’s an entirely different device that offers different functionality. So, it’s important to consider how that functionality might affect conversions and to keep mobile-specific functionality in mind when designing tests that will be experienced by customers on both platforms — desktop and mobile.
Some examples include:
- With the prevalence of digital wallets like Apple Pay and Google Pay, forms and credit card info are more likely to prefill. This could reduce friction in a mobile experience and make the checkout process quicker. So while some experiences might require more value on desktop to keep the customer’s momentum moving through the checkout process, including that value on mobile could actually slow down an otherwise low-friction experience.
- To speed load time and save data, customers are more likely to use ad blockers, which can block popups and hosted forms. If those popups and forms contain critical information, visitors may assume your site is having a problem rather than realize they are blocking this information. This may require adding text that clearly explains what the form provides, or offering an alternative way to get the information, a step that may not be necessary on desktop.
- Customers are touching and swiping, not typing and clicking. So information and navigation requests need to be kept simpler and lighter than on desktop.
- Visitors can click to call. You may want to test making a phone call a more prominent call to action on mobile, while on desktop that same CTA may induce too much friction and anxiety (see the sketch after this list).
- Location services are more commonly used on mobile, providing the opportunity to tap into customer motivation by customizing offers and information in real time and by featuring brick-and-mortar calls to action more prominently. Desktop, by contrast, sits in a static location, and the user may want more information before acting (which may require leaving their current location).
- Users are accustomed to app-based experiences, so the functionality of the landing page may be more important on mobile than it is on desktop.
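As promised above, here’s a minimal sketch of the click-to-call idea. The selector, breakpoint and phone number are placeholders for illustration, not a prescription:

```ts
// A minimal sketch: if mobile visitors prefer to call, test promoting a tel: link
// as the primary CTA on small screens. '#primary-cta' and the number are placeholders.
const cta = document.querySelector<HTMLAnchorElement>('#primary-cta');
if (cta && window.matchMedia('(max-width: 640px)').matches) {
  cta.href = 'tel:+18005550123';      // placeholder number
  cta.textContent = 'Tap to call us'; // clearer than a generic "Contact" on mobile
}
```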
Consideration #4: The device may not be the only thing that’s different
“Is mobile a segment or device?” Rebecca pondered in my office.
She expanded on that thought, “Do we treat mobile like it is the same audience with the same motivations, expected actions, etc., but just on a different physical device? Or should we be treating those on mobile like a completely different segment/audience of traffic because their motivations, expected actions, etc., are different?”
She gave an example of a company her team was performing research services for. On this company’s website, younger people were visiting on mobile while older people were visiting on desktop. “It wasn’t just about a phone, it was a different collection of human beings,” she said.
Consideration #5: QA to avoid validity threats
When you’re engaged in conversion optimization testing, don’t overlook the need for quality assurance (QA) testing. If a treatment doesn’t render correctly on a mobile device, the rendering problem could be what’s causing the change in results, not the changes you made to the treatment. If you are unaware of this, it will mislead you about the effectiveness of your changes.
This is a validity threat known as instrumentation effect.
Here are some of the devices our developers use for QAing.
(side note: That isn’t a stock photo. It’s an actual picture by Senior Designer James White. When I said it looked too much like a stock image, Associate Director of Design Lauren Leonard suggested I let the readers know “we let the designers get involved, and they got super excited about it.”)
“Know your audience. If your audience are heavy users of Safari on iPhone, then check on the actual device. Don’t rely on an emulator. It’s rare, but depending on what you’re doing, there are things that won’t show up as a problem in an emulator. Understand what your traffic uses and QA your mobile landing pages on the actual physical devices for the top 80%,” Todd advised.
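Here’s a rough sketch of that “top 80%” idea, with made-up traffic shares: rank device/browser combinations by share, then QA on real hardware until you’ve covered at least 80% of traffic.

```ts
// A rough sketch of the "top 80%" rule. Shares below are invented for illustration;
// pull the real ones from your analytics platform.
const deviceShare: Array<[name: string, share: number]> = [
  ['iPhone / Safari', 0.38],
  ['Samsung Galaxy / Chrome', 0.24],
  ['iPad / Safari', 0.16],
  ['Pixel / Chrome', 0.12],
  ['Everything else', 0.10],
];

let covered = 0;
const qaDevices = deviceShare
  .sort((a, b) => b[1] - a[1]) // largest share first
  .filter(([, share]) => {
    const stillBelowTarget = covered < 0.8; // keep adding devices until 80% is covered
    covered += share;
    return stillBelowTarget;
  });

console.log('QA on physical devices:', qaDevices.map(([name]) => name));
```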
Consideration #6: The customer’s mindset
Customers may go to the same exact landing page with a very different intent when they’re coming from mobile. For example, Rebecca recounted an experiment with an auto repair chain. For store location pages, desktop visitors tended to look for coupons or more info on services. But mobile visitors just wanted to make a quick call.
“Where is the customer in the thought sequence? Mobile can do better with instant gratification campaigns related to brick-and-mortar products and services,” she said.
Consideration #7: Screen sizes and devices are not the same things
Most analytics platforms give you an opportunity to monitor your metrics based on device types, like desktop, mobile and tablet. They likely also give you the opportunity to get metrics on screen resolutions (like 1366×768 or 1920×1080).
Just keep in mind, people aren’t always viewing your website at the size of their screen. You only know the size of the monitor, not the size of the browser window.
“The user could be recorded as a full-size desktop resolution, but only be viewing in a shrunken window, which may be shrunk down enough to see the tablet experience or even phone experience,” Todd said. “Bottom line is you can’t assume the screen resolutions reported in the analytics platform is actually what they were viewing the page at.”
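If you want to see this in your own data, here’s a minimal sketch that reports the actual viewport alongside the screen resolution. The collection endpoint is hypothetical:

```ts
// A minimal sketch: log the real viewport next to the reported screen size,
// so you can see how often a "1920x1080 desktop" is really a half-width window.
const screenSize = `${window.screen.width}x${window.screen.height}`;
const viewportSize = `${window.innerWidth}x${window.innerHeight}`;

// '/analytics/viewport' is a hypothetical endpoint; swap in your own collector.
navigator.sendBeacon('/analytics/viewport', JSON.stringify({ screenSize, viewportSize }));
```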
Consideration #8: Make sure your tracking is set up correctly
Mobile can present a few unique challenges for tracking your results through your analytics and testing platforms. So make sure your tracking is set up correctly before you launch the test.
For example, if you’re using a tag manager and tagging elements based on CSS properties, you could run into issues when the page shifts at breakpoints that change the page structure.
“If you’re tagging a button based on its page location at the bottom right, but then it gets relocated on mobile, make sure you’re accounting for that,” Todd advised.
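A minimal sketch of that advice: target the element by a stable attribute instead of a layout-dependent selector, so the tag survives responsive shifts. The data-track-id convention and the endpoint are assumptions for illustration:

```ts
// Fragile: tied to the desktop layout, breaks when the button moves on mobile.
// const cta = document.querySelector('.footer .col-right .btn');

// Stable across breakpoints: an attribute that travels with the element.
const cta = document.querySelector<HTMLButtonElement>('[data-track-id="signup-cta"]');
cta?.addEventListener('click', () => {
  // Hypothetical collector endpoint; substitute your tag manager's event call.
  navigator.sendBeacon('/analytics/event', JSON.stringify({ event: 'cta_click', id: 'signup-cta' }));
});
```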
Also, understand how the data is being communicated. “Because Google Tag Manager and Google Optimize are asynchronous, you can get mismatched data if you don’t follow the best practices,” Todd said.
Todd provided a hard-coded page view as an example. “Something to watch for when doing redirect testing … a tracking pixel could fire before the page loads and does the split. If you see in your data that the control has twice as many hits as the treatment, there is a high probability you’ve implemented something in a way that didn’t account for the way asynchronous tags work. This is really common,” Todd said.
“If you know that’s going to happen, you can segment the data to clean it,” he said.
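Here’s a minimal sketch of one way to avoid the double-counting Todd describes: make the redirect decision before any pageview fires. The assignVariant() helper and the event name are assumptions for illustration, not a specific platform’s API:

```ts
// Hypothetical sticky 50/50 split so returning visitors stay in the same arm.
function assignVariant(): 'control' | 'treatment' {
  const stored = localStorage.getItem('exp_variant');
  if (stored === 'control' || stored === 'treatment') return stored;
  const v = Math.random() < 0.5 ? 'control' : 'treatment';
  localStorage.setItem('exp_variant', v);
  return v;
}

const variant = assignVariant();

if (variant === 'treatment') {
  // Redirect before any pageview fires; the treatment page logs its own hit.
  window.location.replace('/landing-page-b');
} else {
  // Only now fire the (GTM-style) pageview, tagged with the variant so the
  // data can be segmented and cleaned later, as Todd suggests.
  ((window as any).dataLayer ??= []).push({ event: 'experiment_pageview', variant });
}
```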
You might also like …
Free Mobile Conversion Micro Class from MECLABS Institute
Mobile Marketing: What a 34% increase in conversion rate can teach you about optimizing for video
Mobile Marketing: Optimizing the evolving landscape of mobile email marketing