By: Ian McLean On: December 19, 2017 In: Artificial Intelligence

Online businesses should focus on A/B testing. After all, improving the conversion rate of your website traffic is going to give you a better return on your marketing spend, so why not focus efforts there?

Conversion Rate Optimization (CRO) is the practice of getting visitors to take the action you want them to take on your website – whether that’s signing up for an email newsletter, completing an online quote, or buying a product. Sounds straightforward, right? But the key to seeing significant results is implementing a structured A/B testing framework within your organization, putting the right team in place to support it, and, of course, using the right technology to track and analyze your results. Without such a framework, ad hoc testing is unlikely to deliver significant results to your bottom line.

3 parts to an ideal framework for A/B testing

People:

First, let’s talk about people. Without commitment from key stakeholders within your organization, CRO may not work. Support from senior management is essential to getting the right team in place and ensuring there is a focus on testing. Committing a team member whose primary focus is CRO is key – they can lead the charge, ensure momentum and velocity are maintained, and bring the different groups within your organization together to achieve the CRO plan.

The key to A/B testing is to align your team to the process. Ideally the CRO team should consist of:

  1. Marketing and Product Managers: They are essential in consolidating market needs, customer expectations and business goals into one product roadmap.
  2. Business Analysts and UX Designers: They help break the roadmap into smaller, attainable tasks that can be implemented and optimized.
  3. Developers and CRO Analysts: These are the experts in implementation and experimentation. They execute, measure and re-test depending on the previous result.

[Image: how product and people can work together to develop compelling tests]

Technology:

Next, using the right technology is important to ensure your tests are being tracked accurately. Data is the lynchpin of a good A/B testing program because it ensures that decisions are made on evidence rather than on opinions. Further, if data is easily accessible, decisions are made quickly, allowing you to understand what’s working and what’s not.

Advanced data modelling also helps your business understand revenue growth from online sources. Most testing platforms, like Google Experiments, Optimizely, and VWO, include built-in reporting tools. These reports are extremely useful for generating insights on experiments at a glance, allowing faster decision making.

In addition, tools like mouse tracking, heat maps or eye tracking help us understand the meaning behind the data. For instance, if you see an unexpected result, these tools can allow you to dig deeper and understand the consumer behaviour that is driving the numbers. They can also help identify areas for future tests by illustrating where users are getting stuck on an aggregated level.


When it comes to managing the overall success of the CRO program – tracking win rates, managing workflows, capturing new ideas – you need something more sophisticated than a simple spreadsheet. A standalone tool lets the team keep on top of progress, deliverables, results and successes in one place. When you are running more than 100 tests a year, with ten or more team members working on CRO, a giant Excel doc quickly breaks down.

Process, Process, Process:

Finally, the right process ensures that the team is constantly improving its approach to generating new tests, as well as refining existing ones. To give CRO a fair chance, make it a focus within the organization by carving out dedicated time – be it 20% or 70% – to allow the program to succeed. Here are 6 steps your organization should consider as part of your A/B testing process or framework.

Step 1: Pick the pain point

Choose the page or process that you want to optimize; this becomes your control version. Ideally, you should start with the conversion page, or ‘money-making page’. This page often has the greatest impact on your conversion rate, given you’ve already spent marketing dollars to get your customer to this stage in the funnel.

A great way to choose which process to prioritize is by reviewing your analytics and focusing on the KPIs you want to improve:

  • Look at areas with the largest drop-off within your funnel (see the sketch after this list). This could indicate users are either not moving forward due to a usability issue or a lack of motivation.
  • Target low converting, but high traffic areas of your site with the opportunity to make a big impact with little effort.
  • Identify landing pages that get the most visits and funnel starts. Use your analytics tool to understand the problem areas and identify pages with the highest bounce rates. These are often good places to start.
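
To illustrate the drop-off review from the first bullet, here is a minimal sketch of computing step-to-step drop-off from an analytics export. The funnel steps and visitor counts are hypothetical; any tool that reports unique users per funnel step will do.

# Hypothetical funnel export: unique visitors reaching each step, in order.
funnel = [
    ("Landing page", 50_000),
    ("Quote form started", 18_000),
    ("Quote form completed", 7_500),
    ("Purchase", 1_200),
]

# Report the drop-off between consecutive steps to spot the biggest leak.
for (step, count), (next_step, next_count) in zip(funnel, funnel[1:]):
    drop = 1 - next_count / count
    print(f"{step} -> {next_step}: {drop:.1%} drop-off")

The step with the largest percentage drop is usually the first candidate for a test.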

To help prioritize which tests to run next, there are a few frameworks that can guide your team’s thinking toward the greatest impact. Tools like the PIE framework (Potential, Importance, Ease) or the ICE score (Impact, Confidence, Ease) help quantify which experiment to run next.


At Kanetix Ltd., the acronym we use is PILL:

  • Potential
  • Impact
  • Level of Effort
  • Love for the project

We rate each element on a 1-5 scale. Based on feedback from the entire CRO team, we prioritize which test to tackle next.
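
To make the scoring concrete, here is a minimal sketch of a PILL-style ranker. It assumes each idea is rated 1-5 on the four elements and the ratings are simply summed; the candidate tests and scores are made up, and we assume a higher Level of Effort rating means less effort, so a higher total is always better.

# Hypothetical test ideas rated 1-5 on Potential, Impact, Level of Effort, Love.
# Assumption: a higher Level of Effort rating means LESS effort required.
ideas = {
    "Add social proof to the quote page": (5, 4, 3, 5),
    "Shorten the application form": (4, 5, 2, 3),
    "New call-to-action colour": (2, 2, 5, 2),
}

# Rank ideas by total score, highest first.
ranked = sorted(ideas.items(), key=lambda kv: sum(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{sum(scores):>2}  {name}")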

Step 2: Develop your hypothesis

The next step is to identify what action you want the user to take and brainstorm how you can influence user behaviour on that page or step of your funnel. Always develop a hypothesis to formulate your ideas. Here’s a format that is easy to follow:

IF (variable), THEN (result), DUE TO (rationale)

For example:

If I add an element of social proof to my landing page, then more users will start an application, due to greater trust in the service.

Develop the hypothesis based on research, using the tools and data gathered on the pages within the flow. If needed, you can set up a pop-up form that collects data on why visitors are leaving the page or not converting, leverage screen recording services like Inspectlet, SessionCam or Hotjar, or watch an unbiased visitor navigate through your landing page with a tool like Peek or UserTesting to see where they get stuck.

Design and layout changes, such as colour contrast or the design of the call-to-action (CTA), are the elements teams typically test first. These can be quick wins to get a testing program started.

Our own experience at Kanetix Ltd. has shown that content and offers tend to influence consumer behaviour more, delivering a more significant lift in conversion rate. Explore how you can craft messaging that either creates urgency (fear of missing out) or illustrates social proof.

For example, on the quote page of Kanetix.ca, introducing the number of consumers shopping on the page at any given time added an element of social proof and urgency to the offer, and as a result, the conversion rate increased by over 12%.


Step 3: Design and implement variations

Create the new variation of the page you are testing based on your hypothesis, and present it to your users. Typically, Kanetix Ltd. tests are shown to 50% of traffic, but that doesn’t necessarily have to be the case. If you are concerned that the test could impact conversion negatively, start by showing it to a smaller percentage of your traffic until you are confident in the test.
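
Testing platforms handle the traffic split for you, but as an illustration of the mechanics, here is a minimal sketch of deterministic, hash-based assignment. It assumes each visitor carries a stable ID (a cookie, for instance); the same visitor always sees the same version, and the exposure percentage can start small and be ramped up.

import hashlib

def assign_variant(visitor_id: str, experiment: str, exposure: float = 0.5) -> str:
    """Deterministically bucket a visitor; exposure is the share shown the test."""
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    return "variation" if bucket < exposure else "control"

# Start cautiously at 10% exposure, then raise it once the test proves safe.
print(assign_variant("visitor-123", "social-proof-banner", exposure=0.10))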

It is important to remember that not all variations perform positively. Here’s an example of a test that did not perform as expected.

[Image: a test variation that did not perform as expected]

Don’t let one poorly performing test dissuade you from continuing to A/B test. Keep testing new variations whether the previous one was a success or a failure – it’s all part of the process. In fact, oftentimes somewhere midway between the start and end date, it becomes clear that the KPIs are moving against the objectives, which is a great indicator of whether to cut your losses and halt the test, or to continue.

So, what is the right duration for a test? When can you start measuring?

There are different theories when it comes to the frequency and duration of tests, depending on the volume of traffic at the stage of the funnel you are optimizing and, of course, the resources available to implement tests.

At Kanetix Ltd., we opt for increasing the velocity and volume of tests. We test as quickly as possible to reach statistically significant results, call the test, and move on. That way, we can keep iterating and learning to improve our funnel. Our goal for one site is 100 tests per year, which amounts to roughly 2 tests per week on different areas of the funnel. There are other approaches, however – some teams opt for fewer tests that examine larger process flow changes and may take longer to produce results.
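
How long a test needs to run is largely a sample-size question. As a rough sketch (the baseline rate, target lift and traffic figures below are hypothetical), the standard two-proportion formula estimates the visitors needed per variation at 95% confidence and 80% power:

from math import ceil

def sample_size_per_variant(baseline, lift, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variation (95% confidence, 80% power)."""
    p1 = baseline
    p2 = baseline * (1 + lift)  # the relative lift we hope to detect
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
          + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2) / (p2 - p1) ** 2
    return ceil(n)

n = sample_size_per_variant(baseline=0.05, lift=0.10)  # 5% rate, +10% relative lift
print(n, "visitors per variation")
print(n * 2 / 3_000, "days at a hypothetical 3,000 visitors/day, split 50/50")

Small expected lifts at low baseline rates demand tens of thousands of visitors per variation, which is why high-traffic pages support a faster testing cadence.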

Step 4: Analysis

Data is one of the most important elements of a successful CRO program. The more data you collect, the more tests you can run to statistical significance, and importantly, ready access to the data allows for quick decisions about the success or failure of tests.

At Kanetix Ltd., we strive for 95% confidence to determine a winning variation. Some tests have one variation that clearly converts better, which allows us to make a decision with lower traffic volumes; experiments whose variations convert at closer rates require a larger sample size to reach statistical significance and a confident result.
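
As an illustration of that 95% threshold, here is a minimal two-proportion z-test sketch; the visitor and conversion counts are made up, and any testing platform reports this for you, but the underlying check looks roughly like this:

from math import erf, sqrt

def z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test; returns z and the two-sided confidence level."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # standard error of the difference
    z = (p_b - p_a) / se
    confidence = erf(abs(z) / sqrt(2))            # 1 minus the two-sided p-value
    return z, confidence

z, conf = z_test(conv_a=500, n_a=10_000, conv_b=570, n_b=10_000)
print(f"z = {z:.2f}, confidence = {conf:.1%}")  # call a winner only at 95%+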

It’s important to follow your test plans. There’s a tendency to hold on to losing variations because you are invested in them. If a particular experiment loses, accept it and move on to the next iteration. You can always review user recordings to understand interactions with the test and brainstorm ways to re-test the hypothesis with a different implementation.

Step 5: Continue to iterate

Once the test concludes and the winning variation is determined, what do you do next?

The mark of a strong, institutionalized CRO team is that it is constantly striving to find new areas of the site or process to improve. This constant evolution is another key to the success of a CRO program: never resting on laurels, always re-examining areas for improvement.

Sometimes the next best idea might be another iteration of a previous test. There are no hard and fast rules about when to re-test variations; the web is constantly evolving, and so are consumers. What made sense a year ago may no longer fit user behaviour and expectations now. At Kanetix Ltd., we re-visit past experiments to confirm users are still behaving the same way, and use them as inspiration for future tests.

In order to maximize revenues, companies need to be ready to constantly evolve. With a dedicated team in place, the right technology to track your program, and internal processes to develop, refine and action your testing program, your organization will be well on its way to accelerated A/B testing.

Keep on experimenting!

Resources:

https://blog.optimizely.com/2015/01/29/why-an-experiment-without-a-hypothesis-is-dead-on-arrival/

http://www.blastam.com/blog/best-revenue-significance-calculator-ab-testing

https://conversionxl.com/better-way-prioritize-ab-tests/


About the author:


Ian is Director, Conversion Rate Optimization at Kanetix Ltd., with over a decade of experience improving existing processes, the tools that help in that endeavour, and creating exceptional customer experiences. Beyond his family, his passions include all kinds of athletic undertakings, keeping up on tech trends, the Leafs, and gadgets of all kinds.

 
