Here’s a common scenario:
- Conversion rate optimization (CRO) marketer has a theory they want to test.
- CRO marketer runs an A/B test to figure things out.
- The A/B test returns a result different from what the CRO marketer predicted.
- CRO marketer comes up with an excuse like, “Oh, we don’t have enough data,” and reverts to the lower-performing item.
This sort of thing happens all the time. It’s commonly discussed on e-commerce Twitter. Why?
> A lot of CRO guys on here say everything should be tested but then when a test result is different to what they 'expect' they claim you haven't got enough data. The same people make generalisations about what 'works' from previous tests which is the pot calling the kettle black.
>
> — Jack Rubin (the soap guy) (@JackRubin1) September 26, 2022
Well, agencies, freelance copywriters, and even in-house teams love selling the golden ticket of a “conversion rate increase”—so when a test contradicts their advice, admitting it is bad for the pitch.
Whether you’re the person hiring CRO marketers—or you’re a CRO marketer yourself—here’s a simple way to avoid this dilemma:
- Decide on the variables you want to test.
- Decide, in advance, the margin of improvement and the amount of data you’ll need to be convinced to change your mind.
- Then, run the test.
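Step two—deciding the required amount of data up front—can be made concrete with a standard sample-size calculation. Here’s a minimal sketch in Python, assuming a two-sided, two-proportion z-test; the function name and the example numbers (a 3% baseline conversion rate, a 0.6% minimum lift worth detecting) are illustrative, not from the post:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(baseline_rate, min_detectable_effect,
                            alpha=0.05, power=0.80):
    """Approximate visitors needed per variant for a two-proportion z-test.

    baseline_rate: current conversion rate, e.g. 0.03 for 3%
    min_detectable_effect: absolute lift you care about, e.g. 0.006
    alpha: false-positive rate you'll tolerate (two-sided)
    power: probability of detecting a real effect of that size
    """
    p1 = baseline_rate
    p2 = baseline_rate + min_detectable_effect
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_beta = NormalDist().inv_cdf(power)           # power requirement
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# With a 3% baseline and a 0.6% absolute lift, you need roughly
# 14,000 visitors per variant before the result should move you.
print(sample_size_per_variant(0.03, 0.006))
```

If the test hits that visitor count and the “weird” variant still wins, the data threshold you committed to in advance has been met—“we don’t have enough data” is off the table.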
That’s it. Deciding what it’ll take to change your mind before you run a test eliminates the “cope mechanism” that can follow it.
The constant repetition of CRO “best practices” makes it easy to forget that, sometimes, weird stuff works.
… And there’s not always a good explanation for it.
When the weird stuff happens, be ready to accept it with open arms. It’ll make you more money. That’s a win in our book.