
Why so many conversion tests fail

Here’s a common scenario:

  1. Conversion rate optimization (CRO) marketer has a theory they want to test.
  2. CRO marketer runs an A/B test to figure things out.
  3. The A/B test returns a result different from what the CRO marketer predicted.
  4. CRO marketer comes up with an excuse like, “Oh, we don’t have enough data,” and reverts to the lower-performing variant.

This sort of thing happens all the time. It’s commonly discussed on e-commerce Twitter. Why?

Well, agencies, freelance copywriters, and even in-house teams love the golden ticket of a “conversion rate increase.” When a test contradicts the story they pitched, it’s easier to explain the result away than to admit the prediction missed.

Whether you’re the person hiring CRO marketers—or you’re a CRO marketer yourself—here’s a simple way to avoid this dilemma:

  • Decide on the variables you want to test.
  • Decide, in advance, the minimum margin of improvement and the amount of data it would take to convince you to change your mind (see the sketch below).
  • Then, run the test.

That’s it. Deciding what it’ll take before you run a test eliminates the “cope mechanism” that kicks in afterward.
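To make that pre-commitment concrete, here’s a minimal sketch in Python: given a baseline conversion rate and the smallest lift you’d actually act on, it computes how many visitors each variant needs before the result counts. It assumes a standard two-sided two-proportion z-test; the specific rates, significance level, and power are illustrative assumptions, not numbers from this post.

```python
# A minimal sketch of setting the evidence bar before the test, assuming a
# two-sided two-proportion z-test. The baseline rate, minimum lift, alpha,
# and power below are illustrative assumptions, not numbers from this post.
import math
from statistics import NormalDist

def samples_per_variant(baseline: float, rel_lift: float,
                        alpha: float = 0.05, power: float = 0.80) -> int:
    """Visitors needed per variant to detect `rel_lift` over `baseline`."""
    p1 = baseline
    p2 = baseline * (1 + rel_lift)      # the rate that would change your mind
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the significance level
    z_power = z.inv_cdf(power)          # quantile for the desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2)

# Example: 2% baseline conversion, and we agree to switch only on at least
# a 10% relative lift. Prints roughly 80,700 visitors per variant.
print(samples_per_variant(0.02, 0.10))
```

If the test ends with fewer visitors than that number, “we don’t have enough data” was decided in advance, not invented after the result came in.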

The constant repetition of CRO “best practices” makes it easy to forget that, sometimes, weird stuff works.

… And there’s not always a good explanation for it.

When the weird stuff happens, welcome it with open arms. It’ll make you more money. That’s a win in our book.