We’re all familiar with the adage “actions speak louder than words.” It underscores the premise that, despite what people may tell you about themselves and their preferences, their actions often tell a different story.
In the book “Consumer.ology,” author Philip Graves explores this phenomenon in the context of consumer behavior patterns. Graves argues that focus groups and questionnaires can produce faulty data, which in turn can lead to poor product development decisions and misjudged market receptivity for a new product or service. Out in the wild, consumers often act irrationally -- and contained market research efforts can’t effectively replicate that irrationality. This is a product of the focus group model itself: prospective consumers are asked to behave rationally when presented with consumption options.
This concept maps perfectly onto the digital marketing space. Perhaps that’s because everyone interfaces with the Web -- broadly speaking, we all use Google, we’re all on Facebook, we all shop on Amazon. That makes us all experts, right?
I’ve sat through many painful discussions where either a colleague or client is sure that they know exactly how a Web page should be presented, or which offer will be most compelling. These opinions usually aren’t based on any hard evidence, but rather gut instinct. The mentality in the room is usually along the lines of, “I know our customers, and I shop online all the time -- this is how we need to build our website.”
Constructing Web experiences following this type of logic is obviously a flawed process. There are best practices, of course. UX and conversion specialists know which levers to pull, and are versed in persona archetypes and mental model creation. But the more analytical among us know that iterative experiences are best. As search and digital marketers, we should always look to on-site behavior as the source of definitive truth. As Graves points out in “Consumer.ology,” there is no substitute for actual customer (or in our case, visitor) behavior.
Arriving at that “definitive truth” is a process, and the simplest way to begin is with landing page experimentation. Landing pages are often campaign- and message-specific, removed from the primary .com Web experience -- isolated pages used solely for experimentation to drive conversion. Conversion rate optimization, then, is another way to describe the ongoing search for the “definitive truth” in digital communications.
In more practical terms, there are several testing methods, each building upon the previous one. Which method is best for you depends upon your data sample size (number of unique visitors), your internal resources, and your ability to interpret and respond to the data you capture.
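Before committing to a method, it helps to gauge how much traffic a test realistically needs. Here’s a minimal sketch of the standard two-proportion sample-size approximation in Python -- the function name, baseline rate, and target lift are all hypothetical, and the 1.96 and 0.84 constants are the usual normal quantiles for 95% confidence and 80% power:

```python
import math

def visitors_per_variation(baseline_rate, minimum_lift,
                           z_alpha=1.96, z_power=0.84):
    """Approximate unique visitors needed per page variation."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + minimum_lift)  # rate you hope to detect
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2)

# Example: a 3% baseline conversion rate, detecting a 20% relative lift.
print(visitors_per_variation(0.03, 0.20))  # ~13,900 visitors per page
```

The takeaway: the smaller the lift you care about, the more traffic each variation needs -- which is exactly why lower-traffic sites gravitate toward simpler tests.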
A/B Split Test – This is the most popular online experiment. The advertiser works with two landing pages, typically splitting traffic 50/50 between them to determine which page converts best once some degree of statistical certainty has been reached.
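To make “statistical certainty” concrete: one common way to call a winner is a two-proportion z-test. A minimal sketch, with hypothetical visitor and conversion counts:

```python
import math

def ab_z_score(conv_a, visitors_a, conv_b, visitors_b):
    """Z-score for the difference between two conversion rates."""
    p_a = conv_a / visitors_a
    p_b = conv_b / visitors_b
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    return (p_b - p_a) / se

# 5,000 visitors per page: page A converts 150, page B converts 190.
z = ab_z_score(150, 5000, 190, 5000)
print(z)  # ~2.21; |z| > 1.96 means significant at the 95% level
```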
Fractional Factorial Analysis – A more advanced methodology that tests multiple page variables simultaneously (multivariate experimentation). Fractional factorial analysis allows the marketer to determine the efficacy of each individual element being tested, but is somewhat limited in that it ignores possible interactions between the element combinations being tested (for example, the impact that the “hero” image on a page has on the primary call-to-action). The key benefit of fractional factorial analysis, though, is that it requires a much smaller data set to arrive at statistically valid conclusions.
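To illustrate, here’s a minimal sketch of a half-fraction design: three page elements with two variants each, but only four of the eight combinations actually run. The element names and conversion rates are hypothetical:

```python
import itertools

# Half-fraction of a 2^3 design: the third element's setting is the
# product of the first two (defining relation C = A*B). This halves
# the number of pages to test, at the cost of confounding the A*B
# interaction with the main effect of C -- the trade-off noted above.
runs = [(a, b, a * b) for a, b in itertools.product((-1, 1), repeat=2)]

# Hypothetical conversion rates for the four test pages, in run order.
rates = [0.030, 0.034, 0.031, 0.041]
observed = dict(zip(runs, rates))

# Main effect of each element: mean conversion at +1 minus mean at -1.
for i, name in enumerate(("hero image", "headline", "call-to-action")):
    high = [r for run, r in observed.items() if run[i] == 1]
    low = [r for run, r in observed.items() if run[i] == -1]
    print(f"{name}: {sum(high) / len(high) - sum(low) / len(low):+.4f}")
```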
Full Factorial Analysis – Full factorial analysis is a methodical, comprehensive approach to experimentation: it tests every possible combination in the test. So, in the example above, every possible hero image would be tested alongside every possible call-to-action until all combinations were exhausted. Full factorial affords that more complete investigation, which in turn makes the data that much more reliable -- but the additional combinations make the test take longer to complete.
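Compare that to the half-fraction sketch above: in a full factorial, every combination becomes its own test page, and the page count multiplies quickly. A quick sketch with hypothetical variant counts:

```python
import itertools

# Full factorial: every hero image crossed with every headline and
# every call-to-action. Variant names and counts are hypothetical.
heroes = ["hero_a", "hero_b", "hero_c"]
headlines = ["headline_a", "headline_b"]
ctas = ["cta_a", "cta_b", "cta_c"]

pages = list(itertools.product(heroes, headlines, ctas))
print(len(pages))  # 3 * 2 * 3 = 18 pages, each needing its own traffic
```

Eighteen pages instead of a handful is why full factorial tests demand both more traffic and more patience.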
Numerous technologies can help you get started on your conversion rate optimization journey. The most notable include Google’s free Website Optimizer, Adobe Test & Target, Ion Interactive’s LiveBall, and SiteSpect (which doesn’t require a true “landing page” environment, but instead manipulates on-page variables before the page is ever rendered in the user’s browser).
However, deciding that your organization should go all-in on experimentation and conversion rate optimization isn’t enough. You need to develop a testing plan and schedule. Confirming that a green button outperforms a red button is insufficient; experiments should be built to uncover the underlying pulse of your marketplace. The logical place to start is with your organization’s marketing KPIs. What are your specific objectives? What incremental customer data would help propel you toward accomplishing those objectives? These essential questions will help you craft an experimentation roadmap.
I’ve long regarded landing page experimentation as the most cost-effective means of conducting primary research. Put your newfound knowledge to work -- and kill it with conversion rate optimization.
There is nothing quite like actual visitor behavior based on innovative and iterative experimentation. I'm personally pretty addicted to trying new things and seeing how they play out in real-time conversion, segmentation and engagement rates. It's a lot more compelling (and authentic) than when we used to do usability or focus testing. Great stuff. Thanks for writing and thanks for mentioning LiveBall. Cheers.
Ryan, enjoyed reading your article tying Consumer.ology's premise on consumer behavior patterns to the digital marketing space and testing.
Thanks Ryan. I agree with the premise that iterative experience with real-world tweaks works better than artificially constructed environments as stand-alone propositions.
That said, we often find that combining in-home, task- or reaction-based feedback (using near-real-time video-in-video capture of respondents and what they say and do) with some variant of conversion rate optimization works best.
For example, one of our clients used our system to test and video-record over 160 individual respondents in their homes, running successive iterations of 4 interactive display ad variations. They did so over the course of one week, spending approximately $9,000.
They discovered that the number of seconds a key message was displayed was the tipping point in whether respondents “got it.”
They then proceeded with live A/B testing.
If they had started with live A/B testing alone, it is questionable whether they would have gotten beyond optimizing the “best of a worst” group of choices -- or, if so, how much it might have cost them in traffic to do so.