Biased digital marketing testing cultures take root in organizations because of risk-aversion in the management culture. In such environments, managers see little gain in expending political capital on testing digital approaches that might not work, even if they might also succeed. This is especially true when managers are rewarded for fine-tuning what is already working.
In these cultures, digital marketers will test variations of demand generation approaches to attract more of the segments they have predetermined as “qualified” consumers based on their similarity to past consumers. They will test variations of marketing pages to drive these same consumers into a purchase funnel. And they will test variations of purchase funnels to nudge conversion rates by fractions of a percentage point.
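In practice, the funnel tests described above usually reduce to comparing conversion rates between a control and a variant. A minimal sketch of such a comparison, using a standard two-proportion z-test (the function name and all numbers here are hypothetical, for illustration only):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for a difference in conversion rates.

    conv_a/conv_b: conversion counts; n_a/n_b: visitor counts.
    Returns the observed lift and the z statistic.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return p_b - p_a, z

# Hypothetical funnel test: variant lifts conversion from 4.0% to 4.6%
lift, z = two_proportion_z(conv_a=400, n_a=10_000, conv_b=460, n_b=10_000)
print(f"lift={lift:.4f}, z={z:.2f}")  # z above ~1.96 is significant at p < 0.05
```

A test like this can confirm that a variant converts marginally better, but it says nothing about the segments and strategies that were never put into the experiment in the first place, which is exactly the blind spot described below.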
What such cultures will not test is whether what’s being done now is really the best approach, or whether there are ways to expand on it. They will not test whether the market segments that aren’t being targeted, or those that have been dismissed as “unqualified” because of attrition along the path to purchase, could have become customers with a different approach to demand generation or a different experience in the conversion funnel.
Clearly, what such organizational cultures miss in their approach to testing is precisely what makes successful businesses successful: the willingness to take risks and pursue new strategies in response to changing conditions and new opportunities. In biased cultures, rather than testing being an independently operated method for exploring opportunities for strategic change, testing becomes more operations than marketing research, simply a mechanism for doing more of the same at a rate of regular increase.
The Data-driven Dilemma
All truly effective marketers know that their impact depends on the data they have about their consumer base and their market. Thus, while no decent marketer would work without data, many marketers unfortunately work with insufficient data.
While good decisions can often be made with limited data, the real dilemma arises when marketers believe that their limited data tells them the whole story about their consumers and their paths to engagement and purchase.
Digital analytics functions in organizations have seldom been treated primarily as consumer insights disciplines. Instead, digital analytics has traditionally been founded and organized around performance analysis, i.e., as the source of reports and dashboards showing click-through and conversion metrics.
Such performance metrics are, of course, critical to an organization’s management. Issues arise for a marketing organization when those measures of performance are themselves used to justify optimizing conversion tactics without further ongoing testing and research dedicated to continuously validating the marketing strategy as a whole.
This becomes a problem because the moment an organization grows more sophisticated at testing to optimize its current approach to generating awareness and channeling purchase intent to conversion may be the very moment it begins limiting its capacity to consider other customer segments, demand generation methods, and conversion paths.
Since testing and optimization are highly venerated in digital analytics circles, it may initially seem surprising that organizations highly sophisticated in these practices can actually lag behind others in recognizing and responding to changing consumer segments, their multiple paths to purchase, and the ever-expanding options in marketing channel mixes.
Cracks in the Foundation
Unfortunately, establishing a testing culture is often a difficult political process for analytics functions, one that frequently leaves the funding for tests “owned” by managers outside the analytics function. As a result, analytics teams often cannot establish truly independent approaches to testing; instead, they are constrained to tests that seek to confirm multiple (and occasionally conflicting) product managers’ theories about what works best for their products.
Thanks to the ever-present influence of confirmation bias in human thought and decision making, the management theories about what works best for a product, the very theories that form the basis for optimization tests, are often grounded in rear-view analysis of what has worked until now. In an ongoing feat of perfect circular logic, that analysis is justified by the performance metrics, which of course measure only what has been done and offer no comparison or context for what could have been done.
It is no small individual career risk for managers to consider implementing strategies which have not been proven out over time. Of course, it is a massive risk to organizations to let the risk-aversion of managers limit their consideration of future marketing strategies, especially in the digital marketing arena where consumer segments and their engagement with technology are always changing.
Therefore, companies seeking sustained competitive advantage from digital marketing must support processes (such as the BIO digital research process) that combine independent testing of digital marketing strategies with evolving systems of marketing performance evaluation aligned with other ongoing consumer insights and market research. These organizations must allow their marketing managers to consider strategies that explore new approaches, using testing to evaluate those strategies and to implement the winners as effectively as possible.
Those organizations that enable their managers to explore opportunities in an ever-shifting digital marketplace will hold an ever-increasing advantage over digital marketing organizations designed to test nothing other than how to do the same thing slightly better than before.