How do we product-test so that we are more likely to find the best?
Q: Our products are in the FMCG [fast-moving consumer goods] category. Every time we have a new product prototype, we subject it to a rigorous product testing process. We first discuss with product development (ProdDev) and brand which two, three or even four variants of the prototype we want tested. Once ProdDev submits its, say, three prototype variants, we get our market research to do the product testing. Whichever variant the test identifies as the best of the three becomes the new product we put up for market launch.
At the “request” of our CEO, our market research reviewed our past five years’ product testing results. The results were distressing, to say the least. Of the 18 prototype tests conducted, none of the “best prototypes” attained their indicated “percentage will definitely buy” ratios. Worse is their profitability record: none were profitable in their first three years, and only eight started showing some profit after five years. As the Marketing Group, our conclusion was simple: “Our product testing process is not set up to identify the true ‘best’ in any set of tested prototype variants.”
Our prototype testing system draws from the chapter on product testing in your popular book, User-Friendly Marketing Research. So what are our product testing process and your book missing? Why can’t the “best” prototype variant be identified? How should we product-test so that we are more likely to uncover what truly is the “best?”
A: Since 2006, when the third edition of User-Friendly Marketing Research came out, there have been several important changes in market research discipline and practice. A 2011 edition would have captured these changes, although the earliest the book’s fourth edition can be completed is 2012. One of those important changes bears directly on your question.
Two good reasons
There are two good reasons why your product testing system, or any other system for that matter, would fall short on sales predictability. The first has to do with its design. When you ask ProdDev to come out with two, three or four variants of a new product concept, the assumption is that those variants are all good and that one of them is the “best.” Many times this is actually true. But just as many times it’s not.
Those other times are when everyone is racing everybody else to beat the launch date. As you can easily imagine, the heaviest pressure is on ProdDev, which has to come out with its targeted, say, three prototype variants. On several occasions, ProdDev rises to the occasion. But many other times, it yields to the pressure and comes out with three variants that merely look “good enough” or “puede na.”
When these are subjected to prototype testing, two outcomes are likely. Because all three are just good enough, there may be no best: all three score similarly on “acceptability,” “likeability” and “purchase intention.” More often, though, one variant makes the “best” grade. Examine this “best” closely and you will find that it is actually the best among the worst; all three variants were mediocre ProdDev outputs. This is when the subsequent market launch yields an embarrassing, far-below-quota performance.
So what do you do to eliminate, or at least minimize, the likelihood of such a disaster? Here’s what I’ve developed, based on the logic of “Kaizening” and thanks to four change-embracing clients who run their businesses with an experimenting, innovating culture. We’ve come to call the product testing process the “TELERERE” system: TEst just the one best prototype from ProdDev. From the test results, LEarn what to improve on the tested product. Then REtest the product as improved. Next, RElearn what to improve some more. If there’s still something material to improve, do so and repeat the RE-RE. It should by now be obvious why this iterative testing process is “Kaizening-inspired.”
How does this testing process bring you to the true “best of the best” and protect you from the trap of ending up with the best of the worst? Notice your starting point: the one prototype variant that ProdDev is betting on even under pressure. ProdDev can give this assurance because it is being asked to develop only ONE variant instead of three or four. Even with limited time, it can invest that entire time in just this one rather than spread it thinly across three or four prototypes. The tested prototype is then improved for a second testing, improved some more for a third, and so on. In my clients’ experience with this process, the true best sometimes emerges even after the first RE-RE; typically, though, it takes a second RE-RE.
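Although the column is about consumer testing rather than software, the TELERERE cycle is at heart an iterative-improvement loop, and it can be sketched in a few lines of code. Everything below is illustrative: the function names, the 0-to-1 scoring scale and the “material improvement” threshold are assumptions, and in practice each “test” is a consumer product test, not a function call.

```python
def telerere(prototype, run_product_test, improve, min_gain=0.05, max_cycles=4):
    """Hypothetical sketch of the TELERERE loop: TEst the one best
    prototype, LEarn and improve, REtest, RElearn -- and stop once a
    retest no longer shows a material gain over the previous round."""
    score = run_product_test(prototype)          # TEst the one best prototype
    for _ in range(max_cycles):                  # the repeated RE-RE cycles
        improved = improve(prototype, score)     # LEarn what to improve
        new_score = run_product_test(improved)   # REtest the improved version
        if new_score - score < min_gain:         # RElearn: gain no longer material
            break
        prototype, score = improved, new_score   # keep the better version
    return prototype, score
```

The stopping rule mirrors the column’s advice: keep repeating the RE-RE only while the retest shows a material improvement over the previous round, which in my clients’ experience usually means one or two cycles.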
What’s the second good reason why your product testing process is likely to be low in sales predictability? This one has to do with an established marketing theory. As you know, and as we’ve often invoked and explained, sales come from the synchronized working of the entire marketing mix, that is, of all four Ps (product, price, placement and promotion). Product testing looks at how well the “base P,” the product, has been developed and prepared to make its contribution to total sales. It is a contributory cause, admittedly the leading and “base” contributory cause, but it is not the sole and sufficient cause of sales. When you launch the tested prototype, if any of the three supporting Ps (price, placement, promo) is flawed, the projected sales and market share won’t happen.
To summarize, here’s our Marketing Rx for you. First, try and experiment with the TELERERE product testing process in place of your conventional simultaneous testing of multiple product variants. Second, in preparing for your product launch in your target market segment, see to it that your four Ps work in sync rather than in incompatible ways. For this, remember to subject your marketing program to a “compatibility test.”
Keep your questions coming. Send them to us at [email protected] or [email protected]. God bless!