Question: Frankly speaking, we were a bit disappointed by your Marketing Rx column last week. Why make such a fuss over so trivial a market research matter as "the correct research respondent"? And to think you were only talking about qualitative research.
We have had FGDs upon FGDs, and IDIs. In most of them, our respondent recruiters got us whoever was available or respondents who could talk on the subject, the two respondent-recruiting criteria you regarded as incorrect. But we always got useful consumer insights out of such respondents.
So we don’t see what difference it will make if we apply your correct IDI respondent recruiting rule. It was clear to us that your rule is “good in theory but not in practice.”
Even for our quantitative research, like our UAI or product prototype testing, we noted that your User-Friendly Marketing Research book requires us, in addition to representativeness and random selection, to ask of our respondent sample your qualifying question: "Are these the consumers from whom you can learn the most?" Our two contracted research agencies never asked such a respondent-qualifying question, but we never got into any kind of trouble.
Our UAIs have always helped identify our appropriate target market segment and brand positioning. Our product prototype testing has never failed to tell us in what product attributes our test product is better than or not as good as that of the competition.
So again, we ask: “What’s with all the fuss?”
Answer: Your candid way of speaking your mind is endearing. But you have to be open to exceptions to your generalizations.
Let’s talk about exceptions. While your UAIs “had always helped identify (your) appropriate target market segment and brand positioning,” sooner or later you will experience the humbling exception.
In the UAI study for pH Care, for example, the target respondents that Unilab initially defined were the feminine wash users.
It turned out that this segment represented only 18 percent of the total market of menstruating women. The larger segment to go after was the 82 percent non-users. When pH Care targeted this larger segment, it gave the brand a whopping market share of 52 percent.
It was also these respondents who revealed the brand's differentiating positioning.
I will say something similar about your product testing where you claim that it has “never failed to tell (you) in what product attributes (your) test product is better or not as good as competition.”
Once, a telecommunications company contracted me to reanalyze its product testing data on international remittance. The company’s research users were unhappy with the data analysis that its research agency did.
In the product testing, the research agency asked users of the client's remittance service to compare it with the competition's. Another sample of respondents, users of the competitor's remittance service, was asked to compare that with the client's service.
The results showed parity preference between the two “competing” services. The client company interpreted this result as implying that they just had to invest more in promoting their brand against that of the competitor.
In the reanalysis, I found a questionnaire item that asked both respondent samples what remittance service they had used in the past and which service they used most often. Neither the client’s nor the competitor’s service ranked in the top 3. The most used services were Western Union and unbranded door-to-door remitters who were tied for first place. The implications were clear. The true competitor was the consumer-defined one: Western Union and the unbranded door-to-door.
Data on the attributes in which your test product is better than or not as good as the competition's will not come from respondents who were customers of the marketer-defined competitor, the other telecommunications company. It was the Western Union and door-to-door customers who were the respondents from whom to learn the most, and whose responses would define the right weakness-correcting campaign.
In qualitative research, the same kind of "exceptions" happen. A leading consumer food company once contracted me to undertake a "reconvened FGD" to product-test its instant rice porridge and find out how mothers compared it with the competition, which was another rice porridge brand from another consumer food company.
From one FGD to the next, mothers who cooked and served the test porridge to their children rejected both the client's and the competitor's products with the same response: "Hindi masarap" (It does not taste good).
When asked what they found wrong, a similar set of responses was heard: "Ang talagang masarap ay yung aming lugaw na niluluto" (What really tastes good is what we cook and serve). It became clear to the client that it should have tested against the consumer-defined competitor, and that is none other than the mothers themselves. The product testing should have asked the respondents to compare the client's rice porridge with the rice porridge that the respondent mothers themselves cook and serve.
So as you can see, getting the correct research respondent is no small matter. That the respondents of your UAIs and product testing have "always" been right is pure coincidence, not proof.
I will say the same thing about your selection of respondents for your qualitative research.
I end with a few words about your attitude toward research.
Because you regard research as mere support for your marketing decisions and campaigns, you never traced the source of a failed decision or a flawed campaign back to the research.
But as you should have appreciated from the foregoing, the role of market research has taken on a strategic significance.
The feminine wash, telecom remittance, and rice porridge cases all prove as much.
The correct market research is also a competitive advantage because you know something about your consumer and your market that is unknown to your competitors.
Keep your questions coming. Send them to me at ned.roberto@gmail.com.