In new product market research we often discuss bias, but typically these discussions revolve around sample selection (representativeness, non-response, and so on). What about methodological or analysis bias? Is it possible that we affect results by choosing the wrong market research methods to collect the data or to analyze the results?
A recent article in The Economist described an interesting study in which the same data set and the same objective were given to 29 different researchers. The objective was to determine whether dark-skinned soccer players were more likely to receive a red card than light-skinned players. Each researcher was free to use whatever methods they thought best to answer the question.
Both statistical methods (Bayesian clustering, logistic regression, linear modeling...) and analysis choices (some, for example, reasoned that players in certain positions might be more likely to get red cards, and so adjusted the data accordingly) differed from one researcher to the next. No surprise, then, that results varied as well: one found that dark-skinned players were only 89% as likely to get a red card as light-skinned players, while another found dark-skinned players were three times MORE likely to get one. So who is right?
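One way to see how honest analysts can land in very different places is confounding: if playing position drives both who gets red cards and which players fill that position, then pooling across positions and adjusting for position give very different answers. Here is a minimal sketch in Python using synthetic counts I made up purely for illustration (these are not the study's data):

```python
# Illustrative only: synthetic counts, NOT the actual study's data.
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table: a,b = group 1 (reds, no reds); c,d = group 2."""
    return (a * d) / (b * c)

# (reds, no reds) for dark- and light-skinned players, split by position.
# Within each position the red-card odds are identical for both groups,
# but dark-skinned players are concentrated in the high-card position.
strata = {
    "defenders": {"dark": (90, 210), "light": (30, 70)},
    "forwards":  {"dark": (5, 95),   "light": (20, 380)},
}

# Analyst A pools across positions before comparing groups.
dark = tuple(sum(s["dark"][i] for s in strata.values()) for i in (0, 1))
light = tuple(sum(s["light"][i] for s in strata.values()) for i in (0, 1))
pooled_or = odds_ratio(dark[0], dark[1], light[0], light[1])

# Analyst B compares groups within each position separately.
within = {pos: odds_ratio(*s["dark"], *s["light"]) for pos, s in strata.items()}

print(f"pooled odds ratio: {pooled_or:.2f}")   # ~2.80: looks like a big disparity
print(f"within-position odds ratios: {within}")  # both 1.0: no disparity at all
```

With these made-up numbers, Analyst A reports that dark-skinned players face nearly triple the odds of a red card while Analyst B reports no effect whatsoever, and both did the arithmetic correctly. Whether position belongs in the model is a judgment call, which is exactly the kind of choice the 29 researchers made differently.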
There is no easy way to answer that question. I'm sure some of the analyses can easily be dismissed as too superficial, but in other cases the "correct" method is not obvious. The article suggests that when important public policy decisions are being considered, the government should contract with multiple researchers and then compare and contrast their results to gain a fuller understanding of what policy should be adopted. I'm not convinced this is such a great idea for public policy (it seems like it would only lead to more polarization as groups pick the results they most agreed with going in), but the more important question is: what can we as researchers learn from this?
In custom new product market research the potential for divergent results is even greater. We are not limited to existing data. Sure, we might use such data (customer purchase behavior, for example), but we can and will supplement it with data we collect ourselves. These data can be gathered using a variety of techniques and question types, and once they are collected we have the same potential to reach different results as the researchers in the study above.
Clearly our clients don't have the time or budget to engage multiple firms to study the same objective, so the solution proposed in the article won't work.
I would suggest a better solution is to build multiple perspectives into the process. For example, at TRC our senior-level team independently reviews RFPs and then meets to offer opinions on the best way to gather and analyze the data. Often there is quick consensus, but frequently there is disagreement: I might propose a conjoint analysis design while another partner argues that a monadic design is superior, or we might debate the merits of our proprietary Bracket™ method versus the standard MaxDiff approach. As we each make our case, the entire group becomes clearer on the best way to proceed.
The same system works on the back end as well. While one of our analysts combs through the data and creates a focused report with recommendations, at least two others review the report and offer different perspectives. A lot of "did you consider...?" and "does this really mean...?" questions are asked, investigated and, where appropriate, incorporated into the findings. Add in collaboration with our clients, and the chance that any one person's preferences and biases will skew the analysis is greatly reduced.
Rich brings to his blog entries a passion for quantitative data and for using choice to understand consumer behavior. His unique perspective has allowed him to muse on subjects as far afield as dinosaurs and advanced technology, with insight into what each can teach us about doing better research.