Blog posts tagged in Discrete Choice Conjoint

Truth or Research

Posted in New Research Methods

I read an interesting story about a survey done to determine if people are honest with pollsters. Of course such a study is flawed by definition (how can we be sure those who say they always tell the truth are not lying?). Still, the results do back up what I've long suspected…getting at the truth in a survey is hard.

The study indicates that most people claim to be honest, even about very personal things (like finances). Younger people, however, are less likely to be honest with survey takers than others. As noted above, I suspect that if anything, the results understate the potential problem.

To be clear, I don't think that people are just being dishonest for the sake of being dishonest…I think it flows from a few factors.

First, some questions are too personal to answer, even on a web survey. With all the stories of personal financial data being stolen or compromising pictures being hacked, it shouldn't surprise us that some people might not want to answer certain kinds of questions. We should keep that in mind as we design questions. For example, while it might be easy to ask for a lot of detail, we might not always need it (income ranges, for example). To the extent we do need it, finding ways to build credibility with the respondent is critical.

Second, some questions might create a conflict between what people want to believe about themselves and the truth. People might want to think of themselves as being "outgoing," and so if you ask them they might say they are. But their behavior might not line up with that self-image. The simple solution is to ask questions about behavior without ascribing a term like "outgoing." Of course, it is always worth asking directly as well (knowing the self-image AND the behavior could make for interesting segmentation variables, for example).

...

My daughter was performing in The Music Man this summer, and after seeing the show a number of times, I realized it speaks to the perils of poor planning…in forming a boys' band and in conducting complex research.

For those of you who have not seen it, the show is about a con artist who gets a town to buy instruments and uniforms for a boys' band, in exchange for which he promises he'll teach the boys how to play. When the townspeople discover he is a fraud they threaten to tar and feather him, but (spoiler alert) his girlfriend gets the boys together to march into town and play. Despite the fact that they are awful, the parents can't help but be proud, and everyone lives happily ever after.

It is to some extent another example of how good we are at rationalizing. The parents wanted the band to be good and so they convinced themselves that they were. The same thing can happen with research…everyone wants to believe the results so they do…even when perhaps they should not.

I've spent my career talking about how important it is to know where your data have been. Bias introduced by poor interviewers, poorly written scripts, unrepresentative samples and so on will impact results, AND yet these flawed data will still produce cross tabs and analytics. Rarely will they be so far off that the results can be dismissed out of hand.

The problem only gets worse when using advanced methods. A poorly designed conjoint will still produce results. Again, more often than not these results will be such that the great rationalization ability of humans will make them seem reasonable.

...

While there is so much bad news in the world of late, here in Philly we've been captivated by the success of the Taney Dragons in the Little League World Series. While the team was sadly eliminated, they continue to dominate the local news. It got me thinking about what it is that makes a story like theirs so compelling and, of course, how we could employ research to sort it out.

There are any number of reasons why the story is so engrossing (especially here in Philly). Is it the star player Mo'ne Davis, the most successful girl ever to compete in the Little League World Series? Is it the fact that the Phillies are doing so poorly this year? Or do we just like seeing a team of various ethnicities and socio-economic levels working together and achieving success? Of course it might also be that we are tired of bad news and enjoy having something positive to focus on (even in defeat the team fought hard and exhibited tremendous sportsmanship).

The easiest thing to do is to simply ask people why they find the story compelling. This might get at the truth, but it is also possible that people will not be totally honest (the disgruntled Phillies fan, for example, might not want to admit that the team's poor season plays a part) or that they don't really know what has drawn them in. Direct questioning might also identify the most important factor while missing other critical ones.

We could employ a technique like Max-Diff and ask them to choose which features of the story they find most compelling. This would provide a fuller picture, but is still open to the kinds of biases noted above.
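
To make the counting side of that concrete, here is a minimal Python sketch of the simplest way Max-Diff responses are often summarized: a best-minus-worst score for each feature. The features and responses below are invented for illustration, not data from any actual study.

```python
from collections import Counter

# Hypothetical story features a Max-Diff exercise might test.
features = [
    "Star player Mo'ne Davis",
    "The Phillies' poor season",
    "The team's diversity",
    "A break from bad news",
]

# Each task shows a subset of features; the respondent picks the most
# and least compelling. These three responses are invented examples.
responses = [
    {"best": features[0], "worst": features[1]},
    {"best": features[3], "worst": features[1]},
    {"best": features[0], "worst": features[2]},
]

best = Counter(r["best"] for r in responses)
worst = Counter(r["worst"] for r in responses)

# Best-minus-worst counts: the simplest common Max-Diff summary.
for f in features:
    print(f"{f}: {best[f] - worst[f]:+d}")
```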

Perhaps the best method would be to use a discrete choice approach. We take all the features of the story and either include them or leave them out of a "story description," then ask people which story they would be most likely to read. We can then use analytics on the back end to sort out what really drove the decision.
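
As a sketch of what that back-end analysis could look like, the toy Python example below simulates paired "story description" choices over on/off features and fits a binary logit on the feature differences to recover how much each feature drives the choice. Everything here is simulated; a real study would use a proper experimental design and typically a multinomial choice model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_tasks, n_features = 2000, 4  # star player, losing Phillies, diversity, good news

# Hidden "true" appeal of each feature -- what the analysis must recover.
true_utils = np.array([1.2, 0.3, 0.8, 0.5])

# Each task pairs two random story descriptions (features on/off).
story_a = rng.integers(0, 2, size=(n_tasks, n_features))
story_b = rng.integers(0, 2, size=(n_tasks, n_features))

# Simulated respondents choose story A with logit probability.
p_a = 1 / (1 + np.exp(-(story_a - story_b) @ true_utils))
chose_a = rng.random(n_tasks) < p_a

# A binary logit on the feature differences recovers the utilities.
model = LogisticRegression(fit_intercept=False).fit(story_a - story_b, chose_a)
print(model.coef_.round(2))  # should land close to true_utils
```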

...

You may have heard about the spat between Apple and Samsung. Apple is suing Samsung for alleged patent infringements relating to features of the iPhone and iPad. The damages claimed by Apple? North of two billion dollars. The obvious question is how Apple came up with that number. The non-obvious answer: partly by using conjoint analysis, the tried and tested approach we often use for product development work at TRC.

Apple hired John Hauser, Professor of Marketing at MIT's Sloan School of Management, to conduct the research. Prof. Hauser is a well-known expert in the area of product management; he has mentored and coauthored several conjoint-related articles with my colleague Olivier Toubia at Columbia University. For this case, Prof. Hauser conducted two online studies (n=507 for phones and n=459 for tablets) to establish that consumers indeed valued the features that Apple was arguing about. Details about the conjoint studies are hard to get, but it appears that he used Sawtooth Software (which we use at TRC) and the advanced statistical estimation procedure known as Hierarchical Bayes (HB) (which we also use at TRC) to get the best possible results. It also appears that he may have run a conjoint with seven features, incorporating graphical representations to enhance respondent understanding.

There are several lessons to be learnt here for those interested in conducting a conjoint study. First, conjoint sample sizes do not have to be huge; I suspect they are larger than absolutely necessary here because the studies are being used in litigation. Second, he wisely confined the studies to just seven attributes. We repeatedly recommend to clients that conjoint studies not be overloaded with attributes: conjoint tasks can be taxing for survey respondents, and the more difficult they are, the less attention will be paid. Third, he used HB estimation to obtain preferences at the individual level, which is the state-of-the-science approach. Last, he incorporated graphics wherever possible to ensure that respondents clearly understood the features. When designing conjoint studies it is good to take these (and other) lessons into consideration to ensure robust results.
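
Hauser's actual estimation code is of course not public, but the core idea behind HB, borrowing strength across respondents by shrinking noisy individual-level estimates toward the population mean, can be shown in a deliberately simplified Python sketch. All numbers below are invented; real HB estimation (e.g., Sawtooth's CBC/HB) samples full posterior distributions rather than applying one closed-form formula.

```python
import numpy as np

rng = np.random.default_rng(1)
n_respondents = 500

# Upper level: individual utilities for one attribute are drawn
# from a population distribution (invented parameters).
pop_mean, pop_sd = 1.0, 0.5
true_utils = rng.normal(pop_mean, pop_sd, n_respondents)

# Lower level: each respondent answers only a few choice tasks,
# so their raw individual-level estimate is noisy.
noise_sd = 1.0
raw_est = true_utils + rng.normal(0, noise_sd, n_respondents)

# HB's key move in one line: partial pooling. Each person's estimate
# is pulled toward the population mean, weighted by how informative
# their own data are relative to the population spread.
w = pop_sd**2 / (pop_sd**2 + noise_sd**2)
pooled_est = w * raw_est + (1 - w) * raw_est.mean()

# Pooled estimates are closer to the truth on average.
print("raw error:   ", np.abs(raw_est - true_utils).mean().round(3))
print("pooled error:", np.abs(pooled_est - true_utils).mean().round(3))
```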

So, what was the outcome?

As a result of the conjoint study, Prof. Hauser was able to determine that consumers would be willing to spend an additional $32 to $102 for features like slide to unlock, universal search and automatic word correction. Under cross-examination he acknowledged that this was stated preference in a survey and not necessarily what Apple could charge in a competitive marketplace. This is another point we often make to clients, both in conjoint and other contexts: there is a big difference between answering a survey and actual real-world behavior (where several other factors come into play). While survey results (including conjoint) can be very good comparatively, they may not be especially good absolutely. Apple used the help of another MIT-trained economist to bring in outside information and finally ended up with a damage estimate of slightly more than $2 billion.
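
For readers curious how part-worth utilities turn into a dollar figure like that, the usual logic is to divide a feature's utility by the utility value of a dollar, estimated from the price attribute. The sketch below uses invented numbers, not the actual figures from the Apple study.

```python
# Hypothetical part-worth utilities from a conjoint (invented numbers).
feature_utils = {
    "slide to unlock": 0.50,
    "universal search": 0.30,
    "automatic word correction": 0.70,
}

# Suppose the price attribute shows that moving from $199 to $299
# costs 1.25 utility points: about 0.0125 utility points per dollar.
util_per_dollar = 1.25 / (299 - 199)

# Willingness to pay: utility gained divided by utility per dollar.
for feature, u in feature_utils.items():
    print(f"{feature}: ${u / util_per_dollar:.0f}")
```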

...
I read an article about the discovery of the Higgs boson at CERN. This is the so-called "God particle," which explains why matter has mass. While the science generally is beyond me, I was intrigued by something one of the physicists said:

"Scientists always want to be wrong in their theories. They always want to be surprised."

He went on to explain that surprise is what leads to new discoveries, whereas simply confirming a theory does not. I can certainly understand the sentiment, but it is not unusual for market research to confirm what a client already guessed at. Should the client be disappointed in such results?

I think not for several reasons.

First, certainty allows for bolder action. Sure, there are examples of confident business people going all in on their gut and succeeding spectacularly, but I suspect there are far more examples of people failing to take bold action due to lingering uncertainty. I also suspect that far too often overconfident entrepreneurs make rash decisions that lead to failure.

Second, while we might confirm the answer to the big question (for example, in product development pricing research we might confirm the price that will drive success), we always gather other data that help us understand the issue in a more nuanced way. For example, we might find that the expected price point is driven by a different feature than we thought (in research speak, that one feature in the discrete choice conjoint had a much higher utility score than the one we thought was most critical).

...
