Last week we held an event in New York at which Mark Broadie of Columbia University talked about his book "Every Shot Counts". The talk and the book detail his analysis of a very large and complex data set: the ShotLink data collected for over a decade by the PGA Tour, which records every shot taken by every pro at every tournament. He was able to use it to challenge some long-held assumptions about golf, such as whether you really "drive for show and putt for dough."

On the surface, the data set was not easy to work with. Sure, it had numbers such as how long the hole was, how far the shot went and how far it ended up from the hole. It also recorded whether the ball landed in the fairway, on the green, in the rough, in a trap or the dreaded out of bounds. Every pro has a different set of skills, and there was a surprisingly wide range of abilities even in this group; Broadie then added the same data on tens of thousands of amateur golfers of varying skill levels. So how can anyone make sense of such a wide range of data, and do it in a way that allows the amateur who scores 100 to be compared to the pro who frequently scores in the 60s?

You might be tempted to say that he used regression analysis, but he did not. You might assume he used Hierarchical Bayesian estimation, which has become commonplace (it drives discrete choice conjoint, MaxDiff and our own Bracket™), but he didn't use that either.

Instead, he used simple arithmetic. No HB, no calculus, no Greek letters; just addition, subtraction, multiplication and division. At the base level, he simply averaged similar situations. Specifically, he determined how many strokes it took, on average, for players to get from where the ball lay to the hole. These averages were further broken down to account for where the ball started (not just the distance, but whether it lay in the rough, sand, fairway, etc.) and how good the golfer was.
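To make the mechanics concrete, here is a minimal sketch of that kind of plain averaging. This is not Broadie's actual code; the record layout, the 25-yard distance buckets and the sample values are assumptions for illustration:

```python
from collections import defaultdict

# Each record: (starting distance in yards, lie, golfer skill, strokes taken to hole out).
shots = [
    (450, "tee", "pro", 4),
    (450, "tee", "pro", 5),
    (150, "fairway", "pro", 3),
    (150, "rough", "pro", 4),
    # ...millions more rows in the real data
]

def distance_bucket(yards, width=25):
    """Group starting distances into 25-yard buckets so similar shots average together."""
    return (yards // width) * width

# Accumulate total strokes and shot counts for each (distance bucket, lie, skill) cell.
totals = defaultdict(lambda: [0, 0])
for dist, lie, skill, strokes in shots:
    key = (distance_bucket(dist), lie, skill)
    totals[key][0] += strokes
    totals[key][1] += 1

# The baseline table: average strokes to hole out from each starting situation.
baseline = {key: total / count for key, (total, count) in totals.items()}

print(baseline[(450, "tee", "pro")])  # 4.5, averaging the two sample tee shots above
```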

These simple averages allow him to answer any number of "what if" questions. For example, he can see on average how many strokes are saved by going an extra 50 yards off the tee (which turns out to be worth more than a comparable improvement in putting). He can also show that, in fact, neither driving nor putting is as important as the approach shot (the last full swing before putting the ball on the green). The ability to put the ball close to the hole on this shot is the biggest factor in scoring low.
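To see how a "what if" falls out of the table, here is a continuation of the sketch above, using the idea the book popularized as "strokes gained": the value of a shot is the average strokes needed before it, minus the average needed after it, minus the one stroke just taken. All the numbers below are invented for illustration:

```python
def strokes_gained(baseline, before, after):
    """Value of one shot: expected strokes before it, minus expected strokes
    after it, minus the one stroke just taken."""
    return baseline[before] - baseline[after] - 1

# Hypothetical averages: a pro needs 4.1 strokes from a 450-yard tee shot,
# 3.0 from 175 yards in the fairway, and 2.8 from 125 yards in the fairway.
baseline = {
    (450, "tee", "pro"): 4.1,
    (175, "fairway", "pro"): 3.0,
    (125, "fairway", "pro"): 2.8,
}

# A typical drive leaves 175 yards; a drive 50 yards longer leaves 125 yards.
normal = strokes_gained(baseline, (450, "tee", "pro"), (175, "fairway", "pro"))
longer = strokes_gained(baseline, (450, "tee", "pro"), (125, "fairway", "pro"))
print(longer - normal)  # the extra 50 yards is worth ~0.2 strokes in this sketch
```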

...

Truth or Research

Posted in New Research Methods

I read an interesting story about a survey done to determine whether people are honest with pollsters. Of course, such a study is flawed by definition (how can we be sure that those who say they always tell the truth are not lying?). Still, the results back up what I've long suspected: getting at the truth in a survey is hard.

The study indicates that most people claim to be honest, even about very personal things (like finances). Younger people, however, are less likely to be honest with survey takers than others. As noted above, I suspect that, if anything, the results understate the potential problem.

To be clear, I don't think people are being dishonest for the sake of being dishonest. I think it flows from a few factors.

First, some questions are too personal to answer, even on a web survey. With all the stories of personal financial data being stolen or compromising pictures being hacked, it shouldn't surprise us that some people might not want to answer certain kinds of questions. We should keep that in mind as we design questions. For example, while it might be easy to ask for a lot of detail, we might not always need it (income ranges, for example). To the extent we do need it, finding ways to build credibility with the respondent is critical.
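As a trivial illustration of collecting less detail than we could, a questionnaire (or its back end) can work with income ranges instead of exact figures. A sketch; the bracket boundaries here are arbitrary, not a recommendation:

```python
# Income brackets a survey might offer instead of asking for an exact figure.
# Boundaries are arbitrary; a real survey would tailor them to its audience.
BRACKETS = [
    (25_000, "Under $25,000"),
    (50_000, "$25,000 - $49,999"),
    (100_000, "$50,000 - $99,999"),
    (float("inf"), "$100,000 or more"),
]

def income_range(income: float) -> str:
    """Map an exact income to the coarse range a respondent would select."""
    for upper, label in BRACKETS:
        if income < upper:
            return label

print(income_range(62_500))  # "$50,000 - $99,999"
```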

Second, some questions might create a conflict between what people want to believe about themselves and the truth. People might want to think of themselves as "outgoing," so if you ask them, they might say they are, even when their behavior doesn't line up with reality. The simple solution is to ask questions about the behavior itself without ascribing a label like "outgoing." Of course, it is always worth asking directly as well (knowing both the self-image AND the behavior could make for interesting segmentation variables, for example).

...

Recently I had lunch with my colleague Michel Pham at Columbia Business School. Michel is a leading authority on the role of affect (emotions, feelings and moods) in decision making. He was telling me about a very interesting phenomenon called the Emotional Oracle Effect, in which he and his colleagues examined whether emotions can help people make better predictions. I was intrigued. We tend to think of prediction as a very rational process: collect all the relevant information, use some logical model to combine it, then make the prediction. But Michel and his colleagues were drawing on a different stream of research that shows the importance of feelings. So the question was: can people make better predictions if they trust their feelings more?

To answer this question they ran a series of experiments. As we researchers know, experiments are the best way to establish a causal link between two phenomena. To ensure that their findings were solid, they ran eight separate studies in a wide variety of domains, including predicting a Presidential nomination, movie box-office success, the winner of American Idol, the stock market, college football and even the weather. While in most cases they employed a standard approach to manipulating how much people trusted their feelings, in a couple of cases they compared people who already trusted their feelings more (or less).

Across these various scenarios the results were unambiguous: when people trusted their feelings more, they made more accurate predictions. For example, the box-office showings of three movies (48% vs. 24%), the American Idol winner (41% vs. 24%), the NCAA BCS Championship (57% vs. 47%), the Democratic nomination (72% vs. 64%) and the weather (47% vs. 28%) were all cases where people who trusted their feelings predicted better than those who did not. This, of course, raises the question: why? What is it about feelings and emotion that allows a person to predict better?
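The post doesn't report sample sizes, but for readers who want a feel for how convincing a gap like 48% vs. 24% is, here is a sketch of a standard two-proportion z-test. The group sizes below are invented for illustration, not the study's actual Ns:

```python
from math import sqrt, erf

def two_proportion_z(p1, n1, p2, n2):
    """Two-proportion z-test: is accuracy p1 (n1 subjects) reliably above p2 (n2 subjects)?"""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))  # one-sided p from the normal CDF
    return z, p_value

# 48% vs. 24% accuracy; 100 subjects per group is an assumption for illustration.
z, p = two_proportion_z(0.48, 100, 0.24, 100)
print(f"z = {z:.2f}, one-sided p = {p:.4f}")  # z ≈ 3.54, p ≈ 0.0002
```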

The most plausible explanation they propose (and tested in a couple of the studies) is what they call the privileged-window hypothesis. It grows out of the theoretical argument that "rather than being subjective and incomplete sources of information, feelings instead summarize large amounts of information that we acquire, consciously and unconsciously about the world around us." In other words, we absorb a huge quantity of information but don't really know what we know. Thinking rationally about what we know and summarizing it appears to be less accurate than using our feelings to express that tacit knowledge. So when someone says they did something because "it just felt right," it may not be so much a subjective decision as an encapsulation of acquired knowledge. The affective/emotional system may be better at channeling that information toward the right decision than the cognitive/thinking system.

So how does this relate to market research? When trying to understand consumer behavior through surveys, we usually push respondents toward their cognitive/thinking system: we explicitly ask them to think about questions, consider options and so on before providing an apparently logical answer. This research suggests a different way to go. If we can find a way to get consumers to tap into their affective/emotional system, we might better understand how they arrive at decisions.

...

Market researchers are constantly being asked to do "more with less." Doing so is both practical (budgets and timelines are tight) and smart (the more we ask respondents to do, the less engaged they will be). At TRC we accomplish this in a variety of ways, from the basic (eliminating redundancies, limiting grids and the use of scales) to the advanced (using techniques like conjoint, MaxDiff and our own Bracket™ to unlock how people make decisions). We are also big believers in using incentives to drive engagement and produce more reliable results. That is why a recent article in the Journal of Marketing Research caught my eye.

The article was about promotional lotteries. The rules tend to be simple: "send in a proof of purchase and we'll enter your name in a drawing for a brand-new car!" The odds of winning are often so remote that some people won't bother. In theory, you could increase participation by offering a bunch of consolation prizes (free or discounted product, for example). In reality, the opposite is true.

One theory would be that the consolation prizes simply don't interest people, making them less interested in the contest as a whole. While this might well be true, the authors (Dengfeng Yan and A.V. Muthukrishnan) found that there was more at work. Consolation prizes give entrants a way to grasp the odds of winning that doesn't exist without them. Seeing, for example, that you have a one-in-ten-million chance of winning may not really register because you are so focused on the car. But if you are told those odds alongside the much better odds of winning the consolation prize, you realize right away that, at best, you will probably win the consolation prize. Since that prize isn't likely to be as exciting (an M&M's contest might offer a free bag of candy to one in every 1,000 participants, for example), you have less interest in participating.
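The arithmetic behind that realization is easy to make explicit. A sketch using the made-up odds from the example above (one in ten million for the car, one in a thousand for the candy):

```python
# Made-up odds from the example: the grand prize barely registers next to
# the consolation prize once the two are shown side by side.
p_car = 1 / 10_000_000   # grand prize: a new car
p_candy = 1 / 1_000      # consolation prize: a bag of candy

# Given that you win something at all, the chance it's the car:
p_car_given_win = p_car / (p_car + p_candy)

print(f"{p_candy / p_car:,.0f}x more likely to win candy than the car")
print(f"If you win anything, it's the car only {p_car_given_win:.4%} of the time")
```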

Since we rely so heavily on incentives to garner participation, it strikes me that these findings are worth considering. A bigger "winner take all" prize drawing might draw in more respondents than paying each respondent a small amount; I can tell you from our own experimentation that this is the case. In some cases we employ a double lottery using our Smart Incentives™ gaming tool (including in our new ideation product Idea Mill™). Respondents can win one prize simply by participating and another based on the quality of their answers. Adding the second incentive brings in an additional component of gaming (the first being chance) by adding a competitive element.

Whatever one makes of this particular paper, we as an industry should be thinking through how we compensate respondents so as to maximize engagement.

...

We are about to launch a new product called Idea Mill™, which uses a quantitative system to generate ideas and evaluate them all in one step. Our goal was to create a fast and inexpensive means of generating ideas. Since each additional interview we conduct adds cost, we wondered what the ideal number would be.

To determine that, we ran a test in which we asked 400 respondents for an idea. Next, we coded the responses into four categories (a quick simulation of how new ideas taper off with sample size follows the list).

Unique Ideas – Something that no previous respondent had generated.

Variations on a Theme – An idea that had been generated before, but with something unique or different added to it.

Identical – Ideas that didn't add anything significantly different from what we'd seen before.

...
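Although the post is truncated here, the question it raises (how many interviews before new ideas dry up?) lends itself to a quick simulation. A sketch under stated assumptions: a finite pool of distinct ideas with Zipf-like popularity, neither of which comes from actual Idea Mill™ data:

```python
import random

random.seed(42)

# Assume a finite pool of 120 distinct ideas with skewed popularity:
# a few obvious ideas come up constantly, many niche ones only rarely.
IDEA_POOL = list(range(120))
WEIGHTS = [1 / (rank + 1) for rank in range(120)]  # Zipf-like popularity

def unique_ideas_after(n_respondents: int) -> int:
    """Simulate n respondents each offering one idea; count distinct ideas seen."""
    seen = {random.choices(IDEA_POOL, weights=WEIGHTS)[0] for _ in range(n_respondents)}
    return len(seen)

for n in (50, 100, 200, 400):
    print(n, unique_ideas_after(n))
# Typical output shows steep gains early and diminishing returns later, which
# is exactly the trade-off behind asking "how many interviews is enough?"
```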
