Blog posts tagged in causation

I read a blurb in The Economist about UFO sightings. They charted some 90,000 reports and found that UFOs are, as they put it, "considerate". They tend not to interrupt the work day or sleep. Rather, they are seen far more often in the evening (peaking around 10 PM) and more on Friday nights than on other nights.
The Economist dubbed the hours of maximum UFO activity "drinking hours" and implied that drinking was, in fact, the cause of all those sightings.
As researchers, we know that correlation does not mean causation. Their analysis is interesting and possibly correct, but it is superficial. One could argue (and I'm sure certain "experts" on the History Channel would) that it is in fact the UFO activity that causes people to want to drink. But by limiting their analysis to two factors (time of day and number of sightings), The Economist ignores other explanations.
For example, the low number of sightings during sleeping hours would make perfect sense (most of us sleep indoors with our eyes closed). The same might be true for the lower number during work hours (many people don't have ready access to a window and those who do are often focused on their computer screen and not the little green men taking soil samples out the window).
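To make the confounding point concrete, here is a minimal, purely illustrative sketch (my toy numbers, not The Economist's data) in which a single hidden factor, roughly "awake, free and able to see the sky", drives both drinking and reported sightings by hour of day. The two series end up strongly correlated even though neither causes the other:

```python
# Illustrative only: a hypothetical "opportunity" factor drives both variables,
# producing a strong correlation with no causal link between them.
import random

random.seed(42)

sightings = []
drinks = []
for hour in range(24):
    # Low opportunity while asleep or at work, high in the evening (the confounder).
    opportunity = 1.0 if 19 <= hour <= 23 else (0.4 if 9 <= hour <= 18 else 0.1)
    sightings.append(opportunity * 100 + random.gauss(0, 5))
    drinks.append(opportunity * 80 + random.gauss(0, 5))

# Pearson correlation between drinks and sightings, computed by hand.
n = 24
mean_s = sum(sightings) / n
mean_d = sum(drinks) / n
cov = sum((s - mean_s) * (d - mean_d) for s, d in zip(sightings, drinks)) / n
std_s = (sum((s - mean_s) ** 2 for s in sightings) / n) ** 0.5
std_d = (sum((d - mean_d) ** 2 for d in drinks) / n) ** 0.5
print(f"correlation(drinks, sightings) = {cov / (std_s * std_d):.2f}")
# Prints a correlation near 1.0 even though neither variable causes the other.
```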
As researchers, we need to consider all the possibilities. Questionnaires should be constructed to include questions that help us understand all the factors that drive decision making. Analysis should, where possible, use multivariate techniques so that we can truly measure the impact of one factor over another. Of course, constructing questions that allow respondents to express their thinking is also key...while a long attribute rating battery might seem "comprehensive", it is more likely mind-numbing for the respondent. We prefer to use techniques like Max-Diff, Bracket™ or Discrete Choice to figure out what drives behavior.
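As a purely illustrative aside (the items and responses below are hypothetical, and this simple counting summary is not our actual Bracket™ or Discrete Choice implementation), a MaxDiff exercise can be boiled down to a best-minus-worst score per item:

```python
# Illustrative sketch: each task shows a respondent a few items; they pick the
# "best" and the "worst". Counting those picks gives a rough ranking of what
# drives choice without a long rating battery.
from collections import Counter

# Hypothetical responses: (items shown, item picked best, item picked worst)
tasks = [
    (["price", "brand", "warranty", "design"], "price", "warranty"),
    (["price", "design", "support", "brand"], "design", "support"),
    (["warranty", "support", "price", "design"], "price", "support"),
]

best = Counter(best_pick for _, best_pick, _ in tasks)
worst = Counter(worst_pick for _, _, worst_pick in tasks)
shown = Counter(item for items, _, _ in tasks for item in items)

# Score each item as (times best - times worst) / times shown.
scores = {item: (best[item] - worst[item]) / shown[item] for item in shown}
for item, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{item:10s} {score:+.2f}")
```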
Hopefully I've given you something to think about tonight when you are sitting on the porch, having a drink and watching the skies.


Recently I had lunch with my colleague Michel Pham at Columbia Business School. Michel is a leading authority on the role of affect (emotions, feelings and moods) in decision making. He was telling me about a very interesting phenomenon called the Emotional Oracle Effect, where he and his colleagues examined whether emotions can help people make better predictions. I was intrigued. We tend to think of prediction as a very rational process: collect all relevant information, use some logical model to combine it, then make the prediction. But Michel and his colleagues were drawing on a different stream of research that showed the importance of feelings. So the question was: can people make better predictions if they trust their feelings more?

To answer this question they ran a series of experiments. As we researchers know, experiments are the best way to establish a causal link between two phenomena. To ensure that their findings were solid, they ran eight separate studies in a wide variety of domains, including predicting a Presidential nomination, movie box-office success, the winner of American Idol, the stock market, college football and even the weather. In most cases they used a standard approach to manipulate how much people trusted their feelings; in a couple of cases they instead compared people who naturally trusted their feelings more (or less).

Across these various scenarios the results were unambiguous. When people trusted their feelings more, they made more accurate predictions. For example, people who trusted their feelings predicted better than those who did not for the box-office showing of three movies (48% vs. 24%), the American Idol winner (41% vs. 24%), the NCAA BCS Championship (57% vs. 47%), the Democratic nomination (72% vs. 64%) and the weather (47% vs. 28%). This, of course, raises the question: why? What is it about feelings and emotion that allows a person to predict better?

The most plausible explanation they propose (and tested in a couple of studies) is what they call the privileged-window hypothesis. This builds on the theoretical argument that "rather than being subjective and incomplete sources of information, feelings instead summarize large amounts of information that we acquire, consciously and unconsciously about the world around us." In other words, we absorb a huge quantity of information but don't really know what we know. Thinking rationally about what we know and summarizing it appears to be less accurate than using our feelings to express that tacit knowledge. So, when someone says they did something because "it just felt right", it may not be so much a subjective decision as an encapsulation of acquired knowledge. The affective/emotional system may be better at channeling that information and making the right decision than the cognitive/thinking system.

So, how does this relate to market research? When trying to understand consumer behavior through surveys, we usually try to get respondents to use their cognitive/thinking system. We explicitly ask them to think about questions, consider options and so on, before providing an apparently logical answer. This research would indicate that there is a different way to go. If we can find a way to get consumers to tap into their affective/emotional system we might better understand how they arrived at decisions.

...

The 2012 Presidential Election season is upon us. I don't know about you, but other than the barrage of commercials, the thing I like least about political campaigns is the terrible abuse of numbers. Combine that with the current debate on the debt limit and we have the makings of a tsunami of misleading or outright incorrect statistics.

A few weeks ago, Megan Holstine started a discussion about a Senator using a totally made-up statistic. Sadly for him, the number he quoted was not only far from accurate but also easily verified. His defense was that he didn't intend the statistic to be taken "literally".

Makes me wonder if perhaps we've got it wrong.  Think of the possibilities for us if we stopped taking numbers literally!
