Rajan Sambandam

Chief Research Officer

Should Hotels Respond to Online Reviews?

Posted by Rajan Sambandam in Consumer Behavior

You are planning a trip to the City of Brotherly Love to visit the world-famous Philadelphia Flower Show, and would like to book a hotel near the Convention Center venue. If you’re like most people, you go online, perhaps to TripAdvisor or Expedia, and look for a hotel. In a few clicks you find a list of hotels with star ratings, prices, amenities, distance to destination – everything you need to make a decision. Quickly you narrow your choice down to two hotels within walking distance of the Flower Show, both conveniently located near the historic Reading Terminal Market.

But how to choose between two that seem so evenly matched? Perhaps the review comments can provide more depth. There are hundreds of comments, more than you have time for, but you quickly read a few on the first page. You are about to close the browser when you notice something. One of the hotels has responses to some of the negative comments. Hmmm…interesting. You decide to read the responses, and see some apologies, a few explanations and general earnestness. No such responses for the other hotel, which now begins to seem colder and more distant. What do you do?

In effect, that’s the question Davide Proserpio and Georgios Zervas seek to answer in a recent article in the INFORMS journal Marketing Science. And it’s not hard to see why it’s an important question. Online reviews can have a significant impact on a business, and unlike word of mouth they tend to stick around for years (just take a look at the dates on some reviews). Companies can’t do much to stop reviews (especially negative ones), so they often try to co-opt them by providing responses to selected reviews. It is a manual task, but the idea seems sound. By responding, perhaps they can take the sting out of negative reviews, appear contrite, promise to do better, or just thank the reviewer for the time they took to write the feedback – all with the objective of getting prospective customers to give them a fair chance. The question then is whether such efforts are useful or just more online clutter.

It turns out that’s not an easy question to answer, and as Proserpio and Zervas document in the article, there are several factors that first need to be controlled. But their basic approach is easy enough to understand – they examine whether TripAdvisor ratings for hotels tend to go up after management starts responding to online reviews. An immediate problem to overcome, ironically enough, is management response of a different kind: in reaction to bad reviews a hotel may actually make changes that then increase future ratings. That’s great for the hotel, but not so much for the researcher, who is trying to study whether the response to the online review had an impact, not whether the hotel is willing to make changes in response to the review. So that’s an important factor that needs to be controlled. How to do that?

Enter Expedia. As it happens, hotels frequently respond to TripAdvisor reviews while they almost never do so on Expedia. So the authors use Expedia as a control cell and compare the before-after difference in ratings on TripAdvisor and Expedia (the difference-in-difference approach). This lets them tease out whether the improvement in ratings was due to responding to reviews or to real changes. Another check they use is to compare the ratings of guests who left a review shortly before a hotel began responding with those who did so shortly after. Much of the article is actually devoted to several more clever and increasingly complex maneuvers used to isolate just the impact of management responses. What do they find?
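To make the difference-in-difference logic concrete, here is a minimal Python sketch with made-up ratings – an illustration of the general technique, not the authors’ actual estimation, which controls for many more factors:

```python
# Difference-in-differences on hypothetical average ratings: TripAdvisor is
# the "treated" platform (the hotel responds there), Expedia the control.
ratings = {
    ("tripadvisor", "before"): 3.6, ("tripadvisor", "after"): 3.9,
    ("expedia", "before"): 3.5, ("expedia", "after"): 3.6,
}

ta_change = ratings[("tripadvisor", "after")] - ratings[("tripadvisor", "before")]
ex_change = ratings[("expedia", "after")] - ratings[("expedia", "before")]

# Expedia's change captures real improvements at the hotel; whatever is
# left over on TripAdvisor is attributed to the management responses.
did_estimate = ta_change - ex_change
print(f"Estimated effect of responding: {did_estimate:+.1f} stars")
```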

...

Do people buy green products? Yes, of course. The real question for green marketers is whether they buy enough. In other words, are green sales in line with pro-green attitudes? Not really, as huge majorities of consumers show at least some green tendencies while purchases lag far behind. Why is that? Economics tells us that consumers buy based on value (trading off cost and benefits). Since eco-friendly products are seen as being more expensive, higher prices can lower the value of a green product enough to make a conventional alternative more attractive.

While the cost trade-off is clear, it is not the only one. The benefit side has at least two major components. One is the environmental benefit, which may or may not seem tangible enough to make a difference. For instance, a dozen eggs at Acme goes for less than a dollar, while some cage-free varieties can run north of $4 at Whole Foods. So, an environmentally conscious consumer has to make a trade-off at the time of purchase – is the product worth the additional cost? For items like food, the benefits may seem small enough, and far enough out, that many may decide the value proposition does not work for them. In other product categories (say, green laundry detergent), the benefits may seem both long term and impersonal, making the trade-off even harder.

The second major component is the effectiveness of the product in performing its basic function. If consumers perceive green products as inherently inferior (in terms of conventional attributes like performance), they are less likely to buy them. So a green laundry detergent (that uses less harsh chemicals) could be seen as more expensive and less effective in cleaning clothes, further dropping its overall value. (A complicating issue is that the lack of effectiveness itself could be a perceptual rather than real problem). Unless the company is able to offset these disadvantages, the product is unlikely to succeed.

A direct way to increase demand is to offer higher performance on a compensatory attribute. In the case of LED TVs, for example, newer technology consumes less power and provides better picture quality. (Paradoxically, this can sometimes lead to the Rebound Effect, whereby greener technologies encourage higher use, thus clawing back some of the benefits). But in reality, most products are not in a position where green attributes offer performance boosts.

And of course, as it is with every other market, there are segments in this market as well. Consumers who are highly committed (dark green) are willing to buy, as the value they place on the longer term environmental benefits is high enough. And, often they are affluent enough to afford the price. But a product looking for mainstream success cannot succeed only with dark green consumers (who rarely account for more than 20% of the market). Other shades of green will also need to buy. Short of government subsidies and mandates, green marketers have to find ways to balance out the components of the value proposition for the bulk of the market.

...

Is the Mini Cooper seen as an environmentally friendly car? What about Tesla as a luxury car? The traditional approach to understanding these questions is to conduct a survey among Mini and Tesla buyers (and perhaps non-buyers too, if budget allows). Such studies have been conducted for decades and often involve ratings of multiple attributes and brands. While certainly feasible, they can be expensive and time-consuming, and their results get outdated over time. Is there a better way to get at attribute perceptions of brands that is fast, economical and automated?

Aron Culotta and Jennifer Cutler describe such an approach in a recent issue of the INFORMS journal Marketing Science, and it involves the use of social media data – Twitter, in this case. Their method is novel because it does not use conventional (if one can use that term here) approaches to mining textual data, such as sentiment analysis or associative analysis. Sentiment analysis (social media monitoring) provides reports on positive and negative sentiments expressed online about a brand. In associative analysis, clustering and semantic networks are used to discover how product features or brands are perceptually clustered by consumers, often using data from online forums.

Breaking away from these approaches, the authors use an innovative method to understand brand perceptions from online data. The key insight (drawn from well-established social science findings) is that proximity in a social network can be indicative of similarity. That is, by understanding how closely brands are connected to exemplar organizations for certain attributes, it is possible to devise an affinity score that shows how highly a brand scores on a specific attribute. For example, when a Twitter user follows both Smart Car and Greenpeace, it likely indicates that Smart Car is seen as eco-friendly by that person. This does not have to be true for every such user, but at “big data” levels there is likely to be a strong enough association to extract signal from the noise.
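As a rough illustration of the co-following idea, here is a Python sketch – my simplification, not the authors’ actual scoring, which among other things normalizes for account popularity. It treats the overlap between a brand’s followers and the followers of attribute exemplars as the affinity signal; all accounts and follower sets are hypothetical:

```python
# Affinity of a brand for an attribute, estimated from follower overlap
# with exemplar organizations for that attribute (toy data).

def jaccard(a: set, b: set) -> float:
    """Overlap between two follower sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def affinity(brand_followers: set, exemplar_follower_sets: list) -> float:
    """Average the brand's overlap across several exemplars for one
    attribute (e.g., environmental groups for eco-friendliness)."""
    scores = [jaccard(brand_followers, e) for e in exemplar_follower_sets]
    return sum(scores) / len(scores)

# User IDs following each account (made up).
smart_car   = {1, 2, 3, 4, 5, 6}
greenpeace  = {2, 3, 4, 7, 8}
sierra_club = {3, 4, 5, 9}

print(affinity(smart_car, [greenpeace, sierra_club]))  # higher = greener image
```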

What is unique about this approach to using social media data is that it does not really depend on what people say online (as other approaches do). It relies only on who is following a brand while also following another (exemplar) organization. The strength of the social connection becomes a signal of the brand’s strength on a specific attribute. “Using social connections rather than text allows marketers to capture information from the silent majority of brand fans, who consume rather than create content,” says Jennifer Cutler, who teaches marketing at the Kellogg School of Management at Northwestern University.

Sounds great in theory, right? But how can we be sure that it produces meaningful results? By validating it against the trusted survey data that has been used for decades. When tested across 200+ brands in four sectors (Apparel, Cars, Food & Beverage, Personal Care) and three perceptual attributes (Eco-friendliness, Luxury, Nutrition), an average correlation of 0.72 shows that social connections can provide very good information on how brands are perceived. Unlike a survey, this approach can be run continuously and at low cost, with results available in real time. And there is another advantage. “The use of social networks rather than text opens the door to measuring dimensions of brand image that are rarely discussed by consumers in online spaces,” says Professor Cutler.
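The validation step itself is simple to picture: line up the network-based affinity scores and the survey-based attribute ratings for the same brands and correlate them. A minimal sketch with invented numbers (requires Python 3.10+ for statistics.correlation):

```python
# Correlate network-derived affinity scores with survey ratings across
# brands; values are made up for illustration.
from statistics import correlation

affinity_scores = [0.12, 0.34, 0.05, 0.41, 0.22]  # from the network method
survey_ratings  = [2.1, 4.0, 1.8, 4.4, 3.0]       # mean survey ratings, same brands

print(round(correlation(affinity_scores, survey_ratings), 2))  # Pearson's r
```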

...

Recently I had lunch with my colleague Michel Pham at Columbia Business School. Michel is a leading authority on the role of affect (emotions, feelings and moods) in decision making. He was telling me about a very interesting phenomenon called the Emotional Oracle Effect, through which he and his colleagues had examined whether emotions can help people make better predictions. I was intrigued. We tend to think of prediction as a very rational process – collect all relevant information, use some logical model for combining the information, then make the prediction. But Michel and his colleagues were drawing on a different stream of research that showed the importance of feelings. So the question was: can people make better predictions if they trust their feelings more?

To answer this question they ran a series of experiments. As we researchers know, experiments are the best way to establish a causal linkage between two phenomena. To ensure that their findings were solid, they ran eight separate studies in a wide variety of domains, including predicting a Presidential nomination, movie box-office success, the winner of American Idol, the stock market, college football and even the weather. While in most cases they employed a standard approach to manipulate how much people trusted their feelings, in a couple of cases they looked at differences between people who trusted their feelings more (and less).

Across these various scenarios the results were unambiguous. When people trusted their feelings more, they made more accurate predictions. For example, the box-office showing of three movies (48% vs. 24%), the American Idol winner (41% vs. 24%), the NCAA BCS Championship (57% vs. 47%), the Democratic nomination (72% vs. 64%) and the weather (47% vs. 28%) were some of the cases where people who trusted their feelings predicted better than those who did not. This, of course, raises the question: why? What is it about feelings and emotion that allows a person to predict better?

The most plausible explanation they propose (tested in a couple of studies) is what they call the privileged-window hypothesis. This grows out of the theoretical argument that “rather than being subjective and incomplete sources of information, feelings instead summarize large amounts of information that we acquire, consciously and unconsciously about the world around us.” In other words, we absorb a huge quantity of information but don’t really know what we know. Thinking rationally about what we know and summarizing it seems less accurate than using our feelings to express that tacit knowledge. So, when someone says they did something because “it just felt right”, it may not be so much a subjective decision as an encapsulation of acquired knowledge. The affective/emotional system may be better at channeling the information and making the right decision than the cognitive/thinking system.

So, how does this relate to market research? When trying to understand consumer behavior through surveys, we usually try to get respondents to use their cognitive/thinking system. We explicitly ask them to think about questions, consider options and so on, before providing an apparently logical answer. This research would indicate that there is a different way to go. If we can find a way to get consumers to tap into their affective/emotional system we might better understand how they arrived at decisions.

...

You may have heard about the spat between Apple and Samsung. Apple is suing Samsung for alleged patent infringements relating to features of the iPhone and iPad. The damages claimed by Apple? North of 2 billion dollars. The obvious question is how Apple came up with those numbers. The non-obvious answer is: partly by using conjoint analysis – the tried and tested approach we often use for product development work at TRC.

Apple hired John Hauser, Professor of Marketing at MIT’s Sloan School of Management, to conduct the research. Prof. Hauser is a very well-known expert in the area of product management. He has mentored and coauthored several conjoint-related articles with my colleague Olivier Toubia at Columbia University. For this case, Prof. Hauser conducted two online studies (n=507 for phones and n=459 for tablets) to establish that consumers indeed valued the features that Apple was arguing about. Details about the conjoint studies are hard to get, but it appears that he used Sawtooth Software (which we use at TRC) and the advanced statistical estimation procedure known as Hierarchical Bayes (HB) (which we also use at TRC) to get the best possible results. It also appears that he may have run a conjoint with seven features, incorporating graphical representations to enhance respondent understanding.

There are several lessons to be learnt here for those interested in conducting a conjoint study. First, conjoint sample sizes do not have to be huge. I suspect they are larger than absolutely necessary here because the studies are being used in litigation. Second, he wisely confined the studies to just seven attributes. We repeatedly recommend to clients that conjoint studies should not be overloaded with attributes. Conjoint tasks can be taxing for survey respondents, and the more difficult they are, the less attention will be paid. Third, he used HB estimation to obtain preferences at the individual level, which is the state-of-the-science approach. Last, he incorporated graphics wherever possible to ensure that respondents clearly understood the features. When designing conjoint studies it is good to take these (and other) lessons into consideration to ensure that we get robust results.

So, what was the outcome?

As a result of the conjoint study, Prof. Hauser was able to determine that consumers would be willing to spend an additional $32 to $102 for features like sliding to unlock, universal search and automatic word correction. Under cross-examination he acknowledged that this was stated preference in a survey and not necessarily what Apple could charge in a competitive marketplace. This is another point that we often make to clients, both in conjoint and other contexts. There is a big difference between answering a survey and actual real-world behavior (where several other factors come into play). While survey results (including conjoint) can be very good comparatively, they may not be especially good absolutely. Apple used the help of another MIT-trained economist to bring in outside information and finally ended up with a damage estimate of slightly more than $2 billion.
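For readers curious about the mechanics, dollar values like these are commonly derived by scaling a feature’s part-worth utility by the utility-per-dollar implied by the price attribute. The sketch below uses hypothetical HB output for one respondent, not figures from the Apple studies:

```python
# Convert conjoint part-worth utilities into willingness-to-pay (toy data).
feature_utility = {"slide_to_unlock": 0.80, "universal_search": 0.55}

# Two tested price points and their part-worths imply a utility-per-dollar
# exchange rate (lower prices earn higher utility).
price_points = (199.0, 299.0)   # hypothetical prices
price_utils  = (0.90, -0.60)    # utility at each price
util_per_dollar = (price_utils[0] - price_utils[1]) / (price_points[1] - price_points[0])

for feature, u in feature_utility.items():
    wtp = u / util_per_dollar   # dollars of price the feature "buys back"
    print(f"{feature}: ${wtp:.0f}")
```

As the cross-examination noted, such survey-derived values are best read comparatively rather than as market prices.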

...

What Does the Fox Say?

Posted by Rajan Sambandam in Market Research

Nate Silver’s much-anticipated (at least by some of us) new venture launched recently. In his manifesto he describes it as a “data journalism” effort, and for those of us who have followed his work over the last five years – from the use of sabermetrics in baseball analysis through the predictions of presidential politics – there is plenty to look forward to. Apart from the above topics, his website is focusing on other interesting areas such as science, economics and lifestyle, bringing data-driven rigor and simple explanation to the understanding of all these fields. It follows the template of the blog he ran for the New York Times as well as his bestselling book, The Signal and the Noise: Why So Many Predictions Fail, But Some Don’t. As a market researcher, I found much to like in the basic framework he has laid out for his effort.

In critiquing traditional journalism, Nate describes a quadrant using two axes – Qualitative versus Quantitative, and Rigorous & Empirical versus Anecdotal & Ad-hoc.

(Quadrant chart of journalism approaches; source: www.fivethirtyeight.com)

He is looking to occupy the mostly open top left quadrant, while arguing that opinion columnists too often occupy the bottom right quadrant and traditional journalism generally occupies the bottom left quadrant. For someone with such a quantitative background, he is not dismissing the qualitative side at all. On the contrary, he argues that it is possible to be qualitative and rigorous and empirical, if one is careful about the observations made (and cites examples of journalists such as Ezra Klein, who occupy the top right quadrant).

For those of us in market research the qualitative versus quantitative dimension is, of course, very familiar. Somewhat less so is the second dimension – rigorous and empirical versus anecdotal and ad-hoc. But this second dimension is especially important to consider because it directly affects our ability to appropriately generalize the insights we develop. As practicing researchers, we know that qualitative research is excellent for discovery and quantitative is great for generalizations. But we also know that is not always the way things are done in practice.

...

We just wrapped up another of our client conferences, and it was another successful day for all concerned. This conference stood out for the level of interaction between the speakers and the audience – a testament to the speakers, their topics, and the keen interest that practitioners have in those topics.

The first speaker was Olivier Toubia from Columbia University. Olivier is a true leader in the area of innovation research and teaches an MBA course called Customer Centric Innovation. He gave a quick round-up of four important questions that he has been able to address through his research: how to motivate consumers to generate ideas, how to structure the idea generation process, how to screen and evaluate the ideas, and how to find consumers who have good ideas. By taking us through a variety of studies (including surveys and experiments) he was able to answer these questions and provoke a lot of interesting discussion from the audience.

Next up was Vicki Morwitz from New York University. She uses surveys extensively in her research and is a leader in understanding the impact that survey responses have on subsequent behavior. She was able to present evidence about the unintended effects that surveys have on respondents, something that should be of interest to all marketing research firms and indeed all marketers. In some cases surveys have a positive impact in that they increase future purchasing behavior but, said Vicki, they should be used with caution, as overt efforts to influence consumers do not seem to work.

Vicki’s presentation was followed by TRC’s own Michael Sosnowski, who discussed the idea of doing more with less in a mobile world. He talked about the increasing numbers of survey respondents who attempt to take surveys on their smartphones and why we as researchers should be aware of that. He questioned the conventional wisdom that mobile phone surveys should be short and simple, showed examples of how more complex choice-based surveys (using TRC’s Bracket) can be conducted on mobile phones, and demonstrated that they provide results similar to an online survey. We may not be ready to do conjoint studies on mobile phones, he said, but neither should we artificially constrain ourselves to extremely simple data collection. Using good design and sophisticated analysis it is possible to get good quality information from mobile surveys.

Following Michael was Joydeep Srivastava from the University of Maryland, an old friend of mine from my graduate school days. He is now a leading consumer behavior researcher who has done especially interesting work in the area of pricing. His specific interest is in partitioned pricing (such as charging a separate price for shipping) and he was able to enlighten the audience with the results of his experiments. For example, he countered the myth that charging a separate shipping price and then providing a price discount to offset it would stave off any damage to the company. On the contrary, it actually reduced purchase likelihood compared to not providing a discount. This, he said, was because of people’s unwillingness to pay for shipping in the first place and the explicit reminder of that charge created by the offsetting discount.

...

My Evening with Daniel Kahneman

Posted by Rajan Sambandam in Consumer Behavior

Okay, so it wasn’t really just the two of us – there were a few hundred others involved. Still, it was a very memorable evening that I think is worth sharing.

The day started innocently enough. I was heading out to Yale for a guest lecture in the MBA Marketing Research class taught by Jiwoong Shin, as I have done for several Spring semesters now. I like this trip a lot as it allows me to catch up with many of my friends in the Yale Marketing Department. One of those is Shane Frederick, and I had emailed him to see if he was around. He replied asking if I was attending Kahneman’s lecture. I had no idea that Daniel Kahneman, Nobel Prize winner and godfather of behavioral economics, was giving a lecture there. The day was already getting better! I quickly changed my Amtrak ticket to a later time and told Shane I would come by his office so we could walk over.

My guest lecture went off very well, with the students asking plenty of interesting questions. Then I had lunch with Zoe Chance, who is doing some very interesting work with leading companies, applying ideas from behavioral economics. After a couple more meetings, I went to see Shane and we walked over early, knowing there would be a big crowd. And we were glad we did, as the auditorium was overflowing by the time the lecture started.

Daniel Kahneman (Danny to his friends) was introduced by another notable person from Yale, Professor Robert Shiller (yes, he of the Case-Shiller Index you may have heard about during the housing crisis). Shiller talked about the widespread impact of Kahneman’s work, especially after the publication of his bestseller Thinking, Fast and Slow. Trying to find Kahneman’s connections to Yale, Shiller pointed out that two of his coauthors (Shane Frederick and Nathan Novemsky, both in the marketing department) were at Yale.

And then it was time for Kahneman to speak. His humility, thoughtfulness and eloquence came through pretty much from the first few words. He started by saying that he doesn’t do university speeches anymore since he is not actively doing any research (he is retired), but could not say no to Bob Shiller. Most of his recent speeches have been about his book, and there had been so many that, as a consequence, he seemed to have forgotten everything else he ever did (laughter!). And that, he said, makes sense because, as he points out in the book, we like things that are familiar (more laughter!).

...

Was the election outcome a surprise for you? It wasn’t for me.

In some ways election night was quite boring. And I blame Nate Silver, Sam Wang and others who predicted the outcome with such stunning accuracy that (at least for me) the drama was completely missing. While conventional pundits and partisans were making all kinds of predictions ranging from “Toss-up” to “Romney landslide”, a group of analysts (nerds, if you choose) were quietly predicting that Obama had a small but consistent and predictable lead. Turns out they were spot-on in their predictions (and were predictably smeared by vested interests).

In my last post I talked about Nate Silver and the approach he uses. This time I want to draw your attention to another analyst, Sam Wang of the Princeton Election Consortium. He is a neuroscientist who has been forecasting for the last three presidential election cycles and has been doing a remarkably good job of it. He nailed the Electoral College vote in 2004 and missed by just one in 2008. How did he do this time? Well, he had two predictions. One of them (based on his median estimator) was 303 for Obama, which is where the tally currently stands, subject to Florida being officially called. The second one (based on his modal estimator) was 332 for Obama which is where the tally is likely to end up if/when Obama wins Florida. Excellent calls whichever way you look at it, given the extremely close race in Florida.

I’ll give you the simple answer. Surveys!

No, I don’t mean looking at whatever survey happens to catch your eye or tickles your (or your favorite network or blog’s) ideological fancy. I mean using a system that is powered by old-fashioned surveys and making very, very good explanations and predictions based on them. There is someone who has been doing exactly that for several years now, and it makes sense for anyone interested in surveys to understand how he does it. I’m talking, of course, about Nate Silver at fivethirtyeight.com.

Interestingly, Silver does not actually conduct a single survey himself. Instead, he has built a database of surveys (containing thousands) and uses some simple, clear rules to analyze them. Based on these rules and the statistical models he has built, he is able to provide the best unbiased view of the race. All this from survey data. How does he do it? Let’s take a look at some (and by no means all) of his rules.
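To give a flavor of the kind of rules involved, here is a toy Python sketch of poll aggregation – my illustration only, not Silver’s actual model, which also adjusts for house effects, trends and more. It weights each poll by sample size and recency:

```python
# Toy poll aggregation: bigger samples count more, older polls decay.
from dataclasses import dataclass

@dataclass
class Poll:
    candidate_share: float  # e.g., 51.0 (%)
    sample_size: int
    days_old: int

def weighted_average(polls: list, half_life_days: float = 14.0) -> float:
    """Exponential recency decay times sample size as the poll weight."""
    num = den = 0.0
    for p in polls:
        weight = p.sample_size * 0.5 ** (p.days_old / half_life_days)
        num += weight * p.candidate_share
        den += weight
    return num / den

polls = [Poll(51.0, 1200, 2), Poll(49.5, 600, 10), Poll(52.0, 900, 21)]
print(round(weighted_average(polls), 1))  # ~50.9
```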

In Thinking, Fast and Slow, Nobel winner Daniel Kahneman (click here for a previous post about the book) talks about the two selves people have: the experiencing self and the remembering self. The terms are self-explanatory, and vacations are a good way to think about them. The part of us that is enjoying the vacation is the experiencing self, while the part that is reliving it later (sometimes years later) is the remembering self. Neither one may be more important, but the emphasis we place on one or the other could determine our behavior. So, for example, you can enjoy the vacation or take plenty of pictures to relive it later, depending on the self that is more important. A way of finding out which self matters more is to ask ourselves whether we would go on a certain vacation if we could only enjoy it, but not take any pictures (or video, etc.).

Well, another conference is over, perhaps our best ever. A great roster of speakers, a room full of engaged attendees and a great location added up to a terrific formula for a memorable conference. Some highlights from the various sessions:

Lenny Murphy, Editor-in-Chief of the GreenBook blog, opened with a wide sweep, discussing the waves of change rocking the market research world. Pulling from the GRIT survey, his discussions with emerging and established players, as well as his itinerant investigation, he was able to convincingly make the case that change in the MR industry is happening. Now. He talked about emerging technologies such as mobile, social media and text analytics, and how academic expertise is a key to unlocking a future of new ideas. It was a perfect set-up for the group of academic presentations that followed.

The outside view that Daniel Kahneman talks about in his book Thinking, Fast and Slow is a specific remedy for a problem known as the planning fallacy, i.e., the inability of people to make accurate predictions about their own projects. The planning fallacy is part of a larger problem of optimism bias. What is optimism bias? Simply put, people are generally more optimistic than they should be. For example, it is well known that most people think they are better-than-average drivers, an impossibility. It stems from a general dose of overconfidence not warranted by the situation at hand.

The best example of overconfidence is a study Kahneman cites of CFOs of large corporations. They were asked to estimate the returns of the S&P Index over the following year. The data were collected over a number of years, so there was ample opportunity to correlate the estimates with the actual performance of the Index in the following year. Any guesses as to this correlation, given that the respondents could have been expected to have special insight into this matter? It was almost exactly zero – slightly less, in fact! And they seemed to have no idea their forecasts were that bad.


In his opus Thinking, Fast and Slow, Nobel winner Daniel Kahneman (click here for a previous post) relates a story from early in his career when he was leading a team to develop a curriculum and write a textbook on judgment and decision-making for high schools. He had assembled a group of experts, and after working diligently for a year they had completed an outline of the syllabus and written two chapters. One fine day, when discussing procedures for estimating uncertain quantities, it occurred to him that he should get an estimate from everyone on how long they thought the whole project would take. Being the clever psychologist that he was, rather than ask the group to guess publicly, he asked each person to make a confidential prediction. The mean was about two years and the range was about half a year on either side. In other words, the group was very consistent in its prediction.

Then Kahneman had the idea of asking the curriculum expert in the group, Seymour Fox, for his specific opinion. Only this time he asked Seymour to think about other teams like theirs and how long it had taken them to finish. After a long silence the astonishing answer came out. Nearly half the groups never even finished the project. Among those that did, the average time taken was about seven years! Seymour Fox also estimated that this group was slightly below average in terms of the skill set it possessed compared to the other groups. The killer, of course, was how long it actually took Kahneman’s group to complete its project. Eight years!

Effectively what had happened was that a group of experts in judgment and decision-making had somehow fooled themselves into thinking way too optimistically about the future and had made predictions based on it. This included the expert who, in spite of having the best information, somehow ignored it in favor of optimism bias. As Kahneman graciously adds, it also included a leader who did not pull the plug on a project that would likely take another six years and was a coin toss as to whether it would even be completed.

The biggest lesson Kahneman draws from this episode is that there are two approaches to forecasting, which he labels the inside view and the outside view. The inside view is when we focus on the specifics of our own situation, try to form a coherent story and somehow convince ourselves that, given the “special” nature of our situation, success is just around the corner. In some ways this probably explains the enormously high failure rates of new products and the only slightly lower failure rates of new small businesses. The outside view is one that takes into account the general failure rate of the reference class of similar projects. Assuming the reference class is properly chosen, the outside view should provide a nice ballpark of where the estimate is going to be. In practice it is better to start there and adjust using the special knowledge of the inside view, and thus avoid embarrassing predictions. Not following this kind of procedure is why we routinely read about, say, large transportation projects running years over schedule and costing several times the original projection. It is also why kitchen renovations routinely cost twice the initial estimate for the average household.
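The procedure itself is mechanical enough to sketch. Below is a toy Python rendering of the outside-view recipe – start from the reference class, then make a bounded inside-view adjustment; the durations and adjustment are invented:

```python
# Outside-view forecast: anchor on the reference class, then adjust.
reference_durations = [7, 8, 6, 9, 7, 10, 8]  # years taken by similar projects (hypothetical)

base_estimate = sorted(reference_durations)[len(reference_durations) // 2]  # median
inside_adjustment = -1  # e.g., "our team is a bit more experienced" (capped, not wishful)

forecast = base_estimate + inside_adjustment
print(f"Outside-view anchor: {base_estimate} years; adjusted forecast: {forecast} years")
```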

So are there specific lessons for market researchers? Of course. One concerns the likelihood of success of any kind of new technological advance (mobile, neuro, text analytics, social media monitoring, whatever). Without understanding the reference information for how such new technologies have ultimately fared, we can too easily get caught up in the fanciful nature of a specific technology and make prognostications not just about success, but also about the time frames within which such things can come true. On the flip side, the death of older technologies can be too gleefully forecast (“Surveys will die in a year!”) because of the glamour of newer techniques, if the reference cases are not carefully analyzed.

...

The Nobel Prize winner and intellectual godfather of behavioral economics, Daniel Kahneman, has summarized a lifetime of research in his recent book Thinking, Fast and Slow. In the next few blog posts I will draw upon some of the concepts he espouses and link them to research practice, to see what practitioners can take away from his four decades of work.

This post goes directly to the title of the work: fast and slow thinking. This is the foundation of his work. He and his great collaborator Amos Tversky (who passed away and therefore could not receive the Nobel) saw human thinking as taking two forms, which they call System 1 and System 2. More aptly they could be called the “automatic” and “effortful” systems, but Fast and Slow is a good shorthand description. According to Kahneman’s description:

“System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control.”

“System 2 allocates attention to the effortful mental activities that demand it, including complex computations.”

The Black Swan is a book that was published a few years ago and generated much publicity and at least some controversy. It occurred to me that there are lessons market researchers can learn from that book, particularly about the relationship between qualitative and quantitative data obtained from a survey format. The idea is that the framework used to analyze such data is different from the one used for qualitative data obtained directly through methods such as IDIs and focus groups. Understanding the difference between quantitative and qualitative frameworks for data analysis (and in particular, the difference between statistical and managerial outliers) can help derive more value when qualitative data are collected in a regular survey. But first, let's take a detour.

A Brief Tour of The Black Swan

In his informative (and entertaining) book, Nassim Nicholas Taleb argues that real data are either distributed normally (from "mediocristan") or not (from "extremistan"). The former are characterized by data that follow the traditional normal distribution (or bell curve). The majority of the distribution is near the middle, surrounding the average, and as we venture further out the number of observations becomes increasingly scarce. It is a distribution that describes many phenomena in the natural world. In fact, basic statistics (the central limit theorem) shows that with a reasonable number of observations, the averages of most distributions start approximating the normal.
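That last point is easy to demonstrate. The Python snippet below draws from a deliberately skewed (exponential) distribution, yet the averages of modest samples pile up in a recognizable bell shape; all numbers are simulated:

```python
# Central limit theorem demo: means of skewed draws look normal.
import random

random.seed(1)
means = [sum(random.expovariate(1.0) for _ in range(30)) / 30 for _ in range(10_000)]

# Crude text histogram: mass clusters symmetrically near the true mean of 1.0.
lo, hi, bins = 0.4, 1.6, 12
counts = [0] * bins
for m in means:
    if lo <= m < hi:
        counts[int((m - lo) / (hi - lo) * bins)] += 1
for i, c in enumerate(counts):
    print(f"{lo + i * (hi - lo) / bins:.1f} {'#' * (c // 100)}")
```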

Yes, it is a rather important issue and can be approached in a variety of ways. My purpose with this post is not to provide a comprehensive answer, but to look at one specific solution based on what I recently read. The book is Thinking, Fast and Slow, the Nobel Prize winner Daniel Kahneman's excellent summary of a lifetime of research. He is perhaps the most accomplished psychologist around and could (among other things) justifiably be called the intellectual godfather of behavioral economics. It is always worth listening to what he says, and in this particular case it seems to me there is a nugget that applies to making quantitative research more actionable.

How to Be Happy by Spending Money Wisely

Posted by Rajan Sambandam in Consumer Behavior

It is that time of year when many people's thoughts turn towards buying gifts for loved ones. More generally, it is a time when thoughts related to money and happiness occupy our attention. When thinking of ways to spend money – on oneself, on loved ones or even on complete strangers – wouldn't it be nice if there were some actual research to provide data-based guidance on the topic? As it happens, there is. Researchers Elizabeth Dunn of the University of British Columbia, Daniel Gilbert of Harvard and Timothy Wilson of the University of Virginia have identified, through their research, eight principles designed to help consumers get more happiness for their money. Follow them as you will to enhance your life.


Thoughts on TMRE 2011

Posted by Rajan Sambandam in Conferences

I recently came back from The Market Research Event (TMRE) 2011 in Orlando, the biggest marketing research conference of the year. There was plenty to like, not the least of which was the scale of the event. Rarely, if ever, do we get to see an exclusively market research event that is so big. Kudos to IIR for putting it together.

The highlight of the event for me was the Keynotes, of which there were eight. I couldn't catch all of them, but my favorite was Sheena Iyengar from Columbia, author of the bestseller The Art of Choosing (and sister-in-law of my friend Raghu Iyengar from Wharton). In a beautifully choreographed and clear presentation, Sheena (who is blind) talked about the problem of plenty in consumer choice and ways to avoid it for both sellers and buyers. The Keynotes were all held in a massive room and very entertainingly emceed by Cayne Collier, an actor and improv artist from Second City Chicago. Discussions with a variety of people indicated that the Keynotes were the favorite part of the conference for many.


As I sat down to write about whether market research is a necessity or a luxury, I realized that this is not a simple question. Consider the conventional meaning of necessities (defined as must-haves) and luxuries (defined as nice-to-haves). Which category market research falls into may depend on the eye of the beholder.

Researchers (or, more accurately, research sellers) may want to think of themselves as producing necessities rather than luxuries. But in the consumer world necessities are also generally commodities, often sold based on price. Researchers, of course, want to be seen as producing something valuable, something that is worth a premium – in other words, a luxury. So, which is it?

Now let's look at it from a research buyer's perspective. The buyer may think of research as a necessity, something that is indispensable for making good business decisions. But in keeping with the popular perception of necessities, perhaps they feel that more than one company can provide it and are hence unwilling to pay much of a premium for it. This view would support the many research sellers who complain about the commoditization of research.
