
Should Hotels Respond to Online Reviews?

Posted in Consumer Behavior

You are planning a trip to the City of Brotherly Love to visit the world-famous Philadelphia Flower Show, and you would like to book a hotel near the Convention Center venue. If you’re like most people, you go online, perhaps to TripAdvisor or Expedia, and look for a hotel. In a few clicks you find a list of hotels with star ratings, prices, amenities, distance to destination – everything you need to make a decision. Quickly you narrow your choice down to two hotels within walking distance of the Flower Show, conveniently located near the historic Reading Terminal Market.

But how to choose between the two that seem so evenly matched? Perhaps you can take a look at some review comments that might provide more depth. There are hundreds of comments, which is more than you have time for, but you quickly read a few on the first page. You are about to close the browser when you notice something: one of the hotels has responses to some of the negative comments. Hmmm…interesting. You decide to read the responses, and see some apologies, a few explanations and general earnestness. No such responses for the other hotel, which now begins to seem colder and more distant. What do you do?

In effect, that’s the question Davide Proserpio and Georgios Zervas seek to answer in a recent article in the INFORMS journal Marketing Science. And it’s not hard to see why it’s an important question. Online reviews can have a significant impact on a business, and unlike word of mouth they tend to stick around for years (just take a look at the dates on some reviews). Companies can’t do much to stop reviews (especially negative ones), so they often try to co-opt them by responding to selected reviews. It is a manual task, but the idea seems sound. By responding, perhaps they can take the sting out of negative reviews, appear contrite, promise to do better, or just thank the reviewer for the time they took to write the feedback – all with the objective of getting prospective customers to give them a fair chance. The question, then, is whether such efforts are useful or just more online clutter.

It turns out that’s not an easy question to answer, and as Proserpio and Zervas document in the article, there are several factors that first need to be controlled. But their basic approach is easy enough to understand – they examine whether TripAdvisor ratings for hotels tend to go up after management responds to online reviews. An immediate problem to overcome, ironically enough, is management response. That is, in reaction to bad reviews a hotel may actually make changes that then increase future ratings. That’s great for the hotel, but not so much for the researcher, who is trying to study whether the response to the online review itself had an impact, not whether the hotel is willing to make changes in response to the review. So that’s an important factor that needs to be controlled. How to do that?

Enter Expedia. As it happens, hotels frequently respond to TripAdvisor reviews, while they almost never do so on Expedia. So the researchers use Expedia as a control and compare the before-after difference in ratings on TripAdvisor and Expedia (the difference-in-differences approach). This lets them tease out whether the improvement in ratings came from responding to reviews or from real changes at the hotel. Another check they use is to compare the ratings of guests who left a review shortly before a hotel began responding with those who did so shortly after. Much of the article is actually devoted to several more clever and increasingly complex maneuvers to finally isolate just the impact of management responses. What do they find?
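Before getting to the findings, here is a minimal sketch of the difference-in-differences logic in Python. The ratings below are invented for illustration, and the actual study controls for far more than this two-by-two comparison.

```python
# A minimal difference-in-differences sketch with hypothetical ratings.
# Not the authors' data or model; just the core two-by-two comparison.
import pandas as pd

data = pd.DataFrame({
    "platform":   ["TripAdvisor", "TripAdvisor", "Expedia", "Expedia"],
    "period":     ["before", "after", "before", "after"],
    "avg_rating": [3.60, 3.85, 3.55, 3.60],
})

def avg(platform, period):
    row = data[(data.platform == platform) & (data.period == period)]
    return row.avg_rating.iloc[0]

# Change on the "treated" platform (hotels respond on TripAdvisor)
ta_change = avg("TripAdvisor", "after") - avg("TripAdvisor", "before")
# Change on the "control" platform (hotels rarely respond on Expedia);
# this absorbs real quality improvements, seasonality, and so on.
ex_change = avg("Expedia", "after") - avg("Expedia", "before")

did_estimate = ta_change - ex_change
print(f"Difference-in-differences estimate: {did_estimate:+.2f} stars")
# -> +0.20 here: the gain attributable to responding, since real changes
#    at the hotel would show up on both platforms.
```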

...
In my last blog I referenced an article about design elements that no longer serve a purpose, and I argued that techniques like Max-Diff and conjoint can help determine whether those elements are really necessary. Today I’d like to ask a different question: what do we as researchers still use that is useless?
 
For many years the answer would have been telephone interviewing. We continued to use telephone interviewing long after it became clear that the web was a better answer. The common defense was that web samples are “not representative”, which was true, but telephone data collection was no longer representative either. I’m not saying that we should abandon telephone interviewing…there are certainly times when it is a better option (for example, when talking to your client’s customers and you don’t have email addresses). I’m just saying that the notion that we need a phone sample to be representative is unfounded.
 
I think, though, we need to go further. We still routinely use cross tabs to ferret out interesting information. The fact that these interesting tidbits might be nothing more than noise doesn’t stop us from doing so. Further, the many “significant differences” we uncover are often not significant at all…they are statistically discernible, but not significant from a business decision-making standpoint. Still, the automatic significance testing makes us pause to think about them.
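A quick simulation shows why a full deck of tables will always offer up “interesting” findings. Here we run a couple hundred significance tests on pure noise (the group sizes and test counts below are arbitrary); about 5% come back “significant” by construction.

```python
# Run many t-tests on data with NO real differences and count the
# "significant" results. Illustrates the multiple-comparisons noise
# problem with scanning cross tabs; all sizes are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_tests, n = 200, 200          # 200 banner/row comparisons, n=200 per group

false_positives = 0
for _ in range(n_tests):
    a = rng.normal(size=n)     # both groups drawn from the SAME
    b = rng.normal(size=n)     # distribution: no true difference
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} 'significant' differences out of {n_tests} "
      "tests on pure noise")
# Expect roughly 10 (5% of 200): interesting tidbits, guaranteed,
# even when nothing at all is going on.
```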
 
Wouldn’t it be better to dig into the data and see what it tells us about our starting hypothesis? Good design means we thought about the hypothesis and the direction we needed during questionnaire development, so we know which questions to start with and can then follow the data wherever it leads. While in the past this was impractical, we now live in a world where analysis packages are easy to use. So why are we wasting time looking through decks of tables?
 
There are of course times when having a deck of tables could be a time saver, but like telephone interviewing, I would argue we should limit their use to those times and not simply produce tables because “that’s the way we have always done it”.  

I read an interesting article about design elements that no longer serve a purpose, but continue to exist. One of the most interesting is the presence of a grille on electric cars.
 
Conventional internal combustion engine cars need a grille because the engine needs air to flow over the radiator which cools the engine. No grille would mean the car would eventually overheat and stop working. Electric cars, however, don’t have a conventional radiator and don’t need the air flow. The grille is there because designers fear that the car would look too weird without it.  It is not clear from the article if that is just a hunch or if it has been tested.
   
It would be easy enough to test this out. We could simply show some pictures of cars and ask people which design they like best. A Max-Diff approach or an agile product like Idea Magnet™ (which uses our proprietary Bracket™ prioritization tool) could handle such a task. If the top choices were all pictures that did not include a grille, we might conclude that this is the design we should use. But there is a risk in this conclusion.
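For concreteness, here is a rough sketch of how the tallying behind such a best/worst exercise works. The design names are invented, and a real study would fit a choice model rather than rely on raw counts.

```python
# Minimal best-minus-worst counting sketch for a Max-Diff style exercise.
# Each tuple is one task outcome: (design picked best, design picked worst).
# Design names are hypothetical, invented for this illustration.
from collections import Counter

tasks = [
    ("no_grille_sleek", "large_grille"),
    ("small_grille", "no_grille_flat"),
    ("no_grille_sleek", "small_grille"),
    ("large_grille", "no_grille_flat"),
    ("no_grille_sleek", "large_grille"),
]

best, worst = Counter(), Counter()
for b, w in tasks:
    best[b] += 1
    worst[w] += 1

# Best-minus-worst counts give a quick preference ranking.
designs = set(best) | set(worst)
scores = {d: best[d] - worst[d] for d in designs}
for d, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{d:18s} {s:+d}")
```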
 
To really understand preference, we need to use a discrete choice conjoint. The exercise I envision would combine the pictures with other key features of the car (price, gas mileage, color…). We might include several pictures taken from different angles that highlight other design features (being careful to not have pictures that contradict each other…for example, one showing a spoiler on the back and another not). By mixing up these features we can determine how important each is to the purchase decision.  

It is possible that the results of the conjoint would indicate that people prefer not having a grille AND that the most popular models always include a grille. How?
 
Imagine a situation in which 80% of people prefer “no grille” and 20% prefer “grille”. The “no grille” people prefer it, but it is not the most important thing in their decision. They are more interested in gas mileage and car color than anything else. The “grille” folks, however, are very strong in their belief. They simply won’t buy a car if it doesn’t have one. As such, cars without a grille start with 20% of the market off limits. Cars with a grille, however, attract a good number of “no grille” consumers as well as those for whom it is non-negotiable.
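Here is a toy choice simulation of that story. All of the utility weights below are invented for illustration (a real conjoint would estimate them from respondent data), but they show how a logit share rule can hand the overall win to the grille car even when 80% of buyers lean the other way.

```python
# Toy logit share simulation of the 80/20 grille paradox.
# All attribute levels and utility weights are invented.
import math

cars = {
    "grille_better_mpg": {"grille": 1, "mpg": 0.6},
    "no_grille":         {"grille": 0, "mpg": 0.3},
}

segments = [
    # 80%: mildly prefer no grille, but mileage matters far more
    {"share": 0.80, "w_grille": -0.2, "w_mpg": 2.0},
    # 20%: grille is effectively non-negotiable (very large weight)
    {"share": 0.20, "w_grille": 6.0, "w_mpg": 2.0},
]

shares = {name: 0.0 for name in cars}
for seg in segments:
    # Each car's utility for this segment
    utils = {name: seg["w_grille"] * c["grille"] + seg["w_mpg"] * c["mpg"]
             for name, c in cars.items()}
    # Logit rule: choice share proportional to exp(utility)
    denom = sum(math.exp(u) for u in utils.values())
    for name, u in utils.items():
        shares[name] += seg["share"] * math.exp(u) / denom

print({name: round(s, 2) for name, s in shares.items()})
# -> roughly {'grille_better_mpg': 0.68, 'no_grille': 0.32}: the grille
#    car wins even though 80% of buyers would rather have no grille.
```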
 
Conjoint might also find that the size of the grille, or alternatives to it, can win over even hard-core “grille”-loving consumers. It is also worth considering that preferences change over time. For example, it isn’t hard to imagine that early automobiles (originally called horseless carriages) had a place to hold a buggy whip (common on horse-drawn carriages), but over time consumers determined they were not necessary (or perhaps that is how the cup holder was born :)).
 
In short, conjoint is a critical tool to ensure that new technologies have a chance to take hold.
 

The Economist did an analysis of political book sales on Amazon to see if there were any patterns. Anyone who uses social media will not be surprised that readers tended to buy books from either the left or the right...not both. This follows an increasing pattern of people looking for validation rather than education, and of course it adds to the growing divide in our country. A few books managed a good mix of readers from both sides, though often these were books where the author found fault with his or her own side (meaning a conservative trashing conservatives or a liberal trashing liberals).

I love this use of big data and hopefully it will lead some to seek out facts and opinions that differ from their own. These facts and opinions need not completely change an individual's own thinking, but at the very least they should give one a deeper understanding of the issue, including an understanding of what drives others' thinking.

In other words, hopefully the public will start thinking more like effective market researchers.

We could easily design research that validates the conventional wisdom of our clients.

• We can frame opinions by the way we ask questions or by the questions we ask earlier in the survey.
• We can omit ideas from a Max-Diff exercise simply because our "gut" tells us they are not viable.
• We can design a discrete choice study with features and levels that play to our client's strengths.
• We can focus exclusively on results that validate our hypothesis.

...

Do people buy green products? Yes, of course. The real question for green marketers is whether they buy enough. In other words, are green sales in line with pro-green attitudes? Not really: huge majorities of consumers show at least some green tendencies, while purchases lag far behind. Why is that? Economics tells us that consumers buy based on value (trading off cost and benefits). Since eco-friendly products are seen as more expensive, higher prices can lower the value of a green product enough to make a conventional alternative more attractive.

While the cost trade-off is clear, it is not the only one. The benefit side has at least two major components. One is the environmental benefit, which may or may not seem tangible enough to make a difference. For instance, a dozen eggs at Acme go for less than a dollar, while some cage-free varieties can run north of $4 at Whole Foods. So, an environmentally conscious consumer has to make a trade-off at the time of purchase – is the product worth the additional cost? For items like food, the benefits may seem small enough, and far enough out, that many may decide the value proposition does not work for them. In other product categories (say, green laundry detergent), the benefits may seem both long term and impersonal, making the trade-off even harder.

The second major component is the effectiveness of the product in performing its basic function. If consumers perceive green products as inherently inferior (in terms of conventional attributes like performance), they are less likely to buy them. So a green laundry detergent (that uses less harsh chemicals) could be seen as more expensive and less effective in cleaning clothes, further dropping its overall value. (A complicating issue is that the lack of effectiveness itself could be a perceptual rather than a real problem.) Unless the company is able to offset these disadvantages, the product is unlikely to succeed.
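A back-of-the-envelope calculation makes the trade-off concrete. The scores and weights below are invented, but they show how a perceived performance gap plus a price premium can sink a green product’s overall value, and how offsetting those disadvantages changes the picture.

```python
# Simple compensatory "value = benefits minus cost" sketch for the
# detergent example. All scores and weights are invented.

def value(perf, env_benefit, price, w_perf=1.0, w_env=0.3, w_price=0.5):
    """Weighted value score for one (mainstream) consumer."""
    return w_perf * perf + w_env * env_benefit - w_price * price

conventional = value(perf=8.0, env_benefit=0.0, price=5.0)
green        = value(perf=6.5, env_benefit=3.0, price=8.0)  # seen as weaker & pricier
print(conventional, green)   # 5.5 vs 3.4: the conventional product wins

# Offset the disadvantages: close the (possibly only perceived)
# performance gap and trim the price premium.
green_fixed = value(perf=8.0, env_benefit=3.0, price=6.5)
print(green_fixed)           # 5.65: now the green product can compete
```

A dark green consumer would carry a much larger environmental weight (w_env), which is why that segment buys even at today’s premiums while the mainstream does not.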

A direct way to increase demand is to offer higher performance on a compensatory attribute. In the case of LED TVs, for example, newer technology consumes less power and provides better picture quality. (Paradoxically, this can sometimes lead to the Rebound Effect, whereby greener technologies encourage higher use, thus clawing back some of the benefits). But in reality, most products are not in a position where green attributes offer performance boosts.

And of course, as it is with every other market, there are segments in this market as well. Consumers who are highly committed (dark green) are willing to buy, as the value they place on the longer term environmental benefits is high enough. And, often they are affluent enough to afford the price. But a product looking for mainstream success cannot succeed only with dark green consumers (who rarely account for more than 20% of the market). Other shades of green will also need to buy. Short of government subsidies and mandates, green marketers have to find ways to balance out the components of the value proposition for the bulk of the market.

...
