HQ Pricing Research
A bunch of us here at TRC enjoy trivia, so we’ve been playing HQ Trivia on its app for the past few months. HQ is a 12-question multiple-choice quiz that requires a correct answer to move on to the next question. As a group, we have yet to get through all 12 questions and win our share of the prize pool. But it’s a nice team-building exercise and we like learning new things (who knew that two US Presidents were born in Vermont?).
 
Given the fun we have playing it, I can understand HQ’s success from the player perspective. Where I am a bit confused is the value proposition for its creators. Venture capital funding provides the prize money, but there are no ads, so I’m not sure how anybody’s actually making money. There are occasional tie-in partnerships (the awesome Dwayne Johnson hosted one of the gaming sessions to promote his newest movie release, “Rampage”). But I suppose the biggest question is: will interest in HQ still be there when they’ve finally signed on enough sponsors to be profitable?
 
We do a lot of pricing research at TRC and can model demand on a variety of variables. But for certain products, predicting the direction of demand is nearly impossible. For consumables and many services, demand is predictable: how your product fares against the competition may have its ups and downs, but you can assume that people who bought toilet paper two weeks ago will be in the market for toilet paper again soon.
 
But with something like HQ Trivia, product demand is much more difficult to determine in advance, especially more than a few weeks from now. Right now it’s still hot – routinely attracting 700,000 – 1,000,000+ players (HQers) in a given game. How do the creators – and investors and potential sponsors – know whether it’s a good investment?  What if interest suddenly declines, either because the novelty has worn off or because something better comes along?  
 
One way to find out is through longitudinal research: routinely check in with HQers over time to determine their likelihood to play the next week, their likelihood to recommend the game to friends, and their attitudes toward the game itself. This information can be overlaid with the raw data HQ collects through game play every day – number of players, number of referrals, and number of first-time players. Not only can this overlay shed light on player interest; players could also weigh in on changes the creators are considering to keep the game fresh.
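To make the overlay concrete, here is a minimal sketch in Python. Everything in it is hypothetical (invented column names and figures, not HQ’s actual data), but it shows how weekly survey waves could be joined to rolled-up game metrics:

```python
import pandas as pd

# Hypothetical weekly survey waves: one row per respondent per wave.
survey = pd.DataFrame({
    "week": ["2018-04-02", "2018-04-02", "2018-04-09", "2018-04-09"],
    "respondent_id": [101, 102, 101, 102],
    "likely_to_play": [5, 3, 4, 2],  # 1-5 scale: likelihood to play next week
})

# Hypothetical game metrics, already rolled up to the same weeks.
gameplay = pd.DataFrame({
    "week": ["2018-04-02", "2018-04-09"],
    "players": [940_000, 870_000],
    "first_time_players": [60_000, 41_000],
    "referrals": [12_000, 9_500],
})

# Average stated intent per wave, then overlay it on observed behavior.
intent = survey.groupby("week", as_index=False)["likely_to_play"].mean()
trend = intent.merge(gameplay, on="week")

# Falling stated intent alongside falling player counts is the early warning.
print(trend)
```

A real tracker would of course weight the waves and watch several attitudinal measures at once, but the mechanics are no more complicated than this.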
 
HQers are engaging in a free activity which gives them the opportunity to win cash prizes.  But just because it’s free to play doesn’t mean the HQ powers-that-be couldn’t do pricing research (more on that in a future blog).  
 
For now, I’ll keep on playing HQ hoping I can answer all the questions, not the least of which is: when will I – and the other million HQers – no longer care? 
 
 

Nouns vs. Verbs in Market Research

I’ve written many times about the importance of “knowing where your data has been”. The most advanced discrete choice conjoint, segmentation or regression is only as good as the data it relies on. In the past I’ve written about the many ways we can bias respondents, from question ordering to badly worded questions and even push-polling techniques. A new study published in Psychological Science suggests that bias can be created far more subtly than that.
 
Dr. Michal Reifen-Tagar and Dr. Orly Idan determined that you can reduce tension by relying on nouns rather than verbs. They are from Israel, so they were not lacking in high-tension things to ask about. For example, half of respondents were asked their level of agreement (on a six-point scale) with the noun-focused statement “I support the division of Jerusalem” and the other half with the verb-focused statement “I support dividing Jerusalem”.
 
Consistent and statistically significant differences were found, with the verb form garnering less support than the noun form. Follow-up questions also indicated that those who saw the verb form were angrier and showed less support for concessions toward the Palestinians.
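For readers who like to see the mechanics, here is a minimal sketch of how a split-sample wording test like this could be analyzed. The response distributions below are invented to mimic the direction (not the size) of the published effect:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
scale = np.arange(1, 7)  # six-point agreement scale

# Invented response probabilities: the verb wording skews slightly lower.
noun_cell = rng.choice(scale, size=300, p=[.08, .10, .15, .22, .25, .20])
verb_cell = rng.choice(scale, size=300, p=[.12, .15, .18, .22, .20, .13])

# Independent-samples t-test across the two wording cells.
t, p = stats.ttest_ind(noun_cell, verb_cell)
print(f"noun mean={noun_cell.mean():.2f}  verb mean={verb_cell.mean():.2f}  p={p:.4f}")
```

With cell sizes in the hundreds, even a modest simulated shift like this one shows up clearly in the test.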
 
Is this a potential problem for researchers? My answer would be “potentially”. 
 
The obvious example might be in published opinion polls. One can imagine a crafty person creating a questionnaire in which issues they agree with are presented in noun form (thus garnering higher agreement from the general public) and ones they disagree with in verb forms (thus garnering lower agreement). It is unlikely that anyone would challenge those results (except for those of you clever enough to read my blog).   
It might also be the case in more consumer-oriented studies, though it is unclear whether the same effect would appear in situations where tension levels are not so high. In our clients’ best interest, however, it makes sense to be consistent in our wording and thereby eliminate another potential form of bias.
 

Market Research Prioritization: Email Violations

I work in a business that depends heavily on email. We use it to ask and answer questions, share work product, and engage our clients, vendors, co-workers and peers on a daily basis. When email goes down – and thankfully it doesn't happen that often – we feel anything from mild annoyance to downright panic.

So business email is ubiquitous. But not everyone follows the same rules of engagement – which can make for some very frustrating exchanges.

We assembled a list of 21 "violations" we have experienced (or committed) and set out to learn which ones are considered the most bothersome.

We administered our Bracket™ prioritization exercise to research panelists who say they use email for business purposes, to determine which email scenario is the "most irritating".
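Bracket™ itself is TRC's proprietary exercise, so the code below is only a toy illustration of the general idea behind tournament-style prioritization: items meet in pairs, the "winner" of each pair advances, and an overall winner emerges. The respondent-choice function here is simulated:

```python
import random

def bracket_round(items, prefer):
    """Pair items off; the preferred item in each pair advances.
    With an odd count, the leftover item gets a bye into the next round."""
    random.shuffle(items)
    winners = [prefer(items[i], items[i + 1]) for i in range(0, len(items) - 1, 2)]
    if len(items) % 2:
        winners.append(items[-1])  # bye
    return winners

# Hypothetical stand-in for a respondent's pairwise choice; in a real
# exercise the respondent would pick the more irritating violation.
def ask_respondent(a, b):
    return random.choice([a, b])

violations = [f"violation {i}" for i in range(1, 22)]  # our 21 email violations
while len(violations) > 1:
    violations = bracket_round(violations, ask_respondent)
print("most irritating:", violations[0])
```

Aggregated across many respondents, win rates from rounds like these yield the overall prioritization.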

...

Should Hotels Respond to Online Reviews?

Posted in Consumer Behavior

You are planning a trip to the City of Brotherly Love to visit the world-famous Philadelphia Flower Show, and you would like to book a hotel near the Convention Center venue. If you’re like most people, you go online, perhaps to TripAdvisor or Expedia, and look for a hotel. In a few clicks you find a list of hotels with star ratings, prices, amenities, distance to destination – everything you need to make a decision. Quickly you narrow your choice down to two hotels within walking distance of the Flower Show and conveniently located near the historic Reading Terminal Market.

But how to choose between the two that seem so evenly matched? Perhaps some review comments will provide more depth. There are hundreds of comments, which is more than you have time for, but you quickly read a few on the first page. You are about to close the browser when you notice something: one of the hotels has responses to some of the negative comments. Hmmm…interesting. You decide to read the responses, and see some apologies, a few explanations and general earnestness. No such responses for the other hotel, which now begins to seem colder and more distant. What do you do?

In effect, that’s the question Davide Proserpio and Georgios Zervas seek to answer in a recent article in the INFORMS journal Marketing Science. And it’s not hard to see why it’s an important question. Online reviews can have a significant impact on a business, and unlike word of mouth they tend to stick around for years (just take a look at the dates on some reviews). Companies can’t do much to stop reviews (especially negative ones), so they often try to co-opt them by responding to selected reviews. It is a manual task, but the idea seems sound: by responding, perhaps they can take the sting out of negative reviews, appear contrite, promise to do better, or simply thank the reviewer for taking the time to write the feedback – all with the objective of getting prospective customers to give them a fair chance. The question, then, is whether such efforts are useful or just more online clutter.

It turns out that’s not an easy question to answer, and as Proserpio and Zervas document in the article, there are several factors that first need to be controlled. But their basic approach is easy enough to understand: they examine whether TripAdvisor ratings for hotels tend to go up after management starts responding to online reviews. An immediate problem, ironically enough, is management response of another kind. In reaction to bad reviews a hotel may actually make operational changes that then increase future ratings. That’s great for the hotel, but not so much for the researcher, who is trying to study whether the response to the online review had an impact, not whether the hotel is willing to make changes in response to the review. So that’s an important factor that needs to be controlled. How to do that?

Enter Expedia. As it happens, hotels frequently respond to TripAdvisor reviews but almost never do so on Expedia. So the authors use Expedia as a control cell and compare the before-and-after difference in ratings on TripAdvisor with that on Expedia (the difference-in-differences approach). This lets them tease out whether an improvement in ratings came from responding to reviews or from real changes at the hotel. Another check they use is to compare the ratings of guests who left a review shortly before a hotel began responding with those who did so shortly after. Much of the article is devoted to several more clever and increasingly complex maneuvers used to isolate just the impact of management responses. What do they find?
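For the statistically inclined, the difference-in-differences estimate itself is easy to compute once the data are assembled. Here is a minimal sketch with invented ratings (not the authors’ data), where the coefficient on the interaction term is the estimate of interest:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Invented review-level data. 'treated' marks TripAdvisor (where management
# responds); 'post' marks reviews left after the hotel began responding.
df = pd.DataFrame({
    "rating":  [3.8, 3.9, 4.3, 4.0, 3.7, 3.8, 3.9, 3.8],
    "treated": [1, 1, 1, 1, 0, 0, 0, 0],  # 1 = TripAdvisor, 0 = Expedia control
    "post":    [0, 0, 1, 1, 0, 0, 1, 1],  # 1 = after responses began
})

# The coefficient on treated:post is the TripAdvisor rating change net of
# the contemporaneous change on Expedia.
model = smf.ols("rating ~ treated + post + treated:post", data=df).fit()
print(model.params["treated:post"])
```

In this toy example, TripAdvisor ratings rise 0.3 stars while Expedia ratings rise 0.1, so the estimate attributes 0.2 stars to the responses.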

...
Conjoint and Modern Market Research

In my last blog I referenced an article about design elements that no longer serve a purpose, and I argued that techniques like Max-Diff and conjoint can help determine whether these elements are really necessary. Today I’d like to ask the question: what do we as researchers still use that no longer serves a purpose?
 
For many years the answer would have been telephone interviewing. We continued to use telephone interviewing long after it became clear that the web was a better answer. The common defense was that web samples are “not representative”, which was true, but telephone data collection was no longer representative either. I’m not saying we should abandon telephone interviewing; there are certainly times when it is the better option (for example, when talking to your clients’ customers and you don’t have email addresses). I’m just saying that the notion that we need a phone sample to make a study representative is unfounded.
 
I think, though, we need to go further. We still routinely use cross tabs to ferret out interesting information. The fact that these interesting tidbits might be nothing more than noise doesn’t stop us. Further, the many “significant differences” we uncover are often not significant at all: they are statistically discernible, but not significant from a business decision-making standpoint. Still, the automatic significance testing makes us pause to ponder them.
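A quick simulation illustrates the point. Run enough comparisons on pure noise and a tab deck will still light up with “significant” findings (hypothetical example):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests, hits = 200, 0  # e.g. 200 banner-point comparisons in a tab deck

for _ in range(n_tests):
    # Two subgroups drawn from the SAME population: any "difference" is noise.
    a = rng.normal(loc=3.5, scale=1.0, size=150)
    b = rng.normal(loc=3.5, scale=1.0, size=150)
    if stats.ttest_ind(a, b).pvalue < 0.05:
        hits += 1

print(f"{hits} of {n_tests} comparisons flagged at p<0.05 with zero real effect")
```

At the conventional 5% level, roughly ten of those 200 “findings” are expected to be mirages.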
 
Wouldn’t it be better to dig into the data and see what it tells us about our starting hypothesis? Good design means we thought about the hypothesis and the direction we needed during questionnaire development, so we know which questions to start with and can then follow the data wherever it leads. While in the past this was impractical, we now live in a world where analysis packages are easy to use. So why are we wasting time looking through decks of tables?
 
There are of course times when having a deck of tables could be a time saver, but like telephone interviewing, I would argue we should limit their use to those times and not simply produce tables because “that’s the way we have always done it”.  
