


In market research we often discuss bias, though typically these discussions revolve around sample selection (representativeness, non-response, etc.). But what about methodological or analysis bias? Is it possible that we affect results by choosing the wrong methods to collect the data or to analyze the results?

A recent article in The Economist presented an interesting study in which the same data set and the same objective were given to 29 different researchers. The objective was to determine whether dark-skinned soccer players were more likely to get a red card than light-skinned players. Each researcher was free to use whatever methods they thought best to answer the question.

Both the statistical methods (Bayesian clustering, logistic regression, linear modeling...) and the analysis decisions (some, for example, considered that players in certain positions might be more likely to get red cards, and so adjusted the data accordingly) differed from one researcher to the next. No surprise, then, that results varied as well. One analysis found that dark-skinned players were only 89% as likely to get a red card as light-skinned players, while another found they were three times MORE likely to get one. So who is right?
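To see how an analysis choice alone can flip a conclusion, here is a minimal Python sketch using invented counts (not the study's actual data). Pooling across positions points one way; adjusting for position points the other — the classic Simpson's paradox:

```python
# Hypothetical red-card counts: (red cards, players), split by position.
# These numbers are invented purely to illustrate the reversal.
data = {
    "defender": {"dark": (8, 10),   "light": (70, 100)},
    "forward":  {"dark": (20, 100), "light": (1, 10)},
}

def rate(cards, players):
    return cards / players

# Pooled analysis: ignore position entirely.
pooled = {}
for tone in ("dark", "light"):
    cards = sum(data[pos][tone][0] for pos in data)
    players = sum(data[pos][tone][1] for pos in data)
    pooled[tone] = rate(cards, players)

print(f"Pooled: dark={pooled['dark']:.2f}, light={pooled['light']:.2f}")
# Pooled, dark-skinned players look LESS likely to get a red card...

# Stratified analysis: adjust for position.
for pos in data:
    d = rate(*data[pos]["dark"])
    l = rate(*data[pos]["light"])
    print(f"{pos}: dark={d:.2f}, light={l:.2f}")
# ...yet within EVERY position they are more likely -- the opposite conclusion.
```

Neither analysis is dishonest; they simply answer subtly different questions, which is exactly the kind of divergence the 29 researchers ran into.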

There is no easy way to answer that question. I'm sure some of the analyses can easily be dismissed as too superficial, but in other cases the "correct" method is not obvious. The article suggests that when important public-policy decisions are being considered, the government should contract with multiple researchers and then compare and contrast their results to gain a fuller understanding of what policy should be adopted. I'm not convinced this is such a great idea for public policy (it seems like it would only lead to more polarization as groups pick the results they most agreed with going in), but the more important question is: what can we as researchers learn from this?

In custom market research the potential for different results is even greater. We are not limited to existing data. Sure, we might use such data (customer purchase behavior, for example), but we can and will supplement it with data we collect ourselves. These data can be gathered using a variety of techniques and question types, and once they are collected we have the same potential to come up with different results as the study above.


About a decade ago, if someone had mentioned the words "mobile app," anyone would have looked at them with a very puzzled expression. Nowadays we hear about these apps everywhere: commercials on television, ads in magazines, billboards, etc. It's truly amazing to see how advanced technology has become and what can be accomplished by using it.

In this technology-based era, the smartphone is becoming increasingly popular across a wide range of ages. In my opinion, the biggest perk of smartphones is that we almost always have access to the Internet. Since the Internet is one of the most efficient tools retailers and businesses use to create, retain, and obtain business, why wouldn't they capitalize on the popularity and functionality of smartphones to do even more creating, obtaining, and refining of their business? One of the best ways for a company to remain competitive in the smartphone era is to create its own mobile app.

Take Wawa, for example. For those who are not on the East Coast and may be unfamiliar with Wawa, it is a wonderful place that offers gasoline, freshly prepared foods, snacks, coffee, and more. Okay, yes, ultimately it's a convenience store/gas station. However, to many of us on the East Coast, it's much more. If you download the Wawa app, you can link it to your credit card or a Wawa gift card, which means you don't even have to bring your wallet into the store. The app includes a rewards system in which you receive points for your purchases, redeemable for a free coffee or tea or something of similar value. While Wawa offers many benefits to its customers through its mobile app, such as locating a nearby Wawa, checking gasoline prices, or having easy access to nutrition info, it also gives app users the chance to provide feedback via an open-ended suggestion form. It would benefit the company to implement a survey within the app instead of an open-ended feedback form to gain insights about customers' transactions, experiences, and overall opinions.

Fielding surveys within mobile apps provides a quick and easy way to reach customers and gain useful feedback. So, how do you get app users to actually participate in the survey? Simple. When the app is first opened or closed, add a pop-up message with a link that encourages the user to take the survey. Also, add the survey as an item on the app's navigation menu. While a mobile device isn't the ideal place for something as intricate as a conjoint analysis, companies can still create a simple survey to gain valuable insights about current products, potential products, customer satisfaction, and an abundance of other consumer-related topics.

In order to create the best experience for the app user and get the most out of the data that is collected, companies should consider these five tips when developing a mobile survey:


During my recent first-time home buying experience I learned there are many, often competing, factors to consider. My last blog discussed how I used Bracket™, a tournament-based analytic approach, to determine what homebuyers find most important when considering a home. My list of 13 items did not include standard house stats like # of bedrooms, # of baths, etc. To measure preference for those items I used a conjoint design.

I framed the conjoint exercise by asking homebuyers to imagine they were shopping for a home and to assume it was located in their ideal location. Using our online panel of consumers, we showed recent or soon-to-be homebuyers 2 house listings side by side, plus an “I wouldn’t choose either of these” option. Each listing included the following:

        • Number of bedrooms: 1, 2, 3 or 4
        • Number of bathrooms: 1 full, 1 full/1 half, 2 full, 2 full/1 half or 3 full
        • House style: Single Family, Townhouse, Condominium, or Multi-Family
        • House condition: Move-in ready, Some work required or Gut job
        • Price: $150,000, $200,000, $250,000, $350,000 or $450,000
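For readers curious about the mechanics, here is a simplified Python sketch of how choice tasks like these can be assembled from the attribute levels above. Real conjoint studies use carefully balanced experimental designs rather than pure random draws; this is only an illustration:

```python
import random

# Attribute levels taken directly from the design described above.
levels = {
    "bedrooms": ["1", "2", "3", "4"],
    "bathrooms": ["1 full", "1 full/1 half", "2 full", "2 full/1 half", "3 full"],
    "style": ["Single Family", "Townhouse", "Condominium", "Multi-Family"],
    "condition": ["Move-in ready", "Some work required", "Gut job"],
    "price": ["$150,000", "$200,000", "$250,000", "$350,000", "$450,000"],
}

def random_profile(rng):
    """One hypothetical house listing: a random level for each attribute."""
    return {attr: rng.choice(opts) for attr, opts in levels.items()}

def choice_task(rng):
    """Two listings shown side by side, plus a 'neither' option."""
    left, right = random_profile(rng), random_profile(rng)
    while left == right:          # avoid showing two identical listings
        right = random_profile(rng)
    return [left, right, "I wouldn't choose either of these"]

rng = random.Random(42)           # fixed seed so the task is reproducible
task = choice_task(rng)
```

Each respondent sees a series of tasks like this, and the pattern of choices across respondents is what lets us estimate how much each level is worth.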

I felt a conjoint was best suited here because, in addition to importance, I wanted to see what trade-offs homebuyers were willing to make among these 5 highly important attributes. Are homebuyers willing to give up a bedroom to get the right price? Are they willing to put in some sweat equity to get the number of bedrooms and/or bathrooms they want?

We found the top three most important factors are # of bedrooms, price, and house condition. This made perfect sense to me, as I would not consider any house with fewer than 3 bedrooms. Price and house condition were the next two key pieces. Was the house in my price range? How much work was needed? Did the price give me enough wiggle room for repairs? I was curious to see the interplay between price and house condition among the recent and soon-to-be homebuyers we interviewed.

Using the simulator, I selected a 3-bedroom, 2-full-bath Single Family home. I picked 3 price points ($150,000, $300,000, $450,000) and then varied the house condition. Overall, homebuyers are less interested in a "gut job" than in a "move-in ready" home. However, at the $150,000 price point, share of preference drops more drastically going from "move-in ready"/"some work required" to "gut job" than it does at the higher price points.
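To illustrate how a share-of-preference simulator works, here is a minimal Python sketch in a binary-logit form, using invented part-worth utilities (NOT our actual estimates). Each profile competes against a fixed "no thanks" alternative with utility zero:

```python
import math

def sigmoid(u):
    """Logit choice probability against a zero-utility 'none' option."""
    return 1 / (1 + math.exp(-u))

# Illustrative part-worths (made-up numbers) for a 3-bedroom,
# 2-full-bath Single Family profile, varying condition and price.
condition_utils = {"Move-in ready": 0.9, "Some work required": 0.3, "Gut job": -1.2}
price_utils = {150_000: 1.0, 300_000: 0.0, 450_000: -1.0}

def share_of_preference(condition, price):
    return sigmoid(condition_utils[condition] + price_utils[price])

for price in price_utils:
    row = {c: round(share_of_preference(c, price), 2) for c in condition_utils}
    print(price, row)
```

With these made-up utilities, the drop from "move-in ready" to "gut job" is steeper at the $150,000 price point than at $450,000, mirroring the kind of pattern described above.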


The weather is starting to warm up and more of us are venturing outside, myself included. Walking my dog around the neighborhood, I've noticed a number of for-sale signs, and they remind me of my own recent home buying experience. It was exciting and at the same time stressful. Once I made the decision to buy, I started watching all the home buying shows and attending open houses to figure out my list of must-haves and nice-to-haves. I wondered how my list stacked up against others who went through or are going through the home buying process.

Using our online panel of consumers, I employed TRC’s proprietary Bracket™ exercise to find out what homebuyers find most important when considering buying a home. Bracket™ is a tournament-based analytic approach to understanding priorities. For each participant, Bracket™ randomly assigns the items being evaluated into pairs. Participants choose the winning item from each pair; that item moves on to the next round. Rounds continue until there is one “winner” per participant. Bracket™ uses this information to prioritize the remaining items, and calculate the relative distance between them.
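Bracket™ itself is proprietary, but the tournament mechanic described above can be sketched in a few lines of Python. The preference function here is a hypothetical stand-in for a real participant's survey choices:

```python
import random

def bracket_round(contenders, prefer, rng):
    """Pair contenders at random; the preferred item of each pair advances.
    With an odd count, one item gets a bye into the next round."""
    rng.shuffle(contenders)
    winners = []
    while len(contenders) >= 2:
        a, b = contenders.pop(), contenders.pop()
        winners.append(a if prefer(a, b) else b)
    winners.extend(contenders)   # the bye, if any
    return winners

def run_bracket(items, prefer, seed=0):
    """Run elimination rounds until one winner remains for this participant."""
    rng = random.Random(seed)
    remaining = list(items)
    while len(remaining) > 1:
        remaining = bracket_round(remaining, prefer, rng)
    return remaining[0]

# Hypothetical participant who always prefers the alphabetically earlier
# item -- a stand-in for real pairwise choices in the survey.
items = [f"item {i}" for i in range(1, 14)]   # 13 items, as in the study
winner = run_bracket(items, lambda a, b: a < b)
```

Aggregating each participant's choices and winner across the sample is what lets the technique prioritize the full list, not just crown one item.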

I created a list of 13 things to consider. I didn't include standard house stats (# of bedrooms, # of baths, etc.), as I tested those separately using a conjoint analysis (my next blog will dive into what I did there).

Proximity to work

Proximity to family


Very few clients will go to market with a new concept without some form of market research to test it first. Others will use real-world substitutes (such as A/B mail tests) to accomplish the same end. No one would argue against the effectiveness of approaches like these... they provide a scientific basis for making the right decision. Why is it, then, that in early-stage decision-making science is so often replaced by gut instinct?

Consider this... an innovation department cooks up a dozen or more ideas for new or improved products and services. At this point they are nothing more than ideas, with perhaps some crude mock-ups to go along with them. Full-blown concept testing would be costly for this number of ideas, and a real-world test is certainly not in the cards. Instead, a "team" that might include product managers, marketing folks, researchers, and even some of the innovation people who came up with the concepts is brought together to whittle the ideas down to a more manageable number.

The team carefully evaluates each concept, perhaps ranks them, and provides its thinking on why it liked certain ones. These "independent" evaluations are tallied and the dozen concepts are reduced to two or three. Those two or three are then developed further and put through a more rigorous and costly process: in-market testing. The concept or concepts that score best in this process are then launched to the entire market.

This process produces a result, but also some level of doubt. Perhaps the concept the team thought was best scored badly in the more rigorous research, or the winning concept just didn't perform as well as the team expected. Does anyone wonder whether some of the ideas the team screened out might have performed even better than the "winners" they picked? What opportunities might have been lost if the best ideas were left on the drawing board?

The initial screening process is susceptible to various forms of error, including groupthink. The less rigorous process is used not because it is seen as best, but because the rigorous methods normally used are too costly to apply to a large list of items. Does that mean going with your gut is the only option?

