Rich Raquet

President, TRC


Rich brings to his blog entries a passion for quantitative data and for the use of choice techniques to understand consumer behavior. His unique perspective has allowed him to muse on subjects as far afield as dinosaurs and advanced technology, with insight into what each can teach us about doing better research.

GRIT Top 50 Report

I appreciate that we are once again in the GRIT 50 Most Innovative Research Agencies. Innovation has always been important to me and so I am quite gratified when I see our efforts being recognized. What I don't know is how people are defining innovation.

I think as an industry we sometimes label things as innovative that are not while failing to recognize some things that are genuinely innovative. In my view, innovation requires that we provide something of value that wasn't available before. Anything short of that may be 'interesting' but not 'innovative'.

I would put things like neuroscience or most AI into the "interesting" category. There is a lot of potential but so far little to show in terms of tangible benefits. Over the years at TRC we've had many ideas that showed promise, but ultimately didn't prove out (my favorite being "Conjoint Poker"). Ultimately it is the nature of innovation that some things will never leave the drawing board or 'laboratory', but without them there would be no innovation.

On the other side, I think ideas that save time and money are often not viewed as innovative unless they involve something totally new. I disagree. If I can figure out a way to do the same process faster and/or cheaper then I'm innovating. It may not look flashy, but if it allows clients to do something they couldn't otherwise do it is innovation.

...

Nouns vs. Verbs in Market Research

I’ve written many times about the importance of “knowing where your data has been”. The most advanced discrete choice conjoint, segmentation or regression is only as good as the data it relies on.  In the past I’ve written about many ways that we can bias respondents from question ordering to badly worded questions and even to push polling techniques. A new study published in Psychological Science would seem to indicate that bias can be created much more subtly than that.
 
Dr. Michal Reifen-Tagar and Dr. Orly Idan determined that you can reduce tension by relying on nouns rather than verbs. They are based in Israel, so they were not lacking in high-tension topics to ask about. For example, half of respondents were asked their level of agreement (on a six-point scale) with the noun-focused statement "I support the division of Jerusalem" and the other half with the verb-focused statement "I support dividing Jerusalem".
 
Consistent and statistically significant differences were found, with the verb form garnering less support than the noun form. Follow-up questions also indicated that those who saw the verb form were angrier and showed less support for concessions toward the Palestinians.
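To make that concrete, here is a minimal sketch of the kind of test that separates a real framing effect from noise. The ratings are simulated for illustration (they are not the study's data), and numpy/scipy are assumed to be available.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical 6-point agreement ratings (1 = strongly disagree,
# 6 = strongly agree) for each framing condition.
noun = np.clip(np.round(rng.normal(3.4, 1.3, 300)), 1, 6)
verb = np.clip(np.round(rng.normal(3.1, 1.3, 300)), 1, 6)

# Welch's t-test: is the gap in mean agreement real or just noise?
t, p = stats.ttest_ind(noun, verb, equal_var=False)
print(f"noun mean={noun.mean():.2f}  verb mean={verb.mean():.2f}  p={p:.4f}")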
 
Is this a potential problem for researchers? My answer would be “potentially”. 
 
The obvious example might be in published opinion polls. One can imagine a crafty person creating a questionnaire in which issues they agree with are presented in noun form (thus garnering higher agreement from the general public) and ones they disagree with in verb forms (thus garnering lower agreement). It is unlikely that anyone would challenge those results (except for those of you clever enough to read my blog).   
It might also be the case in more consumer-oriented studies, though it is unclear whether the same effect would appear in situations where tension levels are not so high. In our clients' best interest, however, it makes sense to word questions consistently and thereby eliminate another potential source of bias.
 
Conjoint and Modern Market Research

In my last blog I referenced an article about design elements that no longer serve a purpose, and I argued that techniques like Max-Diff and conjoint can help determine whether these elements are really necessary. Today I'd like to ask the question: what do we as researchers still use that is useless?
 
For many years the answer would have been telephone interviewing. We continued to use telephone interviewing long after it became clear that the web was a better answer. The common defense was "it is not representative", which was true, but telephone data collection was no longer representative either. I'm not saying that we should abandon telephone interviewing…there are certainly times when it is a better option (for example, when talking to your client's customers and you don't have email addresses). I'm just saying that the notion that we need a phone sample to make a study representative is unfounded.
 
I think, though, that we need to go further. We still routinely use cross tabs to ferret out interesting information. The fact that these interesting tidbits might be nothing more than noise doesn't stop us from doing so. Further, the many "significant differences" we uncover are often not significant at all…they are statistically discernible, but not significant from a business decision-making standpoint. Still, the automatic sig testing makes us pause to think about them.
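A hedged sketch of that point: with big enough samples, a trivially small difference clears the significance bar. The counts below are invented, and the statsmodels dependency is an assumption.

from statsmodels.stats.proportion import proportions_ztest

# 41% vs. 39% agreement in two columns of a tab, n = 5,000 per column.
z, p = proportions_ztest(count=[2050, 1950], nobs=[5000, 5000])
print(f"z={z:.2f}, p={p:.4f}")
# p lands under .05, so the tab gets a significance flag, yet a
# two-point gap may not change any business decision.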
 
Wouldn't it be better to dig into the data and see what it tells us about our starting hypothesis? Good design means we thought about the hypothesis and the direction we needed during the questionnaire development process, so we know which questions to start with and can then follow the data wherever it leads. While in the past this was impractical, we now live in a world where analysis packages are easy to use. So why are we wasting time looking through decks of tables?
 
There are of course times when having a deck of tables could be a time saver, but like telephone interviewing, I would argue we should limit their use to those times and not simply produce tables because “that’s the way we have always done it”.  
New Product Research: The Car Grille

I read an interesting article about design elements that no longer serve a purpose, but continue to exist. One of the most interesting is the presence of a grille on electric cars.
 
Conventional internal combustion engine cars need a grille because the engine needs air to flow over the radiator which cools the engine. No grille would mean the car would eventually overheat and stop working. Electric cars, however, don’t have a conventional radiator and don’t need the air flow. The grille is there because designers fear that the car would look too weird without it.  It is not clear from the article if that is just a hunch or if it has been tested.
   
It would be easy enough to test this out. We could simply show some pictures of cars and ask people which design they like best. A Max-Diff approach or an agile product like Idea Magnet™ (which uses our proprietary Bracket™ prioritization tool) could handle such a task. If the top choices were all pictures that did not include a grille we might conclude that this is the design we should use. There is a risk in this conclusion.
 
To really understand preference, we need to use a discrete choice conjoint. The exercise I envision would combine the pictures with other key features of the car (price, gas mileage, color…). We might include several pictures taken from different angles that highlight other design features (being careful to not have pictures that contradict each other…for example, one showing a spoiler on the back and another not). By mixing up these features we can determine how important each is to the purchase decision.  
It is possible that the results of the conjoint would indicate that people prefer not having a grille AND that the most popular models always include a grille. How?
 
Imagine a situation in which 80% of people prefer “no grille” and 20% prefer “grille”. The “no grille” people prefer it, but it is not the most important thing in their decision. They are more interested in gas mileage and car color than anything else. The “grille” folks, however, are very strong in their belief. They simply won’t buy a car if it doesn’t have one. As such, cars without a grille start with 20% of the market off limits. Cars with a grille, however, attract a good number of “no grille” consumers as well as those for whom it is non-negotiable.
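Here is a toy simulation of that paradox. Every number in it is invented for illustration, and numpy is assumed.

import numpy as np

rng = np.random.default_rng(1)
N = 10_000
loves_grille = rng.random(N) < 0.20      # 20% hard-core "grille" buyers

# Appeal of each model on everything else (mileage, color, price...).
appeal_A = rng.normal(0, 1, N)           # model A: has a grille
appeal_B = rng.normal(0, 1, N)           # model B: no grille

VETO = 100.0       # grille lovers simply won't buy without one
MILD = 0.2         # everyone else only mildly dislikes the grille

utility_A = appeal_A + np.where(loves_grille, VETO, -MILD)
utility_B = appeal_B

print(f"grille model share: {(utility_A > utility_B).mean():.0%}")
# Comes out above 50%, even though 80% of buyers prefer "no grille".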
 
Conjoint might also find that the size of the grille, or alternatives to it, can win over even hard-core grille-loving consumers. It is also worth considering that preferences change over time. For example, it isn't hard to imagine that early automobiles (originally called horseless carriages) had a place to hold a buggy whip (common on horse-drawn carriages), but over time consumers determined they were not necessary (or perhaps that is how the cup holder was born :)).
 
In short, conjoint is a critical tool to ensure that new technologies have a chance to take hold.
 

Market Research Without Bias

The Economist did an analysis of political book sales on Amazon to see if there were any patterns. Anyone who uses social media will not be surprised that readers tended to buy books from either the left or the right...not both. This follows an increasing pattern of people looking for validation rather than education, and of course it adds to the growing divide in our country. A few books managed a good mix of readers from both sides, though often these were books in which the author found fault with his or her own side (meaning a conservative trashing conservatives or a liberal trashing liberals).

I love this use of big data and hopefully it will lead some to seek out facts and opinions that differ from their own. These facts and opinions need not completely change an individual's own thinking, but at the very least they should give one a deeper understanding of the issue, including an understanding of what drives others' thinking.

In other words, hopefully the public will start thinking more like effective market researchers.

We could easily design research that validates the conventional wisdom of our clients.

• We can frame opinions by the way we ask questions or by the questions asked earlier in the survey.
• We can omit ideas from a max-diff exercise simply because our "gut" tells us they are not viable.
• We can design a discrete choice study with features and levels that play to our client's strengths.
• We can focus exclusively on results that validate our hypothesis.

...
3 Mistakes: Conjoint in New Product and Pricing Research
Discrete choice conjoint is a powerful tool for, among other things, pricing and product development research. It is flexible and can handle even the most complex products. That said, it requires thoughtful design and an understanding of how design choices will impact results. Here are three mistakes that often lead to flawed designs:

 

Making the exercise too complex

The flexibility of conjoint means you can include large numbers of features and levels. The argument for doing so is a strong one…including everything will ensure the choices being made are as accurate as possible. In reality, however, respondents are consumers, and consumers don't like complexity. Walk down the aisle of any store and note that the front of the package doesn't tell you everything about a product…just the most important things. Retailers know that too much complexity actually lowers sales. Our own research shows that as you add complexity, the importance of the easiest-to-evaluate feature (normally price) rises…in other words, respondents ignore the wealth of information and focus more on price.
 

What to do:

Limit the conjoint to the most critical features needed to meet the objectives of the research. If you can’t predict those in advance, then do research to figure it out. A custom Max-Diff to prioritize features or a product like our Idea Magnet (which uses Bracket) will tell you what to include. Other features can be asked about outside the conjoint.  
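As a sketch of the simplest Max-Diff read, here are best-worst counts computed from a handful of choice tasks. The task data are invented, and real studies typically estimate scores with Hierarchical Bayes rather than raw counts.

from collections import Counter

# Each tuple: (feature picked most important, feature picked least).
tasks = [
    ("battery", "color"), ("battery", "weight"), ("price", "color"),
    ("battery", "price"), ("price", "weight"), ("screen", "color"),
]

best, worst = Counter(), Counter()
for b, w in tasks:
    best[b] += 1
    worst[w] += 1

scores = {f: best[f] - worst[f] for f in set(best) | set(worst)}
for f in sorted(scores, key=scores.get, reverse=True):
    print(f"{f}: {scores[f]:+d}")    # keep the top few for the conjoint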

 

Having unbalanced numbers of levels

Some features have only two levels (for example, in a car conjoint we might have a feature for "Cruise Control" that is either present or not present). Others, however, have many levels (again, in a car conjoint we might offer 15 different color choices). Not only can including too many levels increase complexity (see point 1), but it can actually skew results. If one feature has many more levels than the rest, the importance of that feature will almost certainly be overstated.
 

What to do:

As with point one, try to limit the levels to those most critical to the research.  For example, if you are using conjoint to determine brand value you don’t need to include 15 colors…five or six will do the job.  If you can’t limit things, then at least understand that the importance of the feature is being overstated and consider that as you make decisions.  
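A minimal sketch of why that overstatement happens, using the common range-based importance calculation (each attribute's importance is its part-worth range divided by the sum of ranges). The part-worth utilities below are invented for illustration.

part_worths = {
    "cruise_control": [0.0, 0.8],                          # 2 levels
    "color": [0.0, 0.2, 0.5, 0.9, 1.1, 1.4, 1.6, 1.9,
              2.1, 2.3, 2.5, 2.6, 2.8, 2.9, 3.0],          # 15 levels
}

ranges = {attr: max(pw) - min(pw) for attr, pw in part_worths.items()}
total = sum(ranges.values())
for attr, r in ranges.items():
    print(f"{attr}: importance = {r / total:.0%}")
# With 15 levels, color's best-to-worst spread (3.0 vs. 0.8) dominates,
# so "color" looks far more important even though adjacent colors
# barely differ.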
 

Not focusing on what the respondent sees

Conjoint requires a level of engagement that most questions do not. The respondent has to consider multiple products, each with multiple features and make a reasoned choice. Ultimately they will make choices, but without engagement we can’t be sure those choices represent anything more than random button pushing. Limiting complexity (point 1 again) helps, but it isn’t always enough.   
 

What to do:

Bring out your creative side…make the exercise look attractive. Include graphics (logos for example). If you can make the choice exercise look more like the real world then do so. For example, if the conjoint is about apparel, present the choices on simulated “hang tags”, so consumers see something like they would see in a store. As long as your presentation is not biasing results (for example, making one product look nicer than another) then anything goes. 
 
These are three of the most common design errors, but there are of course many more. I’m tempted to offer a fourth, “Not working with an experienced conjoint firm”, but that of course would be too self-serving!
 

3 Tips for 30 Years in New Product Research

TRC is celebrating 30 years in business…a milestone to be sure.  

Being a numbers guy, I did a quick search to see how likely it is for a business to survive 30 years. Only about 1 in 5 make it to 15 years, but there isn’t much data beyond that. Extrapolation beyond the available data range is dangerous, but it seems likely that less than 10% of businesses ever get to where we are. To what do I owe this success then?  

It goes without saying that building strong client relationships and having great employees are critical. But I think there are three things that are key to having both those things:

Remaining Curious

I've always felt that researchers need to be curious, and I'd say the same for entrepreneurs. Obviously being curious about your own industry will bring value, but even curiosity about subjects with no obvious tie-in can lead to innovation. For example, by learning more about telemarketing I discovered digital recording technology and applied it to our business to improve quality.

...
New Product Research and Development: The Inventor

A few times a week I get the privilege of talking to an inventor/entrepreneur. The products they call about range from pet toys to sophisticated electronic devices, but they all have one thing in common…they want a proof of concept for their invention. In most cases they want it in order to attract investors or to sell their invention to corporate entities.
 
Of course, unlike our Fortune 500 clients, they also have limited budgets. They've often tapped their savings testing prototypes and trying to get a patent, so they are wary of spending a lot on consumer research. Even though only about a third of these conversations end up in our doing work for them, I enjoy them all.
 
First off, it is fun educating people on the various tools available for studying concepts. I typically start off telling them about the range of techniques, from simple concept evaluations (like our Idea Audit) to more complex conjoint studies. I succinctly outline the additional learning you get as the budget increases. These little five-to-ten-minute symposiums help me become better at talking about what we do.
 
Second, talking to someone as committed to a product as an inventor is infectious. They can articulate exactly how they intend to use the results in a way that some corporate researchers can't (because they are not always told). While some of their needs are pretty typical (pricing research, for example), others are unique. I enjoy trying to find a range of solutions for them (from various new product research methods) that will answer the question at a budget they can afford.
 
In many cases, I even steer them away from research. For many inventions something like Kickstarter is all they need.  In essence the market decides if the concept has merit. If that is all they need then why waste money on primary research? My hope is that they succeed and return to us when they have more sophisticated needs down the road.
 
Of course, I particularly enjoy it when the inventor engages us for research. Often the product is different than anything else we’ve researched and there is just something special about helping out a budding entrepreneur. The fact that these engagements make us better researchers for our corporate research clients is just a bonus.   
 

New Product Research: The Floating Grill

I recently heard an old John Oliver comedy routine in which he talked about a product he'd stumbled upon...a floating barbecue grill. He hilariously makes the case that it is nearly impossible to find a rationale for such a product, and I have to agree with him. Things like that can make one wonder if in fact we've pretty well invented everything that can be invented.

A famous quote attributed to Charles Holland Duell makes the same case: "Everything that can be invented has been invented". He headed up the Patent Office from 1898 to 1901 so it's not hard to see why he might have felt that way. It was an era of incredible invention which took the world that was largely driven by human and animal power into one in which engines and motors completely changed everything.

It is easy for us to laugh at such stupidity, but I suspect marketers of the future might laugh at the notion that we live in a particularly hard era for new product innovation. In fact, we have many advantages over our ancestors 100+ years ago. First, the range of possibilities is far broader. Not only do we have fields that didn't exist then (such as information technology), but we also have new challenges that they couldn't anticipate. For example, coming up with greener ways to deliver the same or better standard of living.

Second, we have tools at our disposal that they didn't have. Vast data streams provide insight into the consumer mind that Edison couldn't dream of. Of course I'd selfishly point out that tools like conjoint analysis or consumer driven innovation (using tools like our own Idea Mill) further make innovation easier.

The key is to use these tools to drive true innovation. Don't just settle for slight improvements to what already exists....great ideas are out there.

...

Over the years our clients have increasingly looked to us to condense results. Their internal stakeholders often read only the executive summary, and even then they might focus only on headlines and bold print. Where in the past they might have had time to review hundreds of splits of Max-Diff data or simulations in a conjoint, they now want us to focus our market research reporting on their business question and to answer it as concisely as possible. All of that makes perfect sense. For example, wouldn't you rather read a headline like "The Eight Richest People in the World Have More Wealth than Half the World's Population" than endless data tables that lay out all the ways that wealth is unfairly distributed? I know I would…if it were true.

The Economist analyzed the analysis behind that headline-grabbing statement from Oxfam (a charity). The results indicate a number of flaws that are well worth understanding.

•    They included negative wealth. Some 400 million people have negative wealth (they owe more than they own). So it requires lots of people with very low positive net worth to match the negative wealth of these 400 million people…thus making the overall group much larger than it might have been.    

•    For example, there are 21 million Americans whose combined net worth is negative $350 billion. Most of them are not people you would associate with being very poor…rather, they have borrowed money to make their lives better now with a plan to pay it off later.

•    They were looking only at material wealth…meaning hard assets like property and cash. Even ignoring wealth like that of George Bailey ("The richest man in town!"), each of us possesses wealth in terms of future earning potential. Bill Gates will still have more wealth than a farmer in sub-Saharan Africa, but collectively half the world's population has a lot of earnings potential.

...

Where the Pollsters Went Wrong

The surprising result of the election has lots of people questioning the validity of polls…how could they have so consistently predicted a Clinton victory? Further, if the polls were wrong, how can we trust survey research to answer business questions? Ultimately even sophisticated techniques like discrete choice conjoint or Max-Diff rely upon these data, so this is not an insignificant question.

 
As someone whose firm conducts thousands and thousands of surveys annually, I thought it made sense to offer my perspective. So here are five reasons that I think the polls were “wrong” and how I think that problem could impact our work.

 

 

5 Reasons Why the Polls Went 'Wrong'


1) People Don’t Know How to Read Results
Most polls had the race in the 2-5% range and the final tally had it nearly dead even (Secretary Clinton winning the popular vote by a slight margin). At the low end, this range is within the margin of error. At the high end, it is not far outside of it. Thus, even if everything else were perfect, we would expect that the election might well have been very close.  
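For reference, here is the standard 95% margin-of-error arithmetic. The sample size is an assumed typical value, not taken from any specific poll.

import math

n = 1000                       # typical national poll sample size
p = 0.50                       # a share near 50% maximizes the MOE
moe = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"95% margin of error: +/-{moe:.1%}")   # about +/-3.1 points
# A published 2-point lead sits inside +/-3.1 points on each candidate's
# share, so a near-tie in the popular vote is consistent with the polls.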

...

New Product Pricing Research: eBay

I've become a huge fan of podcasts, downloading dozens every week and listening to them on the drive to and from work. The quantity and quality of material available is incredible. This week another podcast turned me on to eBay's podcast "Open for Business". Specifically, the title of episode three, "Price is Right", caught my ear.
While the episode was of more use to someone selling a consumer product than to someone selling professional services, I got a lot out of it.
First off, they highlighted their “Terapeak” product which offers free information culled from the massive data set of eBay buyers and sellers. For this episode they featured how you can use this to figure out how the market values products like yours. They used this to demonstrate the idea that you should not be pricing on a “cost plus” basis but rather on a “value” basis.
From there they talked about how positioning matters and gave a glimpse of a couple of market research techniques for pricing. In one case, it seemed like they were using the Van Westendorp method. The results indicated a range of prices that was far below where they wanted to price things. This led to a discussion of positioning (in this case, the product was an electronic picture frame which they hoped to position not as consumer electronics but as home décor). The researchers here didn't do anything to position the product, so consumers compared it to an iPad, which led to the unfavorable view of pricing.
Finally, they talked to another researcher who indicated that she uses a simple “yes/no” technique…essentially “would you buy it for $XYZ?” She said that this matched the marketplace better than asking people to “name their price”.  
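That yes/no approach is essentially classic Gabor-Granger pricing. As a hedged sketch (the response shares below are invented), here is how yes/no answers at several price points become a demand and revenue read:

price_points = [49, 59, 69, 79, 89]
pct_yes = [0.55, 0.50, 0.42, 0.26, 0.14]   # share answering "yes"

for price, share in zip(price_points, pct_yes):
    print(f"${price}: {share:.0%} would buy, revenue index {price * share:.1f}")
# The revenue index peaks at $59 (29.5), not at the price the most
# people accept...exactly the trade-off this method surfaces.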
Of the two methods cited, I tend to go with the latter. Any reader of this blog knows that I favor questions that mimic the marketplace over strange questions you wouldn't consider in real life ("what's the most you would pay for this?"). Of course, there are a ton of choices that were not covered, including conjoint analysis, which I think is often the most effective means of setting prices (see our White Paper - How to Conduct Pricing Research for more).
Still, there is much that we as researchers can take from this. As noted, it is important to frame things properly. If the product will be sold in the home décor department, it is important to set the table along those lines and not allow the respondent to see it as something else. I have little doubt that if the Van Westendorp questions had been preceded by proper framing and messaging, the results would have been different.
I also think big data tools like Terapeak and Google Analytics are something we should make more use of. Secondary research has never been easier! In the case of pricing research, knowing the range of prices being paid now can provide a good guide on what range of prices to include in, say, a discrete choice exercise. This is true even if the product has a new feature not currently available. Terapeak allows you to view prices over time, so you can see the impact of the last big innovation, for example.
Overall, I commend eBay for their podcast. It is quite entertaining and provides a lot of useful information…especially for someone starting a new business.


Storytelling in Market Research

Many researchers are by nature math geeks. We are comfortable with numbers and statistical methods like regression or Max-Diff. Some see the inclusion of fancy graphics as a distraction...just wasted space on the page that could be used to show more numbers! I've even heard infographics described as "information lite". Surely top academics think differently!
No doubt if you asked top academics they might well tell you that they prefer to see the formulas and the numbers and not graphics. This is no different than respondents who tend to tell us that things like celebrity endorsements don't matter until we use an advanced method like discrete choice conjoint to prove otherwise.
Bill Howe and his colleagues at the University of Washington in Seattle figured out a way to test the power of graphics without asking. They built an algorithm that could distinguish, with a high degree of success, between diagrams, equations, photographs, plots (bar charts, for example) and tables. They then exposed the algorithm to 650,000 papers containing over 10 million figures.
For each paper they also calculated an Eigenfactor score (similar to the algorithm Google uses for search) to rate its importance, based on how often the paper is cited.
On average, papers had one diagram for every three pages and 1.67 citations. Papers with more diagrams per page tended to get two extra citations for every additional diagram per page. So clearly, even among academics, diagrams seemed to increase the chances that the papers were read and the information was used.
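That "two extra citations per diagram per page" figure is simply the slope of a fitted line. A toy version with invented data points (the real study fit this over 650,000 papers):

import numpy as np

diagrams_per_page = np.array([0.0, 0.2, 0.33, 0.5, 0.8, 1.0])
citations = np.array([1.1, 1.5, 1.7, 2.2, 2.8, 3.1])

slope, intercept = np.polyfit(diagrams_per_page, citations, 1)
print(f"~{slope:.1f} extra citations per additional diagram per page")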
Now we can of course say that this is "correlation" and not "causation" and that would be correct. It will take further research to truly validate the notion that graphics increase interest AND comprehension.
I'm not waiting for more research. These findings validate where the industry has been going. Clients are busy and their stakeholders are not as engaged as they might have been in the past. They don't care about the numbers or the formulas (by the way, formulas in academic papers reduced the frequency with which they were cited)...they care about what the data are telling them. If we can deliver those results in a clear graphical manner it saves them time, helps them internalize the results and because of that increases the likelihood that the results will be used.

So while graphics might not make us feel smart...they actually should.

TRC is proud to announce that it was voted as one of the top 50 most innovative firms on the market research supplier side. We’re big believers in trying to advance the business of research and we’re excited to see that the GRIT study recognized that. 
 
Our philosophy is to engage respondents using a combination of advanced techniques and better interfaces. Asking respondents what they want, or why, without context leads to results that overstate real preferences (consumers, after all, want "everything") and often miss what is driving those decisions (behavioral economics tells us that we often don't know why we buy what we buy).
 
Through the use of off-the-shelf tools like Max-Diff or the entire family of conjoint methods, we can better engage respondents AND gather much more actionable data. Through these tools and some of our own innovations like Bracket™, we can efficiently understand real preferences and use analytics to tell us what is driving them.
 
Our ongoing long-term partnerships with top academics at universities throughout the country also help us stay innovative. By collaborating with them we are able to drive new innovations that better unlock what drives consumers.
 
The GRIT study tracks which supplier firms are perceived as most innovative within the global market research industry. It's a brand tracker using the attribute of 'innovation' as the key metric. The answers are gathered on an unaided basis: the survey asks respondents to list the research companies they consider innovative, then to rank them from least to most innovative, and finally to explain why they think those firms are innovative. Given the unaided nature of the study, it is quite an achievement for a firm like TRC to make the same list as firms hundreds of times our size.
 
Again, we’re excited to be recognized and hope you’ll be able to experience the innovative benefits we offer for yourself.
• Andre says:
  Congrats to TRC, the greatest group of people I've ever worked with and for. Well deserved!

The Future of New Product Research

December and January are full of articles that tell us what to expect in the New Year. There is certainly nothing wrong with thinking about the future (far from it), but it is important that we do so with a few things in mind. Predictions are easy to make, but hard to get right, at least consistently.


First, to some extent we all suffer from the "past results predict the future" model. We do so because quite often they do, but there is no way to know when they no longer will. As such, be wary of predictions that say something like "last year neuro research was used by 5% of Fortune 500 companies…web panels hit the 5% mark and then exploded to more than 50% within three years." It might be right to assume the two will have similar outcomes, or it might be that the two situations (both in terms of the technique and in terms of the market at the time) are quite different.


Second, we all bring a bias to our thinking. We have made business decisions based on where we think the market is going and so it is only natural that our predictions might line up with that. At TRC we’ve invested in agile products to aid in the early stage product development process. I did so because I believe the market is looking for rigorous, fast and inexpensive ways to solve problems like ideation, prioritization and concept evaluation. Quite naturally if I’m asked to predict the future I’ll tend to see these as having great potential.


Third, some people will be completely self-serving in their predictions. So, for example, we do a tremendous amount of discrete choice conjoint work. I certainly would like to think that this area will grow in the next year so I might be tempted to make the prediction in the hopes that readers will suddenly start thinking about doing a conjoint study.   


Fourth, an expert isn’t always right. Hearing predictions is useful, but ultimately you have to consider the reasoning behind them, seek out your own sources of information and consider things that you already know. Just because someone has a prediction published, doesn’t mean they know the future any better than you do. 

...

A Curious Mind and New Product Research

I recently finished Brian Grazer's book A Curious Mind and I enjoyed it immensely. I was attracted to the book both because I have enjoyed many of the movies he made with Ron Howard (Apollo 13 being among my favorites) and because of the subject…curiosity.

I have long believed that curiosity is a critical trait for a good researcher. We have to be curious about our clients' needs, new research methods and, most importantly, the data itself. While a cursory review of cross tabs will produce some useful information, it is digging deeper that allows us to make the connections that tell a coherent story. Without curiosity, analytical techniques like conjoint or Max-Diff don't help.

The book shows how Mr. Grazer's insatiable curiosity has brought him into what he calls "curiosity conversations" with a wide array of individuals, from Fidel Castro to Jonas Salk. He had these conversations not because he thought there might be a movie in them, but because he wanted to know more about these individuals. He often came out of the conversations with a new perspective and, yes, sometimes even ideas for a movie.

One example concerns Apollo 13. He had met Jim Lovell (the commander of that fateful mission) and found his story interesting, but he wasn't sure how to make it into a movie. The technical details were just too complicated.

Later he was introduced by Sting to Veronica de Negri.  If you don’t know who she is (I didn’t), she was a political prisoner in Chile for 8 months during which she was brutally tortured. To survive she had to create for herself an alternate reality. In essence by focusing on the one thing she still had control of (her mind) she was able to endure the things she could not control. Mr. Grazer used that logic to help craft Apollo 13. Instead of being a movie about technical challenges it became a movie about the human spirit and its ability to overcome even the most difficult circumstances.

...

Bias in Market Research

In new product market research we often discuss the topic of bias, though typically these discussions revolve around issues like sample selection (representativeness, non-response, etc.). But what about methodological or analysis bias? Is it possible that we impact results by choosing the wrong market research methods to collect the data or to analyze the results?


A recent article in The Economist presented an interesting study in which the same data set and the same objective were given to 29 different researchers. The objective was to determine whether dark-skinned soccer players were more likely to get a red card than light-skinned players. Each researcher was free to use whatever methods they thought best to answer the question.


Both statistical methods (Bayesian clustering, logistic regression, linear modeling...) and analysis choices (some, for example, considered that certain positions might be more likely to draw red cards and adjusted the data accordingly) differed from one researcher to the next. No surprise, then, that results varied as well. One found that dark-skinned players were only 89% as likely to get a red card as light-skinned players, while another found dark-skinned players were three times MORE likely to get the red card. So who is right?
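As a sketch of how a single modeling choice can move the answer, here is a simulated example (the data are invented, not the actual red-card data set; statsmodels is assumed). Omitting one covariate changes the estimated skin-tone effect:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 20_000
dark = rng.binomial(1, 0.3, n)
# Suppose defenders draw more cards AND dark-skinned players are more
# often defenders: position then confounds the raw comparison.
defender = rng.binomial(1, np.where(dark == 1, 0.5, 0.3))
logit_p = -4.0 + 0.9 * defender + 0.1 * dark
red_card = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

for covariates in ([dark], [dark, defender]):
    X = sm.add_constant(np.column_stack(covariates))
    fit = sm.Logit(red_card, X).fit(disp=0)
    print(f"odds ratio for skin tone: {np.exp(fit.params[1]):.2f}")
# The unadjusted model overstates the effect; adding the position
# covariate pulls the estimate back toward the simulated truth.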


There is no easy way to answer that question. I'm sure some of the analyses can easily be dismissed as too superficial, but in other cases the "correct" method is not obvious. The article suggests that when important decisions regarding public policy are being considered, the government should contract with multiple researchers and then compare and contrast their results to gain a fuller understanding of what policy should be adopted. I'm not convinced this is such a great idea for public policy (it seems like it would only lead to more polarization as groups pick the results they most agreed with going in), but the more important question is: what can we as researchers learn from this?


In custom new product market research the potential for different results is even greater. We are not limited to existing data. Sure we might use that data (customer purchase behavior for example), but we can and will supplement it with data that we collect. These data can be gathered using a variety of techniques and question types. Once the data are collected we have the same potential to come up with different results as the study above.

...

Catalog Cover Testing

Very few clients will go to market with a new concept without some form of market research to test it first. Others will use real-world substitutes (such as A/B mail tests) to accomplish the same end. No one would argue against the effectiveness of approaches like these...they provide a scientific basis for making the right decision. Why is it, then, that in early stage decision-making, science is so often replaced by gut instinct?

Consider this...an innovation department cooks up a dozen or more ideas for new or improved products and services. At this point they are nothing more than ideas, with perhaps some crude mock-ups to go along with them. Full-out concept testing would be costly for this number of ideas, and a real-world test is certainly not in the cards. Instead, a "team" which might include product managers, marketing folks, researchers and even some of the innovation people who came up with the concepts is brought together to winnow the ideas down to a more manageable number.

The team carefully evaluates each concept, perhaps ranks them, and provides their thinking on why they liked certain ones. These "independent" evaluations are tallied and the dozen concepts are reduced to two or three. These two or three are then developed further and put through a more rigorous and costly process - in-market testing. The concept or concepts that score best in this process are then launched to the entire market.

This process produces a result, but also some level of doubt. Perhaps the concept the team thought was best scored badly in the more rigorous research, or the winning concept just didn't perform as well as the team thought it would. Does anyone wonder whether some of the ideas the team winnowed out might have performed even better than the "winners" they picked? What opportunities might have been lost if the best ideas were left on the drawing board?

The initial winnowing process is susceptible to various forms of error, including groupthink. The less rigorous process is used not because it is seen as best, but because the rigorous methods normally used are too costly to employ on a large list of items. Does that mean going with your gut is the only option?

...

Market Research Event: Conjoint Analysis

Last week we held an event in New York at which Mark Broadie from Columbia University talked about his book "Every Shot Counts". The talk and the book detail his analysis of a very large and complex data set…specifically the "ShotLink" data collected for over a decade by the PGA. It details every shot taken by every pro at every PGA tournament. He was able to use it to challenge some long-held assumptions about golf…such as "Do you drive for show and putt for dough?"

On the surface the data set was not easy to work with. Sure, it had numbers like how long the hole was, how far the shot went, how far it was from the hole and so on. It also had data like whether the shot ended up in the fairway, on the green, in the rough, in a trap or the dreaded out of bounds. Every pro has a different set of skills, and there was a surprising range of abilities even in this set, but he added the same data on tens of thousands of amateur golfers of various skill levels. So how can anyone make sense of such a wide range of data, and do it in a way that the amateur who scores 100 can be compared to the pro who frequently scores in the 60s?

You might be tempted to say that he used a regression analysis, but he did not. You might assume he used Hierarchical Bayesian estimation, as it has become commonplace (it drives discrete choice conjoint, Max-Diff and our own Bracket™); he didn't use it here either.

Instead, he used simple arithmetic. No HB, no calculus, no Greek letters, just simple addition, subtraction, multiplication and division. At the base level, he simply averaged similar scores. Specifically, he determined how many strokes it took, on average, for players to go from where they were to the hole. These averages were further broken down to account for where the ball started (not just distance, but rough, sand, fairway, etc.) and how good the golfer was.

These simple averages allow him to answer any number of “what if” questions. For example, he can see on average how many strokes are saved by going an extra 50 yards off the tee (which turns out to be more than for being better at putting). He can also show that in fact neither driving nor putting is as important as the approach shot (the last full swing before putting the ball on the green). The ability to put the ball close to the hole on this shot is the biggest factor in scoring low.
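A minimal sketch of that averaging logic (the baseline values below are invented; Broadie's real baselines come from millions of ShotLink records):

baseline = {                    # avg strokes to hole out from (lie, distance)
    ("tee", 400): 3.99,         # 400-yard hole
    ("fairway", 150): 2.80,     # 150 yards out, fairway lie
    ("green", 20): 1.87,        # 20-foot putt
}

# Strokes gained on a shot = baseline(before) - baseline(after) - 1.
drive_gain = baseline[("tee", 400)] - baseline[("fairway", 150)] - 1
print(f"strokes gained by that drive: {drive_gain:+.2f}")   # +0.19
# Stringing together shots whose gains sum above zero is what beats the
# field average...all addition and subtraction.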

...

An issue that comes up quite a bit when doing research is the proper way to frame questions. In my last blog I reported on our Super Bowl ad test in which we surveyed viewers to rank 36 ads based on their “entertainment value”. We did a second survey that framed the question differently to see if we could determine which ads were most effective at driving consideration of the product…in other words, the ads that did what ads are supposed to do!

As with life, framing, or context, is critical in research. First off, the nature of questions is important. Where possible the use of choice questions will work better than say rating scales. The reason is that consumers are used to making choices...ratings are more abstract. Techniques like Max-Diff, Conjoint (typically Discrete Choice these days) or our own proprietary new product research technique Bracket™ get at what is important in a way that ratings can’t.

Second, the environment you create when asking the question must seek to put consumers in the same mindset they would be in when they make decisions in real life. For example, if you are testing slogans for the outside of a direct mail envelope, you should show the slogans on an envelope rather than just in text form.

Finally, you need to frame the question in a way that matches the real world result you want. In the case of a direct mail piece, you should frame the question along the lines of “which of these would you most likely open?” rather than “which of these slogans is most important?”. In the case of a Super Bowl ad (or any ad for that matter), asking about entertainment value is less important than asking about things like “consideration” or even “likelihood to tell others about it”.  

So, we polled a second group of people and asked them “which one made you most interested in considering the product as advertised?” The results were quite interesting.

...
• Dave says:
  The consideration vs. entertainment angle is an interesting take.
