Rich Raquet

President, TRC


Rich brings to his blog entries a passion for quantitative data and the use of choice modeling to understand consumer behavior. His unique perspective has allowed him to muse on subjects as far afield as dinosaurs and advanced technology, with insight into what each can teach us about doing better research.

Statistics in Market Research
Last week I was watching the CBS morning news and they had a story about a new study indicating that your attitude toward gym class as a child shapes your attitude toward exercise for your entire life. After watching the story, I am convinced that it is another case of correlation being confused with causation.
 
The basics were that kids who reported loving gym class were far more active decades later than kids who reported finding it stressful ("I was always picked last"). I don't doubt this correlation. My problem is that they spoke of a need to make gym class more inclusive so that all kids grow up to exercise more. In other words, if we can take the stress out of gym for kids who are not good at sports, we can get them to love exercise more. I'm not sure the first part is achievable, and I'm even more doubtful that, even if we achieve it, it will alter future behavior.
 
I don’t see how you can eliminate the anxiety about gym class without eliminating the physical activity. Sure you could eliminate picking teams and save that humiliation, but once the games begin the kids who are poor at sports will continue to feel anxiety. Even if you simply make it an exercise class, the kids who are out of shape will stand out. Short of one-on-one classes I don’t see how you can fix the problem.
 
Doing so is also not likely to make us more active as adults. The kids who were not good at gym were not good for a variety of reasons, but likely they either lacked the natural talent OR, more likely, the interest in sports that the athletes had. Gym class wasn't the cause of this! If there had been no gym class, I would bet that the kids who didn't like gym class would still be less active than those who did (those who loved it and/or were good at it). I'd point the "causation arrow" backwards…if you love sports as an adult, you probably liked gym class.
 
It is easy to forget that we have to play a role in explaining statistical principles in our reporting and not just when doing sophisticated work like Discrete Choice Conjoint, Max-Diff, Segmentations and Regressions. Our direct clients likely understand causation and correlation issues, but it is important to know that their internal clients may not. Clear justification for pointing the “causation arrow” must be provided in reports and presentations. Just as important is knocking down attempts to point the arrow based solely on correlation. Otherwise they may walk away with a completely false assumption and not double back with researchers to validate it.  
 
This study is also useful in highlighting another common mistake made by internal clients. I was telling an old friend about the study and he said “that can’t be right, you hated gym class and you are far more active now than when we were kids”. Imagine me in a focus group telling my story of how much I hated waiting to be picked for a team and how my memory of that humiliation caused me to exercise more and more as an adult. The internal client stands up and says “That’s how we make people healthier…more humiliation in gym class!” In that case, someone will be in the room to point out that one person’s story is not projectable to the population…but that’s another blog. 
 
Advanced Market Research Methods and Candy

At TRC, the most popular spot in the office is our snack shelf. It features an array of sugary, salty and carb-heavy treats. The contents vary and are determined by one person (Ruth, who stocks the shelf) with influence from the rest of us (based on past usage and suggestions). Sometimes the shelf has exactly what you're looking for. Other times, not so much. But what if instead of relying on Ruth's powers of deduction we were to use research to figure out the optimal shelf configuration? We're researchers, after all.
 
We would start out by using our Idea Mill™ product to generate ideas on which snacks people want to have. It uses incentive alignment and gamification to bring out the most creative ideas and provide direction on the favorites. It is likely that this will create too long a list of ideas (the candy shelf is only so large) and while we can toss out ideas that are not feasible, we believe it is best not to toss out ideas just because you personally don’t like them (I’m looking at you Mr. Goodbar). Far better to get more consumer input…this time to narrow the list. 
 
We could ask our folks to rate all the suggested snacks and then use that to figure out which ones should make the cut. Ratings might be good enough to eliminate some things (my guess is that despite what people claim, healthy snacks would bite the dust), but among popular snacks (like different types of pretzels) we are not likely to see clear differentiation.
 
A choice method like Max-Diff could help but if the list was long it would require a lot of work on the part of our employee respondents. A method like our proprietary Bracket™  would do the job in a faster and more engaging fashion while still finding clear winners and losers.  
 
Stocking the winners would therefore make the most sense…but would it please the most people?
Currently the shelf features five types of M&M's (original, almond, caramel, dark and strawberry nut). If dark chocolate were the least preferred, it might get cut. But what if those who like almond, caramel and strawberry nut also like original, while those who like dark like only dark? For situations like this we can take the results of the Bracket™ (or Max-Diff) and use TURF to find the combination that would please the most people.
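The TURF idea can be sketched in a few lines of Python (a hypothetical illustration; the preference data are invented, and real TURF would work from Max-Diff or Bracket™ scores rather than hand-built sets):

```python
import itertools

# Invented data: the M&M varieties each person would be happy to find.
preferences = [
    {"original", "almond"},
    {"original", "caramel"},
    {"dark"},
    {"original", "strawberry nut"},
    {"dark"},
    {"almond", "caramel"},
]

def reach(combo, prefs):
    """Share of people satisfied by at least one item in the combo."""
    return sum(1 for p in prefs if p & set(combo)) / len(prefs)

# TURF: find the pair of varieties that pleases the most people.
varieties = {"original", "almond", "caramel", "dark", "strawberry nut"}
best = max(itertools.combinations(sorted(varieties), 2),
           key=lambda c: reach(c, preferences))
print(best, reach(best, preferences))
```

Note that the winning pair can include a variety with few fans, because its fans are reached by nothing else, which is exactly the dark-chocolate situation described above.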
 
Of course, another factor is positioning. The shelf is only so large. M&M's can be dispensed from any size canister (in fact Ruth has one that spins so that it can dispense three types), while pretzels tend to come in large bins that take up a lot of room. In addition, not all of the snacks cost the same. In an effort to keep our expenses and waistlines under control we follow a strict budget. Might I trade off having a greater quantity of a lesser snack in exchange for an expensive favorite?
 
For these kinds of questions a discrete choice conjoint is the answer. We can include a variety of candy types and constraints related to the room they take up as well as cost. Simulations can then optimize how to spend our candy budget.  
Despite our love of research and our wide array of tools, though, I think in this case they would be overkill (we have a very small population of around 40 employees). So I think we'll stick with Ruth's instincts. I never go wanting…
 

The Half-Life of Facts in Market Research

I heard a great episode of the "You Are Not So Smart" podcast in which Sam Arbesman talked about his book "The Half-Life of Facts". The book has nothing to do with "truthiness", "fake news" or any accusation that someone is or is not a liar, but it does provide some context for the world we live in.
 
The book's title is taken from a scientific term (the time it takes an isotope to lose half of its radioactivity) and the notion that as we learn more, some things we took as "fact" will turn out to be wrong. Newton's laws, for example, were supplanted by Einstein's. The point of the book is not that we shouldn't bother learning facts, but rather that we should be open to the possibility that they might be wrong. Modern medicine acknowledges that it doesn't know everything and that some things it "knows" will prove to be false. At the same time, doctors must treat patients based on what is known or thought to be known.
 
It got me thinking about our business. What is the half-life of facts here? You might be tempted to take comfort in the fact that things like margin of error have not changed. While technically true, this ignores that academia is facing a crisis of confidence over statistically significant findings that don't hold up in subsequent studies. One cause is that researchers run lots of cuts of the data, look for anything statistically significant, and then build a rationale for that finding, ignoring that with so many cuts they are likely to find some statistical noise. Don't we run the same risk with each additional banner we run?
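The banner-cut risk is easy to demonstrate with a quick simulation (a sketch, assuming a simple pooled two-proportion z-test at the 95% level; the sample sizes and number of cuts are invented). Two groups drawn from the same population truly do not differ, yet some comparisons still come up "significant":

```python
import random

random.seed(1)

n = 200        # respondents per group
cuts = 200     # number of banner comparisons run
alpha_hits = 0

for _ in range(cuts):
    # Both groups drawn from the SAME population (no true difference)
    a = sum(random.random() < 0.5 for _ in range(n))  # "agree" count, group A
    b = sum(random.random() < 0.5 for _ in range(n))  # "agree" count, group B
    # Pooled two-proportion z-test
    p = (a + b) / (2 * n)
    se = (2 * p * (1 - p) / n) ** 0.5
    if abs(a / n - b / n) / se > 1.96:
        alpha_hits += 1

print(alpha_hits)  # roughly 5% of the cuts look "significant" by chance
```

Every one of those hits would invite a plausible-sounding rationale, and none of them is real.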
 
There is a known problem with Discrete Choice Conjoint that is often ignored. If you have a product made up of, say, eight features with three levels each and one feature with 150 levels, the importance of the 150-level feature will be overstated by the model. Still, the model will run, utilities will be calculated and a simulator can be constructed…all of which provide a sense of precision that is not warranted. A researcher who knows about the problem will guide the client either by changing the design OR by putting the results into their proper perspective. There are many other ways that a complex model like this can produce skewed results, and I have little doubt more will be found in the future.
 
This is not to say that we can’t trust results. Doctors have to treat patients based on what is known today and we must do the same for our clients. The important thing is that we have to acknowledge we have things to learn. As researchers that should be easy for us…
 

GRIT Top 50 Report

I appreciate that we are once again in the GRIT 50 Most Innovative Research Agencies. Innovation has always been important to me and so I am quite gratified when I see our efforts being recognized. What I don't know is how people are defining innovation.

I think as an industry we sometimes label things as innovative that are not while failing to recognize some things that are genuinely innovative. In my view, innovation requires that we provide something of value that wasn't available before. Anything short of that may be 'interesting' but not 'innovative'.

I would put things like neuroscience or most AI into the "interesting" category. There is a lot of potential but so far little to show in terms of tangible benefits. Over the years at TRC we've had many ideas that showed promise, but ultimately didn't prove out (my favorite being "Conjoint Poker"). It is the nature of innovation that some things will never leave the drawing board or 'laboratory', but without them there would be no innovation.

On the other side, I think ideas that save time and money are often not viewed as innovative unless they involve something totally new. I disagree. If I can figure out a way to do the same process faster and/or cheaper then I'm innovating. It may not look flashy, but if it allows clients to do something they couldn't otherwise do it is innovation.

...

Nouns vs. Verbs in Market Research

I’ve written many times about the importance of “knowing where your data has been”. The most advanced discrete choice conjoint, segmentation or regression is only as good as the data it relies on.  In the past I’ve written about many ways that we can bias respondents from question ordering to badly worded questions and even to push polling techniques. A new study published in Psychological Science would seem to indicate that bias can be created much more subtly than that.
 
Dr. Michael Reifen-Tagar and Dr. Orly Idan determined that you can reduce tension by relying on nouns rather than verbs. They are from Israel, so they were not lacking in "high tension" things to ask. For example, half of respondents were asked their level of agreement (on a six-point scale) with the "noun focused" statement "I support the division of Jerusalem" and the other half with the "verb focused" statement "I support dividing Jerusalem".
 
Consistent and statistically significant differences were found with the verb form garnering less support than the noun form. Follow-up questions also indicated that those who saw the verb form were angrier and showed less support for concessions toward the Palestinians.  
 
Is this a potential problem for researchers? My answer would be “potentially”. 
 
The obvious example might be in published opinion polls. One can imagine a crafty person creating a questionnaire in which issues they agree with are presented in noun form (thus garnering higher agreement from the general public) and ones they disagree with in verb forms (thus garnering lower agreement). It is unlikely that anyone would challenge those results (except for those of you clever enough to read my blog).   
It might also be the case in more consumer-oriented studies, though it is unclear whether the same effect would be felt in situations where tension levels are not so high. In our clients' best interest, however, it makes sense to be consistent and thereby eliminate another form of bias.
 
Conjoint and Modern Market Research

In my last blog I referenced an article about design elements that no longer serve a purpose, and I argued that techniques like Max-Diff and conjoint can help determine whether these elements are really necessary or not. Today I'd like to ask the question: what do we as researchers still use that no longer serves a purpose?
 
For many years the answer would have been telephone interviewing. We continued to use telephone interviewing long after it became clear that web was a better answer. The common defense was "it is not representative", which was true, but telephone data collection was no longer representative either. I'm not saying that we should abandon telephone interviewing…there are certainly times when it is a better option (for example, when talking to your client's customers and you don't have email addresses). I'm just saying that the notion that we need a phone sample to make a study representative is unfounded.
 
I think, though, we need to go further. We still routinely use cross tabs to ferret out interesting information. The fact that these interesting tidbits might be nothing more than noise doesn't stop us from doing so. Further, the many "significant differences" we uncover are often not significant at all…they are statistically discernible, but not significant from a business decision-making standpoint. Still, the automatic sig testing makes us pause to ponder them.
 
Wouldn't it be better to dig into the data and see what it tells us about our starting hypothesis? Good design means we thought about the hypothesis and the direction we needed during the questionnaire development process, so we know what questions to start with, and then we can follow the data wherever it leads. While in the past this was impractical, we now live in a world where analysis packages are easy to use. So why are we wasting time looking through decks of tables?
 
There are of course times when having a deck of tables could be a time saver, but like telephone interviewing, I would argue we should limit their use to those times and not simply produce tables because “that’s the way we have always done it”.  
New Product Research and the Car Grille

I read an interesting article about design elements that no longer serve a purpose, but continue to exist. One of the most interesting is the presence of a grille on electric cars.
 
Conventional internal combustion engine cars need a grille because the engine needs air to flow over the radiator which cools the engine. No grille would mean the car would eventually overheat and stop working. Electric cars, however, don’t have a conventional radiator and don’t need the air flow. The grille is there because designers fear that the car would look too weird without it.  It is not clear from the article if that is just a hunch or if it has been tested.
   
It would be easy enough to test this out. We could simply show some pictures of cars and ask people which design they like best. A Max-Diff approach or an agile product like Idea Magnet™ (which uses our proprietary Bracket™ prioritization tool) could handle such a task. If the top choices were all pictures that did not include a grille we might conclude that this is the design we should use. There is a risk in this conclusion.
 
To really understand preference, we need to use a discrete choice conjoint. The exercise I envision would combine the pictures with other key features of the car (price, gas mileage, color…). We might include several pictures taken from different angles that highlight other design features (being careful to not have pictures that contradict each other…for example, one showing a spoiler on the back and another not). By mixing up these features we can determine how important each is to the purchase decision.  
It is possible that the results of the conjoint would indicate that people prefer not having a grille AND that the most popular models always include a grille. How?
 
Imagine a situation in which 80% of people prefer “no grille” and 20% prefer “grille”. The “no grille” people prefer it, but it is not the most important thing in their decision. They are more interested in gas mileage and car color than anything else. The “grille” folks, however, are very strong in their belief. They simply won’t buy a car if it doesn’t have one. As such, cars without a grille start with 20% of the market off limits. Cars with a grille, however, attract a good number of “no grille” consumers as well as those for whom it is non-negotiable.
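The arithmetic behind this scenario is worth making concrete (all numbers are assumed for illustration): suppose five comparable models compete, only one of them grille-less, the mild "no grille" majority chooses on other features, and the hard-core "grille" segment refuses the grille-less car entirely.

```python
# Illustrative market arithmetic (every number here is assumed).
pop = 1000
no_grille_fans = 800   # prefer no grille, but other features dominate
grille_only = 200      # will not buy a car without a grille

models = 5             # five comparable models; only one lacks a grille
grille_models = models - 1

# Mild-preference buyers split evenly across all five models
# (their grille preference is swamped by mileage, color, etc.).
per_model_mild = no_grille_fans / models            # 160 buyers
# Hard-core buyers split across the four grille models only.
per_model_hardcore = grille_only / grille_models    # 50 buyers

share_grilleless = per_model_mild / pop                      # 0.16
share_grille = (per_model_mild + per_model_hardcore) / pop   # 0.21
print(share_grilleless, share_grille)
```

Despite an 80% preference for no grille, the grille-less model finishes last: the small, intransigent segment outweighs the large, indifferent one.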
 
Conjoint might also find that the size of the grille, or alternatives to it, can overcome even hard-core "grille" loving consumers. It is also worth considering that preferences will change over time. For example, it isn't hard to imagine that early automobiles (horseless carriages, as they were called originally) had a place to hold a buggy whip (common on horse-drawn carriages), but over time consumers determined they were not necessary (or perhaps that is how the cup holder was born :)).
 
In short, conjoint is a critical tool to ensure that new technologies have a chance to take hold.
 

Market Research Without Bias

The Economist Magazine did an analysis of political book sales on Amazon to see if there were any patterns. Anyone who uses social media will not be surprised that readers tended to buy books from either the left or the right...not both. This follows an increasing pattern of people looking for validation rather than education and of course it adds to the growing divide in our country. A few books managed a good mix of readers from both sides, though often these were books where the author found fault with his or her own side (meaning a conservative trashing conservatives or a liberal trashing liberals).

I love this use of big data and hopefully it will lead some to seek out facts and opinions that differ from their own. These facts and opinions need not completely change an individual's own thinking, but at the very least they should give one a deeper understanding of the issue, including an understanding of what drives others' thinking.

In other words, hopefully the public will start thinking more like effective market researchers.

We could easily design research that validates the conventional wisdom of our clients.

• We can frame opinions by the way we ask questions or by the questions we ask beforehand.
• We can omit ideas from a max-diff exercise simply because our "gut" tells us they are not viable.
• We can design a discrete choice study with features and levels that play to our client's strengths.
• We can focus exclusively on results that validate our hypothesis.

...
3 Mistakes with Conjoint in New Product and Pricing Research
Discrete Choice Conjoint is a powerful tool for, among other things, pricing and product development research. It is flexible and can handle even the most complex of products. That said, it requires thoughtful design with an understanding of how design will impact results. Here are three mistakes that often lead to flawed design:

 

Making the exercise too complex

The flexibility of conjoint means you can include large numbers of features and levels. The argument for doing so is a strong one…including everything will ensure the choices being made are as accurate as possible. In reality, however, respondents are consumers, and consumers don't like complexity. Walk down the aisle of any store and note that the front of the package doesn't tell you everything about a product…just the most important things. Retailers know that too much complexity actually lowers sales. Our own research shows that as you add complexity, the importance of the easiest-to-evaluate feature (normally price) rises…in other words, respondents ignore the wealth of information and focus more on price.
 

What to do:

Limit the conjoint to the most critical features needed to meet the objectives of the research. If you can’t predict those in advance, then do research to figure it out. A custom Max-Diff to prioritize features or a product like our Idea Magnet (which uses Bracket) will tell you what to include. Other features can be asked about outside the conjoint.  

 

Having unbalanced numbers of levels

Some features only have two levels (for example, on a car conjoint we might have a feature for "Cruise Control" that is either present or not present). Others, however, have many levels (again, on a car conjoint we might offer 15 different color choices). Not only can including too many levels increase complexity (see point 1), but it can actually skew results. If one feature has many more levels than the rest, the importance of that feature will almost certainly be overstated.
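The inflation is easy to see in a toy simulation (a sketch under simplifying assumptions: every level's true utility is identical, estimates just add independent noise, and "importance" is taken as the range of a feature's estimated utilities, as is common practice):

```python
import random

random.seed(42)

def importance(n_levels, noise=1.0):
    """Range of estimated utilities for a feature whose TRUE level
    utilities are all equal (zero) -- any spread is estimation noise."""
    utils = [random.gauss(0, noise) for _ in range(n_levels)]
    return max(utils) - min(utils)

# Average apparent importance over many simulated studies.
trials = 2000
two_levels = sum(importance(2) for _ in range(trials)) / trials
fifteen_levels = sum(importance(15) for _ in range(trials)) / trials
print(two_levels, fifteen_levels)  # the 15-level feature looks far bigger
```

Even though neither feature matters at all here, the 15-level feature's utility range (and hence its apparent importance) comes out several times larger, purely because more levels give noise more chances to produce extremes.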
 

What to do:

As with point one, try to limit the levels to those most critical to the research.  For example, if you are using conjoint to determine brand value you don’t need to include 15 colors…five or six will do the job.  If you can’t limit things, then at least understand that the importance of the feature is being overstated and consider that as you make decisions.  
 

Not focusing on what the respondent sees

Conjoint requires a level of engagement that most questions do not. The respondent has to consider multiple products, each with multiple features and make a reasoned choice. Ultimately they will make choices, but without engagement we can’t be sure those choices represent anything more than random button pushing. Limiting complexity (point 1 again) helps, but it isn’t always enough.   
 

What to do:

Bring out your creative side…make the exercise look attractive. Include graphics (logos for example). If you can make the choice exercise look more like the real world then do so. For example, if the conjoint is about apparel, present the choices on simulated “hang tags”, so consumers see something like they would see in a store. As long as your presentation is not biasing results (for example, making one product look nicer than another) then anything goes. 
 
These are three of the most common design errors, but there are of course many more. I’m tempted to offer a fourth, “Not working with an experienced conjoint firm”, but that of course would be too self-serving!
 

3 Tips for 30 in New Product Research

TRC is celebrating 30 years in business…a milestone to be sure.  

Being a numbers guy, I did a quick search to see how likely it is for a business to survive 30 years. Only about 1 in 5 make it to 15 years, but there isn’t much data beyond that. Extrapolation beyond the available data range is dangerous, but it seems likely that less than 10% of businesses ever get to where we are. To what do I owe this success then?  

It goes without saying that building strong client relationships and having great employees are critical. But I think there are three things that are key to having both those things:

Remaining Curious

I've always felt that researchers need to be curious, and I'd say the same for entrepreneurs. Obviously being curious about your industry will bring value, but even curiosity about subjects that have no obvious tie-in can lead to innovation. For example, by learning more about telemarketing I discovered digital recording technology and applied it to our business to improve quality.

...
New Product Research, Development and the Inventor

A few times a week I get the privilege of talking to an inventor/entrepreneur. The products they call about range from pet toys to sophisticated electronic devices, but they all have one thing in common…they want a proof of concept for their invention. In most cases they want it in order to attract investors or to sell their invention to corporate entities.
 
Of course, unlike our Fortune 500 clients, they also have limited budgets. They've often tapped their savings testing prototypes and trying to get a patent, so they are wary of spending a lot on consumer research. Even though only about a third of these conversations end up in our doing work for them, I enjoy them all.
 
First off, it is fun educating people on the various tools available for studying concepts. I typically start off telling them about the range of techniques, from simple concept evaluations (like our Idea Audit) to more complex conjoint studies. I succinctly outline the additional learning you get as the budget increases. These little five-to-ten-minute symposiums help me become better at talking about what we do.
 
Second, talking to someone as committed to a product as an inventor is infectious. They can articulate exactly how they intend to use the results in a way that some corporate researchers can't (because they are not always told). While some of their needs are pretty typical (pricing research, for example), others are unique. I enjoy trying to find a range of solutions for them (from various new product research methods) that will answer the question at a budget they can afford.
 
In many cases, I even steer them away from research. For many inventions something like Kickstarter is all they need.  In essence the market decides if the concept has merit. If that is all they need then why waste money on primary research? My hope is that they succeed and return to us when they have more sophisticated needs down the road.
 
Of course, I particularly enjoy it when the inventor engages us for research. Often the product is different than anything else we’ve researched and there is just something special about helping out a budding entrepreneur. The fact that these engagements make us better researchers for our corporate research clients is just a bonus.   
 

New Product Research and the Floating Grille

I recently heard an old John Oliver comedy routine in which he talked about a product he'd stumbled upon...a floating barbecue grill. He hilariously makes the case that it is nearly impossible to find a rationale for such a product and I have to agree with him. Things like that can make one wonder if in fact we've pretty well invented everything that can be invented.

A famous quote attributed to Charles Holland Duell makes the same case: "Everything that can be invented has been invented". He headed up the Patent Office from 1898 to 1901 so it's not hard to see why he might have felt that way. It was an era of incredible invention which took the world that was largely driven by human and animal power into one in which engines and motors completely changed everything.

It is easy for us to laugh at such stupidity, but I suspect marketers of the future might laugh at the notion that we live in a particularly hard era for new product innovation. In fact, we have many advantages over our ancestors 100+ years ago. First, the range of possibilities is far broader. Not only do we have fields that didn't exist then (such as information technology), but we also have new challenges that they couldn't anticipate. For example, coming up with greener ways to deliver the same or better standard of living.

Second, we have tools at our disposal that they didn't have. Vast data streams provide insight into the consumer mind that Edison couldn't dream of. Of course I'd selfishly point out that tools like conjoint analysis or consumer driven innovation (using tools like our own Idea Mill) further make innovation easier.

The key is to use these tools to drive true innovation. Don't just settle for slight improvements to what already exists....great ideas are out there.

...

The Economist

Over the years our clients have increasingly looked to us to condense results. Their internal stakeholders often only read the executive summary and even then they might only focus on headlines and bold print. Where in the past they might have had time to review hundreds of splits of Max-Diff data or simulations in a conjoint, they now want us to focus our market research reporting on their business question and to answer it as concisely as possible. All of that makes perfect sense. For example, wouldn't you rather read a headline like "the Eight Richest People in the World Have More Wealth than Half the World's Population" than endless data tables that lay out all the ways that wealth is unfairly distributed? I know I would…if it were true.

The Economist Magazine did an analysis of the analysis that went into that headline-grabbing statement from Oxfam (a charity). The results indicate a number of flaws that are well worth understanding.

•    They included negative wealth. Some 400 million people have negative wealth (they owe more than they own). So it requires lots of people with very low positive net worth to match the negative wealth of these 400 million people…thus making the overall group much larger than it might have been.    

•    For example, there are 21 million Americans with a combined negative net worth of over $350 billion. Most of them are not people you would associate with being very poor…rather, they have borrowed money to make their lives better now with a plan to pay it off later.

•    They were looking only at material wealth…meaning hard assets like property and cash. Even ignoring wealth like that of George Bailey ("The richest man in town!"), each of us possesses wealth in terms of future earning potential. Bill Gates will still have more wealth than a farmer in sub-Saharan Africa, but collectively half the world's population has a lot of earnings potential.

...

Pollsters Went Wrong

The surprising result of the election has lots of people questioning the validity of polls…how could they have so consistently predicted a Clinton victory? Further, if the polls were wrong, how can we trust survey research to answer business questions? Ultimately even sophisticated techniques like discrete choice conjoint or max-diff rely upon these data, so this is not an insignificant question.

 
As someone whose firm conducts thousands and thousands of surveys annually, I thought it made sense to offer my perspective. So here are five reasons that I think the polls were “wrong” and how I think that problem could impact our work.

 

 

5 Reasons Why the Polls Went 'Wrong'


1) People Don’t Know How to Read Results
Most polls had the race in the 2-5% range and the final tally had it nearly dead even (Secretary Clinton winning the popular vote by a slight margin). At the low end, this range is within the margin of error. At the high end, it is not far outside of it. Thus, even if everything else were perfect, we would expect that the election might well have been very close.  
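The margin-of-error math is straightforward for a simple random sample (a sketch; the poll size and candidate share are illustrative, and real polls apply weighting and design effects that widen the interval further):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll of ~1,000 respondents, candidate at 48%:
moe = margin_of_error(0.48, 1000)
print(round(100 * moe, 1))  # roughly +/- 3.1 points
```

And because the margin applies to each candidate's share separately, the uncertainty on the gap between two candidates is roughly double that, which easily covers a 2-5% polling lead.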

...

New Product Pricing Research and eBay

I've become a huge fan of podcasts, downloading dozens every week and listening to them on the drive to and from work. The quantity and quality of material available is incredible. This week another podcast turned me on to eBay's podcast "Open for Business". Specifically, the title of episode three, "Price is Right", caught my ear.
While the episode was of more use to someone selling a consumer product than to someone selling professional services, I got a lot out of it.
First off, they highlighted their “Terapeak” product which offers free information culled from the massive data set of eBay buyers and sellers. For this episode they featured how you can use this to figure out how the market values products like yours. They used this to demonstrate the idea that you should not be pricing on a “cost plus” basis but rather on a “value” basis.
From there they talked about how positioning matters and gave a glimpse of a couple of market research techniques for pricing. In one case, it seemed like they were using the Van Westendorp Price Sensitivity Meter. The results indicated a range of prices that was far below where they wanted to price things. This led to a discussion of positioning (in this case, the product was an electronic picture frame which they hoped to position not as a consumer electronics product but as home décor). The researchers here didn’t do anything to position the product, and so consumers compared it to an iPad, which led to the unfavorable view of pricing.
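For readers unfamiliar with the technique, here is a minimal sketch of the Van Westendorp idea, using made-up responses. Each respondent names the price below which the product seems suspiciously cheap and the price above which it is too expensive; the "optimal price point" is conventionally taken as the crossing of those two cumulative curves (the full method uses four questions; this sketch shows just the one intersection):

```python
import numpy as np

# Hypothetical answers (in dollars) to two of the four Van Westendorp questions.
too_cheap     = np.array([20, 30, 35, 40, 45, 50, 40, 55, 60, 35])
too_expensive = np.array([50, 60, 70, 55, 80, 65, 75, 90, 60, 70])

grid = np.arange(10, 121)
# Share who would call each grid price "too cheap" falls as price rises;
# share who would call it "too expensive" rises.
pct_too_cheap     = np.array([(too_cheap >= p).mean() for p in grid])
pct_too_expensive = np.array([(too_expensive <= p).mean() for p in grid])

# Optimal price point (OPP): where the two curves cross.
opp = grid[np.abs(pct_too_cheap - pct_too_expensive).argmin()]
print(f"Optimal price point near ${opp}")
```

With real data you would plot all four curves and report the full acceptable price range, not just the single crossing.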
Finally, they talked to another researcher who indicated that she uses a simple “yes/no” technique…essentially “would you buy it for $XYZ?” She said that this matched the marketplace better than asking people to “name their price”.  
Of the two methods cited, I tend to go with the latter. Any reader of this blog knows that I favor questions that mimic the marketplace vs. asking strange questions that you wouldn’t consider in real life (“What’s the most you would pay for this?”). Of course, there are a ton of choices that were not covered, including conjoint analysis, which I think is often the most effective means to set prices (see our White Paper - How to Conduct Pricing Research for more).
Still, there was much that we as researchers can take from this. As noted, it is important to frame things properly. If the product will be sold in the home décor department, it is important to set the table along those lines and not allow the respondent to see it as something else. I have little doubt that if the Van Westendorp questions had been preceded by proper framing and messaging, the results would have been different.
I also think we should make more use of big data tools like Terapeak and Google Analytics. Secondary research has never been easier! In the case of pricing research, knowing the range of prices being paid now can provide a good guide on what range of prices to include in, say, a discrete choice exercise. This is true even if the product has a new feature not currently available. Terapeak allows you to view prices over time, so you can see the impact of the last big innovation, for example.
Overall, I commend eBay for their podcast. It is quite entertaining and provides a lot of useful information…especially for someone starting a new business.


Many researchers are by nature math geeks. We are comfortable with numbers and statistical methods like regression or max-diff. Some find the inclusion of fancy graphics to be just a distraction...just wasted space on the page that could be used to show more numbers! I've even heard infographics defined as "information lite". Surely top academics think differently!
No doubt if you asked top academics they might well tell you that they prefer to see the formulas and the numbers and not graphics. This is no different than respondents who tend to tell us that things like celebrity endorsements don't matter until we use an advanced method like discrete choice conjoint to prove otherwise.
Bill Howe and his colleagues at the University of Washington in Seattle figured out a way to test the power of graphics without asking. They built an algorithm that could distinguish, with a high degree of success, among diagrams, equations, photographs, plots (bar charts, for example) and tables. They then exposed the algorithm to 650,000 papers containing over 10 million figures.
For each paper they also calculated an Eigenfactor score (similar to what Google uses for search rankings) to rate the paper’s importance by looking at how often it is cited.
On average, papers had one diagram for every three pages and 1.67 citations. Papers with more diagrams per page tended to get two extra citations for every additional diagram per page. So clearly, even among academics, diagrams seemed to increase the chances that the papers were read and the information was used.
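The relationship behind that "two extra citations per additional diagram per page" figure is essentially a regression slope. A minimal sketch with fabricated data (not the University of Washington data set) shows the kind of fit involved:

```python
import numpy as np

# Toy data: diagrams per page vs. citation count for a handful of papers
# (invented for illustration only).
diagrams_per_page = np.array([0.0, 0.2, 0.33, 0.4, 0.6, 0.8, 1.0])
citations         = np.array([1,   1,   2,    2,   2,   3,   3  ])

# Ordinary least-squares line: citations ~ slope * diagrams_per_page + intercept
slope, intercept = np.polyfit(diagrams_per_page, citations, 1)
print(f"~{slope:.1f} extra citations per additional diagram per page")
```

On this toy data the slope comes out a little over two, echoing the paper's finding; the real analysis, of course, controlled for far more than a single predictor.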
Now we can of course say that this is "correlation" and not "causation" and that would be correct. It will take further research to truly validate the notion that graphics increase interest AND comprehension.
I'm not waiting for more research. These findings validate where the industry has been going. Clients are busy and their stakeholders are not as engaged as they might have been in the past. They don't care about the numbers or the formulas (by the way, formulas in academic papers reduced the frequency with which they were cited)...they care about what the data are telling them. If we can deliver those results in a clear graphical manner, it saves them time, helps them internalize the results and, because of that, increases the likelihood that the results will be used.

So while graphics might not make us feel smart...they actually should.

TRC is proud to announce that it was voted as one of the top 50 most innovative firms on the market research supplier side. We’re big believers in trying to advance the business of research and we’re excited to see that the GRIT study recognized that. 
 
Our philosophy is to engage respondents using a combination of advanced techniques and better interfaces. Asking respondents what they want, or why, without context leads to results that overstate real preferences (consumers, after all, want “everything”) and often miss what is driving those decisions (Behavioral Economics tells us that we often don’t know why we buy what we buy).
 
Through the use of off-the-shelf tools like Max-Diff or the entire family of conjoint methods, we can better engage respondents AND gather much more actionable data. Through these tools and some of our own innovations like Bracket™, we can efficiently understand real preferences and use analytics to tell us what is driving them.
 
Our ongoing long-term partnerships with top academics at universities throughout the country also help us stay innovative. By collaborating with them, we are able to develop new innovations that better unlock what drives consumers.
 
The GRIT study tracks which supplier firms are perceived as most innovative within the global market research industry. It’s a brand tracker using the attribute of ‘innovation’ as the key metric. The answers are gathered on an unaided basis: the survey asks respondents to list the research companies they consider innovative, then to rank them from least to most innovative, and finally to explain why they consider them innovative. Given the unaided nature of the study, it is quite an achievement for a firm like TRC to make the same list as firms hundreds of times our size.
 
Again, we’re excited to be recognized and hope you’ll be able to experience the innovative benefits we offer for yourself.

December and January are full of articles that tell us what to expect in the New Year. There is certainly nothing wrong with thinking about the future (far from it), but it is important that we do so with a few things in mind. Predictions are easy to make, but hard to get right, at least consistently.


First, to some extent we all suffer from the “past results predict the future” model. We do so because quite often they do, but there is no way to know when they no longer will. As such, be wary of predictions that say something like “last year neuro research was used by 5% of Fortune 500 companies…web panels hit the 5% mark and then exploded to more than 50% within three years.” It might be right to assume the two will have similar outcomes, or it might be that the two situations (both in terms of the technique and in terms of the market at the time) are quite different.


Second, we all bring a bias to our thinking. We have made business decisions based on where we think the market is going and so it is only natural that our predictions might line up with that. At TRC we’ve invested in agile products to aid in the early stage product development process. I did so because I believe the market is looking for rigorous, fast and inexpensive ways to solve problems like ideation, prioritization and concept evaluation. Quite naturally if I’m asked to predict the future I’ll tend to see these as having great potential.


Third, some people will be completely self-serving in their predictions. So, for example, we do a tremendous amount of discrete choice conjoint work. I certainly would like to think that this area will grow in the next year so I might be tempted to make the prediction in the hopes that readers will suddenly start thinking about doing a conjoint study.   


Fourth, an expert isn’t always right. Hearing predictions is useful, but ultimately you have to consider the reasoning behind them, seek out your own sources of information and consider things that you already know. Just because someone has a prediction published, doesn’t mean they know the future any better than you do. 

...

I recently finished Brian Grazer’s book A Curious Mind and I enjoyed it immensely. I was attracted to the book both because I have enjoyed many of the movies he made with Ron Howard (Apollo 13 being among my favorites) and because of the subject…curiosity.

I have long believed that curiosity is a critical trait for a good researcher. We have to be curious about our clients’ needs, new research methods and, most important, the data itself. While a cursory review of cross tabs will produce some useful information, it is digging deeper that allows us to make the connections that tell a coherent story. Without curiosity, analytical techniques like conjoint or max-diff don’t help.

The book shows how Mr. Grazer’s insatiable curiosity has brought him into what he calls “curiosity conversations” with a wide array of individuals, from Fidel Castro to Jonas Salk. He had these conversations not because he thought there might be a movie in it, but because he wanted to know more about these individuals. He often came out of the conversations with a new perspective and yes, sometimes even ideas for a movie.

One example was with regard to Apollo 13. He had met Jim Lovell (the commander of that fateful mission) and found his story interesting, but he wasn’t sure how to make it into a movie. The technical details were just too complicated.

Later he was introduced by Sting to Veronica de Negri. If you don’t know who she is (I didn’t), she was a political prisoner in Chile for 8 months, during which she was brutally tortured. To survive, she had to create an alternate reality for herself. In essence, by focusing on the one thing she still had control of (her mind), she was able to endure the things she could not control. Mr. Grazer used that logic to help craft Apollo 13. Instead of being a movie about technical challenges, it became a movie about the human spirit and its ability to overcome even the most difficult circumstances.

...

In new product market research we often discuss the topic of bias, though typically these discussions revolve around issues like sample selection (representativeness, non-response, etc.). But what about methodological or analysis bias? Is it possible that we impact results by choosing the wrong market research methods to collect the data or to analyze the results?


A recent article in the Economist presented an interesting study in which the same data set and the same objective were given to 29 different researchers. The objective was to determine whether dark-skinned soccer players were more likely to get a red card than light-skinned players. Each researcher was free to use whatever methods they thought best to answer the question.


Both statistical methods (Bayesian clustering, logistic regression, linear modeling...) and analysis techniques (some, for example, considered that some positions might be more likely to get red cards and thus the data needed to be adjusted for that) differed from one researcher to the next. No surprise, then, that results varied as well. One found that dark-skinned players were only 89% as likely to get a red card as light-skinned players, while another found dark-skinned players were three times MORE likely to get the red card. So who is right?
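A toy example makes it clear how two defensible analysis choices can flip the conclusion on the same data. The numbers below are invented (not the study's data) and are built so that adjusting for position reverses the crude comparison, a Simpson's-paradox-style effect:

```python
# Toy data: (skin_tone, position, red_cards, games) -- invented for illustration.
rows = [
    ("dark",  "defender", 10, 200),
    ("dark",  "forward",   4, 800),
    ("light", "defender", 20, 800),
    ("light", "forward",   1, 200),
]

def rate(tone, position=None):
    """Red cards per game for a group, optionally within one position."""
    sel = [r for r in rows if r[0] == tone and (position is None or r[1] == position)]
    return sum(r[2] for r in sel) / sum(r[3] for r in sel)

# Analyst A: crude comparison, ignoring position.
crude_ratio = rate("dark") / rate("light")

# Analyst B: compare within each position, then average (a crude stratified adjustment).
ratios = [rate("dark", p) / rate("light", p) for p in ("defender", "forward")]
adj_ratio = sum(ratios) / len(ratios)

print(f"crude: {crude_ratio:.2f}x, position-adjusted: {adj_ratio:.2f}x")
```

Here the crude analysis says dark-skinned players get *fewer* red cards per game, while the position-adjusted analysis says they get *more*; both analysts used the same data and neither made an arithmetic error.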


There is no easy way to answer that question. I'm sure some of the analyses can easily be dismissed as too superficial, but in other cases the "correct" method is not obvious. The article suggests that when important decisions regarding public policy are being considered, the government should contract with multiple researchers and then compare and contrast their results to gain a fuller understanding of what policy should be adopted. I'm not convinced this is such a great idea for public policy (it seems like it would only lead to more polarization as groups pick the results they most agreed with going in), but the more important question is: what can we as researchers learn from this?


In custom new product market research the potential for different results is even greater. We are not limited to existing data. Sure, we might use that data (customer purchase behavior, for example), but we can and will supplement it with data that we collect. These data can be gathered using a variety of techniques and question types. Once the data are collected, we have the same potential to come up with different results as the study above.

...
