
Rich Raquet

President, TRC


Rich brings a passion for quantitative data and the use of choice to understand consumer behavior to his blog entries. His unique perspective has allowed him to muse on subjects as far afield as dinosaurs and advanced technology, with insight into what each can teach us about doing better research.

A few times a week I get the privilege of talking to an inventor/entrepreneur. The products they call about range from pet toys to sophisticated electronic devices, but they all have one thing in common…they want a proof of concept for their invention. In most cases they want it in order to attract investors or to sell their invention to corporate entities.
 
Of course, unlike our Fortune 500 clients, they also have limited budgets. They’ve often tapped their savings testing prototypes and trying to get a patent, so they are wary of spending a lot on consumer research. Even though only about a third of these conversations lead to our doing work for them, I enjoy them all.
 
First off, it is fun educating people on the various tools available for studying concepts. I typically start off telling them about the range of techniques from simple concept evaluations (like our Idea Audit) to more complex conjoint studies. I succinctly outline the additional learning you get as the budget increases. These little five to ten minute symposiums help me become better at talking about what we do.
 
Second, talking to someone as committed to a product as an inventor is infectious. They can articulate exactly how they intend to use the results in a way that some corporate researchers can’t (because they are not always told). While some of their needs are pretty typical (pricing research, for example), others are unique. I enjoy trying to find a range of solutions for them (from various new product research methods) that will answer the question at a budget they can afford.
 
In many cases, I even steer them away from research. For many inventions, something like Kickstarter is all they need. In essence, the market decides if the concept has merit. If that is all they need, then why waste money on primary research? My hope is that they succeed and return to us when they have more sophisticated needs down the road.
 
Of course, I particularly enjoy it when the inventor engages us for research. Often the product is different than anything else we’ve researched and there is just something special about helping out a budding entrepreneur. The fact that these engagements make us better researchers for our corporate research clients is just a bonus.   
 

I recently heard an old John Oliver comedy routine in which he talked about a product he'd stumbled upon...a floating barbecue grill. He hilariously makes the case that it is nearly impossible to find a rationale for such a product, and I have to agree with him. Things like that can make one wonder if in fact we've pretty well invented everything that can be invented.

A famous quote attributed to Charles Holland Duell makes the same case: "Everything that can be invented has been invented." He headed up the Patent Office from 1898 to 1901, so it's not hard to see why he might have felt that way. It was an era of incredible invention, one that took a world largely driven by human and animal power and turned it into one in which engines and motors changed everything.

It is easy for us to laugh at such stupidity, but I suspect marketers of the future might laugh at the notion that we live in a particularly hard era for new product innovation. In fact, we have many advantages over our ancestors 100+ years ago. First, the range of possibilities is far broader. Not only do we have fields that didn't exist then (such as information technology), but we also have new challenges that they couldn't anticipate. For example, coming up with greener ways to deliver the same or better standard of living.

Second, we have tools at our disposal that they didn't have. Vast data streams provide insight into the consumer mind that Edison couldn't dream of. Of course I'd selfishly point out that tools like conjoint analysis or consumer driven innovation (using tools like our own Idea Mill) further make innovation easier.

The key is to use these tools to drive true innovation. Don't just settle for slight improvements to what already exists...great ideas are out there.

...

Over the years our clients have increasingly looked to us to condense results. Their internal stakeholders often only read the executive summary and even then they might only focus on headlines and bold print. Where in the past they might have had time to review hundreds of splits of Max-Diff data or simulations in a conjoint, they now want us to focus our market research reporting on their business question and to answer it as concisely as possible. All of that makes perfect sense. For example, wouldn’t you rather read a headline like “the Eight Richest People in the World Have More Wealth than Half the World’s Population” than endless data tables that lay out all the ways that wealth is unfairly distributed? I know I would…if it were true.

The Economist Magazine did an analysis of the analysis that went into that headline-grabbing statement from Oxfam (a charity). The results indicate a number of flaws that are well worth understanding.

•    They included negative wealth. Some 400 million people have negative wealth (they owe more than they own). So it requires lots of people with very low positive net worth to offset the negative wealth of these 400 million people…thus making the overall group much larger than it might have been (a toy illustration follows these bullets).

•    For example, there are 21 million Americans with a combined net worth of negative $350 billion. Most of them are not people you would associate with being very poor…rather, they have borrowed money to make their lives better now with a plan to pay it off later.

•    They looked only at material wealth…meaning hard assets like property and cash. Even ignoring wealth like that of George Bailey (“The richest man in town!”), each of us possesses wealth in terms of future earning potential. Bill Gates will still have more wealth than a farmer in sub-Saharan Africa, but collectively half the world’s population has a lot of earnings potential.
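To make the first bullet concrete, here is a toy sketch (with invented numbers, not Oxfam’s or The Economist’s) showing how including negative net worth inflates the number of people needed at the bottom to reach a given combined wealth total:

```python
# Toy numbers only: how negative wealth inflates the size of the "bottom" group
# needed to reach a given combined wealth target.
def people_to_reach(target, wealth):
    cumulative, count = 0, 0
    for w in sorted(wealth):          # start from the lowest (most negative) net worth
        cumulative += w
        count += 1
        if cumulative >= target:
            return count
    return None

borrowers = [-50_000] * 4             # e.g. recent grads or homeowners with net debt
savers = [1_000] * 200 + [10_000] * 100

print(people_to_reach(100_000, borrowers + savers))  # with negative wealth included: 214 people
print(people_to_reach(100_000, savers))              # with it excluded: 100 people
```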

...

The surprising result of the election has lots of people questioning the validity of polls…how could they have so consistently predicted a Clinton victory? Further, if the polls were wrong, how can we trust survey research to answer business questions? Ultimately even sophisticated techniques like discrete choice conjoint or max-diff rely upon these data, so this is not an insignificant question.

 
As someone whose firm conducts thousands and thousands of surveys annually, I thought it made sense to offer my perspective. So here are five reasons that I think the polls were “wrong” and how I think that problem could impact our work.

 

 

5 Reasons Why the Polls Went 'Wrong'


1) People Don’t Know How to Read Results
Most polls had Clinton ahead by 2-5 points, and the final tally was nearly dead even (Secretary Clinton winning the popular vote by a slight margin). At the low end, that gap is within the margin of error; at the high end, it is not far outside it. Thus, even if everything else were perfect, we would expect that the election might well have been very close.
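As a rough illustration of why a 2-5 point gap is less conclusive than it sounds, here is a back-of-the-envelope margin of error calculation for a hypothetical poll of 1,000 respondents (the sample size is assumed, not taken from any specific poll). Keep in mind that the margin on the gap between two candidates is roughly twice the margin on either candidate’s share.

```python
# Standard 95% margin of error for a single proportion; figures are illustrative.
import math

n = 1000   # assumed number of respondents
p = 0.5    # worst-case proportion (maximizes the margin)
z = 1.96   # z-score for 95% confidence

margin = z * math.sqrt(p * (1 - p) / n)
print(f"Margin of error on one candidate's share: +/- {margin:.1%}")
print(f"Approximate margin on the gap between candidates: +/- {2 * margin:.1%}")
```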

...

I’ve become a huge fan of podcasts, downloading dozens every week and listening to them on the drive to and from work. The quantity and quality of material available is incredible. This week another podcast turned me on to eBay’s podcast “Open for Business”. Specifically, the title of episode three, “Price is Right”, caught my ear.
While the episode was of more use to someone selling a consumer product than to someone selling professional services, I got a lot out of it.
First off, they highlighted their “Terapeak” product which offers free information culled from the massive data set of eBay buyers and sellers. For this episode they featured how you can use this to figure out how the market values products like yours. They used this to demonstrate the idea that you should not be pricing on a “cost plus” basis but rather on a “value” basis.
From there they talked about how positioning matters and gave a glimpse of a couple of market research techniques for pricing. In one case, it seemed like they were using the Van Westendorp approach. The results indicated a range of prices that was far below where they wanted to price things. This led to a discussion of positioning (in this case, the product was an electronic picture frame which they hoped to position not as a consumer electronic product but as home décor). The researchers here didn’t do anything to position the product, so consumers compared it to an iPad, which led to the unfavorable view of pricing.
Finally, they talked to another researcher who indicated that she uses a simple “yes/no” technique…essentially “would you buy it for $XYZ?” She said that this matched the marketplace better than asking people to “name their price”.  
Of the two methods cited I tend to go with the latter. Any reader of this blog knows that I favor questions that mimic the marketplace vs. asking strange questions that you wouldn’t consider in real life (“what’s the most you would pay for this?”). Of course, there are a ton of choices that were not covered, including conjoint analysis, which I think is often the most effective means to set prices (see our White Paper - How to Conduct Pricing Research for more).
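For what it’s worth, here is a minimal sketch of the “would you buy it for $X?” idea, using simulated respondents and made-up price points (none of this reflects the podcast’s actual data):

```python
# Simulated yes/no purchase-intent pricing; prices and willingness to pay are invented.
import random

random.seed(1)
prices = [19, 29, 39, 49, 59]

def would_buy(price):
    wtp = random.uniform(20, 60)      # a respondent's hidden willingness to pay
    return wtp >= price

# Each price is shown to its own group of 200 simulated respondents.
for p in prices:
    share = sum(would_buy(p) for _ in range(200)) / 200
    print(f"${p}: {share:.0%} would buy; revenue index = {p * share:.1f}")
```

In practice you would spread the price points across respondents and look at where demand and revenue trade off, rather than asking anyone to name a price.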
Still, there was much that we as researchers can take from this. As noted, it is important to frame things properly. If the product will be sold in the home décor department, it is important to set the table along those lines and not allow the respondent to see it as something else. I have little doubt that if the Van Westendorp questions had been preceded by proper framing and messaging, the results would have been different.
I also think big data tools like Terapeak and Google Analytics are something we should make more use of. Secondary research has never been easier! In the case of pricing research, knowing the range of prices being paid now can provide a good guide on what range of prices to include in, say, a Discrete Choice exercise. This is true even if the product has a new feature not currently available. Terapeak allows you to view prices over time, so you can see the impact of the last big innovation, for example.
Overall, I commend eBay for their podcast. It is quite entertaining and provides a lot of useful information…especially for someone starting a new business.


Many researchers are by nature math geeks. We are comfortable with numbers and statistical methods like regression or max-diff. Some see fancy graphics as just a distraction...wasted space on the page that could be used to show more numbers! I've even heard infographics defined as "information lite". Surely top academics think differently!
No doubt if you asked top academics they might well tell you that they prefer to see the formulas and the numbers and not graphics. This is no different than respondents who tend to tell us that things like celebrity endorsements don't matter until we use an advanced method like discrete choice conjoint to prove otherwise.
Bill Howe and his colleagues at the University of Washington in Seattle figured out a way to test the power of graphics without asking. They built an algorithm that could distinguish, with a high degree of success, between diagrams, equations, photographs, plots (bar charts, for example) and tables. They then exposed the algorithm to 650,000 papers containing over 10 million figures.
For each paper they also calculated an Eigenfactor score (similar to what Google uses for search) to rate its importance (by looking at how often the paper is cited).
On average papers had 1 diagram for every three pages and 1.67 citations. Papers with more diagrams per page tended to get 2 extra citations for every additional diagram per page. So clearly, even among academics, diagrams seemed to increase the chances that the papers were read and the information was used.
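As a toy illustration of the kind of relationship being described (the data below are synthetic, not from the University of Washington study), a simple linear regression of citations on diagrams per page might look like this:

```python
# Synthetic example: regress citation counts on diagrams-per-page.
import numpy as np

rng = np.random.default_rng(0)
diagrams_per_page = rng.uniform(0, 1.5, size=500)
# Assume a baseline of ~1.67 citations plus ~2 extra per additional diagram per page, with noise.
citations = 1.67 + 2.0 * diagrams_per_page + rng.normal(0, 1.5, size=500)

slope, intercept = np.polyfit(diagrams_per_page, citations, 1)
print(f"Estimated citations = {intercept:.2f} + {slope:.2f} x diagrams per page")
```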
Now we can of course say that this is "correlation" and not "causation" and that would be correct. It will take further research to truly validate the notion that graphics increase interest AND comprehension.
I'm not waiting for more research. These findings validate where the industry has been going. Clients are busy and their stakeholders are not as engaged as they might have been in the past. They don't care about the numbers or the formulas (by the way, formulas in academic papers reduced the frequency with which they were cited)...they care about what the data are telling them. If we can deliver those results in a clear graphical manner it saves them time, helps them internalize the results and because of that increases the likelihood that the results will be used.

So while graphics might not make us feel smart...they actually should.


TRC is proud to announce that it was voted as one of the top 50 innovative firms on the market research supplier side. We’re big believers in trying to advance the business of research and we’re excited to see that the GRIT study recognized that.

Our philosophy is to engage respondents using a combination of advanced techniques and better interfaces. Asking respondents what they want or why without context leads to results that overstate real preferences (consumers, after all, want “everything”) and often miss what is driving those decisions (Behavioral Economics tells us that we often don’t know why we buy what we buy).

Through the use of off-the-shelf tools like Max-Diff or the entire family of conjoint methods, we can better engage respondents AND gather much more actionable data. Through these tools and some of our own innovations like Bracket™ we can efficiently understand real preference and use analytics to tell us what is driving them.

Our ongoing long-term partnerships with top academics at universities throughout the country also help us stay innovative. By collaborating with them we are able to drive new innovations that better unlock what drives consumers.

The GRIT study tracks which supplier firms are perceived as most innovative within the global market research industry. It’s a brand tracker using the attribute of ‘innovation’ as the key metric, with answers gathered on an unaided basis. The survey asks respondents to list the top three research companies they consider innovative, then to rank those companies from least to most innovative, and finally to explain why they think those companies are innovative. Given the unaided nature of the study, it is quite an achievement for a firm like TRC to make the same list as firms hundreds of times our size.

...

December and January are full of articles that tell us what to expect in the New Year. There is certainly nothing wrong with thinking about the future (far from it), but it is important that we do so with a few things in mind. Predictions are easy to make, but hard to get right, at least consistently.


First, to some extent we all suffer from the “past results predict the future” model. We do so because quite often they do, but there is no way to know when they no longer will. As such, be wary of predictions that say something like “last year neuro research was used by 5% of Fortune 500 companies…web panels hit the 5% mark and then exploded to more than 50% within three years.” It might be right to assume the two will have similar outcomes, or it might be that the two situations (both in terms of the technique and in terms of the market at the time) are quite different.


Second, we all bring a bias to our thinking. We have made business decisions based on where we think the market is going and so it is only natural that our predictions might line up with that. At TRC we’ve invested in agile products to aid in the early stage product development process. I did so because I believe the market is looking for rigorous, fast and inexpensive ways to solve problems like ideation, prioritization and concept evaluation. Quite naturally if I’m asked to predict the future I’ll tend to see these as having great potential.


Third, some people will be completely self-serving in their predictions. So, for example, we do a tremendous amount of discrete choice conjoint work. I certainly would like to think that this area will grow in the next year so I might be tempted to make the prediction in the hopes that readers will suddenly start thinking about doing a conjoint study.   


Fourth, an expert isn’t always right. Hearing predictions is useful, but ultimately you have to consider the reasoning behind them, seek out your own sources of information and consider things that you already know. Just because someone has a prediction published, doesn’t mean they know the future any better than you do. 

...

I recently finished Brian Grazer’s book A Curious Mind and I enjoyed it immensely. I was attracted to the book both because I have enjoyed many of the movies he made with Ron Howard (Apollo 13 being among my favorites) and because of the subject…curiosity.

I have long believed that curiosity is a critical trait for a good researcher. We have to be curious about our clients’ needs, new research methods and, most importantly, the data itself. While a cursory review of cross tabs will produce some useful information, it is digging deeper that allows us to make the connections that tell a coherent story. Without curiosity, analytical techniques like conjoint or max-diff don’t help.

The book shows how Mr. Grazer’s insatiable curiosity has brought him into what he calls “curiosity conversations” with a wide array of individuals, from Fidel Castro to Jonas Salk. He had these conversations not because he thought there might be a movie in it, but because he wanted to know more about these individuals. He often came out of the conversations with a new perspective and yes, sometimes even ideas for a movie.

One example relates to Apollo 13. He had met Jim Lovell (the commander of that fateful mission) and found his story to be interesting, but he wasn’t sure how to make it into a movie. The technical details were just too complicated.

Later he was introduced by Sting to Veronica de Negri. If you don’t know who she is (I didn’t), she was a political prisoner in Chile for eight months, during which she was brutally tortured. To survive she had to create for herself an alternate reality. In essence, by focusing on the one thing she still had control of (her mind), she was able to endure the things she could not control. Mr. Grazer used that logic to help craft Apollo 13. Instead of being a movie about technical challenges it became a movie about the human spirit and its ability to overcome even the most difficult circumstances.

...

In new product market research we often discuss the topic of bias, though typically these discussions revolve around issues like sample selection (representativeness, non-response, etc.). But what about methodological or analysis bias? Is it possible that we impact results by choosing the wrong market research methods to collect the data or to analyze the results?


A recent article in the Economist presented an interesting study in which the same data set and the same objective were given to 29 different researchers. The objective was to determine whether dark-skinned soccer players were more likely to get a red card than light-skinned players. Each researcher was free to use whatever methods they thought best to answer the question.


Both statistical methods (Bayesian clustering, logistic regression, linear modeling...) and analysis choices (some, for example, considered that certain positions might be more likely to get red cards and adjusted the data for that) differed from one researcher to the next. No surprise, then, that results varied as well. One found that dark-skinned players were only 89% as likely to get a red card as light-skinned players, while another found dark-skinned players were three times MORE likely to get the red card. So who is right?


There is no easy way to answer that question. I'm sure some of the analyses can easily be dismissed as too superficial, but in other cases the "correct" method is not obvious. The article suggests that when important decisions regarding public policy are being considered, the government should contract with multiple researchers and then compare and contrast their results to gain a fuller understanding of what policy should be adopted. I'm not convinced this is such a great idea for public policy (it seems like it would only lead to more polarization as groups pick the results they most agreed with going in), but the more important question is, what can we as researchers learn from this?


In custom new product market research the potential for different results is even greater. We are not limited to existing data. Sure we might use that data (customer purchase behavior for example), but we can and will supplement it with data that we collect. These data can be gathered using a variety of techniques and question types. Once the data are collected we have the same potential to come up with different results as the study above.

...

Very few clients will go to market with a new concept without some form of market research to test it first. Others will use some real world substitutes (such as A/B mail tests) to accomplish the same end. No one would argue against the effectiveness of things like this...they provide a scientific basis for making the right decision. Why is it, then, that in early-stage decision making, science is so often replaced by gut instinct?

Consider this...an innovation department cooks up a dozen or more ideas for new or improved products and services. At this point they are nothing more than ideas with perhaps some crude mock-ups to go along with them. Full-scale concept testing would be costly for this number of ideas and a real world test is certainly not in the cards. Instead, a "team" which might include product managers, marketing folks, researchers and even some of the innovation people who came up with the concepts is brought together to winnow the ideas down to a more manageable level.

The team carefully evaluates each concept, perhaps ranks them and provides their thinking on why they liked certain ones. These "independent" evaluations are tallied and the dozen concepts are reduced to two or three. These two or three are then developed further and put through a more rigorous and costly process - in-market testing. The concept or concepts that score best in this process are then launched to the entire market.

This process produces a result, but also some level of doubt. Perhaps the concept that the team thought was best scored badly in the more rigorous research, or the winning concept just didn't perform as well as the team thought it would. Does anyone wonder if perhaps some of the ideas that the team winnowed out might have performed even better than the "winners" they picked? What opportunities might have been lost if the best ideas were left on the drawing board?

The initial winnowing process is susceptible to various forms of error, including groupthink. The less rigorous process is used not because it is seen as best, but because the rigorous methods normally used are too costly to employ on a large list of items. Does that mean going with your gut is the only option?

...

Last week we held an event in New York in which Mark Broadie from Columbia University talked about his book “Every Shot Counts”. The talk and the book detail his analysis of a very large and complex data set…specifically the “ShotLink” data collected for over a decade by the PGA Tour. It details every shot taken by every pro at every PGA Tour tournament. He was able to use it to challenge some long held assumptions about golf…such as “Do you drive for show and putt for dough?”

On the surface the data set was not easy to work with. Sure, it had numbers like how long the hole was, how far the shot went, how far it was from the hole and so on. It also had data like whether the ball ended up in the fairway, on the green, in the rough, in a trap or the dreaded out of bounds. Every pro has a different set of skills and there was a surprising range of abilities even in this set, but he added the same data on tens of thousands of amateur golfers of various skill levels. So how can anyone make sense of such a wide range of data, and do it in a way that the amateur who scores 100 can be compared to the pro who frequently scores in the 60s?

You might be tempted to say that he used a regression analysis, but he did not. You might assume he used Hierarchical Bayesian estimation, as it has become more commonplace (it drives discrete choice conjoint, Max-Diff and our own Bracket™), but he didn’t use it here either.

Instead, he used simple arithmetic. No HB, no calculus, no Greek letters, just simple addition, subtraction, multiplication and division. At the base level, he simply averaged similar scores. Specifically he determined how many strokes it took on average for players to go from where they were to the hole. These averages were further broken down to account for where the ball started (not just distance, but rough, sand, fairway, etc) and how good the golfer was.
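Here is a minimal sketch of that kind of bucketing-and-averaging, using a handful of invented shot records (the field names and numbers are mine, not Broadie’s):

```python
# Invented shot records: (start distance in yards, lie, skill group, strokes taken to hole out).
from collections import defaultdict

shots = [
    (150, "fairway", "pro", 3), (150, "fairway", "pro", 2),
    (150, "rough", "pro", 3), (150, "rough", "amateur", 4),
    (150, "fairway", "amateur", 4), (10, "green", "pro", 2),
]

totals = defaultdict(lambda: [0, 0])   # per situation: [sum of strokes, number of shots]
for distance, lie, skill, strokes in shots:
    key = (distance, lie, skill)
    totals[key][0] += strokes
    totals[key][1] += 1

baseline = {key: total / count for key, (total, count) in totals.items()}
# Average strokes for a pro to hole out from 150 yards in the fairway (2.5 in this toy data).
print(baseline[(150, "fairway", "pro")])
```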

These simple averages allow him to answer any number of “what if” questions. For example, he can see on average how many strokes are saved by going an extra 50 yards off the tee (which turns out to be worth more than a comparable improvement in putting). He can also show that in fact neither driving nor putting is as important as the approach shot (the last full swing before putting the ball on the green). The ability to put the ball close to the hole on this shot is the biggest factor in scoring low.

...

An issue that comes up quite a bit when doing research is the proper way to frame questions. In my last blog I reported on our Super Bowl ad test in which we surveyed viewers to rank 36 ads based on their “entertainment value”. We did a second survey that framed the question differently to see if we could determine which ads were most effective at driving consideration of the product…in other words, the ads that did what ads are supposed to do!

As with life, framing, or context, is critical in research. First off, the nature of the questions is important. Where possible, the use of choice questions will work better than, say, rating scales. The reason is that consumers are used to making choices...ratings are more abstract. Techniques like Max-Diff, conjoint (typically Discrete Choice these days) or our own proprietary new product research technique Bracket™ get at what is important in a way that ratings can’t.

Second, the environment you create when asking the question must seek to put consumers in the same mindset they would be in when they make decisions in real life. For example, if you are testing slogans for the outside of a direct mail envelope, you should show the slogans on an envelope rather than just in text form.

Finally, you need to frame the question in a way that matches the real world result you want. In the case of a direct mail piece, you should frame the question along the lines of “which of these would you most likely open?” rather than “which of these slogans is most important?”. In the case of a Super Bowl ad (or any ad for that matter), asking about entertainment value is less important than asking about things like “consideration” or even “likelihood to tell others about it”.  

So, we polled a second group of people and asked them “which one made you most interested in considering the product as advertised?” The results were quite interesting.

...

Well, it is the time of year when America’s greatest sporting event takes place. I speak of course about the race to determine which Super Bowl ad is the best. Over the years there have been many ways to accomplish this, but as so often happens in research today, the methods are flawed.

First there is the “party consensus method”. Here people gathered to watch the big game call out their approval or disapproval of various ads. Beyond the fact that the “sample” is clearly not representative, this method has other flaws. At the party I was at we had a Nationwide agent, so criticism of the “dead kid” ad was muted. This is just one example of how people in the group can influence each other (anyone who has watched a focus group has seen this in action). The most popular ad was the Fiat ad with the Viagra pill…not because it was perhaps the favorite, but because parties are noisy and this ad was largely a silent picture.

Second, there is the “opinion leaders” method. The folks who have a platform to spout their opinion (be it TV, YouTube, Twitter or Facebook) tell us what to think. While certainly this will influence opinions, I don’t think tallying up their opinions really gets at the truth. They might be right some of the time, but listening to them is like going with your gut…likely you are missing something.

Third, there is the “focus group” approach. In this method a group of typical people is shuffled off to a room to watch the game and turn dials to rate the commercials they see. So, like any focus group, these “typical” people are of course atypical. In exchange for some money they were willing to spend four hours watching the game with perfect strangers. Further, are focus groups really the way to measure something like which ad is best? Focus groups can be outstanding at drawing out ideas, providing rich understandings of products and so on, but they are not (nor are they intended to be) quantitative measures.

The use of imperfect means to measure quantitative problems is not unique to Super Bowl ads. I’ve been told by many clients that budget and timing concerns require that they answer some quantitative questions with the opinions of their internal team, or their own gut or qualitative research. That is why we developed our agile and rigorous tools, including Message Test Express™ (MTE™).

...

Here in Philly we are recovering from the blizzard that wasn’t. For days we’d been warned of snow falling multiple inches per hour, winds causing massive drifts and the likelihood of it taking days to clear out. The warnings continued right up until we were just hours away from this weather Armageddon. In the end, only New England really got the brunt of the storm. We ended up with a few inches. So how could the weather forecasters have been this wrong?

The simple answer is of course that weather forecasting is complicated. There are so many factors that impact the weather…in this case an “inverted trough” caused the storm to develop differently than expected. So even with the massive historical data available and the variety of data points at their disposal the weather forecasters can be surprised.  

At TRC we do an awful lot of conjoint research…a sort of product forecast if you will. It got me thinking about some keys to avoiding making the same kinds of mistakes as the weather forecasters made on this storm:

  1. Understand the limitations of your data. A conjoint or discrete choice exercise can obviously only inform on things included in the model. It should be obvious that you can’t model features or levels you didn’t test (such as a price that falls outside the range tested). Beyond that, however, you might be tempted to infer things that are not true. For example, if you were using the conjoint to test a CPG package and one feature was “health benefits” with levels such as “Low in fat”, “Low in carbs” and so on, you might be tempted to assume that the two levels with the highest utilities should both be included on the package, since logically both benefits were positive. The trouble is that you don’t know if some respondents prefer high fat and low carbs and others the complete opposite. You can only determine the impact of combinations of a single level of each feature, so you must make sure that anything you want to combine is in separate features. This might lead to a lot of “present/not present” features, which might overcomplicate the respondent’s choices. In the end you may have to compromise, but it is best to make those compromises in a thoughtful and informed way.
  2. Understand that the data were collected in an artificial framework. The respondents are fully versed on the features and product choices…in the market that may or may not be the case. The store I go to may not offer one or more of the products modeled, or I may not be aware of the unique benefits one product offers because advertising and promotion failed to get the message to me. Conjoint can tell you what will succeed and why, but the hard work of actually delivering on those recommendations still has to be done. Failing to recognize that is no better than the forecasters failing to account for the possibility of an inverted trough.
  3. Understand that you don’t have all the information. Consumer decisions are complex. In a conjoint analysis you might test 7 or 8 product features, but in reality there are dozens more that consumers will take into account in their decision making. As noted in number 1, the model can’t account for what is not tested. I may choose a car based on it having adaptive cruise control, but if you didn’t test that feature my choices will only reflect other factors in my decision. Often we test a hold-out card (a choice respondents made that is not used in calculating the utilities, but rather to see how well our predictions do), and in a good result we find we are right about 60% of the time (this is good because if a respondent has four choices, random chance would dictate being right just 25% of the time); see the sketch after this list. Weather forecasters are now pointing out that they probably should have explained their level of certainty about the storm (specifically, that they knew there was a decent chance they would be wrong).
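Here is a minimal sketch of that hold-out check, using invented predictions and choices for ten respondents:

```python
# Invented example: compare model-predicted alternatives against hold-out choices.
predicted = [2, 0, 1, 3, 2, 2, 0, 1, 1, 3]   # alternative (0-3) the model predicts per respondent
actual    = [2, 0, 3, 3, 2, 1, 0, 1, 2, 3]   # alternative actually chosen on the hold-out task

hits = sum(p == a for p, a in zip(predicted, actual))
hit_rate = hits / len(actual)
chance = 1 / 4                               # four alternatives per choice task

print(f"Hold-out hit rate: {hit_rate:.0%} vs. {chance:.0%} by chance")
```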

So, with all these limitations is conjoint worth it? Well, I would suggest that even though the weather forecasters can be spectacularly wrong, I doubt many of us ignore them. Who sets out for work when snow is falling without checking to see if things will improve? Who heads off on a winter business trip without checking to see what clothes to pack? The same is true for conjoint. With all the limitations it has, a well executed model (and executing well takes knowledge, experience and skill) will provide clear guidance on marketing decisions.  


Last year Time Magazine featured a cover story about fat…specifically that fat has been unfairly vilified and that in fact carbs and sugars are the real danger. They were not the first with the story nor will they be the last. The question is, how will this impact the food products on the market?

The idea that carbs and sugar were the worst things you could eat would not have surprised a dieter in, say, 1970. It was in the 1980s that conventional wisdom moved toward the notion that fat caused weight gain, and with it heart disease, and thus should be avoided. Over time the public came to accept this wisdom (after all, the idea that fat causes fat isn’t hard to accept) and the market responded with a bunch of low-fat products. Unfortunately those products were higher in sugar and carbs, and the net result is that Americans have grown heavier.

If the public buys into this new thinking we should expect the market to respond. To see how well the message has gotten out, we conducted a national survey with two goals in mind:

  • Determine awareness of the idea that sugar and carbs are worse than fat.
  • Determine whether that awareness would change behavior.

About a third of respondents said they were aware of the new dietary thinking. While still a minority, a third is nothing to be sneezed at, especially when you consider that the vast majority of advertising still focuses on the low-fat message and food nutrition labels still highlight fat calories at the top. It took time for the “low fat” message to take hold and clearly it will take time for this to take hold as well.

Already there is evidence of change. Those aware of the message prior to the survey were far more likely to recommend changes to people’s diets (38%) than those who were not aware prior to the survey (11%). Clearly it takes more than being informed in a survey to change 30 years of conventional wisdom, but once the message takes hold, expect changes. In fact, two thirds of those aware of the message before taking the survey have already made changes to their behavior:

...

Truth or Research


I read an interesting story about a survey done to determine if people are honest with pollsters. Of course such a study is flawed by definition (how can we be sure those who say they always tell the truth are not lying?). Still, the results do back up what I’ve long suspected…getting at the truth in a survey is hard.

The study indicates that most people claim to be honest, even about very personal things (like finances). Younger people, however, are less likely to be honest with survey takers than others. As noted above, I suspect that if anything, the results understate the potential problem.

To be clear, I don’t think that people are being dishonest for the sake of being dishonest…I think it flows from a few factors.

First, some questions are too personal to answer, even on a web survey. With all the stories of personal financial data being stolen or compromising pictures being hacked, it shouldn’t surprise us that some people might not want to answer some kinds of questions. We should really think about that as we design questions. For example, while it might be easy to ask for a lot of detail, we might not always need it (income ranges, for example). To the extent we do need it, finding ways to build credibility with the respondent is critical.

Second, some questions might create a conflict between what people want to believe about themselves and the truth. People might want to think of themselves as being “outgoing” and so if you ask them they might say they are. But their behavior might not line up with reality. The simple solution is to ask questions related to behavior without ascribing a term like “outgoing”. Of course, it is always worth asking it directly as well (knowing the self-image AND behavior could make for interesting segmentation variables, for example).

...

My daughter was performing in The Music Man this summer and after seeing the show a number of times, I realized it speaks to the perils of poor planning…in forming a boys’ band and in conducting complex research.

For those of you who have not seen it, the show is about a con artist who gets a town to buy instruments and uniforms for a boys’ band, in exchange for which he promises he’ll teach them all how to play. When they discover he is a fraud they threaten to tar and feather him, but (spoiler alert) his girlfriend gets the boys together to march into town and play. Despite the fact that they are awful, the parents can’t help but be proud and everyone lives happily ever after.

It is to some extent another example of how good we are at rationalizing. The parents wanted the band to be good and so they convinced themselves that they were. The same thing can happen with research…everyone wants to believe the results so they do…even when perhaps they should not.

I’ve spent my career talking about how important it is to know where your data have been. Bias introduced by poor interviewers, poorly written scripts, unrepresentative sample and so on will impact results AND yet these flawed data will still produce cross tabs and analytics. Rarely will they be so far off that the results can be dismissed out of hand.

The problem only gets worse when using advanced methods. A poorly designed conjoint will still produce results. Again, more often than not these results will be such that the great rationalization ability of humans will make them seem reasonable.

...

While there is so much bad news in the world of late, here in Philly we’ve been captivated by the success of the Taney Dragons in the Little League World Series. While the team was sadly eliminated, they continue to dominate the local news. It got me thinking about what it is that makes a story like theirs so compelling and of course, how we could employ research to sort it out.

There are any number of reasons why the story is so engrossing (especially here in Philly). Is it the star player Mo’ne Davis, the most successful girl ever to compete in the Little League World Series? Is it the fact that the Phillies are doing so poorly this year? Or maybe we just like seeing a team drawn from various ethnicities and socio-economic levels working together and achieving success. Of course it might also be that we are tired of bad news and enjoy having something positive to focus on (even in defeat the team fought hard and exhibited tremendous sportsmanship).

The easiest thing to do is to simply ask people why they find the story compelling. This might get at the truth, but it is also possible that people will not be totally honest (the disgruntled Phillies fan, for example, might not want to admit that) or that they don’t really know what it is that has drawn them in. It might also identify the most important factor but miss other critical factors.

We could employ a technique like Max-Diff and ask them to choose which features of the story they find most compelling. This would provide a fuller picture, but is still open to the kinds of biases noted above.

Perhaps the best method would be to use a discrete choice approach. We take all the features of the story and either include them or omit them in a “story description,” then ask people which story they would most likely read. We can then use analytics on the back end to sort out what really drove the decision.
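As a rough sketch of what building those “story descriptions” could look like (the feature names and design below are hypothetical, not an actual TRC design):

```python
# Hypothetical profile builder for the discrete choice idea above.
from itertools import product
import random

features = ["star girl pitcher", "local underdog team", "diverse roster", "break from bad news"]

# Full factorial of include/omit (2^4 = 16 profiles); real studies usually use a fractional design.
profiles = [tuple(f for f, present in zip(features, combo) if present)
            for combo in product([True, False], repeat=len(features))]

random.seed(0)
choice_task = random.sample(profiles, 3)   # show a respondent three candidate story descriptions
for i, profile in enumerate(choice_task, 1):
    print(f"Story {i}: {', '.join(profile) if profile else '(none of the features)'}")
```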

...

I read a blurb in The Economist about UFO sightings. They charted some 90,000 reports and found that UFOs are, as they put it, "considerate". They tend not to interrupt the work day or sleep. Rather, they tend to be seen far more often in the evening (peaking around 10 PM) and more on Friday nights than other nights.
The Economist dubbed the hours of maximum UFO activity "drinking hours" and implied that drinking was in fact the cause of all those sightings.
As researchers, we know that correlation does not mean causation. Their analysis is interesting and possibly correct, but it is superficial. One could argue (and I'm sure certain "experts" on the History Channel would) that it is in fact the UFO activity that causes people to want to drink. But by limiting their analysis to two factors (time of day and number of sightings), The Economist ignores other explanations.
For example, the low number of sightings during sleeping hours would make perfect sense (most of us sleep indoors with our eyes closed). The same might be true for the lower number during work hours (many people don't have ready access to a window and those who do are often focused on their computer screen and not the little green men taking soil samples out the window).
As researchers, we need to consider all the possibilities. Questionnaires should be constructed to include questions that help us understand all the factors that drive decision making. Analysis should, where possible, use multivariate techniques so that we can truly measure the impact of one factor over another. Of course, constructing questions that allow respondents to express their thinking is also key...while a long attribute rating battery might seem like it is being "comprehensive" it is more likely mind numbing for the respondent. We of course prefer to use techniques like Max-Diff, Bracket™ or Discrete Choice to figure out what drives behavior.
Hopefully I've given you something to think about tonight when you are sitting on the porch, having a drink and watching the skies.

