A recent post on my Facebook timeline boasted that Lansdale Farmers Market was voted the Best of Montgomery County, PA two years in a row. That’s the market I patronize, and I’d like to feel a bit of pride in it. But I’m a researcher, and I know better.

Lansdale Farmers Market is a nice little market on the Philadelphia outskirts, but is it truly the best in the entire county? Possibly, but you can’t tell from this poll. Lansdale Farmers Market solicited my participation by directing me to a site that would register my vote for it (Heaven only knows how much personal information “The Happening List” gains access to). I’m sure the other farmers markets solicited their voters in similar ways. This amounts to little more than a popularity contest. Therefore, the only “best” my market can claim is that it is the best in the county at getting its patrons to vote for it.

But if you have more patrons voting for you, shouldn’t that mean that you truly are the best? Not necessarily. It’s possible that the “best” market serves a smaller geographic area, doesn’t maintain a customer list, or isn’t as good at using social media, to name a few.

A legitimate research poll would seek to overcome these biases. So what are the markers of a legitimate research poll? Here are a few:
  1. You’re solicited by a neutral third party. Sometimes the survey sponsors identify themselves up front and that’s okay. But usually if a competitive assessment is being conducted, the sponsor remains anonymous so as not to bias the results.
  2. You’re given competitive choices, not just a plea to “vote for me”.  
  3. You may not be able to tell this, but there should be some attempt to uphold scientific sampling rigor. For example, if the only people included in the farmers market survey were residents of Lansdale, you could see how the sampling method would introduce an insurmountable bias.
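The sampling point is worth making concrete. Here is a minimal sketch of how a convenience sample can crown the wrong winner; the market names, preference shares, and sample sizes are all invented for illustration (market B leads county-wide, but a poll drawn only from Lansdale residents favors their local market A):

```python
# Hypothetical numbers: 60% of the county prefers market B overall,
# but Lansdale residents overwhelmingly prefer their local market A.
county_votes = ["A"] * 40 + ["B"] * 60    # true county-wide preference
lansdale_votes = ["A"] * 27 + ["B"] * 3   # convenience sample of locals only

def winner(votes):
    """Return the market with the most votes."""
    return max(set(votes), key=votes.count)

print(winner(county_votes))    # the county-wide poll picks "B"
print(winner(lansdale_votes))  # the biased sample picks "A"
```

Both polls tally votes the same way; only the sampling frame differs, and that alone flips the result.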

The market opens for the summer season in a few weeks, and you can bet that I’ll be there. But I won’t stop to admire the inevitable banner touting their victory.


We marketing research types like to think of the purchase funnel in terms of brand purchase. A consumer wants to purchase a new tablet. What brands is he aware of? Which ones would he consider? Which would he ultimately purchase? And would he repeat that purchase the next time?

Some products have a more complex purchase funnel, one in which the consumer must first determine whether the purchase itself – regardless of brand – is a “fit” for him. One such case is solar home energy.

Solar is a really great idea, at least according to our intrepid research panelists. Two-thirds of them say they would be interested in installing solar panels on their home to help offset energy costs. There are a lot of different ways that consumers can make solar work for them – and conjoint analysis would be a terrific way to design optimal products for the marketplace.

But getting from “interest” to “consideration” to “purchase” in the solar arena isn’t as easy as just deciding to buy. Anyone in the solar business will tell you there are significant hurdles, not the least of which is that a consumer needs to be free and clear to make the purchase: renters, condo owners, and people subject to homeowners associations or strict local ordinances may be prohibited from installing panels.

Even if you’re a homeowner with no limitations on how you can manage your property, there are physical factors that determine whether your home is an “ideal” candidate for solar. They vary by region and different installers have different requirements, but here’s a short list:

...

An issue that comes up quite a bit when doing research is the proper way to frame questions. In my last blog I reported on our Super Bowl ad test in which we surveyed viewers to rank 36 ads based on their “entertainment value”. We did a second survey that framed the question differently to see if we could determine which ads were most effective at driving consideration of the product…in other words, the ads that did what ads are supposed to do!

As with life, framing, or context, is critical in research. First, the nature of the questions matters. Where possible, choice questions work better than, say, rating scales, because consumers are used to making choices; ratings are more abstract. Techniques like MaxDiff, conjoint (typically discrete choice these days), or our own proprietary new product research technique Bracket™ get at what is important in a way that ratings can’t.
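To illustrate the choice-based idea, here is a simplified count-based scoring sketch for best-worst (MaxDiff) data. The items and choice tasks are invented, and real MaxDiff estimation typically uses more sophisticated models (e.g. hierarchical Bayes); simple counts are just the most transparent version of the logic:

```python
from collections import defaultdict

# Each task: (items shown, item picked as best, item picked as worst).
# Items A-E and the choices themselves are hypothetical.
tasks = [
    (["A", "B", "C", "D"], "A", "D"),
    (["A", "B", "C", "E"], "B", "E"),
    (["B", "C", "D", "E"], "B", "D"),
]

shown = defaultdict(int)
best = defaultdict(int)
worst = defaultdict(int)
for items, b, w in tasks:
    for item in items:
        shown[item] += 1
    best[b] += 1
    worst[w] += 1

# Count-based score: (times best - times worst) / times shown.
scores = {item: (best[item] - worst[item]) / shown[item] for item in shown}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)
```

Because every answer is a forced choice between concrete alternatives, the scores separate items in a way that a grid of 1-to-10 ratings (where everything tends to score “pretty important”) often cannot.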

Second, the environment you create when asking the question must seek to put consumers in the same mindset they would be in when they make decisions in real life. For example, if you are testing slogans for the outside of a direct mail envelope, you should show the slogans on an envelope rather than just in text form.

Finally, you need to frame the question in a way that matches the real world result you want. In the case of a direct mail piece, you should frame the question along the lines of “which of these would you most likely open?” rather than “which of these slogans is most important?”. In the case of a Super Bowl ad (or any ad for that matter), asking about entertainment value is less important than asking about things like “consideration” or even “likelihood to tell others about it”.  

So, we polled a second group of people and asked them “which one made you most interested in considering the product as advertised?” The results were quite interesting.

...

Well, it is that time of year when America’s greatest sporting event takes place. I speak, of course, of the race to determine which Super Bowl ad is best. Over the years there have been many ways to crown a winner, but as so often happens in research today, the methods are flawed.

First there is the “party consensus method”: people gathered to watch the big game call out their approval or disapproval of various ads. Beyond the fact that the “sample” is clearly not representative, this method has other flaws. At the party I attended we had a Nationwide agent, so criticism of the “dead kid” ad was muted. This is just one example of how people in a group can influence each other (anyone who has watched a focus group has seen this in action). The most popular ad was the Fiat ad with the Viagra pill, not necessarily because it was the favorite, but because parties are noisy and this ad was largely a silent picture.

Second, there is the “opinion leaders” method. The folks who have a platform to spout their opinion (be it TV, YouTube, Twitter or Facebook) tell us what to think. While certainly this will influence opinions, I don’t think tallying up their opinions really gets at the truth. They might be right some of the time, but listening to them is like going with your gut…likely you are missing something.

Third, there is the “focus group” approach. In this method a group of typical people is shuffled off to a room to watch the game and turn dials to rate the commercials they see. So, like any focus group, these “typical” people are of course atypical: in exchange for some money, they were willing to spend four hours watching the game with perfect strangers. Further, are focus groups really the way to measure which ad is best? Focus groups can be outstanding at drawing out ideas and providing rich understanding of products, but they are not (nor are they intended to be) quantitative measures.

The use of imperfect means to measure quantitative problems is not unique to Super Bowl ads. I’ve been told by many clients that budget and timing concerns require that they answer some quantitative questions with the opinions of their internal team, or their own gut or qualitative research. That is why we developed our agile and rigorous tools, including Message Test Express™ (MTE™).

...

Here in Philly we are recovering from the blizzard that wasn’t. For days we’d been warned of snow falling multiple inches per hour, winds causing massive drifts, and the likelihood of it taking days to clear out. The warnings continued right up until we were just hours away from this weather Armageddon. In the end, only New England really got the brunt of the storm; we ended up with a few inches. So how could the weather forecasters have been this wrong?

The simple answer, of course, is that weather forecasting is complicated. There are so many factors that impact the weather; in this case an “inverted trough” caused the storm to develop differently than expected. So even with massive historical data and a variety of data points at their disposal, the weather forecasters can be surprised.

At TRC we do an awful lot of conjoint research, a sort of product forecast if you will. It got me thinking about some keys to avoiding the kinds of mistakes the weather forecasters made on this storm:

  1. Understand the limitations of your data. A conjoint or discrete choice model can only inform on things included in the model. It should be obvious that you can’t model features or levels you didn’t test (such as a price that falls outside the range tested). Beyond that, however, you might be tempted to infer things that are not true. For example, if you were using conjoint to test a CPG package and one feature was “health benefits” with levels such as “low in fat” and “low in carbs”, you might be tempted to assume that the two levels with the highest utilities should both appear on the package, since logically both benefits are positive. The trouble is that you don’t know whether some respondents prefer high fat and low carbs while others prefer the complete opposite. You can only determine the impact of combinations of a single level from each feature, so anything you want to combine must sit in separate features. This might lead to a lot of “present/not present” features, which can overcomplicate the respondent’s choices. In the end you may have to compromise, but it is best to make those compromises in a thoughtful and informed way.
  2. Understand that the data were collected in an artificial framework. The respondents are fully versed in the features and product choices; in the market that may or may not be the case. The store I go to may not offer one or more of the products modeled, or I may not be aware of the unique benefits one product offers because advertising and promotion failed to get the message to me. Conjoint can tell you what will succeed and why, but the hard work of actually delivering on those recommendations still has to be done. Failing to recognize that is no better than the forecasters failing to anticipate an inverted trough.
  3. Understand that you don’t have all the information. Consumer decisions are complex. In a conjoint analysis you might test 7 or 8 product features, but in reality there are dozens more that consumers will take into account in their decision making. As noted in number 1, the model can’t account for what is not tested. I may choose a car because it has adaptive cruise control, but if you didn’t test that feature, my choices will only reflect other factors in my decision. Often we test a holdout card (a choice respondents made that is not used in calculating the utilities, but rather to see how well our predictions do), and in a good result we find we are right about 60% of the time. (This is good because if a respondent has four choices, random chance would dictate being right just 25% of the time.) The weather forecasters, by contrast, probably should have explained their level of certainty about the storm, specifically that they knew there was a decent chance they would be wrong.
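The holdout check in point 3 is simple to sketch. For each respondent we compare the choice the model predicted on the holdout card with the choice actually made; the predicted and actual choices below are invented for illustration, and the four alternatives give a 25% chance baseline:

```python
# Hypothetical holdout data: each entry is the index (0-3) of the
# alternative chosen on a four-option holdout card, per respondent.
predicted = [0, 2, 1, 3, 0, 1, 2, 2, 3, 1]  # model's predicted choices
actual    = [0, 2, 3, 3, 0, 0, 2, 1, 3, 0]  # choices respondents made

hits = sum(p == a for p, a in zip(predicted, actual))
hit_rate = hits / len(actual)

chance = 1 / 4  # random guessing among four alternatives
print(hit_rate, chance)
```

A hit rate well above the chance baseline (here 0.6 versus 0.25, matching the roughly 60% figure above) is evidence the utilities capture something real about how these respondents choose.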

So, with all these limitations, is conjoint worth it? Well, even though weather forecasters can be spectacularly wrong, I doubt many of us ignore them. Who sets out for work when snow is falling without checking to see if things will improve? Who heads off on a winter business trip without checking what clothes to pack? The same is true for conjoint. For all its limitations, a well-executed model (and executing well takes knowledge, experience, and skill) will provide clear guidance on marketing decisions.

