


Last week we held an event in New York at which Mark Broadie from Columbia University talked about his book "Every Shot Counts". The talk and the book detail his analysis of a very large and complex data set: specifically, the "ShotLink" data collected for over a decade by the PGA Tour. It details every shot taken by every pro at every PGA Tour tournament. He was able to use it to challenge some long-held assumptions about golf, such as "Do you drive for show and putt for dough?"

On the surface the data set was not easy to work with. Sure, it had numbers like how long the hole was, how far the shot went, how far it was from the hole and so on. It also had data like whether the ball ended up in the fairway, on the green, in the rough, in a trap, or the dreaded out of bounds. Every pro has a different set of skills, and there was a surprising range of abilities even in this set, but he also added the same data on tens of thousands of amateur golfers of various skill levels. So how can anyone make sense of such a wide range of data, and do it in a way that the amateur who scores 100 can be compared to the pro who frequently scores in the 60s?

You might be tempted to say that he used a regression analysis, but he did not. You might assume that because the "Moneyball" approach used Hierarchical Bayesian estimation, he did as well. In fact, while HB has become more commonplace (it drives discrete choice conjoint, MaxDiff and our own Bracket™), he didn't use it here.

Instead, he used simple arithmetic. No HB, no calculus, no Greek letters; just simple addition, subtraction, multiplication and division. At the base level, he simply averaged similar scores. Specifically, he determined how many strokes it took, on average, for players to get from where they were to the hole. These averages were further broken down to account for where the ball started (not just distance, but rough, sand, fairway, etc.) and how good the golfer was.

These simple averages allow him to answer any number of “what if” questions. For example, he can see on average how many strokes are saved by going an extra 50 yards off the tee (which turns out to be more than for being better at putting). He can also show that in fact neither driving nor putting is as important as the approach shot (the last full swing before putting the ball on the green). The ability to put the ball close to the hole on this shot is the biggest factor in scoring low.
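To make the arithmetic concrete, here is a minimal sketch of the baseline-average idea (hypothetical shot records and field names, not Broadie's actual data or code): bucket shots by starting lie and distance, average the strokes it took to hole out from each bucket, and then score any single shot as strokes gained relative to that baseline.

```python
from collections import defaultdict

# Hypothetical shot records: (lie, distance_in_yards, strokes_to_hole_out)
shots = [
    ("tee", 400, 4.1), ("tee", 400, 3.9), ("tee", 400, 4.0),
    ("fairway", 150, 3.0), ("fairway", 150, 2.8),
    ("green", 10, 1.9), ("green", 10, 2.1),
]

# Build the baseline: average strokes to hole out from each (lie, distance) bucket.
totals = defaultdict(lambda: [0.0, 0])
for lie, dist, strokes in shots:
    totals[(lie, dist)][0] += strokes
    totals[(lie, dist)][1] += 1
baseline = {key: total / count for key, (total, count) in totals.items()}

def strokes_gained(start, end):
    """Strokes gained by one shot: the baseline at the starting spot,
    minus the baseline where the ball ended up, minus the one stroke taken.
    end=None means the shot was holed out."""
    end_avg = 0.0 if end is None else baseline[end]
    return baseline[start] - end_avg - 1.0

# A drive on a 400-yard hole that leaves 150 yards from the fairway:
print(strokes_gained(("tee", 400), ("fairway", 150)))
```

Averaging over finer buckets (and over golfers of a given skill level) is what lets the weekend 100-shooter and the touring pro be placed on the same scale.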

...
A recent post on my Facebook timeline boasted that Lansdale Farmers Market was voted the Best of Montgomery County, PA two years in a row. That's the market I patronize, and I'd like to feel a bit of pride for it. But I'm a researcher, and I know better.

Lansdale Farmers Market is a nice little market on the Philadelphia outskirts, but is it truly the best in the entire county? Possibly, but you can't tell from this poll. Lansdale Farmers Market solicited my participation by directing me to a site that would register my vote for them (Heaven only knows how much personal information "The Happening List" gains access to). I'm sure that the other farmers markets solicited their voters in the same or similar ways. This amounts to little more than a popularity contest. Therefore, the only "best" my market can claim is that it is the best in the county at getting its patrons to vote for it.

But if you have more patrons voting for you, shouldn’t that mean that you truly are the best? Not necessarily. It’s possible that the “best” market serves a smaller geographic area, doesn’t maintain a customer list, or isn’t as good at using social media, to name a few.

A legitimate research poll would seek to overcome these biases. So what are the markers of a legitimate research poll? Here are a few:
  1. You’re solicited by a neutral third party. Sometimes the survey sponsors identify themselves up front and that’s okay. But usually if a competitive assessment is being conducted, the sponsor remains anonymous so as not to bias the results.
  2. You’re given competitive choices, not just a plea to “vote for me”.
  3. You may not be able to tell this, but there should be some attempt to uphold scientific sampling rigor. For example, if the only people included in the farmers market survey were residents of Lansdale, you could see how the sampling method would introduce an insurmountable bias.

The market opens for the summer season in a few weeks, and you can bet that I’ll be there. But I won’t stop to admire the inevitable banner touting their victory.


We marketing research types like to think of the purchase funnel in terms of brand purchase. A consumer wants to purchase a new tablet. What brands is he aware of? Which ones would he consider? Which would he ultimately purchase? And would he repeat that purchase the next time?

Some products have a more complex purchase funnel, one in which the consumer must first determine whether the purchase itself – regardless of brand – is a “fit” for him. One such case is solar home energy.

Solar is a really great idea, at least according to our intrepid research panelists. Two-thirds of them say they would be interested in installing solar panels on their home to help offset energy costs. There are a lot of different ways that consumers can make solar work for them – and conjoint analysis would be a terrific way to design optimal products for the marketplace.

But getting from “interest” to “consideration” to “purchase” in the solar arena isn’t as easy as just deciding to purchase. Anyone in the solar business will tell you there are significant hurdles, not the least of which is that a consumer needs to be free and clear to make the purchase – renters, condo owners, people with homeowners associations or strict local ordinances may be prohibited from installing them.

Even if you’re a homeowner with no limitations on how you can manage your property, there are physical factors that determine whether your home is an “ideal” candidate for solar. They vary by region and different installers have different requirements, but here’s a short list:

...

An issue that comes up quite a bit when doing research is the proper way to frame questions. In my last blog I reported on our Super Bowl ad test, in which we surveyed viewers to rank 36 ads based on their "entertainment value". We did a second survey that framed the question differently, to see if we could determine which ads were most effective at driving consideration of the product... in other words, the ads that did what ads are supposed to do!

As with life, framing, or context, is critical in research. First, the nature of the questions is important. Where possible, choice questions will work better than, say, rating scales. The reason is that consumers are used to making choices; ratings are more abstract. Techniques like MaxDiff, conjoint (typically discrete choice these days) or our own proprietary new product research technique Bracket™ get at what is important in a way that ratings can't.
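As a small illustration of why choice data is so tractable, here is a minimal sketch (invented items and responses, not our actual methodology) of the simplest way to score a MaxDiff exercise: count how often each item is picked as "best" minus how often it is picked as "worst".

```python
from collections import Counter

# Hypothetical MaxDiff tasks: in each, a respondent saw a subset of items
# and picked one "best" and one "worst".
responses = [
    {"shown": ["price", "battery", "camera", "screen"], "best": "battery", "worst": "screen"},
    {"shown": ["price", "battery", "weight", "screen"], "best": "price", "worst": "weight"},
    {"shown": ["camera", "battery", "weight", "price"], "best": "battery", "worst": "weight"},
]

best = Counter(r["best"] for r in responses)
worst = Counter(r["worst"] for r in responses)
items = {item for r in responses for item in r["shown"]}

# Best-minus-worst count per item; higher means more important.
scores = {item: best[item] - worst[item] for item in items}
for item, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(item, score)
```

In practice MaxDiff utilities are usually estimated with HB, as noted earlier; raw best-minus-worst counts are just a quick first look at the same choice data.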

Second, the environment you create when asking the question must seek to put consumers in the same mindset they would be in when they make decisions in real life. For example, if you are testing slogans for the outside of a direct mail envelope, you should show the slogans on an envelope rather than just in text form.

Finally, you need to frame the question in a way that matches the real-world result you want. In the case of a direct mail piece, you should frame the question along the lines of "Which of these would you most likely open?" rather than "Which of these slogans is most important?" In the case of a Super Bowl ad (or any ad, for that matter), asking about entertainment value is less important than asking about things like consideration, or even likelihood to tell others about it.

So, we polled a second group of people and asked them “which one made you most interested in considering the product as advertised?” The results were quite interesting.

...

Well, it is the time of year when America's greatest sporting event takes place. I speak, of course, about the race to determine which Super Bowl ad is the best. Over the years there have been many ways to accomplish this but, as so often happens in research today, the methods are flawed.

First there is the "party consensus method". Here, people gathered to watch the big game call out their approval or disapproval of various ads. Beyond the fact that the "sample" is clearly not representative, this method has other flaws. At the party I attended we had a Nationwide agent, so criticism of the "dead kid" ad was muted. This is just one example of how people in a group can influence each other (anyone who has watched a focus group has seen this in action). The most popular ad was the Fiat ad with the Viagra pill, not necessarily because it was the favorite, but because parties are noisy and this ad was largely a silent picture.

Second, there is the "opinion leaders" method. The folks who have a platform to spout their opinion (be it TV, YouTube, Twitter or Facebook) tell us what to think. While this will certainly influence opinions, I don't think tallying up their opinions really gets at the truth. They might be right some of the time, but listening to them is like going with your gut: likely you are missing something.

Third, there is the "focus group" approach. In this method a group of typical people is shuffled off to a room to watch the game and turn dials to rate the commercials they see. Like any focus group, these "typical" people are of course atypical: in exchange for some money, they were willing to spend four hours watching the game with perfect strangers. Further, are focus groups really the way to measure which ad is best? Focus groups can be outstanding at drawing out ideas, providing rich understandings of products and so on, but they are not (nor are they intended to be) quantitative measures.

The use of imperfect means to measure quantitative problems is not unique to Super Bowl ads. I’ve been told by many clients that budget and timing concerns require that they answer some quantitative questions with the opinions of their internal team, or their own gut or qualitative research. That is why we developed our agile and rigorous tools, including Message Test Express™ (MTE™).

...
