My daughter was performing in The Music Man this summer, and after seeing the show a number of times, I realized it speaks to the perils of poor planning…in forming a boys' band and in conducting complex research.

For those of you who have not seen it, the show is about a con artist who gets a town to buy instruments and uniforms for a boys' band, in exchange for which he promises he'll teach them all how to play. When they discover he is a fraud they threaten to tar and feather him, but (spoiler alert) his girlfriend gets the boys together to march into town and play. Despite the fact that they are awful, the parents can't help but be proud and everyone lives happily ever after.

It is to some extent another example of how good we are at rationalizing. The parents wanted the band to be good, and so they convinced themselves that it was. The same thing can happen with research…everyone wants to believe the results, so they do…even when perhaps they should not.

I've spent my career talking about how important it is to know where your data have been. Bias introduced by poor interviewers, poorly written scripts, unrepresentative samples and so on will impact results, and yet these flawed data will still produce cross tabs and analytics. Rarely will they be so far off that the results can be dismissed out of hand.

The problem only gets worse when using advanced methods. A poorly designed conjoint will still produce results, and more often than not our great human capacity for rationalization will make those results seem reasonable.

...

While there is so much bad news in the world of late, here in Philly we've been captivated by the success of the Taney Dragons in the Little League World Series. Although the team was sadly eliminated, they continue to dominate the local news. It got me thinking about what it is that makes a story like theirs so compelling and, of course, how we could employ research to sort it out.

There are any number of reasons why the story is so engrossing (especially here in Philly). Is it the star player Mo'ne Davis, the most successful girl ever to compete in the Little League World Series? Is it the fact that the Phillies are doing so poorly this year? Or do we just like seeing a team drawn from various ethnicities and socio-economic levels working together and achieving success? Of course, it might also be that we are tired of bad news and enjoy having something positive to focus on (even in defeat the team fought hard and exhibited tremendous sportsmanship).

The easiest thing to do is to simply ask people why they find the story compelling. This might get at the truth, but it is also possible that people will not be totally honest (for example, the disgruntled Phillies fan might not want to admit it) or that they don't really know what has drawn them in. Direct questioning might also identify the most important factor while failing to note other critical factors.

We could employ a technique like Max-Diff and ask respondents to choose which features of the story they find most and least compelling. This would provide a fuller picture, but it is still open to the kinds of biases noted above.
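To make that concrete, here is a minimal Python sketch of how a Max-Diff exercise might be assembled and scored. The story features, set sizes and simple best-minus-worst counting below are illustrative assumptions rather than an actual study design; production Max-Diff work would normally rely on a formal experimental design and something like hierarchical Bayes estimation.

```python
import random

# Hypothetical story features (illustrative only, not from an actual study).
features = [
    "Star pitcher Mo'ne Davis",
    "A hometown team in the spotlight",
    "Players from diverse backgrounds",
    "A positive story amid bad news",
    "Great sportsmanship in defeat",
]

def build_maxdiff_sets(items, set_size=3, n_sets=5, seed=42):
    """Build choice sets, keeping each item's number of appearances roughly balanced."""
    rng = random.Random(seed)
    counts = {item: 0 for item in items}
    sets = []
    for _ in range(n_sets):
        # Prefer items shown least often so far, breaking ties at random.
        ordered = sorted(items, key=lambda i: (counts[i], rng.random()))
        chosen = ordered[:set_size]
        for item in chosen:
            counts[item] += 1
        sets.append(chosen)
    return sets

def count_scores(responses):
    """Simple best-minus-worst counting: +1 when picked 'most', -1 when picked 'least'."""
    tally = {}
    for best, worst in responses:
        tally[best] = tally.get(best, 0) + 1
        tally[worst] = tally.get(worst, 0) - 1
    return sorted(tally.items(), key=lambda kv: -kv[1])

# Example: print the tasks one respondent would see.
for i, s in enumerate(build_maxdiff_sets(features), 1):
    print(f"Task {i}: {s}")
```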

Perhaps the best method would be to use a discrete choice approach. We take all the features of the story and either include them or leave them out of a "story description," then ask people which story they would be most likely to read. We can then use analytics on the back end to sort out what really drove the decision.
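As a rough illustration of that back-end analysis, the Python sketch below simulates paired "which story would you read?" tasks and then recovers each feature's appeal with a logit fit to the difference between the two feature profiles. The feature names, the "true" weights and the simulated respondents are all assumptions for demonstration; a real study would use a designed set of profiles and typically a conditional logit or hierarchical Bayes model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical story features and made-up "true" appeal weights (illustrative only).
features = ["Mo'ne Davis angle", "Hometown team", "Diverse roster", "Feel-good story"]
true_weights = np.array([1.2, 0.6, 0.4, 0.9])

def random_profile():
    """A story description: each feature is either mentioned (1) or omitted (0)."""
    return rng.integers(0, 2, size=len(features))

# Simulate paired tasks: show two descriptions, record which one gets read.
X_diff, y = [], []
for _ in range(2000):
    a, b = random_profile(), random_profile()
    p_choose_a = 1.0 / (1.0 + np.exp(-(true_weights @ (a - b))))
    X_diff.append(a - b)
    y.append(int(rng.random() < p_choose_a))

# A logit on the difference between the two profiles recovers each feature's
# relative pull on the choice.
model = LogisticRegression(fit_intercept=False).fit(np.array(X_diff), y)
for name, w in zip(features, model.coef_[0]):
    print(f"{name:18s} estimated weight: {w:+.2f}")
```

The point of the sketch is simply that varying which features appear across many choices lets the model, rather than the respondent, tell us what drove the decision.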

...

I'm a runner and enjoy participating in races. Last May I ran the Delaware Half Marathon and had my worst race ever. What happened? Poor planning. I failed to put together a training plan to prepare me for my race.

This can sometimes happen in market research. Poor planning can lead to disastrous results that provide little insight or fail to answer the objectives of the research. Planning is especially important when advanced analytics are used, for example conjoint analysis, which is often used in product development or pricing research. There are many questions to ask during the planning phase of a conjoint design. How should we frame up the exercise? How many features should be evaluated? How many levels for each feature? How many product choices should be presented to a respondent at a time? How should each feature and level be described? Should any prohibitions be used? Sometimes we can lose sight of the research objective amid all these details, and a good conjoint plan will keep all parties focused on the end goal. These are all issues I'm contemplating as I design my conjoint exercise (stay tuned for results in my next blog!). I'm taking the time now to properly plan and design my conjoint.
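Some of those "how many" questions can be roughed out with simple arithmetic before any real design work begins. The attribute list, level counts and sample size in the Python sketch below are hypothetical, and the coverage check is just one widely used rule of thumb for main-effects choice designs, not a substitute for testing the design itself.

```python
# Hypothetical attribute list and study parameters (assumptions, not a real design).
attributes = {            # attribute -> number of levels
    "Brand": 4,
    "Price": 5,
    "Warranty": 3,
    "Delivery speed": 3,
}
respondents = 300         # planned sample size (assumption)
tasks_per_respondent = 10
alternatives_per_task = 3

# Main-effect parameters to estimate with dummy/effects coding: sum of (levels - 1).
n_params = sum(levels - 1 for levels in attributes.values())

# One widely used rule of thumb for main-effects choice designs:
# respondents * tasks * alternatives / (largest number of levels) >= ~500.
max_levels = max(attributes.values())
coverage = respondents * tasks_per_respondent * alternatives_per_task / max_levels

print(f"Main-effect parameters to estimate: {n_params}")
print(f"Rule-of-thumb coverage: {coverage:.0f} (aim for roughly 500 or more)")
```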

A well-thought-out plan ensures quality results, just as a well-thought-out running plan ensures a good race! After my half-marathon disaster I planned for my next race the same way I would for a conjoint. I considered a number of questions while designing my training plan. How far in advance should I train? How many times a week should I run? Should I enlist a running buddy for the longer runs? My goal was to run a good race. I'm happy to report the planning paid off, as I completed the Marine Corps Marathon (my first marathon!) in the time I was hoping for.


Some months ago, Lily Allen mistakenly received an email containing harsh test-group feedback regarding her new album. Select audience members believed the singer to be retired and threw in some comments that I won't quote. If you are curious, the link to her Popjustice interview will let you see them in a more raw form. Allen returned the favor with some criticism of market research itself:

“The thing is, people who take part in market research: are they really representative of the marketplace? Probably not.” –Lily Allen

The singer brings up a valid concern. It is one of the many questions I pondered five months ago when I first took my current researcher-in-training position with TRC. Researchers are responsible for engaging a representative sample and delivering insights. How do we uphold those standards to ensure quality? Now that I have put in some time and have a few projects under my belt, I have assembled a starter list to address those concerns:

Communicate: All Hands on Deck

In order to complete any research project, there needs to be a clear objective. What are we measuring? Are we using one of our streamlined products, such as Message Test Express™, or will there be a conjoint involved? This may seem obvious, but it is also critical. A team of people is behind each project at TRC, including account executives, research managers, project directors, and various data experts. More importantly, the client should also be on the same page and kept in the loop. Was the artist the main client for the research done? My best guess is no; the feedback given was not meant to be a tool for reworking the album.

Purpose

Was the research done on Lily Allen's album even meant to be representative? Qualitative interviews can produce deep insights among a small, non-representative group of people. This can be done as a starting point or a follow-up to a project, or even stand alone, depending on the project objectives.

...

I read a blurb in The Economist about UFO sightings. They charted some 90,000 reports and found that UFOs are, as they put it, "considerate". They tend not to interrupt the work day or sleep. Rather, they tend to be seen far more often in the evening (peaking around 10 PM) and more on Friday nights than other nights.
The Economist dubbed the hours of maximum UFO activity "drinking hours" and implied that drinking was in fact the cause of all those sightings.
As researchers, we know that correlation does not mean causation. Of course their analysis is interesting and possibly correct, but it is superficial. One could argue (and I'm sure certain "experts" on the History Channel would) that it is in fact the UFO activity that causes people to want to drink, but by limiting the analysis to two factors (time of day and number of sightings), The Economist ignores other explanations.
For example, the low number of sightings during sleeping hours would make perfect sense (most of us sleep indoors with our eyes closed). The same might be true for the lower number during work hours (many people don't have ready access to a window and those who do are often focused on their computer screen and not the little green men taking soil samples out the window).
As researchers, we need to consider all the possibilities. Questionnaires should be constructed to include questions that help us understand all the factors that drive decision making. Analysis should, where possible, use multivariate techniques so that we can truly measure the impact of one factor over another. Of course, constructing questions that allow respondents to express their thinking is also key...while a long attribute rating battery might seem "comprehensive", it is more likely mind-numbing for the respondent. We of course prefer to use techniques like Max-Diff, Bracket™ or Discrete Choice to figure out what drives behavior.
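To illustrate the multivariate point, here is a toy Python simulation with entirely made-up data (no real UFO statistics involved): a simple correlation makes drinking look predictive of sightings, but once a confounder, time spent awake and watching the sky, enters the regression, the drinking effect all but disappears.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Confounder: how much time someone spends awake, outdoors, and looking at the sky.
sky_watching = rng.normal(size=n)

# In this simulation, drinking and reported sightings both rise with sky-watching,
# but drinking has no direct effect on sightings at all.
drinking = 0.8 * sky_watching + rng.normal(size=n)
sightings = 1.0 * sky_watching + rng.normal(size=n)

# Bivariate view: drinking looks predictive of sightings.
r = np.corrcoef(drinking, sightings)[0, 1]
print(f"Correlation(drinking, sightings) = {r:.2f}")

# Multivariate view: regress sightings on both drinking and sky-watching.
X = np.column_stack([np.ones(n), drinking, sky_watching])
coefs, *_ = np.linalg.lstsq(X, sightings, rcond=None)
print(f"Drinking coefficient, controlling for sky-watching: {coefs[1]:.2f}")
```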
Hopefully I've given you something to think about tonight when you are sitting on the porch, having a drink and watching the skies.
