
I always dread the inevitable "What do you do?" question. When you tell someone you are in market research, you can typically expect a blank stare or a polite nod, so you must be prepared to offer further explanation. Oh, to be a doctor, lawyer or auto mechanic – no explanation necessary!

Of course, as researchers, we grapple with this issue daily, but it is not often we get to hear it played out on major news networks. After one of the debates, I heard Wolf Blitzer on CNN arguing (yes, arguing) with one of the campaign strategists about why the online polls being quoted were not "real" scientific polls. Wolf's point was that because the Internet polls being referenced were drawn from a self-selected sample, their results were not representative of the population in question (likely voters). Of course, Wolf was correct, and it made me smile to hear this debated on national TV.

A week or so later I heard an even more in-depth consideration of the same issue. The story was about how the race was breaking down in key swing states. The poll representative went through the results for key states one by one. When she discussed Nevada, she raised a red flag as to interpreting the poll (which had one candidate ahead by two percentage points). She further explained that it is difficult to obtain a representative sample in Nevada due to a number of factors (odd work hours, a transient population, a large Spanish-speaking population). Her point was that they try to mitigate these issues, but any results must be viewed with a caveat.

Aside from my personal delight that my day-to-day market research concerns are newsworthy, what is the takeaway here? For me, it reinforces how important it is to do everything in our power to ensure that for each study our sample is representative. The advent of online data collection, the proliferation of cell phone use and do-it-yourself survey tools may have made the task more difficult, but no less important. When doing sophisticated conjoint, segmentation or max-diff studies, we need to keep in mind that they are only as good as the sample that feeds them.
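To make that concern concrete, below is a minimal sketch (in Python, with invented category names and shares) of one common corrective, post-stratification weighting: groups underrepresented in the achieved sample are weighted up so that weighted tallies match known population targets. This is a generic illustration of the idea, not a description of any particular study's method.

```python
# Minimal sketch: post-stratification weighting to population targets.
# All numbers and category names are illustrative, not from any real study.
from collections import Counter

respondents = ["18-34", "18-34", "35-54", "55+", "35-54", "18-34"]  # age group per respondent
population_targets = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}    # known population shares

counts = Counter(respondents)
n = len(respondents)

# Weight = population share / sample share, so weighted tallies match the population.
weights = {g: population_targets[g] / (counts[g] / n) for g in counts}

for g, w in weights.items():
    print(f"{g}: sample share {counts[g]/n:.2f}, weight {w:.2f}")
```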


Truth or Research

Posted in New Research Methods

I read an interesting story about a survey done to determine if people are honest with pollsters. Of course such a study is flawed by definition (how can we be sure those who say they always tell the truth are not lying?). Still, the results do back up what I’ve long suspected…getting at the truth in a survey is hard.

The study indicates that most people claim to be honest, even about very personal things (like finances). Younger people, however, are less likely to be honest with survey takers than others. As noted above, I suspect that if anything, the results understate the potential problem.

To be clear, I don’t think that people are just being dishonest for the sake of being dishonest…I think it flows from a few factors.

First, some questions are too personal to answer, even on a web survey. With all the stories of personal financial data being stolen or compromising pictures being hacked, it shouldn’t surprise us that some people might not want to answer some kinds of questions. We should really think about that as we design questions. For example, while it might be easy to ask for a lot of detail, we might not always need it (income ranges rather than exact figures, for example). To the extent we do need it, finding ways to build credibility with the respondent is critical.

Second, some questions might create a conflict between what people want to believe about themselves and the truth. People might want to think of themselves as being “outgoing” and so if you ask them, they might say they are. But their behavior might not line up with reality. The simple solution is to ask questions related to behavior without ascribing a term like “outgoing”. Of course, it is always worth asking directly as well (knowing the self-image AND the behavior could make for interesting segmentation variables, for example).

...

My daughter was performing in The Music Man this summer and after seeing the show a number of times, I realized it speaks to the perils of poor planning…in forming a boys' band and in conducting complex research.

For those of you who have not seen it, the show is about a con artist who gets a town to buy instruments and uniforms for a boys' band, in exchange for which he promises he’ll teach them all how to play. When they discover he is a fraud they threaten to tar and feather him, but (spoiler alert) his girlfriend gets the boys together to march into town and play. Despite the fact that they are awful, the parents can’t help but be proud and everyone lives happily ever after.

It is to some extent another example of how good we are at rationalizing. The parents wanted the band to be good and so they convinced themselves that they were. The same thing can happen with research…everyone wants to believe the results so they do…even when perhaps they should not.

I’ve spent my career talking about how important it is to know where your data have been. Bias introduced by poor interviewers, poorly written scripts, unrepresentative sample and so on will impact results AND yet these flawed data will still produce cross tabs and analytics. Rarely will they be so far off that the results can be dismissed out of hand.

The problem only gets worse when using advanced methods. A poorly designed conjoint will still produce results. Again, more often than not these results will be such that the great rationalization ability of humans will make them seem reasonable.

...

Some months ago, Lily Allen mistakenly received an email containing harsh test group feedback regarding her new album. Select audience members believed the singer to be retired and threw in some comments that I won’t quote. If you are curious, the link to her Popjustice interview will let you see them in a rawer form. Allen returned the favor with some criticism of market research itself:

“The thing is, people who take part in market research: are they really representative of the marketplace? Probably not.” –Lily Allen

The singer brings up a valid concern. It is one of the many questions I pondered five months ago when I first took my current researcher-in-training position with TRC. Researchers are responsible for engaging a representative sample and delivering insights. How do we uphold those standards to ensure quality? Now that I have put in some time and have a few projects under my belt, I have assembled a starter list to address those concerns:

Communicate: All Hands on Deck

In order to complete any research project, there needs to be a clear objective. What are we measuring? Are we using one of our streamlined products, such as Message Test Express™, or will there be a conjoint involved? This may seem obvious, but it is also critical. A team of people is behind each project at TRC, including account executives, research managers, project directors, and various data experts. More importantly, the client should also be on the same page and kept in the loop. Was the artist the main client for the research done? My best guess is no; the feedback was not meant to be a tool for reworking the album.

Purpose

Was the research done on Lily Allen’s album even meant to be representative? Qualitative interviews can produce deep insights among a small, non-representative, group of people. This can be done as a starting point or a follow-up to a project, or even stand alone, depending on the project objectives.

...

I read a blurb in The Economist about UFO sightings. They charted some 90,000 reports and found that UFOs are, as they put it, "considerate". They tend not to interrupt the work day or sleep. Rather, they tend to be seen far more often in the evening (peaking around 10PM) and more on Friday nights than other nights.
The Economist dubbed the hours of maximum UFO activity "drinking hours" and implied that drinking was in fact the cause of all those sightings.
As researchers, we know that correlation does not mean causation. Of course, their analysis is interesting and possibly correct, but it is superficial. One could argue (and I'm sure certain "experts" on the History Channel would) that it is in fact the UFO activity that causes people to want to drink, but by limiting their analysis to two factors (time of day/number of sightings), The Economist ignores other explanations.
For example, the low number of sightings during sleeping hours would make perfect sense (most of us sleep indoors with our eyes closed). The same might be true for the lower number during work hours (many people don't have ready access to a window and those who do are often focused on their computer screen and not the little green men taking soil samples out the window).
As researchers, we need to consider all the possibilities. Questionnaires should be constructed to include questions that help us understand all the factors that drive decision making. Analysis should, where possible, use multivariate techniques so that we can truly measure the impact of one factor over another. Of course, constructing questions that allow respondents to express their thinking is also key...while a long attribute rating battery might seem like it is being "comprehensive" it is more likely mind numbing for the respondent. We of course prefer to use techniques like Max-Diff, Bracket™ or Discrete Choice to figure out what drives behavior.
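A toy simulation illustrates the point about omitted factors. Here (synthetic data, assumptions entirely mine) a third variable drives both drinking and sightings; the raw correlation looks strong, yet a multivariate model that controls for that variable shows drinking contributing essentially nothing:

```python
# Toy illustration (synthetic data): a lurking variable -- "chance to see the
# sky" -- can make drinking and UFO sightings correlate even when drinking
# has no direct effect at all.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
sky_view = rng.normal(size=n)                    # awake, outdoors, eyes on the sky
drinking = 0.8 * sky_view + rng.normal(size=n)   # drinking rises in the same hours
sightings = 1.0 * sky_view + rng.normal(size=n)  # sightings driven ONLY by sky_view

print("raw corr(drinking, sightings):", round(np.corrcoef(drinking, sightings)[0, 1], 2))

# Multiple regression of sightings on both factors: drinking's coefficient ~ 0
X = np.column_stack([np.ones(n), drinking, sky_view])
coef, *_ = np.linalg.lstsq(X, sightings, rcond=None)
print("coef on drinking once sky_view is controlled:", round(coef[1], 2))
```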
Hopefully I've given you something to think about tonight when you are sitting on the porch, having a drink and watching the skies.


Rita’s Italian Ice is a Pennsylvania-based company that sells its icy treats through franchise locations on the East Coast and several states in the Midwest and West.

Every year on the first day of spring, Rita’s gives away full-size Italian ices to its customers. For free. No coupon or other purchase required. It’s their way of thanking their customers and launching the season (most Rita’s are only open during the spring and summer months).

Wawa, another Pennsylvania company, celebrated 50 years in business with a free coffee day in April.  

Companies are giving their products away for free! What a fantastic development for consumers! I patronize both of these businesses, and yet, on their respective free give-away days, I didn’t participate. I like water ice (Philadelphia’s term for Italian ice) and I really like coffee. So what’s the problem?

In the case of Rita’s, the franchise location near me has about 5 parking spots, which on a normal day is too few. I was concerned about the crowds. On the Wawa give-away day, I forgot about it as the day wore on. That made me wonder what other people do when they learn that retailers are giving away their products. So, having access to a web-based research panel (a huge perk of my job), I asked 485 people about it. And here are the 4 things I learned:

...

In my previous post I applauded Matthew Futterman’s suggestion that two key changes to baseball’s rules would produce a shorter, faster-paced game, one that will attract younger viewers. While I may not be that young, I’m certainly on board with speeding up the game. I believe that faster-paced play will lead to greater engagement, and greater engagement will lead to greater enjoyment.

In some sense this is similar to our position on marketing research methods. We want to engage our respondents because the more focused on the task they become, the more considered their responses will be. One of our newer tools, Bracket™, allows respondents to prioritize a long list of items in a tournament-style approach. Bracket™ has respondents make choices among items, and as the tournament progresses the choices become more relevant (and hopefully more enjoyable).
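For intuition only, here is a bare-bones sketch of the general tournament idea. Bracket™ itself is proprietary, so treat this as a guess at the flavor of the approach rather than its actual mechanics; the item list and the "respondent" choice rule are made up:

```python
# A toy single-elimination "tournament" for prioritizing a list -- a sketch of
# the general idea only, not TRC's actual (and surely more sophisticated) tool.
import random

def run_tournament(items, choose):
    """Pair items, keep each round's winners, repeat until one champion remains.
    `choose(a, b)` stands in for the respondent picking between two items."""
    random.shuffle(items)
    while len(items) > 1:
        next_round = []
        for i in range(0, len(items) - 1, 2):
            next_round.append(choose(items[i], items[i + 1]))
        if len(items) % 2:                # odd item out gets a bye
            next_round.append(items[-1])
        items = next_round
    return items[0]

# Demo: a "respondent" who always prefers the shorter label.
winner = run_tournament(["potholes", "ice", "school closings", "cold"],
                        choose=lambda a, b: min(a, b, key=len))
print("top priority:", winner)
```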

Meanwhile, back to baseball. The rule changes Futterman suggests are very simple ones:

Once batters step into the box, they shouldn't be allowed to step out. Otherwise it's a strike.

If no one is on base, pitchers get seven seconds to throw the next pitch. Otherwise it's a ball.

...

Recently I had lunch with my colleague Michel Pham at Columbia Business School. Michel is a leading authority on the role of affect (emotions, feelings and moods) in decision making. He was telling me about a very interesting phenomenon called the Emotional Oracle Effect – where he and his colleagues had examined whether emotions can help make better predictions. I was intrigued. We tend to think of prediction as a very rational process – collect all relevant information, use some logical model for combining the information, then make the prediction. But Michel and his colleagues were drawing on a different stream of research that showed the importance of feelings. So the question was: can people make better predictions if they trust their feelings more?

To answer this question they ran a series of experiments. As we researchers know, experiments are the best way to establish a causal linkage between two phenomena. To ensure that their findings were solid, they ran eight separate studies in a wide variety of domains. This included predicting a Presidential nomination, movie box-office success, winner of American Idol, the stock market, college football and even the weather. While in most cases they employed a standard approach to manipulate people’s feelings of trust in themselves, in a couple of cases they looked at differences between people who trusted their feelings more (and less).

Across these various scenarios the results were unambiguous. When people trusted their feelings more, they made more accurate predictions. The box office showing of three movies (48% vs. 24%), the American Idol winner (41% vs. 24%), the NCAA BCS Championship (57% vs. 47%), the Democratic nomination (72% vs. 64%) and the weather (47% vs. 28%) were some of the cases where people who trusted their feelings predicted better than those who did not. This, of course, raises the question: why? What is it about feelings and emotion that allows a person to predict better?

The most plausible explanation they propose (tested in a couple of studies) is what they call the privileged-window hypothesis. This grows out of the theoretical argument that “rather than being subjective and incomplete sources of information, feelings instead summarize large amounts of information that we acquire, consciously and unconsciously about the world around us.” In other words, we absorb a huge quantity of information but don’t really know what we know. Thinking rationally about what we know and summarizing it seems less accurate than using our feelings to express that tacit knowledge. So, when someone says that they did something because “it just felt right”, it may not be so much a subjective decision as an encapsulation of acquired knowledge. The affective/emotional system may be better at channeling the information and making the right decision than the cognitive/thinking system.

So, how does this relate to market research? When trying to understand consumer behavior through surveys, we usually try to get respondents to use their cognitive/thinking system. We explicitly ask them to think about questions, consider options and so on, before providing an apparently logical answer. This research would indicate that there is a different way to go. If we can find a way to get consumers to tap into their affective/emotional system we might better understand how they arrived at decisions.

...

As most anyone living on the East Coast can attest, the winter of 2013-2014 was, to put it nicely, crappy. Storms, outages, freezing temperatures…. We had a winter the likes of which we haven’t experienced in a while. And it wasn’t limited to the East Coast – much of the US had harsher conditions than normal.

Here in the office we did a lot of complaining. I mean a lot. Every day somebody would remark about how cold it was, how their kids were missing too much school, how potholes were killing their car’s suspension… if there was a problem we could whine about, we did.

Now that it’s spring and we’re celebrating the return of normalcy to our lives, we wonder… just what was it about this past winter that was the absolute worst part of it? Sure, taken as a whole it was pretty awful, but what was the one thing that was the most heinous?

Fortunately for us, we have a cool tool that we could use to answer this question. We enlisted the aid of our consumer panel and our agile and rigorous product Message Test Express™ to find the answer. MTE™ uses our proprietary Bracket™ tool, which takes a tournament approach to prioritizing lists. Our goal: to find out which item associated with winter was the most egregious.

Our 200 participants had to live in an area that experiences winter weather conditions, believe that this winter was worse or the same as previous winters, and have hated, disliked or tolerated it (no ski bums allowed).

...

Market researchers are constantly being asked to do “more with less”. Doing so is both practical (budgets and timelines are tight) and smart (the more we ask respondents to do, the less engaged they will be). At TRC we use a variety of ways to accomplish this, from basic (eliminate redundancies, limit grids and the use of scales) to advanced (use techniques like Conjoint, Max-Diff and our own Bracket™ to unlock how people make decisions). We are also big believers in using incentives to drive engagement and produce more reliable results. That is why a recent article in the Journal of Market Research caught my eye.

The article was about promotional lotteries. The rules tend to be simple: “send in the proof of purchase and we’ll put your name into a drawing for a brand new car!” The odds of winning are also often very remote, which might make some not bother. In theory, you could increase the chances of participation by offering a bunch of consolation prizes (free or discounted product, for example). In reality, the opposite is true.

One theory would be that the consolation prizes may not interest the person and thus they are less interested in the contest as a whole. While this might well be true, the authors (Dengfeng Yan and A.V. Muthukrishnan) found that there was more at work. Consolation prizes offer entrants a means to understand the odds of winning that doesn’t exist without them. Seeing, for example, that you have a one in ten million chance of winning may not really register because you are so focused on the car. But if you are told those odds alongside the much better odds of winning the consolation prize, you realize right away that, at best, you are likely to win only the consolation prize. Since this prize isn’t likely to be as exciting (for example, an M&M contest might offer a free bag of candy for every 1,000 participants), you have less interest in participating.
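The arithmetic behind that intuition is easy to verify. Taking the illustrative numbers above (a one-in-ten-million grand prize, a free bag of candy for every 1,000 participants) and assuming the odds are independent, a winner is almost certainly winning only the candy:

```python
# Back-of-envelope version of the paper's intuition, using the post's example
# numbers (one-in-ten-million grand prize, one free bag per 1,000 entrants).
p_car = 1 / 10_000_000
p_candy = 1 / 1_000

# Given that you win *something*, how likely is it just the consolation prize?
p_any = p_car + p_candy
print(f"P(win anything)        = {p_any:.6f}")
print(f"P(candy | win)         = {p_candy / p_any:.4f}")   # ~0.9999
print(f"Candy wins per car win = {p_candy / p_car:,.0f}")  # 10,000 to 1
```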

Since we rely so heavily on incentives to garner participation, it strikes me that these findings are worthy of consideration. A bigger “winner take all” prize drawing might draw in more respondents than paying each respondent a small amount. I can tell you from our own experimentation that this is the case. In some cases we employ a double lottery using our Smart Incentives™ gaming tool (including in our new ideation product Idea Mill™). In this case, the respondent can win one prize simply by participating and another based on the quality of their answers. Adding the second incentive brings in an additional component of gaming (the first being “chance”) by adding a competitive element.

Regardless of this paper, we as an industry should be thinking through how we compensate respondents to maximize engagement.

...

My favorite feature of Quirk's Marketing Research e-newsletter is Research War Stories. In one issue this spring, Arnie Fishman reported that he had an unexpectedly high result when he asked research participants whether they eat dog food "all the time." He framed the question by asking how often they ate each of a variety of "exotic foods," including rattlesnake meat and frog kidneys, among others.

This got us thinking that maybe you'd get a different result if you asked just about dog food rather than about dog food amongst other crazy types of foods. So, being the researchers that we are, we designed a monadic experiment to see what would happen.

Using the same exotic-foods framework as Arnie, we asked one group of our online research panelists how frequently they eat dog food. On the next screen we asked the same question about rattlesnake meat. They always saw dog food first, so they had no other stimulus when they answered the dog food question.

We asked another group of panelists about dog food, rattlesnake meat, frog kidneys, gopher brains, and chocolate covered ants all on the same screen. We hypothesized that this group would be more open to admitting to eating dog food when it was grouped with these other items rather than being asked about dog food alone.

Well, we were wrong about that – none of the folks asked about dog food alone admitted to eating dog food all the time, and 1% of those asked about dog food amongst the other exotic items did so (not a statistically significant difference). The percent of folks in both groups saying that they "never" ate dog food was the same as well (96%). So in our experiment, the "framing" of the question had no bearing on the response.
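For anyone who wants to check that significance claim, a Fisher's exact test on the two proportions is one reasonable approach. The group sizes were not reported here, so the 200 per group below is purely an assumption:

```python
# Sanity-check of "not a statistically significant difference" with Fisher's
# exact test. Group sizes were not reported, so n=200 per group is my assumption.
from scipy.stats import fisher_exact

n = 200
alone_yes, grouped_yes = 0, round(0.01 * n)   # 0% vs 1% saying "all the time"

table = [[alone_yes, n - alone_yes],
         [grouped_yes, n - grouped_yes]]
_, p_value = fisher_exact(table)
print(f"p = {p_value:.2f}")   # well above 0.05 -> not significant
```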

...
