Purchase Funnel: Measuring Awareness

We at TRC conduct a lot of choice-based research, with the goal of aligning our studies with real-world decision-making. Lately, though, I’ve been involved in a number of projects in which the primary objective is not to determine choice, but rather awareness. Awareness is the first, and arguably the most critical, part of the purchase funnel. After all, you can’t very well buy or use something if you don’t know it exists. So getting the word out about your brand, a new product or a product enhancement matters.

Awareness research presents several challenges that aren’t necessarily faced in other types of research. Here’s a list of a few items to keep in mind as you embark on an awareness study:

Don’t tip your hand. If you’re measuring awareness of your brand, your ad campaign or one of your products, do not announce at the start of the survey that your company is the sponsor. Otherwise you’ve influenced the very thing you’re trying to measure. You may be required to reveal your identity (if you’re using customer emails to recruit, for example), but you can let participants know up front that you’ll reveal the sponsor at the conclusion of the survey. And do so.

The more surveys the better. Much of awareness research focuses on measuring what happens before and after a specific event or series of events. The most prevalent use of this technique is in ad campaign research. A critical decision is how many surveys you should do in each phase. And the answer is: as many as you can afford. The goal is to minimize the margin of error around the results: if your pre-campaign awareness score is 45% and your post-campaign score is 52%, is that a real difference? You can be reasonably assured that it is if you surveyed 500 in each wave, but not if you only surveyed 100. The more participants you survey, the more confident you can be that the results reflect real market shifts.
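To make the 45%-to-52% example concrete, here is a minimal sketch of a standard two-proportion z-test, one common way to check whether a pre/post lift is larger than sampling noise. The figures are the illustrative ones from the example above, not real study data:

```python
# Sketch: is a pre-to-post awareness lift statistically meaningful?
from math import sqrt

def lift_z_score(p1, p2, n1, n2):
    """Two-proportion z-test statistic for the lift p2 - p1."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)          # pooled awareness rate
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # std. error of the lift
    return (p2 - p1) / se

# 500 per wave: z is about 2.21, beyond the usual 1.96 cutoff -> likely a real shift
print(round(lift_z_score(0.45, 0.52, 500, 500), 2))
# 100 per wave: z is about 0.99 -> the same lift could easily be noise
print(round(lift_z_score(0.45, 0.52, 100, 100), 2))
```

The same 7-point lift clears the conventional significance bar at 500 completes per wave but not at 100, which is exactly the intuition in the paragraph above.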

Match your samples. Regardless of how many surveys you do each wave, it’s important that the samples are matched. By that we mean that the make-up of the participants should be as consistent with each other as possible each time you measure. Once again, we want to make certain that results are “real” and aren’t due to methodological choices. You can do this ahead of time by setting quotas, after the fact through weighting, or both. Of course, you can’t control for every single variable. At the very least, you want the key demographics to align.
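As a sketch of the after-the-fact weighting route, here is a hypothetical cell-weighting example: each respondent in a later wave gets a weight equal to the first wave's share of their demographic cell divided by the later wave's achieved share. The age bands and shares below are invented for illustration:

```python
# Sketch: cell weighting so wave 2 matches wave 1's demographic mix.
wave1_mix = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}  # target shares (wave 1)
wave2_mix = {"18-34": 0.22, "35-54": 0.42, "55+": 0.36}  # achieved shares (wave 2)

# Weight per cell = target share / achieved share.
weights = {cell: wave1_mix[cell] / wave2_mix[cell] for cell in wave1_mix}

# Every wave-2 respondent in a cell carries that cell's weight, so the
# weighted wave-2 sample aligns with wave 1 on age.
for cell, w in weights.items():
    print(cell, round(w, 2))
```

In practice you would weight on several key demographics at once (often via rim/rake weighting rather than full cells), but the principle is the same: up-weight under-represented groups, down-weight over-represented ones.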

...

American Healthcare Market Research

As a market researcher who has studied the health insurance industry for over two decades, I have seen a dramatic shift in consumer thinking this year: consumers are being forced to think harder than ever before to determine which health insurance plan is best for their family.

While I have the benefit of working directly with health insurance companies on product development, numerous articles are published each week illustrating some of the challenges consumers are facing with their new plans. These experiences make it clear why conjoint (Discrete Choice) is such a strong tool to understand consumer preferences for different health insurance plan components.

As I digest all of this information, a number of themes continue to surface:

High deductibles – consumers need to know what is subject to the deductible and what isn’t (preventive care, prescriptions, etc.). It’s unlikely that insurers want consumers to avoid preventive care as a way to manage their costs, and yet that is exactly what some consumers are doing.

Limited network – consumers are learning the hard way that you get what you pay for. There are many stories of consumers having to drive ridiculous distances to get treatment from an in-network provider, or of those who confirmed that their physician accepts their carrier only to learn later that the physician doesn’t accept all plans offered by that carrier. Many are also having difficulty getting an appointment within a reasonable period of time, and some have even visited an in-network hospital and received a bill from an out-of-network physician who treated them there.

...

I begin every weekday by driving through a toll plaza on the Pennsylvania Turnpike to get to work. By this time, I haven’t usually had my morning cup of coffee yet; therefore, my mathematical skills are probably not always up to par. So, I take the easy way out and use my E-ZPass, which saves me the daily burden of counting out change to make my way through the toll booth.

Overall, the E-ZPass system seems relatively straightforward. You use a credit card to open an account and you receive an electronic tag, or transponder, that has your personal billing and vehicle information embedded in it. You put the transponder somewhere on the dashboard or windshield of your vehicle, and as you drive through the toll booth it sends a signal to a receiver that detects your tag, registers your information and charges your account accordingly. When all is said and done, you see the polite green light that says “Thank You” (unless you have a low balance, of course) and you are on your merry way. Quick and simple, right?

Before I began working in market research, I wouldn’t have thought much more about the E-ZPass system other than that it gets me where I need to go quickly. Now that I’m almost a year into my market research career, with more of a research-oriented point of view, I got to wondering more deeply about the E-ZPass system and how the company conducted its research within the toll-user market to find out whether its new toll system would prosper. After a little digging, I found that the company used the ever-reliable conjoint analysis method of research.

The scholarly article, Thirty Years of Conjoint Analysis: Reflections and Prospects by Paul Green, Abba Krieger and Yoram Wind, discusses the use of conjoint analysis in an abundance of studies throughout the past 30 years. One of the studies the article focuses on is the research done prior to the development and implementation of the E-ZPass system. E-ZPass has been in the works for about 12 years now; the market research began in 1992. Two states, New Jersey and New York, conducted conjoint analysis research using a sample size of about 3,000 to gauge the potential of the system. Seven attributes were used in the conjoint study, including number of lanes available, tag acquisition, cost, toll prices, invoicing and other potential uses of the transponder. Once the respondents’ data was collected, it was analyzed in total and by region and facility. The study yielded an estimated 49% usage rate, while the actual usage rate seven years later was a close 44%. While neither percentage was extremely high, the company estimated the usage rate would continue to increase in the future.

Green, Krieger and Wind make a fair point in their article when they say that conjoint analysis has the ability “to lead to actionable findings that provide customer-driven design features and consumer-usage or sales forecasts”. This study serves as a great example of that statement, simply given how close the projected usage rate came to the actual usage rate. Many of the studies we execute here at TRC use conjoint analysis because of its dependable predictive nature. Whether clients are looking to enter a new product or service into the market, or are looking to improve upon an existing product or service, conjoint analysis provides them with direction for a successful plan.
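The core mechanics behind conjoint’s predictions can be sketched in a few lines: decompose overall preferences into per-attribute part-worth utilities. Below is a toy ratings-based version with invented data and just two dummy-coded attributes; it is not the E-ZPass study itself, which used far richer designs across seven attributes:

```python
# Toy sketch of conjoint part-worth estimation via least squares.
import numpy as np

# Four product profiles, dummy-coded: columns are (low cost, many lanes);
# 1 means the more attractive level is present. Data is invented.
profiles = np.array([
    [1, 1],
    [1, 0],
    [0, 1],
    [0, 0],
])
ratings = np.array([9.0, 7.0, 5.0, 2.0])  # stated preference, 1-10 scale

# Add an intercept column and solve for the part-worth utilities.
X = np.column_stack([np.ones(len(profiles)), profiles])
partworths, *_ = np.linalg.lstsq(X, ratings, rcond=None)

# partworths = [intercept, utility of low cost, utility of many lanes]
print(np.round(partworths, 2))
```

Here the fitted utilities say low cost contributes more to preference than lane availability; summing part-worths for any hypothetical profile yields its predicted appeal, which is the engine behind the usage forecasts the article describes.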

...

Truth or Research

Posted in New Research Methods

I read an interesting story about a survey done to determine whether people are honest with pollsters. Of course such a study is flawed by definition (how can we be sure that those who say they always tell the truth are not lying?). Still, the results do back up what I’ve long suspected: getting at the truth in a survey is hard.

The study indicates that most people claim to be honest, even about very personal things (like finances). Younger people, however, are less likely to be honest with survey takers than others. As noted above, I suspect that, if anything, the results understate the potential problem.

To be clear, I don’t think people are being dishonest just for the sake of being dishonest…I think it flows from a few factors.

First, some questions are too personal to answer, even on a web survey. With all the stories of personal financial data being stolen or compromising pictures being hacked, it shouldn’t surprise us that some people might not want to answer certain kinds of questions. We should really think about that as we design questions. For example, while it might be easy to ask for a lot of detail, we might not always need it (income ranges, for example). To the extent we do need it, finding ways to build credibility with the respondent is critical.

Second, some questions might create a conflict between what people want to believe about themselves and the truth. People might want to think of themselves as “outgoing”, and so if you ask them they might say they are. But their behavior might not line up with reality. The simple solution is to ask questions about behavior without ascribing a term like “outgoing”. Of course, it is always worth asking directly as well (knowing the self-image AND the behavior could make for interesting segmentation variables, for example).

...

My daughter was performing in The Music Man this summer, and after seeing the show a number of times, I realized it speaks to the perils of poor planning…in forming a boys’ band and in conducting complex research.

For those of you who have not seen it, the show is about a con artist who gets a town to buy instruments and uniforms for a boys’ band, in exchange for which he promises he’ll teach them all how to play. When they discover he is a fraud they threaten to tar and feather him, but (spoiler alert) his girlfriend gets the boys together to march into town and play. Despite the fact that they are awful, the parents can’t help but be proud, and everyone lives happily ever after.

It is to some extent another example of how good we are at rationalizing. The parents wanted the band to be good and so they convinced themselves that they were. The same thing can happen with research…everyone wants to believe the results so they do…even when perhaps they should not.

I’ve spent my career talking about how important it is to know where your data have been. Bias introduced by poor interviewers, poorly written scripts, unrepresentative samples and so on will impact results, and yet these flawed data will still produce cross tabs and analytics. Rarely will they be so far off that the results can be dismissed out of hand.

The problem only gets worse when using advanced methods. A poorly designed conjoint will still produce results. Again, more often than not these results will be such that the great rationalization ability of humans will make them seem reasonable.

...
