


Michele Sims

VP / Research Management


Michele likes to hijack TRC's online consumer panel to get relevant answers to her burning research questions. She loves asking questions relating to her favorite hobbies - TV and movies, golf, casino gambling and travel - and more often than not the answers can be generalized across industries.



HQ Trivia Pricing Research: Monetizing
In my previous blog about HQ Trivia I pondered how the creators of HQ were planning to make money.  Right now there is no advertising; venture capital funds the app and the jackpots. Apart from occasional sponsorships, there appears to be no immediate source of additional funding.
 
HQ could do many different things to achieve financial success – content sponsorships, jackpot sponsorships, advertising, product placement, buying ‘lives’ by watching a 15-second spot  – even sponsor logos on host apparel. In fact, there are probably different ways to monetize HQ Trivia that we haven’t even thought of yet – making this a perfect research case for TRC’s Idea Mill™.
 
Idea Mill™ is our method for harnessing the principles of crowd-sourcing: using Smart Incentives™, we ask respondents for their best idea, and those ideas are then voted on by other respondents within the same research survey. The respondents with the best ideas, as judged by their peers, are rewarded with prizes. This is a great technique to use when you're in the idea-generation phase of product development.
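
To make the mechanic concrete, here is a minimal Python sketch of the general submit-then-vote pattern. It is only an illustration of crowd-sourced idea voting, not TRC's actual Idea Mill™ implementation, and the ideas and ratings are invented.

```python
from collections import defaultdict
import random

# Hypothetical ideas submitted by earlier respondents (illustrative only).
ideas = {
    "r1": "Jackpot sponsorships",
    "r2": "Buy an extra life by watching a 15-second ad",
    "r3": "Sponsor logos on host apparel",
}

# Later respondents each rate a random subset of their peers' ideas (1-5 scale).
votes = defaultdict(list)
for voter in range(100):
    for idea_id in random.sample(list(ideas), k=2):
        votes[idea_id].append(random.randint(1, 5))  # stand-in for a survey rating

# Rank ideas by mean peer rating; the top submitters would earn the incentive.
ranked = sorted(ideas, key=lambda i: sum(votes[i]) / len(votes[i]), reverse=True)
for idea_id in ranked:
    mean = sum(votes[idea_id]) / len(votes[idea_id])
    print(f"{ideas[idea_id]}: {mean:.2f}")
```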
 
Once we have a list of potential ways to monetize HQ, we could winnow it to the ideas that are feasible to implement, then prioritize those using a method such as Idea Magnet™. Results can be generated quickly.
 
Before implementing the winning ideas, we could further explore options by building various scenarios of the sponsored game, and asking HQers to weigh in on which one would be most acceptable to them. Through a choice-based research tool such as discrete choice conjoint, we could vary HQ’s potential features, such as:
 
      • Number of ads or sponsorships per game
      • Where the ads appear (between rounds, upon game entry)
      • Prize pool
      • Having sponsor-related questions
      • Getting bonus 'lives' for watching sponsor videos
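
As a rough illustration of how such a choice exercise might be assembled, here is a hedged Python sketch that builds candidate game profiles from attributes like the ones above. The levels are invented, and a real conjoint would use a balanced, statistically efficient design rather than random sampling.

```python
import itertools
import random

# Hypothetical attributes and levels for a sponsored HQ game (illustrative only).
attributes = {
    "ads_per_game": ["0", "1", "3"],
    "ad_placement": ["between rounds", "on game entry"],
    "prize_pool": ["$2,500", "$10,000", "$25,000"],
    "sponsor_questions": ["none", "one per game"],
    "bonus_life_for_sponsor_video": ["no", "yes"],
}

# Enumerate all possible product profiles, then sample a few for one choice task.
profiles = [dict(zip(attributes, combo))
            for combo in itertools.product(*attributes.values())]

random.seed(1)
task = random.sample(profiles, k=3)  # one choice task: which game would you rather play?
for i, profile in enumerate(task, start=1):
    print(f"Option {i}: {profile}")
```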
 
All of these techniques employ strategies we use in pricing and product development research to include the consumer in the decision-making process. HQ’s creators are good at asking questions – I hope they do the same in further developing their product.
 
HQ Pricing Research
A bunch of us here at TRC enjoy trivia, so we've been playing HQ Trivia using their app for the past few months. HQ is a 12-question multiple-choice quiz that requires a correct answer to move on to the next question. As a group, we have yet to get through all 12 questions and win our share of the prize pool. But it's a nice team-building exercise and we like learning new things (who knew that two US Presidents were born in Vermont?).
 
Given the fun we have playing it, I can understand HQ’s success from the player perspective. Where I am a bit confused is the value proposition for its creators. Venture capital funding provides the prize money.  But there are no ads, so I’m not sure how anybody’s actually making money. There are occasional tie-in partnerships (The awesome Dwayne Johnson hosted one of the gaming sessions to promote his newest movie release, “Rampage”.)  But I suppose the biggest question is, will interest in HQ still be there when they’ve finally signed on enough sponsors to be profitable?  
 
We do a lot of pricing research at TRC, and we can model demand against a variety of variables. But predicting the direction of demand is nearly impossible for certain products. For consumables and many services, product demand is predictable. How your product fares against the competition may have its ups and downs, but you can assume that people who bought toilet paper two weeks ago will be in the market for toilet paper again soon.
 
But with something like HQ Trivia, product demand is much more difficult to determine in advance, especially more than a few weeks from now. Right now it’s still hot – routinely attracting 700,000 – 1,000,000+ players (HQers) in a given game. How do the creators – and investors and potential sponsors – know whether it’s a good investment?  What if interest suddenly declines, either because the novelty has worn off or because something better comes along?  
 
One way to find out is through longitudinal research: routinely check in with HQers over time to determine their likelihood to play the next week, their likelihood to recommend the game to friends, and their attitudes toward the game itself. This information can be overlaid with the raw data HQ collects through game play every day – number of players, number of referrals, and number of first-time players. Not only would this shed light on player interest; players could also weigh in on changes the creators are considering to keep the game fresh.
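
Mechanically, the overlay is just a join on the time period. A minimal pandas sketch, with invented column names and numbers, shows the idea:

```python
import pandas as pd

# Hypothetical weekly tracker results (survey data) -- illustrative numbers only.
tracker = pd.DataFrame({
    "week": ["2018-04-02", "2018-04-09", "2018-04-16"],
    "pct_likely_to_play_next_week": [0.81, 0.78, 0.74],
    "pct_likely_to_recommend": [0.66, 0.64, 0.60],
})

# Hypothetical daily game metrics rolled up to the same weeks.
gameplay = pd.DataFrame({
    "week": ["2018-04-02", "2018-04-09", "2018-04-16"],
    "avg_players_per_game": [940_000, 880_000, 810_000],
    "pct_first_time_players": [0.12, 0.10, 0.09],
})

# Overlay the two sources so attitude shifts can be read against behavior.
combined = tracker.merge(gameplay, on="week")
print(combined)
print(combined["pct_likely_to_play_next_week"].corr(combined["avg_players_per_game"]))
```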
 
HQers are engaging in a free activity which gives them the opportunity to win cash prizes.  But just because it’s free to play doesn’t mean the HQ powers-that-be couldn’t do pricing research (more on that in a future blog).  
 
For now, I’ll keep on playing HQ hoping I can answer all the questions, not the least of which is: when will I – and the other million HQers – no longer care? 
 
 
Tagged in: Pricing Research

Market Research Prioritization: Email Violations

I work in a business that depends heavily on email. We use it to ask and answer questions, share work product, and engage our clients, vendors, co-workers and peers on a daily basis. When email goes down – and thankfully it doesn't happen that often – we feel anything from mildly annoyed to downright panic-stricken.

So business email is ubiquitous. But not everyone follows the same rules of engagement – which can make for some very frustrating exchanges.

We assembled a list of 21 "violations" we experienced (or committed) and set out to find out which ones are considered the most bothersome.

Research panelists who say they use email for business purposes were administered our Bracket™ prioritization exercise to determine which email scenario is the "most irritating".
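
Bracket™ itself is proprietary, but the tournament idea is easy to sketch: items face off in pairs, winners advance, and the last one standing is the "most irritating." Here is a toy Python version with a random stand-in for the respondent's choice; the violations listed are made-up stand-ins, not the actual list of 21.

```python
import random

# A few hypothetical email "violations" (stand-ins for the real list).
violations = [
    "Reply-all to a large distribution list",
    "Marking every message as urgent",
    "Vague subject lines",
    "No subject line at all",
    "Forwarding a long thread with just 'FYI'",
    "Asking a question already answered below",
    "Sending a follow-up minutes after the first email",
    "Attaching huge files instead of links",
]

def respondent_picks(a, b):
    """Stand-in for a survey question: which of these two bothers you more?"""
    return random.choice([a, b])

# Single-elimination tournament: pair items, keep the 'winner' of each matchup.
random.shuffle(violations)
round_items = violations
while len(round_items) > 1:
    round_items = [respondent_picks(a, b)
                   for a, b in zip(round_items[::2], round_items[1::2])]

print("Most irritating for this respondent:", round_items[0])
```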

...

In my two previous blogs about recycling, I reported on gender gaps in recycling behavior and general knowledge about what is curbside-recyclable and what isn't.

Now we turn to the real question: why aren't consumers recycling on a more consistent basis? Again we turned to our online consumer research panel and asked those with curbside recycling access who don't recycle regularly a simple question: Why not? What behaviors and attitudes can Recyclers act upon to educate their customers and encourage more recycling?

Well, like any complex problem, there's no one single answer. Lack of knowledge of what's recyclable and being unsure how to get questions answered play a big part (28%). Recyclers can raise awareness through careful and consistent messaging.

But just as significant as knowledge is overcoming basic laziness (29%). Sorting your recycling from your trash takes effort, and not everyone is willing to expend energy to do so. Recyclers may not be able to motivate them, but another concern is addressable, and that's scheduling – having trash and recycling pick-up on different days can de-motivate consumers to recycle (15%).

Another challenge is forgetfulness. Some folks are willing to recycle, but it slips their mind to do so (25%).
Education could help promote a feeling of responsibility and elevate recycling's importance:


•  I don't feel that whether or not I recycle makes a difference (14%)
•  Recycling isn't important to me (10%)
•  I'm not convinced recycling helps the environment (8%)

...

In a recent survey we conducted among pet owners, we asked about microchip identification. We found that cat owners and dog owners are equally likely to say that having their pet microchipped is a necessary component of pet ownership. That's the good news.

The bad news is that when it comes time to doing it, the majority haven’t taken that precaution. 69% of the cat owners and 64% of the dog owners we surveyed say they haven’t microchipped their companion.

Why is microchipping so important? Petfinder reports that the American Humane Association estimates over 10 million dogs and cats are lost or stolen in the US every year, and that 1 in 3 pets will become lost at some point during their lifetime. ID tags and collars can get lost or removed, which makes microchip identification the best tool shelters and vets use to reunite pets with their owners.

One barrier to microchipping is cost – it runs in the $25 to $50 range for dogs and cats. That's not a staggering amount, but pet ownership can get expensive; with all the “stuff” you need for your new friend, this can be a cost some people aren’t willing to bear. Vets, shelters and rescue groups sometimes discount their pricing when the animal is receiving other services, such as vaccines. Which begs the question: if vets want their patients to be microchipped, how should they price the service to make it more likely to be included?

It seems that pet microchipping would benefit from some pricing research. Beyond simply lowering the price, bundle offers may hold more appeal than a la carte. Then again, a single package price may be so high that it dissuades action altogether. Perhaps financing or staggered payments would help. And of course, discounts on other services, or on the service itself, may influence their decision. All of these possibilities could be addressed in a comprehensive pricing survey. We could use one of our pricing research tools, such as conjoint, to achieve a solid answer.

...

In my previous blog, we determined that people with access to recycling services don’t necessarily recycle. And men were far less likely to recycle regularly than women.

One problem potential recyclers face is that there is no federal standard for what is collected and how. Services vary from one contractor to the next, and items deemed recyclable in one municipality may not be recyclable the next town over. As a general rule, bottles, cans, and newspapers are curbside-recyclable. Also as a general rule, prescription drugs, electronic devices, CFL bulbs and batteries are not – and they shouldn’t go in the trash either; they require special handling. But does the average consumer know this? We asked our online panelists who have access to recycling services how they believe their trash/recycling haulers would like them to handle certain items. And here’s what we learned:

  • Knowledge of recycling the Big-3 (glass bottles – aluminum cans – newspapers) is quite high. At least 80% of our panelists with access to recycling services know each of these should be recycled as opposed to trashed. And men and women are equally knowledgeable.
  • Word has spread that electronics do not belong in the trash. But our consumers are divided as to where they should go – 35% believe their contractor wants them in their recycling bin while just 46% believe electronics require special arrangements.
  • When we get to other items, things get a bit murky:
    1. Our panelists are about as likely to believe that batteries can go out in the trash or recycling (45%) as to believe they require special arrangements (41%). The rest aren’t sure.
    2. 19% aren’t sure what to do with compact fluorescent light bulbs.
    3. 22% believe that prescription drugs can be put out in the trash. 17% aren’t sure.
  • Meanwhile, some items that are traditionally “trashed” give consumers pause – 26% of our consumers believe their hauler wants them to recycle linens and towels.

Focusing solely on those who say they recycle, women are more likely than men to know what goes where…

[Infographic: Recycling Market Research, Part 2]

Ladies, you may want to re-think having your gents handle the trash and recycling - or give them a quick lesson on what you've learned!

Recent Comments
  • Sheridan says:
    How many people were in the "panelists"? I mean, 80% of the panelists know the Big-3 but 80% of how many? Thanks!
  • Michele Sims says:
    Thanks for your question! We surveyed 507 adults in the US.

Do Americans Recycle Enough? - PART I

Posted by Michele Sims in Consumer Behavior

According to Economist.com, Americans aren’t doing a good job of recycling. There is actually a shortage of materials from recycling facilities that could be used to produce new products. The author posits that there are a variety of reasons for this, including simple access: “a quarter of Americans lack access to proper bins for collecting recyclable material, and another quarter go without any curbside recycling at all.” But I think it goes beyond access, and I surveyed our intrepid online panel of adult consumers to find out.


A little over a quarter (28%) of TRC’s panelists say they do not have residential recycling service. Increasing awareness and access for rental properties would certainly make a dent: renters are more likely to say they don’t have it (44%) than homeowners (19%).


But what if you are aware and have access?  Does that mean you’re recycling?  Not necessarily.  Only 75% of those who could be recycling are doing so on a regular basis (usually or always). There’s no difference between renters and homeowners with recycling access as far as how often they recycle. But there is one key difference between those who regularly do and those who don’t: gender. Women are far more likely to say that they recycle regularly (84%) than men (62%). I’m not sure why there is a gender disparity, but we’ll explore how knowledgeable men and women are about recycling in my next blog.


Conjoint vs. Configurator

We at TRC conduct a lot of marketing research projects using Conjoint Analysis. Conjoint is a very powerful tool for determining preferences for the various components that make up a product or service. The power of Conjoint comes from having consumers make mental trade-offs in evaluating products against each other. Do they prefer a lower-cost product that contains few features, or a higher-priced product that provides many benefits? How willing are they to choose a product that meets 2 or 3 of their criteria, but not all? Conjoint forces consumers to make these decisions, and the results can then be simulated to determine purchase preferences in a variety of scenarios.
But not all product development problems can be solved with Conjoint. Conjoint requires certain steps in the development cycle to have already been taken (defined features, some idea of pricing – see my previous blog on the topic.) In some cases, though, you may be at a stage in which Conjoint is feasible, but a different approach may be more appropriate, such as a Configurator. In a Configurator, otherwise known as a "Build-Your-Own" approach, you would use the same product features as in a Conjoint, but instead of pitting potential products against one another, the consumer "builds their own" ideal product.
So why choose one technique over the other? There are many reasons, but here are a few:
1. If determining overall product price sensitivity is the goal – Choose Conjoint. Conjoint will produce scores that assess both the importance of price overall as well as price tolerance for the product as features are included or excluded.
2. If you just want to know which features are the most popular, or which ones are selected when choosing or not choosing other features – Choose Configurator. In an a la carte scenario, respondents can choose which items to throw in their shopping cart and which ones to leave on the shelf. Getting simple counts on which features are popular and which ones are not – and in what combinations – can be very useful information, and it's an easier task for respondents. Keep in mind though that the Configurator works best if each feature is pre-assigned a price (to keep respondents from piling on).
3. If understanding competitive advantage/disadvantage is paramount – Choose Conjoint. Conjoint allows you to include "Brand" as a feature, and the results will link brand to the product price to see if respondents are willing to pay more (or less) for your product vs. that of a key competitor. You can also simulate competitive market scenarios. While you can include Brand in a Configurator, modeling the trade-off between brand and product price is far less robust.
4. If you have a lot of features, or complex relationships between the features - Choose Configurator. It's much easier for a respondent to sift through a long list of features and build their ideal product just once than to choose between products with a gigantic feature list multiple times. Conjoint works best when the features are not dependent on one another; a long list of restrictions on the features can disqualify Conjoint as a viable solution from a design perspective.
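
To make the contrast above concrete, here is a hedged Python sketch of the two kinds of output: a logit share-of-preference simulation from made-up conjoint part-worth utilities, versus simple selection counts from a made-up Configurator exercise. Neither reflects TRC's actual estimation code; the numbers and feature names are invented.

```python
from collections import Counter
import math

# --- Conjoint side: simulate share of preference from part-worth utilities ---
# Hypothetical average part-worths (brand and price levels), illustrative only.
partworths = {"BrandA": 0.6, "BrandB": 0.2, "price_low": 0.9, "price_high": -0.9}

product_a = partworths["BrandA"] + partworths["price_high"]   # premium-priced Brand A
product_b = partworths["BrandB"] + partworths["price_low"]    # discounted Brand B

# Logit share rule: exponentiate total utilities and normalize.
expo = [math.exp(product_a), math.exp(product_b)]
shares = [e / sum(expo) for e in expo]
print(f"Share of preference -- A: {shares[0]:.0%}, B: {shares[1]:.0%}")

# --- Configurator side: just count what respondents put in the cart ---
carts = [["feature_x", "feature_y"], ["feature_x"], ["feature_y", "feature_z"]]
print(Counter(f for cart in carts for f in cart).most_common())
```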
There are plenty of times when a technique may present itself as an obvious choice, and other times when the choice may be more subtle. And in those cases, we turn to our senior analysts who use their expertise and understanding of the research objectives to make sound recommendations.


When Not to Use Conjoint

At the beginning of my research career I grew accustomed to clients asking us for proposals using a methodology that they had pre-selected. In many cases, the client would send us the specs of the entire job (this many completes, that length of survey) and just ask us for pricing. While this is certainly an efficient way for a client to compare bids across vendors, it didn’t allow for any discussion as to the appropriateness of the method being proposed.
Today most research clients are looking for their research suppliers to be more actively involved in formulating the research plan. That said, we are often asked to bid on a “conjoint study.”  Our clients who’ve commissioned conjoint work in the past are usually knowledgeable about when a conjoint is appropriate, but sometimes there is a better method out there. And sometimes the product simply isn’t at the right place in the development “chain” to warrant conjoint.
Conjoint, for the uninitiated, is a useful research tool in product development. It is a choice-based method that allows participants to make choices between different products based on the product’s make-up. Each product comprises various features and levels within those features. What keeps respondents from choosing only products made up of the “best” features and levels is some type of constraint – usually price.   
We look to conjoint to help determine an optimal or ideal product scenario, to help price a product given its features, or to suggest whether a client could charge a premium or require a discount.  It has a wide range of uses, but it isn’t always a good fit:  

  1.  When the features haven’t been defined yet. One problem product developers face is having to “operationalize” something that the market hasn’t seen yet. You need to be able to describe a feature, what its benefits are, and its associated levels in layman’s terms. We can’t recommend conjoint if the features are still amorphous.   
  2. When there are a multitude of features with many levels or complex relationships between the features. The respondent needs to be able to absorb and understand the make-up of the products in order to choose between them. If the product is so complex that it requires varying levels of a lot of different features, it’s probably too taxing for the respondents (and may tax the design and resulting analysis as well). Conjoint could be the answer – but the task may need to be broken up into pieces.   
  3. When there are a limited number of features with few levels. In this case, Conjoint may be overkill. A simple monadic concept test or price laddering exercise may suffice.   
  4. When pricing is important, but you have absolutely no idea what the price will be. Conjoint works best when the product’s price levels range from slightly below how you want to price it to slightly above how you want to price it.  If your range is huge, respondents will gravitate toward the lower priced product scenarios and you won’t get much data on the higher end. It may also confuse respondents that similar products would be available at such large price differences.
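
Point 4 is mostly a design discipline: keep the tested price levels in a reasonably tight band around the expected price. A tiny, hypothetical Python sketch of generating such levels; the target price and percentages are illustrative, not a rule.

```python
# Hypothetical target price and a band of test levels around it (roughly -20% to +20%).
target_price = 50.00
multipliers = [0.80, 0.90, 1.00, 1.10, 1.20]

price_levels = [round(target_price * m, 2) for m in multipliers]
print(price_levels)  # [40.0, 45.0, 50.0, 55.0, 60.0]
```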

My friend and I don’t share the same definition of what it means to be on-time. I don’t necessarily subscribe to the “early is on-time, on-time is late, late is unacceptable” theory, but I do try to arrive at or before an agreed upon time. She thinks there is wiggle room surrounding any appointment time – 5 or 10 minutes – and doesn’t seem concerned that I’ve been waiting for her to arrive. The good news is, if I’m running behind schedule, it doesn’t bother her that I arrive late. But if I’m going to be 5 to 10 minutes late, I’ll notify her. She would never think to do the same – because in her mind she’s on-time.

Perhaps I have too strict a definition of what it means to be on-time. Is 5 minutes considered late to everyone or just to me? We surveyed TRC’s online consumer panel to get an answer.

We used 5 minutes as our test case. If an appointment time is at 9:00 and actual arrival is 9:05, do you consider yourself on-time or late (or early)? To make things interesting, we asked about a variety of scenarios, since it’s possible that definitions may change based on the social situation.

If your boss calls an urgent meeting and you arrive 5 minutes past the start time, 2/3 of our participants consider that to be “late”.  When I saw that, at first I felt vindicated. But then I realized that if 2/3 are saying they’re late, that means 1/3 say it’s okay – 5 minutes is on-time or even early. Then I looked at the rest of the scenarios: 2/3 consider 5 minutes as “late” for babysitting or for a weekly religious service. If you show up 5 minutes after your reservation time at a restaurant, only 57% consider that to be late. And if you’re meeting a friend for casual dinner (no reservations), only 47% -- less than half of the adults we surveyed -- believe that 5 minutes off-schedule is actually “late”. What’s this world coming to?

[Infographic: TRC survey on being late]

...
A recent post on my Facebook timeline boasted that Lansdale Farmers Market was voted the Best of Montgomery County, PA two years in a row. That’s the market I patronize, and I’d like to feel a bit of pride for it. But I’m a researcher and I know better.

Lansdale Farmers Market is a nice little market in the Philadelphia outskirts, but is it truly the best in the entire county? Possibly, but you can’t tell from this poll. Lansdale Farmers Market solicited my participation by directing me to a site that would register my vote for them (Heaven only knows how much personal information “The Happening List” gains access to).  I’m sure that the other farmers markets solicited their voters in the same or similar ways. This amounts to little more than a popularity contest. Therefore, the only “best” that my market can claim is that it is the best in the county at getting its patrons to vote for it.

But if you have more patrons voting for you, shouldn’t that mean that you truly are the best? Not necessarily. It’s possible that the “best” market serves a smaller geographic area, doesn’t maintain a customer list, or isn’t as good at using social media, to name a few.

A legitimate research poll would seek to overcome these biases. So what are the markers of a legitimate research poll? Here are a few:
  1. You’re solicited by a neutral third party. Sometimes the survey sponsors identify themselves up front and that’s okay. But usually if a competitive assessment is being conducted, the sponsor remains anonymous so as not to bias the results.
  2. You’re given competitive choices, not just a plea to “vote for me”.  
  3. You may not be able to tell this, but there should be some attempt to uphold scientific sampling rigor. For example, if the only people included in the farmers market survey were residents of Lansdale, you could see how the sampling method would introduce an insurmountable bias.

The market opens for the summer season in a few weeks, and you can bet that I’ll be there. But I won’t stop to admire the inevitable banner touting their victory.


We marketing research types like to think of the purchase funnel in terms of brand purchase. A consumer wants to purchase a new tablet. What brands is he aware of? Which ones would he consider? Which would he ultimately purchase? And would he repeat that purchase the next time?

Some products have a more complex purchase funnel, one in which the consumer must first determine whether the purchase itself – regardless of brand – is a “fit” for him. One such case is solar home energy.

Solar is a really great idea, at least according to our intrepid research panelists. Two-thirds of them say they would be interested in installing solar panels on their home to help offset energy costs. There are a lot of different ways that consumers can make solar work for them – and conjoint analysis would be a terrific way to design optimal products for the marketplace.

But getting from “interest” to “consideration” to “purchase” in the solar arena isn’t as easy as just deciding to purchase. Anyone in the solar business will tell you there are significant hurdles, not the least of which is that a consumer needs to be free and clear to make the purchase – renters, condo owners, people with homeowners associations or strict local ordinances may be prohibited from installing them.

Even if you’re a homeowner with no limitations on how you can manage your property, there are physical factors that determine whether your home is an “ideal” candidate for solar. They vary by region and different installers have different requirements, but here’s a short list:

...

As anyone with experience with pets will tell you, no two are alike. If you tune in to Animal Planet’s series “Too Cute,” about the first few months of the lives of litters of puppies and kittens, you’ll find evidence of siblings’ behavior differences. One is reticent, another is always hungry, one sleeps a lot while his sibling is bouncing off the walls.

In my household, our two cats are no exception. They are very different from one another. You can categorize them by saying one is alpha, the other omega (or dominant/submissive, leader/follower). Alpha cat is a bully. He struts around like he owns the place and pushes Omega cat off her perch in the sun so he can claim her spot. She allows him to do this with little to no resistance.

And yet…

The moment the doorbell rings, Alpha hides under the bed while Omega rushes to the door. Alpha is afraid of the vacuum cleaner, strangers and loud noises, and he rolls over in a submissive pose when the neighbor’s dogs are around. Omega, on the other hand, is food-obsessed and gives Alpha the evil eye when he approaches “her” food dishes. And she’s fearless when encountering new people and strange objects.

So we have a dominant cat and a submissive cat, but those labels don’t really tell the whole story.

...

This past spring we surveyed our consumer panel about the winter of 2013 – 2014. We used our proprietary message prioritization tool called Bracket™ to determine that high heating bills were the worst part of enduring a challenging winter.

Energy utilities dedicate resources toward educating consumers about ways to conserve, which both increases sustainability and also keeps money in consumers’ wallets. One conservation method is to use programmable thermostats – homes and businesses can be kept cooler at night or when no one is around, and warmer when people are home (and the opposite is true in the summer). The set-it-and-forget-it nature of the program means the consumer doesn’t need to fiddle and adjust; once you decide what temperature you want on which day and at which time, the system takes over.

But we like to fiddle and adjust, and thermostats can also be controlled through apps on your PC or mobile device. This allows you to override the program if, for example, you forget to reset it while you’re on vacation.

We were interested to understand consumer interest in these technologies, so we polled our panel once again and asked them what type of thermostat they used, if any, and how interested they’d be in installing a fancier type than they have now.

Nearly all of our survey participants use some type of thermostat. Half use a standard thermostat (not programmable). This number is higher among non-homeowners (61% vs. 48%). Landlords take note: consider upgrading to a programmable thermostat in your rental units.

...

This past summer, much of my TV viewing was dedicated to watching the series “Downton Abbey” and “Breaking Bad” in their entirety. “Downton Abbey” continues this January, but “Breaking Bad” concluded its five-season run before I started watching it. I was concerned that I would accidentally learn Walter’s and Jesse’s fates before seeing the final episodes. My friends who had seen the series were quite accommodating. But it’s tough keeping secrets in the digital age, and unfortunately, I did learn what happened in advance of watching. Jesse’s outcome was revealed by Seth Meyers during this year’s Emmy broadcast. And Walter? Well, I guess I’m to blame, since I was stupid enough to read a New York Times article in which the first paragraph states “Warning: Contains spoilers about the new age of television.”

The article, by Emily Steel, discusses the social ramifications of revealing dramatic plot twists. She cites a study by Grant McCracken, which Netflix plans to use as the basis for a digital promotion that creates a flow chart to classify people by their propensity for spoiling. At the root of all this is an attempt to understand how people view television content in the age of time-shifting and streaming, which has critical impacts on TV’s business model.

As viewing patterns change, so does water cooler conversation. You can’t simply blurt out, “How crazy was it when Danny took out Joey last night?” You need to first establish that the episode was watched by the people in the room. But the burden seems to fall more so on the one who isn’t caught up; I fell a few episodes behind my friends watching “Sons of Anarchy” this fall – so I had to make sure that they weren’t talking about it when I was around.

But with all of this conversational jockeying going on, I needed to ask a pretty basic question: how much time must elapse before what happened in a popular TV show becomes “fair game” – no longer subject to Spoiler Alerts?

To find out, we surveyed TV viewers from our online consumer panel. We know from conducting new product development market research studies (using conjoint and Bracket) that the way a question is framed influences how respondents answer. We wanted to look at the issue from both sides, so we randomly split our sample into two groups, and posed essentially the same question to both: how many days need to pass before people can communicate freely about a show and not face criticism for spoiling it for someone who hasn’t watched it yet?  We asked each group to assume a different role: one group was told to assume that they had just watched the episode, and the other group was told to assume the episode had aired, but they hadn’t watched it yet. We upped the stakes by describing the show as the final episode of a series that they liked a lot.  
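
The design itself is a straightforward random split. Here is a minimal Python sketch of the assignment and the comparison we would make afterward; the sample size, group names and responses are invented for illustration.

```python
import random
import statistics

random.seed(7)
panelists = [f"resp_{i}" for i in range(200)]
random.shuffle(panelists)

# Randomly split the sample: half answer in the "just watched" framing,
# half in the "haven't watched yet" framing (the framing is the only difference).
groups = {"just_watched": panelists[:100], "not_watched_yet": panelists[100:]}

# Stand-in answers to "how many days before the episode is fair game?"
answers = {name: [random.randint(0, 14) for _ in members]
           for name, members in groups.items()}

for name, days in answers.items():
    print(f"{name}: median {statistics.median(days)} days")
```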

...

Purchase Funnel: Measuring Awareness

We at TRC conduct a lot of choice-based research, with the goal of aligning our studies with real-world decision-making. Lately, though, I’ve been involved in a number of projects in which the primary objective is not to determine choice, but rather awareness. Awareness is the first – and arguably the most critical – part of the purchase funnel. After all, you can’t very well buy or use something if you don’t know it exists. So getting the word out about your brand, a new product or a product enhancement matters.

Awareness research presents several challenges that aren’t necessarily faced in other types of research. Here’s a list of a few items to keep in mind as you embark on an awareness study:

Don’t tip your hand. If you’re measuring awareness of your brand, your ad campaign or one of your products, do not announce at the start of the survey that your company is the sponsor. Otherwise you’ve influenced the very thing you’re trying to measure. You may be required to reveal your identity (if you’re using customer emails to recruit, for example), but you can let participants know up front that you’ll reveal the sponsor at the conclusion of the survey. And do so.

The more surveys the better. Much of awareness research focuses on measuring what happens before and after a specific event or series of events. The most prevalent use of this technique is in ad campaign research. A critical decision factor is how many surveys you should do in each phase. And the answer is, as many as you can afford. The goal is to minimize the margin of error around the results: if your pre-campaign awareness score is 45% and your post-campaign score is 52%, is that a real difference? You can be reasonably assured that it is if you surveyed 500 in each wave, but not if you only surveyed 100. The more participants you survey, the more secure you’ll be that the results are based on real market shifts.
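
That 45%-to-52% example can be checked with a standard two-proportion comparison. A quick Python sketch using textbook formulas (nothing proprietary) shows why 500 per wave gives comfort and 100 does not:

```python
import math

def significant_shift(p1, n1, p2, n2, z=1.96):
    """Rough two-proportion z-test at ~95% confidence."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return abs(p2 - p1) / se > z

# Pre-campaign awareness 45%, post-campaign 52%.
print(significant_shift(0.45, 500, 0.52, 500))  # True  -> likely a real shift
print(significant_shift(0.45, 100, 0.52, 100))  # False -> could just be noise
```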

Match your samples. Regardless of how many surveys you do each wave, it’s important that the samples are matched. By that we mean that the make-up of the participants should be as consistent with each other as possible each time you measure. Once again, we want to make certain that results are “real” and aren’t due to methodological choices. You can do this ahead of time by setting quotas, after the fact through weighting, or both. Of course, you can’t control for every single variable. At the very least, you want the key demographics to align.
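
When quotas alone can't fully match the waves, a simple post-stratification weight pulls the realized sample back in line with the target mix. A minimal sketch on a single demographic, with invented proportions:

```python
# Hypothetical target mix (e.g., from wave 1) vs. what wave 2 actually achieved.
target_mix   = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}
achieved_mix = {"18-34": 0.22, "35-54": 0.43, "55+": 0.35}

# Each respondent in a group gets weight = target share / achieved share,
# so the weighted wave-2 sample matches the wave-1 make-up on this variable.
weights = {group: target_mix[group] / achieved_mix[group] for group in target_mix}
print(weights)  # 18-34 respondents weighted up, 55+ weighted down
```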

...

Rita’s Italian Ice is a Pennsylvania-based company that sells its icy treats through franchise locations on the East Coast and several states in the Midwest and West.

Every year on the first day of spring, Rita’s gives away full-size Italian ices to its customers. For free. No coupon or other purchase required. It’s their way of thanking their customers and launching the season (most Rita’s are only open during the spring and summer months).

Wawa, another Pennsylvania company, celebrated 50 years in business with a free coffee day in April.  

Companies are giving their products away for free! What a fantastic development for consumers! I patronize both of these businesses, and yet, on their respective free give-away days, I didn’t participate. I like water ice (Philadelphia’s term for Italian ice) and I really like coffee. So what’s the problem?

In the case of Rita’s, the franchise location near me has about 5 parking spots, which on a normal day is too few. I was concerned about the crowds. On the Wawa give-away day, I forgot about it as the day wore on. That made me wonder what other people do when they learn that retailers are giving away their products. So, having access to a web-based research panel (a huge perk of my job), I asked 485 people about it. And here are the 4 things I learned:

...

In my previous post I applauded Matthew Futterman’s suggestion that two key changes to baseball’s rules will produce a shorter, faster-paced game, one that will attract younger viewers. While I may not be that young, I’m certainly on-board with speeding up the game. I believe that faster-paced play will lead to greater engagement, and greater engagement will lead to greater enjoyment.

In some sense this is similar to our position on marketing research methods. We want to engage our respondents because the more focused on the task they become, the more considered their responses will be. One of our newer tools, Bracket™, allows respondents to prioritize a long list of items in a tournament-style approach. Bracket™ has respondents make choices among items, and as the tournament progresses the choices become more relevant (and hopefully more enjoyable).

Meanwhile, back to baseball. The rule changes Futterman suggests are very simple ones:

Once batters step into the box, they shouldn't be allowed to step out. Otherwise it's a strike.

If no one is on base, pitchers get seven seconds to throw the next pitch. Otherwise it's a ball.

...

Sandy Hingston wrote an article appearing in the March 2014 Philadelphia Magazine about Millennials’ lack of interest in history, specifically as it relates to baseball (read abridged version here). Later in the article, she quotes Matthew Futterman, who posited in the Wall Street Journal that two key changes to baseball’s rules would produce a shorter, faster-paced game that will attract more youngsters. This notion didn’t sit well with Sandy Hingston.

But it did sit well with me. Very well, in fact. I’m a Boomer like Hingston, not a Millennial, but I find myself increasingly frustrated by things that, put simply, take too long. Baseball is one of them. In fact, my TV viewing of the Phillies (go Phils!) decreased as my TV viewing of another professional sport was on the rise: golf.

Anybody who watches golf on TV, or attends an event live, will attest that players can take a very long time in between shots, which is essentially the same criticism lobbed at pitchers who take too long between throws. Slow-play in golf is a hot topic, and the golf powers-that-be are quite willing to put players “on the clock” for taking their good sweet time. So to be fair, both sports are grappling with this issue.

A first or second round of professional golf will take the better part of a day to televise. A 9-inning baseball game, in contrast, lasts around 3 hours. Given the disparity between how long each event takes, one would think that I, as someone interested in fast action, would prefer watching baseball. But that’s just not the case.

This got me thinking about an issue that we grapple with in market research: respondent tedium. Long attribute batteries of low personal relevance can tax a respondent’s patience. Even being compensated doesn’t always overcome the glaze that forms over their eyes when faced with mundane, repetitive tasks. That’s why we do our best to keep respondents engaged by having them make choices (our Bracket™ technique is a good example of this). In Bracket™, the choices become more relevant as the task progresses – not unlike how play at the end of a close game or match becomes more exciting to the viewer.

...
Recent comment in this post
  • Ed Olesky says:
    Very interesting, Michele. I'll be looking forward to reading Part 2. Hope you are doing well! Maryann

As most anyone living on the East Coast can attest, the winter of 2013-2014 was, to put it nicely, crappy. Storms, outages, freezing temperatures…. We had a winter the likes of which we haven’t experienced in a while. And it wasn’t limited to the East Coast – much of the US had harsher conditions than normal.

Here in the office we did a lot of complaining. I mean a lot. Every day somebody would remark about how cold it was, how their kids were missing too much school, how potholes were killing their car’s suspension… if there was a problem we could whine about, we did.

Now that it’s spring and we’re celebrating the return of normalcy to our lives, we wonder… just what was it about this past winter that was the absolute worst part of it? Sure, taken as a whole it was pretty awful, but what was the one thing that was the most heinous?

Fortunately for us, we have a cool tool that we could use to answer this question. We enlisted the aid of our consumer panel and our agile and rigorous product Message Test Express™ to find the answer. MTE™ uses our proprietary Bracket™ tool, which takes a tournament approach to prioritizing lists. Our goal: find out which item associated with winter was the most egregious.

Our 200 participants had to live in an area that experiences winter weather conditions, believe that this winter was worse or the same as previous winters, and have hated, disliked or tolerated it (no ski bums allowed).

...
Recent comment in this post
  • Ed Olesky says:
    Now this is a research topic relevant to all!
