Brand Perceptions

Is the Mini Cooper seen as an environmentally friendly car? What about Tesla as a luxury car? The traditional approach to understanding these questions is to conduct a survey among Mini and Tesla buyers (and perhaps non-buyers too, if budget allows). Such studies have been conducted for decades and often involve ratings of multiple attributes and brands. While certainly feasible, they can be expensive and time-consuming, and their results grow outdated over time. Is there a better way to get at attribute perceptions of brands that is fast, economical and automated?

Aron Culotta and Jennifer Cutler describe such an approach in a recent issue of the INFORMS journal Marketing Science, and it involves the use of social media data – Twitter, in this case. Their method is novel because it does not use conventional (if one can use that term here) approaches to mining textual data, such as sentiment analysis or associative analysis. Sentiment analysis (social media monitoring) provides reports on positive and negative sentiments expressed online about a brand. In associative analysis, clustering and semantic networks are used to discover how product features or brands are perceptually clustered by consumers, often using data from online forums.

Breaking away from these approaches, the authors use an innovative method to infer brand perceptions from online data. The key insight (drawn from well-established social science findings) is that proximity in a social network can be indicative of similarity. That is, by measuring how closely a brand is connected to exemplar organizations for a given attribute, it is possible to devise an affinity score that shows how highly the brand scores on that attribute. For example, when a Twitter user follows both Smart Car and Greenpeace, it likely indicates that Smart Car is seen as eco-friendly by that person. This does not have to be true for every such user, but at “big data” levels the association is likely strong enough to extract signal from the noise.
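The core computation is simple to sketch. The snippet below is a minimal illustration, not the authors' actual method: it assumes we already have sets of follower IDs for a brand and for a few attribute exemplars (all names and numbers here are hypothetical), and scores the brand by the average share of its followers who also follow each exemplar.

```python
def affinity(brand_followers, exemplar_follower_sets):
    """Average fraction of a brand's followers who also follow
    each exemplar organization for the attribute of interest."""
    if not brand_followers or not exemplar_follower_sets:
        return 0.0
    overlaps = [
        len(brand_followers & exemplars) / len(brand_followers)
        for exemplars in exemplar_follower_sets
    ]
    return sum(overlaps) / len(overlaps)

# Toy follower-ID sets (purely illustrative):
smart_car = {1, 2, 3, 4, 5, 6}
greenpeace = {2, 3, 4, 10, 11}
sierra_club = {3, 4, 5, 12}

# How "eco-friendly" does Smart Car look, given these exemplars?
eco_score = affinity(smart_car, [greenpeace, sierra_club])
```

The real method operates on millions of followers and normalizes for account popularity, but the intuition is the same: shared audiences stand in for shared perceptions.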

What is unique about this approach is that it does not depend on what people say online (as other approaches do). It relies only on who follows a brand while also following another (exemplar) organization. The strength of the social connection becomes a signal of the brand’s strength on a specific attribute. “Using social connections rather than text allows marketers to capture information from the silent majority of brand fans, who consume rather than create content,” says Jennifer Cutler, who teaches marketing at Northwestern University’s Kellogg School of Management.

Sounds great in theory, right? But how can we be sure that it produces meaningful results? By validating it against the trusted survey data that has been used for decades. When tested across more than 200 brands in four sectors (Apparel, Cars, Food & Beverage, Personal Care) and three perceptual attributes (Eco-friendliness, Luxury, Nutrition), the method achieved an average correlation of 0.72 with survey measures, showing that social connections can provide very good information on how brands are perceived. Unlike a survey, this approach can be run continuously and at low cost, with results delivered in near real time. And there is another advantage. “The use of social networks rather than text opens the door to measuring dimensions of brand image that are rarely discussed by consumers in online spaces,” says Professor Cutler.
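Validation of this kind boils down to correlating the social-network scores with survey ratings across brands. A minimal sketch with made-up numbers (none of these are the paper's data):

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-brand scores on one attribute:
survey_ratings = [7.1, 4.2, 8.0, 5.5, 6.3]        # from a traditional survey
social_scores = [0.61, 0.30, 0.72, 0.45, 0.50]    # from follower overlap

r = pearson(survey_ratings, social_scores)
```

A high correlation across many brands is what justifies substituting the cheap, continuous social measure for the expensive periodic survey.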

...

I’ve become a huge fan of podcasts, downloading dozens every week and listening to them on the drive to and from work. The quantity and quality of material available is incredible. This week another podcast turned me on to eBay’s podcast “Open for Business”. Specifically, the title of episode three, “Price is Right”, caught my ear.
While the episode was of more use to someone selling a consumer product than to someone selling professional services, I got a lot out of it.
First off, they highlighted their “Terapeak” product, which offers free information culled from the massive data set of eBay buyers and sellers. In this episode they showed how you can use it to figure out how the market values products like yours, demonstrating the idea that you should price on a “value” basis rather than a “cost plus” basis.
From there they talked about how positioning matters and gave a glimpse of a couple of market research techniques for pricing. In one case, it seemed like they were using the Van Westendorp Price Sensitivity Meter. The results indicated a range of prices far below where they wanted to price the product. This led to a discussion of positioning (in this case, the product was an electronic picture frame that they hoped to position not as a consumer electronics product but as home décor). The researchers did nothing to position the product, so consumers compared it to an iPad, which led to the unfavorable view of pricing.
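For readers unfamiliar with it, the Van Westendorp Price Sensitivity Meter asks each respondent four price questions (too cheap, a bargain, getting expensive, too expensive) and reads off price points where the cumulative answer curves cross. The sketch below is a stripped-down illustration using only the two “too” questions, with invented dollar answers, to locate the crossing often labeled the optimal price point:

```python
def frac_at_least(values, p):
    """Share of respondents whose answer is at or above price p."""
    return sum(v >= p for v in values) / len(values)

def frac_at_most(values, p):
    """Share of respondents whose answer is at or below price p."""
    return sum(v <= p for v in values) / len(values)

# Invented answers to two Van Westendorp questions (dollars):
too_cheap = [8, 12, 10, 15, 9, 11]        # "so cheap you'd doubt the quality"
too_expensive = [14, 20, 12, 25, 16, 18]  # "so expensive you wouldn't buy"

grid = range(1, 41)  # candidate prices to scan
# The "too cheap" share falls as price rises; "too expensive" rises.
# Their crossing is the classic "optimal price point".
opp = next(p for p in grid
           if frac_at_most(too_expensive, p) >= frac_at_least(too_cheap, p))
```

A full analysis would use all four curves (yielding an acceptable price range, not just a point), but the mechanics are exactly this kind of cumulative-curve crossing.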
Finally, they talked to another researcher who said she uses a simple “yes/no” technique: essentially, “Would you buy it for $XYZ?” She said this matched the marketplace better than asking people to “name their price”.
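The yes/no approach (essentially a Gabor-Granger-style exercise) is easy to tabulate: each respondent sees a price and answers yes or no, the purchase rate at each price traces a demand curve, and multiplying through by price suggests a revenue-maximizing point. A sketch with fabricated responses:

```python
from collections import defaultdict

# Fabricated (price asked, answered "yes"?) pairs:
responses = [
    (10, True), (10, True), (10, False),
    (15, True), (15, False), (15, False),
    (20, True), (20, False), (20, False), (20, False),
]

counts = defaultdict(lambda: [0, 0])  # price -> [yes count, total asked]
for price, yes in responses:
    counts[price][1] += 1
    if yes:
        counts[price][0] += 1

demand = {p: y / n for p, (y, n) in counts.items()}   # purchase rate by price
revenue = {p: p * rate for p, rate in demand.items()}  # expected revenue index
best_price = max(revenue, key=revenue.get)
```

Real studies would use far larger cells per price point, but the appeal of the method is visible even here: the question mimics an actual shelf decision, and the analysis is a simple tabulation.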
Of the two methods cited, I tend to go with the latter. Any reader of this blog knows that I favor questions that mimic the marketplace over strange questions you wouldn’t consider in real life (“What’s the most you would pay for this?”). Of course, there are plenty of choices that were not covered, including conjoint analysis, which I think is often the most effective means to set prices (see our White Paper, How to Conduct Pricing Research, for more).
Still, there was much that we as researchers can take from this. As noted, it is important to frame things properly. If the product will be sold in the home décor department, it is important to set the table along those lines and not allow the respondent to see it as something else. I have little doubt that if the Van Westendorp questions had been preceded by proper framing and messaging, the results would have been different.
I also think we should make more use of big data tools like Terapeak and Google Analytics. Secondary research has never been easier! In the case of pricing research, knowing the range of prices being paid now can provide a good guide to the range of prices to include in, say, a Discrete Choice exercise. This is true even if the product has a new feature not currently available. Terapeak lets you view prices over time, so you can see the impact of the last big innovation, for example.
Overall, I commend eBay for their podcast. It is quite entertaining and provides a lot of useful information…especially for someone starting a new business.


TRC is proud to announce that it was voted one of the top 50 most innovative firms on the market research supplier side. We’re big believers in advancing the business of research, and we’re excited that the GRIT study recognized that.

Our philosophy is to engage respondents using a combination of advanced techniques and better interfaces. Asking respondents what they want or why without context leads to results that overstate real preferences (consumers, after all, want “everything”) and often miss what is driving those decisions (Behavioral Economics tells us that we often don’t know why we buy what we buy).

Through the use of off-the-shelf tools like MaxDiff and the entire family of conjoint methods, we can better engage respondents AND gather much more actionable data. With these tools and some of our own innovations, like Bracket™, we can efficiently measure real preferences and use analytics to tell us what is driving them.

Our ongoing long-term partnerships with top academics at universities throughout the country also help us stay innovative. By collaborating with them, we are able to develop new approaches that better unlock what drives consumers.

The GRIT study tracks which supplier firms are perceived as most innovative within the global market research industry. It is essentially a brand tracker using ‘innovation’ as the key attribute, with answers gathered on an unaided basis. The survey asks respondents to list the top three research companies they consider innovative, then to rank those companies from least to most innovative, and finally to explain why they consider them innovative. Given the unaided nature of the study, it is quite an achievement for a firm like TRC to make the same list as firms hundreds of times our size.

...

If you open your mailbox today, chances are there will be a catalog in it. Even with the explosion in online purchasing, paper catalogs remain an important part of the retail marketing mix. Whether they spur traditional mail or telephone ordering or, more often now, online purchasing and even foot traffic in brick-and-mortar stores, catalogs remain critical for retailers. They not only show consumers what is available; they also serve as an important branding tool.
Even if the recipient does not open or thoroughly review a catalog, its cover, its size and the kind of paper it is printed on can all telegraph meaning about the sender's brand.
But isn't there much more to be gained if the consumer does open the catalog?

How Can Marketers Maximize the Likelihood that a Catalog Is Opened?

Based on an online survey among a panel of consumers nationwide, TRC estimates that the average household receives 3.7 catalogs per week. That is nearly 200 over the course of a year!
So how can catalog marketers break through the mailbox clutter and inspire consumers to look at what is actually inside their materials? We asked our national panel about some factors that influence their decisions to open (or not open) a catalog they receive. A key learning is something catalog marketers would certainly confirm: targeting is critical. Product interest and perceived need account for a large share of the decision to open a catalog, so getting the catalog to the right person is of course essential.
But once the catalog is in the right mailbox, it is clear that what the recipient sees on its cover will be important in whether or not the catalog is opened. First and foremost is the specific offer (sale, percent off, etc.) highlighted on that cover. Cover imagery also plays a role, particularly if the brand is familiar to the recipient.  
Take a look at the accompanying chart, and note that we asked some respondents to think about catalogs they might receive from familiar companies, while others considered catalogs from companies they had not heard of before. All of those answering had indicated earlier in the survey that they receive and open/look through catalogs in a typical week.

[Chart: Catalog cover testing results]

Leveraging Consumer Research in Catalog Cover Selection

Knowing that the cover can be so important in whether a catalog is opened, TRC believes it is well worth devoting resources to ensure that the right cover is used. While some catalog marketers test multiple covers prior to full mail launches, it is impractical to test more than a few. Those few are typically selected from a broader set based on “gut feel” or the simple preferences of the design team.
But what if there were an efficient, consumer-data-driven method to select a “winning” cover from among a broad set of candidates? TRC has developed just such a method: our approach leverages our proprietary Bracket™ survey technology to submit a large number of cover designs to a tournament-style evaluation that yields rankings and relative distances across the entire set of designs. An even more streamlined approach, Message Test Express™ (MTE™), can provide similar insights for up to 16 cover designs, in around a week and at a cost of approximately $10,000.
Considering the volume that any catalog must compete against in the typical recipient’s mailbox, isn’t it worth maximizing the likelihood that the catalog will be opened? In our experience, concise, consumer-driven metrics on likely success are superior to “gut feel” evaluations and certainly more affordable than in-market testing of even a small number of options. Why risk missing a great opportunity by overlooking the optimal cover execution?


So I certainly do not follow politics closely, even during a presidential election year, which I guess could also be read as: I don’t know very much about politics. But that small disclaimer aside, watching the news coverage of the recently completed Iowa Caucuses and the upcoming New Hampshire primary, something struck me as peculiar about this process. These events happen in succession, not simultaneously. First comes the Iowa Caucus, then the New Hampshire primary, followed by the Nevada and South Carolina primaries, and so on with the other states. And after each event is held, the results are (almost) immediately known. So the folks in New Hampshire know the outcome from Iowa. The folks in Nevada and South Carolina know the outcomes from Iowa and New Hampshire.

Doesn’t this lead to inherent and obvious bias? That’s the market researcher in me talking. In implementing questionnaires, we wouldn’t typically reveal the results from previous respondents to those taking the survey later. That would surely have some influence on their answers that we wouldn’t want. We need as clean and pure a read as surveys allow on consumer opinions and attitudes. Any deviation from this would surely compromise our data.

But then again, is this always the case? Could there be situations in which some purposeful informational bias is beneficial? I say yes! Granted, one needs to be cautious and thoughtful when exposing respondents to prior information, but sometimes, in order to get the specific type of response we want, a little bias is helpful. If asking about a particular product or product function, we may provide an example or guide so respondents can fully understand the product, e.g., 10 GB of storage holds X number of movies and X number of songs.

But circling back to the notion of letting respondents see the answers of previous respondents, even within the same survey: this can be quite helpful in priming folks to start thinking creatively. If we wish to gather creative ideas from consumers, it is easy enough to ask them outright to jot something down. But it is difficult to come up with new and creative ideas on the fly without much help, and the responses we get from such tasks bear that out: many are nonsense or short, dull answers. Instead, we can show a respondent several ideas that have come up previously, either internally or from earlier respondents, to jumpstart the thinking process; they can then edit or add to an existing idea, or be stimulated to come up with their own unique idea. And the truth is, it works! We at TRC implement exactly this new product research technique, with great success, in our Idea Mill™ solution, and end up with many creative and unique ideas that our client companies use to move forward.

So while the presidential process strikes me as odd, since any votes cast in states following the Iowa Caucus may be inherently biased, there are opportunities where this sort of predisposition to information can work in our favor.

