I work in a business that depends heavily on email. We use it to ask and answer questions, share work product, and engage our clients, vendors, co-workers and peers on a daily basis. When email goes down – and thankfully it doesn't happen that often – we feel anything from mildly annoyed to downright panic-stricken.
So business email is ubiquitous. But not everyone follows the same rules of engagement – which can make for some very frustrating exchanges.
We assembled a list of 21 "violations" we have experienced (or committed) and set out to determine which ones are considered the most bothersome.
We administered our Bracket™ prioritization exercise to research panelists who say they use email for business purposes, to determine which email scenario is the "most irritating"....
You are planning a trip to the City of Brotherly Love to visit the world-famous Philadelphia Flower Show, and would like to book a hotel near the Convention Center venue. If you’re like most people, you go online, perhaps to TripAdvisor or Expedia, and look for a hotel. In a few clicks you find a list of hotels with star ratings, prices, amenities, distance to destination – everything you need to make a decision. Quickly you narrow your choice down to two hotels within walking distance of the Flower Show, both conveniently located near the historic Reading Terminal Market.
But how to choose between two that seem so evenly matched? Perhaps some review comments might provide more depth? There are hundreds of comments – more than you have time for – but you quickly read a few on the first page. You are about to close the browser when you notice something. One of the hotels has responses to some of the negative comments. Hmmm…interesting. You decide to read the responses, and see some apologies, a few explanations and general earnestness. No such responses for the other hotel, which now begins to seem colder and more distant. What do you do?
In effect, that’s the question Davide Proserpio and Georgios Zervas seek to answer in a recent article in the INFORMS journal Marketing Science. And it’s not hard to see why it’s an important question. Online reviews can have a significant impact on a business, and unlike word of mouth they tend to stick around for years (just take a look at the dates on some reviews). Companies can’t do much to stop reviews (especially negative ones), so they often try to co-opt them by responding to selected reviews. It is a manual task, but the idea seems sound. By responding, perhaps they can take the sting out of negative reviews, appear contrite, promise to do better, or just thank the reviewer for taking the time to write the feedback – all with the objective of getting prospective customers to give them a fair chance. The question then is whether such efforts are useful or just more online clutter.
It turns out that’s not an easy question to answer, and as Proserpio and Zervas document in the article, there are several factors that first need to be controlled. But their basic approach is easy enough to understand – they examine whether TripAdvisor ratings for hotels tend to go up after management starts responding to online reviews. An immediate problem to overcome, ironically enough, is management response of a different kind. That is, in reaction to bad reviews a hotel may actually make changes that then increase future ratings. That’s great for the hotel, but not so much for the researcher, who wants to know whether the response to the online review had an impact, not whether the hotel made real improvements in response to the review. So, that’s an important factor that needs to be controlled. How to do that?
Enter Expedia. As it happens, hotels frequently respond to TripAdvisor reviews while they almost never do so on Expedia. So the researchers use Expedia as a control cell and compare the before-after difference in ratings on TripAdvisor and Expedia (the difference-in-differences approach). This lets them tease out whether the improvement in ratings came from responding to reviews or from real changes. Another check they use is to compare the ratings of guests who left a review shortly before a hotel began responding with those who did so shortly after. Much of the article is actually devoted to several more clever and increasingly complex maneuvers they use to finally tease out just the impact of management responses. What do they find?...
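To make the core idea concrete, here is a minimal sketch of the difference-in-differences logic in Python. The ratings are made up for illustration; the paper's actual estimation controls for far more.

```python
# Minimal difference-in-differences (DiD) sketch with made-up ratings.
# TripAdvisor is the "treated" platform (the hotel responds to reviews);
# Expedia is the control (hotels almost never respond there).

# Average star ratings before and after the hotel began responding.
tripadvisor_before, tripadvisor_after = 3.8, 4.1   # treated platform
expedia_before, expedia_after = 3.7, 3.8           # control platform

# Change on each platform over the same period.
tripadvisor_change = tripadvisor_after - tripadvisor_before
expedia_change = expedia_after - expedia_before

# The Expedia change captures real improvements the hotel made;
# subtracting it isolates the effect of responding to reviews.
did_estimate = tripadvisor_change - expedia_change

print(f"Estimated effect of management responses: {did_estimate:+.2f} stars")
```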
The Economist did an analysis of political book sales on Amazon to see if there were any patterns. Anyone who uses social media will not be surprised that readers tended to buy books from either the left or the right...not both. This follows an increasing pattern of people looking for validation rather than education, and of course it adds to the growing divide in our country. A few books managed a good mix of readers from both sides, though often these were books in which the author found fault with his or her own side (meaning a conservative trashing conservatives or a liberal trashing liberals).
I love this use of big data, and I hope it will lead some people to seek out facts and opinions that differ from their own. These facts and opinions need not completely change an individual's own thinking, but at the very least they should give one a deeper understanding of the issue, including an understanding of what drives others' thinking.
In other words, hopefully the public will start thinking more like effective market researchers.
We could easily design research that validates the conventional wisdom of our clients:
• We can frame opinions by the way we ask questions or by the questions we ask beforehand.
• We can omit ideas from a max-diff exercise simply because our "gut" tells us they are not viable.
• We can design a discrete choice study with features and levels that play to our client's strengths.
• We can focus exclusively on results that validate our hypothesis.
Do people buy green products? Yes, of course. The real question for green marketers is whether they buy enough. In other words, are green sales in line with pro-green attitudes? Not really, as huge majorities of consumers show at least some green tendencies while purchases lag far behind. Why is that? Economics tells us that consumers buy based on value (trading off cost and benefits). Since eco-friendly products are seen as being more expensive, higher prices can lower the value of a green product enough to make a conventional alternative more attractive.
While the cost trade-off is clear, it is not the only one. The benefit side has at least two major components. One is the environmental benefit, which may or may not seem tangible enough to make a difference. For instance, a dozen eggs at Acme goes for less than a dollar, while some cage-free varieties can run north of $4 at Whole Foods. So, an environmentally conscious consumer has to make a trade-off at the time of purchase – is the product worth the additional cost? For items like food, the benefits may seem small enough, and far enough out, that many may decide the value proposition does not work for them. In other product categories (say, green laundry detergent), the benefits may seem both long term and impersonal, making the trade-off even harder.
The second major component is the effectiveness of the product in performing its basic function. If consumers perceive green products as inherently inferior (in terms of conventional attributes like performance), they are less likely to buy them. So a green laundry detergent (that uses less harsh chemicals) could be seen as more expensive and less effective in cleaning clothes, further dropping its overall value. (A complicating issue is that the lack of effectiveness itself could be a perceptual rather than real problem). Unless the company is able to offset these disadvantages, the product is unlikely to succeed.
A direct way to increase demand is to offer higher performance on a compensatory attribute. In the case of LED TVs, for example, newer technology consumes less power and provides better picture quality. (Paradoxically, this can sometimes lead to the Rebound Effect, whereby greener technologies encourage higher use, thus clawing back some of the benefits). But in reality, most products are not in a position where green attributes offer performance boosts.
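To make the trade-off concrete, here is a toy compensatory value model in Python. The weights and scores are invented for illustration, not drawn from any survey or from a specific product category.

```python
# Toy compensatory value model (weights and scores are illustrative).
# Value = weighted benefits minus weighted price; a green product must
# offset its higher price with eco benefit and/or performance.

def value(performance, eco_benefit, price,
          w_perf=1.0, w_eco=0.3, w_price=0.5):
    return w_perf * performance + w_eco * eco_benefit - w_price * price

conventional = value(performance=8, eco_benefit=0, price=4)  # 6.0
green = value(performance=7, eco_benefit=8, price=8)         # 5.4

print(f"Conventional: {conventional:.1f}  Green: {green:.1f}")
# At this mainstream weight on eco benefit, the conventional product
# wins; a buyer with w_eco=0.8 would flip the comparison.
```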
And of course, as with every other market, there are segments in this one as well. Consumers who are highly committed (dark green) are willing to buy, as the value they place on the longer-term environmental benefits is high enough. And often they are affluent enough to afford the price. But a product looking for mainstream success cannot succeed only with dark green consumers (who rarely account for more than 20% of the market). Other shades of green will also need to buy. Short of government subsidies and mandates, green marketers have to find ways to balance out the components of the value proposition for the bulk of the market....
TRC is celebrating 30 years in business…a milestone to be sure.
Being a numbers guy, I did a quick search to see how likely it is for a business to survive 30 years. Only about 1 in 5 make it to 15 years, but there isn’t much data beyond that. Extrapolation beyond the available data range is dangerous, but it seems likely that fewer than 10% of businesses ever get to where we are.
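For the curious, here is the back-of-the-envelope version of that extrapolation. It assumes a constant year-over-year exit rate, which is an assumption on my part rather than anything in the data.

```python
import math

# Back-of-the-envelope extrapolation assuming a constant hazard (exit)
# rate -- an assumption, since the cited data stops at 15 years.
survival_15yr = 0.20                    # ~1 in 5 businesses reach 15 years

hazard = -math.log(survival_15yr) / 15  # implied annual exit rate (~10.7%)
survival_30yr = math.exp(-hazard * 30)  # equals 0.20 ** 2 under this model

print(f"Implied 30-year survival: {survival_30yr:.1%}")  # 4.0%, under 10%
```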
To what do I owe this success, then? It goes without saying that building strong client relationships and having great employees are critical. But I think three things are key to having both:
I’ve always felt that researchers need to be curious, and I’d say the same for entrepreneurs. Obviously being curious about your industry will bring value, but even curiosity about subjects with no obvious tie-in can lead to innovation. For example, by learning more about telemarketing I discovered digital recording technology and applied it to our business to improve quality....
So much has been written about conducting research for new product development. Not surprisingly so, as this is an area of research almost every organization, new or old, faces day in and day out. As market research consultants, we deal with it all the time and thought it would be beneficial to provide our audience with our own recommendations for some useful sources that explain conjoint analysis – a method most often used when researching new products and conducting pricing research.
This is a relatively brief article from Sawtooth Software, the makers of software used for conjoint, that explains the basics of the method. The paper uses a specific example of golf balls to make it easy to understand.
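To give a flavor of what conjoint actually produces, here is a minimal ratings-based example in Python. The golf ball attributes, levels and ratings below are invented for illustration and are not taken from the Sawtooth paper; real studies typically use choice-based designs and far more data.

```python
import numpy as np

# Toy ratings-based conjoint for golf balls (made-up data).
# Attributes: brand (A vs B) and price ($20 vs $30 per dozen).
# Each row is a product profile, dummy-coded as
# [intercept, brand_B, price_$30].
profiles = np.array([
    [1, 0, 0],   # brand A, $20
    [1, 0, 1],   # brand A, $30
    [1, 1, 0],   # brand B, $20
    [1, 1, 1],   # brand B, $30
])
ratings = np.array([8.0, 5.5, 7.0, 4.5])  # one respondent's 1-10 ratings

# A least-squares fit decomposes the ratings into part-worth utilities.
partworths, *_ = np.linalg.lstsq(profiles, ratings, rcond=None)
intercept, brand_b, price_30 = partworths

print(f"Brand B vs. A: {brand_b:+.2f}")   # negative => prefers brand A
print(f"$30 vs. $20:   {price_30:+.2f}")  # magnitude => price sensitivity
```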
I recently heard an old John Oliver comedy routine in which he talked about a product he'd stumbled upon...a floating barbecue grill. He hilariously makes the case that it is nearly impossible to find a rationale for such a product, and I have to agree with him. Things like that can make one wonder if in fact we've pretty well invented everything that can be invented.
A famous quote attributed to Charles Holland Duell makes the same case: "Everything that can be invented has been invented." He headed the Patent Office from 1898 to 1901, so it's not hard to see why he might have felt that way. It was an era of incredible invention that took a world largely driven by human and animal power and turned it into one in which engines and motors changed everything.
It is easy for us to laugh at such stupidity, but I suspect marketers of the future might laugh at the notion that we live in a particularly hard era for new product innovation. In fact, we have many advantages over our ancestors of 100+ years ago. First, the range of possibilities is far broader. Not only do we have fields that didn't exist then (such as information technology), but we also face new challenges that they couldn't anticipate – for example, coming up with greener ways to deliver the same or better standard of living.
Second, we have tools at our disposal that they didn't have. Vast data streams provide insight into the consumer mind that Edison couldn't dream of. Of course, I'd selfishly point out that tools like conjoint analysis or consumer-driven innovation (using tools like our own Idea Mill™) make innovation easier still.
The key is to use these tools to drive true innovation. Don't just settle for slight improvements to what already exists....great ideas are out there....
We are now officially two months into 2017, which means it’s time to keep up with those New Year’s resolutions. Resolutions can be difficult to attain in both personal and professional settings. Recently, I stumbled upon an article by Crawford Hollingworth, an interesting read about behavioral science and its effect on attaining New Year’s resolutions. As I was reading, I realized the suggestions for preparing resolutions also relate to the process of preparing a market research study. The four steps recommended in the article are: Make a plan, Substitute new behavior for old, Make it easy, and Make only one New Year’s resolution. My view on how these strategies relate to market research is as follows:
The first step of the market research journey is to make an action plan. Figure out what the objective of your research is going to be – what do you want to know, and from whom do you want insight? Next, consider the methods that will yield the most meaningful and useful results for your research objective. Finally, put together a schedule that covers every aspect of the research, including questionnaire design, fielding the survey, data delivery and reporting the findings.
In the grand scheme of market research methodologies, there are plenty of approaches that will provide the results needed to make powerful decisions about your product or service. Of course, it is normal to want to stick to what you know, and market research isn’t much different. However, methodologies continue to evolve and can provide findings in various ways. For example, TRC has developed methodologies such as Message Test Express™, Idea Mill™ and Bracket™, along with other solutions that are increasingly popular in the research we conduct. This is an opportunity to be creative and try methodologies that have been tested and offer proven results, allowing you to view research findings from an alternative perspective.
To get reliable results from your research, it is best to start with the questionnaire design. Plan the design with the end in mind, then work your way to the front; if you consider what you want to know first, the questions themselves will come together easily. This will allow you to easily interpret and analyze data during the final reporting stages. Within the survey itself, avoid questions that are overly complicated or time-consuming for respondents. Make sure the questions make sense and the instructions are clear and concise, so that respondents can quickly grasp what you are asking of them.
A colleague of mine, Rajan Sambandam, offered an interesting insight during a recent meeting about the scope of market research studies: they can be “Broad and Shallow” or “Narrow and Deep.” That is, you can have a broad and shallow scope that yields less detailed findings about a larger set of topics, or a narrow and deep scope that yields an abundance of detailed findings about one topic. Instead of striving to accomplish both in one initiative, focusing on one or the other will provide the most meaningful and useful information to apply to your product or service....
Over the years our clients have increasingly looked to us to condense results. Their internal stakeholders often only read the executive summary and even then they might only focus on headlines and bold print. Where in the past they might have had time to review hundreds of splits of Max-Diff data or simulations in a conjoint, they now want us to focus our market research reporting on their business question and to answer it as concisely as possible. All of that makes perfect sense. For example, wouldn’t you rather read a headline like “the Eight Richest People in the World Have More Wealth than Half the World’s Population” than endless data tables that lay out all the ways that wealth is unfairly distributed? I know I would…if it were true.
The Economist did an analysis of the analysis that went into that headline-grabbing statement from Oxfam (a charity). The results reveal a number of flaws that are well worth understanding.
• They included negative wealth. Some 400 million people have negative wealth (they owe more than they own). So it takes lots of people with very low positive net worth to offset the negative wealth of these 400 million people…thus making the overall group much larger than it might have been (see the toy example after this list).
• For example, there are 21 million Americans with a combined net worth of negative $350 billion. Most of them are not people you would associate with being very poor…rather, they have borrowed money to make their lives better now with a plan to pay it off later.
• They were looking only at material wealth…meaning hard assets like property and cash. Even ignoring wealth like that of George Bailey (“The richest man in town!”), each of us possesses wealth in terms of future earning potential. Bill Gates will still have more wealth than a farmer in sub-Saharan Africa, but collectively half the world’s population has a lot of earnings potential....
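Here is a toy illustration, with entirely made-up numbers, of how including negative wealth skews a "bottom half" total.

```python
# Toy illustration (made-up numbers) of how negative wealth skews the
# "bottom half of the population" total used in the Oxfam comparison.
wealth = [1_000_000, 200_000, 80_000, 5_000, 2_000, 1_000, 0, -50_000]
wealth.sort()  # ascending, so the "poorest" come first

bottom_half = wealth[: len(wealth) // 2]  # bottom 4 of 8 people
print(f"Bottom-half total: {sum(bottom_half):,}")
# -47,000: one borrower's debt wipes out three people's savings, so the
# "bottom half" looks poorer than its members' circumstances suggest.
```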
Now we turn to the real question: why aren't consumers recycling on a more consistent basis? Again we turned to our online consumer research panel and asked those with curbside recycling access who don't recycle regularly a simple question: Why not? What behaviors and attitudes can Recyclers act upon to educate their customers and encourage more recycling?
Well, like any complex problem, there's no one single answer. Lack of knowledge of what's recyclable and being unsure how to get questions answered play a big part (28%). Recyclers can raise awareness through careful and consistent messaging.
But just as significant as knowledge is basic laziness (29%). Sorting your recycling from your trash takes effort, and not everyone is willing to expend the energy. Recyclers may not be able to motivate them, but another concern is addressable, and that's scheduling – having trash and recycling pick-up on different days can discourage consumers from recycling (15%).
Another challenge is forgetfulness. Some folks are willing to recycle, but it slips their mind to do so (25%).
Education could help promote a feeling of responsibility and elevate recycling's importance:
• I don't feel that whether or not I recycle makes a difference (14%)
• Recycling isn't important to me (10%)
• I'm not convinced recycling helps the environment (8%)
In a recent survey we conducted among pet owners, we asked about microchip identification. We found that cat owners and dog owners are equally likely to say that having their pet microchipped is a necessary component of pet ownership. That’s the good news.
The bad news is that when it comes time to do it, the majority haven’t taken that precaution. 69% of the cat owners and 64% of the dog owners we surveyed say they haven’t microchipped their companion.
Why is microchipping so important? Petfinder reports that the American Humane Association estimates over 10 million dogs and cats are lost or stolen in the US every year, and that 1 in 3 pets will become lost at some point during their lifetime. ID tags and collars can get lost or removed, which makes microchip identification the best tool shelters and vets have to reunite pets with their owners.
One barrier to microchipping is cost – it runs in the $25 to $50 range for dogs and cats. Not a staggering amount, but pet ownership can get expensive – with all the “stuff” you need for your new friend, this can be a cost some people aren’t willing to bear. Vets, shelters and rescue groups sometimes discount their pricing when the animal is receiving other services, such as vaccines. Which raises the question: if vets want their patients to be microchipped, what’s the best way for them to price their services to make this important service more likely to be included?
It seems that pet microchipping would benefit from some pricing research. Beyond simply lowering the price, bundle offers may hold more appeal than a la carte. Then again, a single package price may be so high that it dissuades action altogether. Perhaps financing or staggered payments would help. And of course, discounts on other services, or on the service itself, may influence their decision. All of these possibilities could be addressed in a comprehensive pricing survey. We could use one of our pricing research tools, such as conjoint, to achieve a solid answer....
Is the Mini Cooper seen as an environmentally friendly car? What about Tesla as a luxury car? The traditional approach to answering these questions is to conduct a survey among Mini and Tesla buyers (and perhaps non-buyers too, if budget allows). Such studies have been conducted for decades and often involve ratings of multiple attributes and brands. While certainly feasible, they can be expensive and time-consuming, and the results can get outdated over time. Is there a better way to get at attribute perceptions of brands – one that is fast, economical and automated?
Aron Culotta and Jennifer Cutler describe such an approach in a recent issue of the INFORMS journal Marketing Science, and it involves the use of social media data – Twitter, in this case. Their method is novel because it does not use conventional (if one can use that term here) approaches to mining textual data, such as sentiment analysis or associative analysis. Sentiment analysis (social media monitoring) provides reports on positive and negative sentiments expressed online about a brand. In associative analysis, clustering and semantic networks are used to discover how product features or brands are perceptually clustered by consumers, often using data from online forums.
Breaking away from these approaches, the authors use an innovative method to understand brand perceptions from online data. The key insight (drawn from well-established social science findings) is that proximity in a social network can be indicative of similarity. That is, by measuring how closely a brand is connected to exemplar organizations for a given attribute, it is possible to devise an affinity score that shows how highly the brand scores on that attribute. For example, when a Twitter user follows both Smart Car and Greenpeace, it likely indicates that Smart Car is seen as eco-friendly by that person. This does not have to be true for every such user, but at “big data” levels there is likely to be a strong enough association to extract signal from the noise.
What is unique about this approach to using social media data is that it does not really depend on what people say online (as other approaches do). It relies only on who is following a brand while also following another (exemplar) organization. The strength of the social connection becomes a signal of the brand’s strength on a specific attribute. “Using social connections rather than text allows marketers to capture information from the silent majority of brand fans, who consume rather than create content,” says Jennifer Cutler, who teaches marketing at Northwestern University's Kellogg School of Management.
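As a rough illustration of the co-follower idea, here is a toy affinity score in Python. The follower sets and the simple overlap formula are invented for illustration; the authors' actual scoring method is more sophisticated.

```python
# Toy co-follower affinity score (illustrative only; the published
# method is more sophisticated). Follower IDs below are made up.

def affinity(brand_followers: set, exemplar_followers: set) -> float:
    """Share of a brand's followers who also follow the exemplar."""
    if not brand_followers:
        return 0.0
    return len(brand_followers & exemplar_followers) / len(brand_followers)

# Hypothetical follower ID sets pulled from the Twitter follow graph.
smart_car = {1, 2, 3, 4, 5, 6}
greenpeace = {2, 3, 5, 9, 10}   # exemplar for eco-friendliness
luxury_house = {7, 8, 9}        # hypothetical exemplar for luxury

print(f"Eco affinity:    {affinity(smart_car, greenpeace):.2f}")    # 0.50
print(f"Luxury affinity: {affinity(smart_car, luxury_house):.2f}")  # 0.00
```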
Sounds great in theory, right? But how can we be sure that it produces meaningful results? By validating it against the trusted survey data that has been used for decades. When tested across 200+ brands in four sectors (Apparel, Cars, Food & Beverage, Personal Care) and three perceptual attributes (Eco-friendliness, Luxury, Nutrition), an average correlation of 0.72 shows that social connections can provide very good information on how brands are perceived. Unlike a survey, this approach can be run continuously and at low cost, with results available in real time. And there is another advantage. “The use of social networks rather than text opens the door to measuring dimensions of brand image that are rarely discussed by consumers in online spaces,” says Professor Cutler....
The surprising result of the election has lots of people questioning the validity of polls…how could they have so consistently predicted a Clinton victory? Further, if the polls were wrong, how can we trust survey research to answer business questions? Ultimately, even sophisticated techniques like discrete choice conjoint or max-diff rely on these data, so this is not an insignificant question.
As someone whose firm conducts thousands and thousands of surveys annually, I thought it made sense to offer my perspective. So here are five reasons that I think the polls were “wrong” and how I think that problem could impact our work.
1) People Don’t Know How to Read Results
Most polls had the race in the 2-5 point range, and the final tally had it nearly dead even (Secretary Clinton winning the popular vote by a slight margin). At the low end, this range is within the margin of error. At the high end, it is not far outside of it. Thus, even if everything else were perfect, we would expect that the election might well have been very close.
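For a sense of the numbers, here is the standard 95% margin-of-error calculation for a proportion. The sample size of 1,000 is a typical national poll size, chosen for illustration rather than taken from any specific poll.

```python
import math

# Standard 95% margin of error for a proportion: 1.96 * sqrt(p*(1-p)/n).
n = 1000   # typical national poll size (illustrative)
p = 0.5    # worst case, maximizing the margin

moe = 1.96 * math.sqrt(p * (1 - p) / n)  # ~3.1 points per candidate

# The margin on the *gap* between two candidates is roughly double,
# so a 2-5 point lead is consistent with a near-even final result.
print(f"Margin of error: +/-{moe:.1%} per candidate, "
      f"about +/-{2 * moe:.1%} on the lead")
```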
I always dread the inevitable "What do you do?" question. When you tell someone you are in market research you can typically expect a blank stare or a polite nod, so you must be prepared to offer further explanation. Oh, to be a doctor, lawyer or auto mechanic – no explanation necessary!
Of course, as researchers, we grapple with this issue daily, but it is not often we get to hear it played out on major news networks. After one of the debates, I heard Wolf Blitzer on CNN arguing (yes, arguing) with one of the campaign strategists about why the online polls being quoted were not "real" scientific polls. Wolf's point was that because the Internet polls being referenced came from a self-selected sample, their results were not representative of the population in question (likely voters). Of course, Wolf was correct, and it made me smile to hear this debated on national TV.
A week or so later I heard an even more in-depth consideration of the same issue. The story was about how the race was breaking down in key swing states. The poll representative went through the results for key states one by one. When she discussed Nevada, she raised a red flag about interpreting the poll (which had one candidate ahead by 2 percentage points). She explained that it is difficult to obtain a representative sample in Nevada due to a number of factors (odd work hours, a transient population, a large Spanish-speaking population). Her point was that they try to mitigate these issues, but any results must be viewed with a caveat.
Aside from my personal delight that my day-to-day market research concerns are newsworthy, what is the take-away here? For me, it reinforces how important it is to do everything in our power to ensure that for each study our sample is representative. The advent of online data collection, the proliferation of cell phone use and do-it-yourself survey tools may have made the task more difficult, but no less important. When doing sophisticated conjoint, segmentation or max-diff studies, we need to keep in mind that they are only as good as the sample that feeds them.