I'm on the plane heading back from ESOMAR. I found the diversity of opinions and ideas shared there both interesting and thought-provoking. Over the next couple of blogs I'll share my thoughts on what I took away from the event.

First off, gaming: no subject divides researchers more. Several presentations showed tests that used game elements to engage respondents. One effort by MSI created a sort of fantasy backdrop in which players answered questions to earn things they would need on their game quest. The idea was to engage respondents and, with that, get better data. Sadly, the results didn't back that up. The data did not vary much from a standard survey (specifics are available on the ESOMAR site), though respondents who played the game were more engaged. At the same time, response rates were lower (loading time put some people off, and some had no interest in the game). It's easy enough to theorize that the mistake here was that the game was a sort of reward for doing the survey rather than part of it. As such, it did little to engage the respondent with the questions themselves.

GMI did a series of things to bring "game elements" into a normal survey. Simple things: telling people their answers were timed, or, instead of asking an awareness question straight out, showing a screen with, say, 10 boxes on it and telling respondents that each box held the name of a type of product, that they would earn a point for each one they guessed correctly, and that after three wrong guesses they would move on. These produced not only greater engagement (and enjoyment for the respondent) but also more data. They rightly received an award for their work.

I was reminded of a technique we've used for years called "Smart Incentives". I never thought of it as a "gaming element", but to some extent it is. Basically, instead of asking a question like "What features could the bank add to make their checking accounts better?" straight out, we preface it by saying, "On this question we are going to judge the answers given...the three best ideas will win $50." As with GMI's work, this has proven to drive greater depth and creativity in responses.

So, as a believer that game elements have a future in research, I am encouraged by these tests and hopeful that we'll all build on them. As I mentioned, though, opinion about gaming is nowhere near universal. The biggest concern is "how do we know these data are better?" In many cases I think it is obvious from looking at the data (the checking-account example of Smart Incentives being one). But the real test, as another presenter pointed out, will be whether these data (combined with any other available data) lead to better direction for our clients. If so, the debate will be settled in short order.