In a very interesting article in The New Yorker, Jonah Lehrer asks the question: can truth wear off? So, what is “truth” and what is “wearing off”? In this case, truth is that which has been proven by the scientific method, i.e., experimentally. As the psychologist Jonathan Schooler discovered, experimental effects he had demonstrated very clearly started disappearing into the dreaded land of non-significance over time. That is the “wearing off” part. And it wasn’t just Schooler. Others have seen similar phenomena, where published studies, when replicated over time, have effectively lost their potency. This is a particularly troubling problem for medical science, and it shows up in practice in the number of drugs that are retroactively pulled off the market. As the medical researcher John Ioannidis has shown, there can be substantial harm to society from wrong (but well-publicized) results living on in the memories of doctors, who continue prescribing those drugs or treatments (hormone replacement therapy, daily low-dose aspirin) even after they have proven to be ineffective or harmful. So, the question is: how do effects disappear over time?



One culprit is small samples. When small samples are used there is plenty of variance in the data, which means that fluke results are a real possibility. When that happens, they can get published. Replications that get published may effectively be replicating the fluke, while those that get at the truth go unpublished: the well-known publication bias. Eventually so many studies are done (but not necessarily published) that the truth starts showing itself, i.e., there was no effect in the first place.
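
A minimal simulation makes the mechanism concrete. This is a sketch of my own, not from the article; the sample sizes, study counts, and the “publish only significant positives” rule are all assumptions chosen for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

N_STUDIES = 1000   # studies attempted (most never published)
N_PER_ARM = 15     # small sample in each group
# The true effect is exactly zero: any "finding" is a fluke.

published = []
for _ in range(N_STUDIES):
    treatment = rng.normal(0.0, 1.0, N_PER_ARM)
    control = rng.normal(0.0, 1.0, N_PER_ARM)
    t, p = stats.ttest_ind(treatment, control)
    # Publication filter: only significant positive results see print.
    if p < 0.05 and t > 0:
        published.append(treatment.mean() - control.mean())

print(f"Published: {len(published)} of {N_STUDIES} studies")
print(f"Mean published effect: {np.mean(published):.2f} (true effect: 0.00)")
```

The published literature shows a healthy-looking effect approaching a full standard deviation, even though the true effect is exactly zero; replications drawn from all attempts, not just the lucky ones, will inevitably drift back toward nothing.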


Another problem Lehrer points out is selective reporting. We are all familiar with that one. It is not fraud. But when a researcher has a hypothesis in mind, it becomes awfully hard not to be enamored with the data points that support it, and just as hard to give full weight to those that don’t, however subtle or unintentional the effect. This leads to the first publication, and then publication bias takes over. Without publicly and transparently stating exactly what one is setting out to find, this effect is hard to shake.
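
Here is a toy illustration of how well-intentioned, hypothesis-guided “outlier cleaning” can manufacture evidence from pure noise. Nothing below comes from the article; the sample sizes and trimming rule are invented, and the researcher is assumed to run a one-sided test in the direction of their hypothesis:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Pure noise: there is no real difference between the groups.
treatment = rng.normal(0.0, 1.0, 30)
control = rng.normal(0.0, 1.0, 30)

# Hypothesis: treatment scores are higher, so test one-sided.
_, p_all = stats.ttest_ind(treatment, control, alternative="greater")

# "Cleaning" guided by the hypothesis: drop the three lowest treatment
# scores and the three highest control scores as suspected outliers.
treatment_trim = np.sort(treatment)[3:]
control_trim = np.sort(control)[:-3]
_, p_trim = stats.ttest_ind(treatment_trim, control_trim,
                            alternative="greater")

print(f"p-value on all the data:  {p_all:.3f}")
print(f"p-value after 'cleaning': {p_trim:.3f}")
```

The exact numbers depend on the seed, but trimming only ten percent of the data in a hypothesis-friendly direction reliably pushes the p-value down, often all the way across the significance line.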

And then there is the big kahuna of randomness. It’s there, always. And it can have an effect even in the most carefully controlled experiments. As much as researchers would like to believe that they have controlled for all factors except the stimulus, the reality is that several unobserved variables can affect the results. Thus running one study with a small sample and then dashing off to publish the results inevitably leads to the wearing off of “truth”.
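
As a sketch of how a single unobserved variable can masquerade as a treatment effect (everything here, the lurking variable included, is invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

n = 60
# A lurking variable the experimenter never measures, say participant mood.
lurking = rng.normal(0.0, 1.0, n)

# Assignment is meant to be random, but the lurking variable nudges it:
# high-mood participants land in the treatment sessions a bit more often.
in_treatment = (lurking + rng.normal(0.0, 1.0, n)) > 0

# The outcome depends only on the lurking variable, never on the treatment.
outcome = lurking + rng.normal(0.0, 1.0, n)

t, p = stats.ttest_ind(outcome[in_treatment], outcome[~in_treatment])
print(f"Apparent 'treatment effect': t = {t:.2f}, p = {p:.4f}")
```

Proper randomization would break the link between assignment and the lurking variable, but with samples this small, even genuine randomization leaves such imbalances behind by chance alone.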


It’s a great article, well researched and well written. But what does it mean for us market researchers? If carefully controlled lab experiments can go so wrong, how much randomness and selective reporting is there in commercial surveys and other less rigorous data? What happens when we run scores of banners and hundreds of significance tests searching for differences? How real are those differences? Can we really form strong conclusions about consumer behavior based on such data?
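
A back-of-the-envelope simulation suggests an answer. Feed pure noise through a typical banner-table workflow (the respondent counts, question counts, and splits below are invented for illustration) and “findings” appear right on schedule:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

N_RESPONDENTS = 400
N_QUESTIONS = 50
N_BANNER_SPLITS = 6   # e.g. gender, age bands, region, income...

# A pure-noise "survey": every answer is random, so every subgroup
# difference we find is, by construction, not real.
answers = rng.normal(0.0, 1.0, (N_RESPONDENTS, N_QUESTIONS))

false_positives = 0
tests_run = 0
for _ in range(N_BANNER_SPLITS):
    split = rng.random(N_RESPONDENTS) < 0.5   # an arbitrary banner split
    for q in range(N_QUESTIONS):
        _, p = stats.ttest_ind(answers[split, q], answers[~split, q])
        tests_run += 1
        if p < 0.05:
            false_positives += 1

print(f"{false_positives} 'significant' differences out of {tests_run} "
      f"tests, every one of them noise.")
```

At the conventional 5% level, roughly one test in twenty comes up significant by chance alone, so three hundred tests hand us about fifteen publishable-looking differences conjured out of nothing.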