I heard a great episode of the “You Are Not So Smart” podcast in which Sam Arbesman talked about his book “The Half-Life of Facts”. The book has nothing to do with “truthiness”, “fake news” or any accusation that someone is or is not a liar, but it does provide some context for the world we live in.
The book’s title is taken from a scientific term (the time it takes an isotope to lose half of its radioactivity) and the notion that as we learn more, some things we took as “fact” will turn out to be wrong. Newton’s laws, for example, were supplanted by Einstein’s relativity. The point of the book is not that we shouldn’t bother learning facts, but rather that we should be open to the possibility that they might be wrong. Modern medicine acknowledges that it doesn’t know everything and that some things it “knows” will prove to be false. At the same time, physicians must treat patients based on what is known, or thought to be known, today.
It got me thinking about our business. What is the half-life of facts here? You might be tempted to take comfort in the fact that things like margin of error have not changed. While technically true, this ignores that academia is facing a crisis of confidence over statistically significant findings that don’t hold up in subsequent studies. One cause is that researchers run lots of cuts of the data, look for anything statistically significant, and then build a rationale around that finding, ignoring that with so many cuts they are likely to find some statistical noise. Don’t we run the same risk with each additional banner we run?
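To see how easily noise masquerades as a finding, here is a minimal sketch (the group sizes, number of cuts, and 0.5 base rate are all hypothetical, chosen only for illustration). It compares pairs of groups drawn from the same population, so every “significant” difference is a false positive; with 40 cuts at the conventional 5% threshold, a couple typically appear by chance alone.

```python
import random

random.seed(42)

# Hypothetical illustration: 40 "banner cuts", each comparing two
# groups drawn from the SAME population, so any "significant"
# difference is pure statistical noise.
N = 200          # respondents per group (assumed)
CUTS = 40        # number of banner comparisons (assumed)
Z_CRIT = 1.96    # two-tailed 5% threshold for a z-test of proportions

false_positives = 0
for _ in range(CUTS):
    # Both groups share the same true proportion (0.5).
    a = sum(random.random() < 0.5 for _ in range(N)) / N
    b = sum(random.random() < 0.5 for _ in range(N)) / N
    p = (a + b) / 2                       # pooled proportion
    se = (2 * p * (1 - p) / N) ** 0.5     # standard error of the difference
    if se > 0 and abs(a - b) / se > Z_CRIT:
        false_positives += 1

print(f"{false_positives} of {CUTS} cuts look 'significant' by chance")
```

On average about 5% of such cuts cross the threshold, which is exactly the trap: the more banners we run, the more of these phantom differences we will find and be tempted to explain.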
There is a known problem with Discrete Choice Conjoint that is often ignored. If a product is made up of, say, eight features with three levels each and one feature with 150 levels, the importance of the 150-level feature will be overstated by the model. Still, the model will run, utilities will be calculated and a simulator can be constructed…all of which provide a sense of precision that is not warranted. A researcher who knows about this will guide the client either by changing the design or by putting the results into their proper perspective. There are many other ways a complex model like this can produce skewed results, and I have little doubt more will be found in the future.
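A toy simulation makes the number-of-levels effect concrete. This is a hedged sketch, not a conjoint estimator: it assumes part-worth utilities that are pure noise (no real preference at all) and uses a common importance score, the spread between an attribute’s best and worst level. Even with zero signal, the attribute with more levels looks far more “important” simply because the maximum of more random draws is larger.

```python
import random

random.seed(0)

# Hypothetical sketch of the number-of-levels effect: every
# part-worth utility below is pure noise, yet an attribute's
# importance score (max utility minus min utility) grows with
# its number of levels, inflating its apparent weight.
def noise_importance(n_levels, trials=2000):
    """Average importance score for an attribute whose level
    utilities are independent standard-normal noise."""
    total = 0.0
    for _ in range(trials):
        utils = [random.gauss(0, 1) for _ in range(n_levels)]
        total += max(utils) - min(utils)
    return total / trials

small = noise_importance(3)     # a 3-level attribute
large = noise_importance(150)   # the 150-level attribute
print(f"3 levels: {small:.2f}, 150 levels: {large:.2f}")
```

The 150-level attribute scores roughly three times higher than the 3-level ones despite carrying no information, which is why the simulator’s apparent precision is unwarranted unless the design or the interpretation accounts for it.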
This is not to say that we can’t trust results. Doctors have to treat patients based on what is known today, and we must do the same for our clients. The important thing is to acknowledge that we still have things to learn. As researchers, that should be easy for us…