Saturday, October 3, 2015

Some thoughts after attending a conference in Copenhagen

I just got done with a very nice conference in Copenhagen on grammar vs lexicon.

One thing that struck me afresh in the talks I heard and the people I spoke to there is that scientists feel compelled to hold or stand for a theoretical position. People often build their careers around a position they hold, and then proceed to defend it no matter what data comes their way. But doing science is really a forecasting problem: your job is to come up with a prediction of what will happen if a particular experiment is run.

The way we do science, however, is as follows. We first find out what the experiment showed. Then we make the "prediction" based on our favorite theory. Researchers routinely use the word "prediction" even when they already know the outcome of the experiment. If this were a weather forecasting problem, it would be like publishing the probability of rain yesterday. Of course you would get everything right! It is because of this unfortunate tendency to predict after the fact that people are so confident about their theories and positions. After-the-fact prediction gives an illusion of being right all the time.
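To make the weather analogy concrete, here is a toy sketch (my own illustration, not from the post or the book) using the Brier score, a standard measure of probabilistic forecast accuracy, where lower is better. An honest forecaster states probabilities before seeing the outcomes; a post-hoc "forecaster" who announces the outcome after seeing it scores a perfect zero, which tells us nothing about their theory:

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)

outcomes = [1, 0, 0, 1, 1]               # did it rain on each day? (1 = yes)
honest   = [0.7, 0.2, 0.4, 0.6, 0.9]     # probabilities stated in advance
post_hoc = [float(o) for o in outcomes]  # "predicted" after seeing the data

print(brier_score(honest, outcomes))     # imperfect (about 0.092), as real forecasts are
print(brier_score(post_hoc, outcomes))   # 0.0: always "right", and uninformative
```

The honest forecaster's nonzero score is what lets us compare theories; the post-hoc forecaster's perfect score is exactly the illusion described above.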

I just read a great review of a book on forecasting by the greatest reviewer I have ever encountered on the web: RK, of RK's musings fame.

He discusses a book, Superforecasters, in which the author lays out the qualities of a good forecaster. I quote from the blog almost verbatim:

  • Good back-of-the-envelope calculations
  • Starting with the outside view, which reduces anchoring bias
  • Only after the outside view, getting a grip on the inside view
  • Looking out for various perspectives on the problem
  • Thinking three or four times, and deeply, to root out confirmation bias
  • It's not the raw crunching power you have that matters most. It's what you do with it.

And here is another quote from the blog, which itself is a quote from the book:

Unpack the question into components. Distinguish as sharply as you can between the known and unknown and leave no assumptions unscrutinized. Adopt the outside view and put the problem into a comparative perspective that downplays its uniqueness and treats it as a special case of a wider class of phenomena. Then adopt the inside view that plays up the uniqueness of the problem. Also explore the similarities and differences between your views and those of others, and pay special attention to prediction markets and other methods of extracting wisdom from crowds. Synthesize all these different views into a single vision as acute as that of a dragonfly. Finally, express your judgment as precisely as you can, using a finely grained scale of probability.

And finally, RK also excerpts a composite portrait of a good forecaster from the book.

Scientists in psycholinguistics tend to be the exact opposite of the good forecaster.

They hunker down and defend one position to the death, never never never backing down in the face of counterevidence, never entertaining multiple alternative theories simultaneously, never expressing any self-doubt (at least not publicly) that their favorite position might be wrong. Whenever we write papers, we end up converging on what we claim is the most plausible explanation for the result we have found. We never end on an equivocation, because that would mean rejection from the top journal we have submitted our paper to.

If anyone other than me is reading this blog: read RK's original review of the book, Superforecasters, maybe read the book too (I know I will), and then think about what's wrong with the way you are doing science, because it is bass-ackwards. We are terrible forecasters, and there's a damn good reason for it!