Monday 14 December 2009

Growthgate and Climategate

William Easterly notes parallels between climate research and research on economic growth.
There were three steps in the great History of Evolving Cluelessness:
  1. Economists spent the past two decades trying every possible growth determinant in sight. They found evidence for 145 different variables (according to an article published in 2005). That was a bit too many in a sample of only about one hundred countries. What was happening was that there would be evidence for Determinants A, B, C, and D when each was tried one at a time to explain growth, but the evidence for A would disappear when you also controlled for some combination of B, C, and D, and vice versa. (Interestingly enough, foreign aid never even merited inclusion in the list of 145 variables.)

  2. The Columbia economist Xavier Sala-i-Martin and co-authors ran millions of regressions on all possible combinations of 7 variables out of the many possible determinants of growth. Skipping a lot of technical detail, they essentially averaged out the millions of regressions to see which determinants had evidence for them in most regressions. There was hope: some were robust! For example, the idea that malaria prevalence hinders growth found consistent support.

  3. This new paper by Ciccone and Jarocinski finds that every time the growth data are revised, or the sample is changed to another equally plausible one, the results for the “robust” variables vanish and new “robust” variables appear. Goodbye, malaria; hello, democracy. Except the new “robust” determinants are no longer believable if minor differences between equally plausible samples change what is robust. So nothing is robust.
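The mechanics behind step 1 are easy to simulate. A minimal sketch (not Sala-i-Martin's actual averaging procedure, and using made-up noise data rather than any real growth dataset): with roughly a hundred observations and dozens of candidate regressors that are pure noise, one-at-a-time regressions will flag several "determinants" at the 5% level, and a minor revision of the data changes which ones get flagged.

```python
# Sketch of spurious "growth determinants" from searching many regressors
# in a small cross-country sample. All data here are synthetic noise;
# any variable flagged as significant is a false positive by construction.
import numpy as np

rng = np.random.default_rng(0)
n_countries, n_candidates = 100, 50

def significant_vars(growth, X, t_crit=1.98):
    """Indices of candidates whose univariate OLS slope has |t| > t_crit."""
    hits = []
    y = growth - growth.mean()
    for j in range(X.shape[1]):
        x = X[:, j] - X[:, j].mean()
        beta = (x @ y) / (x @ x)               # OLS slope, demeaned data
        resid = y - beta * x
        se = np.sqrt(resid @ resid / (len(y) - 2) / (x @ x))
        if abs(beta / se) > t_crit:
            hits.append(j)
    return hits

X = rng.normal(size=(n_countries, n_candidates))  # pure-noise "determinants"
growth = rng.normal(size=n_countries)             # growth unrelated to all of them

sample_a = significant_vars(growth, X)
# Perturb the outcome slightly, as a revised data release might:
growth_revised = growth + 0.3 * rng.normal(size=n_countries)
sample_b = significant_vars(growth_revised, X)

print("significant in original sample:", sample_a)
print("significant after minor revision:", sample_b)
```

At a 5% threshold with 50 noise candidates, a couple of spurious hits per sample are expected, and the two printed lists will generally not coincide, which is the Ciccone-Jarocinski pattern in miniature.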
There are two possible ways to describe what had happened over the past two decades:
  1. The growth research was at least partially fraudulent, in that we researchers were searching among many different econometric exercises till we got the “determinants of growth” we wanted all along.

  2. There was a good faith effort by us researchers to test different theories of growth, which led to some results. We didn’t realize until later that these results were not robust.
Description (1) would be a “GrowthGate,” but since so many people would be guilty (of “data mining”), and since we really can’t tell for any individual study or researcher whether it was (1) or (2), “GrowthGate” never became a story.
It's rather worrying if the Sala-i-Martin variables prove non-robust across new iterations of the Penn World Tables. Minor errors in data seem to blow the technique apart.

2 comments:

  1. ...give a thousand monkeys a thousand typewriters...

  2. @Rob: Somebody ran the experiment; the monkeys had destroyed the typewriters within days. So it was a test of the joint hypothesis that random typing leads to Shakespeare AND that the typewriters wouldn't be destroyed. Lousy Quine...
