Not entirely surprisingly, he's gone and annoyed some folks. This time it's looking serious. The Add Health data series interviews high schoolers in three waves, making a nice panel data set. One question asks the interviewer to rate the respondent's attractiveness on a five-point scale. Satoshi ran some regressions on attractiveness and found racial differences in means after correcting for possible confounds like weight; black women, but not black men, were rated as less attractive in the surveys. He then speculated about whether testosterone levels might account for the result. His blog post, as usual, was pretty blunt about what he'd found; it's mirrored here as it's now been pulled.
The pile-on has been pretty brutal. He's been called a racist for finding data suggesting black women are less attractive than white or Asian women; I'm not sure whether he's a racist also for finding data suggesting that there are no big racial differences in attractiveness among men.
Here's Huffington calling him a racist.
Lindsay Beyerstein is less than charitable in her interpretation of Kanazawa's stats. She gets the last wave of Add Health data and says that the difference disappears by Wave Four, raising troubling questions about Kanazawa's bias. I'd say it's rather more likely that he just had the first three waves' data sitting on his hard drive; getting the fourth wave would have been a pain in the arse for a blog post, so he just used the data at hand.
Hank Campbell is no more generous, with lots of snarky scare quotes about what factor analysis is. Because three interviewers rated respondent attractiveness at different points in time, you need to draw some summary stat out of the three observations. I'd have just gone with a straight average, maybe weighted towards the latter waves when the respondents were older. Kanazawa ran a factor analysis instead. The difference between the two isn't going to be great - factor analysis will try to extract some common underlying measure from the three observations, making the weights across waves endogenous. But Campbell likes to say 'factor analysis' with the scare quotes to make it seem dodgy.
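For the curious, here's a rough sketch in Python of how little the choice tends to matter when the three ratings track the same underlying trait. The data are simulated, not Add Health's, and the first principal component stands in for a proper one-factor model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: 500 respondents, each rated 1-5 by a different
# interviewer in each of three waves. The three ratings share a common
# "true attractiveness" component plus wave-specific noise.
n = 500
true_attr = rng.normal(0, 1, n)
ratings = np.clip(
    np.round(3 + true_attr[:, None] + rng.normal(0, 0.7, (n, 3))), 1, 5
)

# Option 1: a wave-weighted average, weighting the later waves more
# heavily (hypothetical weights, chosen only for illustration).
weights = np.array([0.2, 0.3, 0.5])
avg_score = ratings @ weights

# Option 2: a single common factor. The first principal component of the
# standardized ratings serves as a cheap stand-in for a one-factor model;
# its loadings play the role of data-driven (endogenous) weights.
z = (ratings - ratings.mean(axis=0)) / ratings.std(axis=0)
_, _, vt = np.linalg.svd(z, full_matrices=False)
loadings = vt[0] * np.sign(vt[0].sum())   # orient so loadings are positive
factor_score = z @ loadings

# When all three ratings load on one trait, the two summaries are
# nearly interchangeable.
r = np.corrcoef(avg_score, factor_score)[0, 1]
print(f"correlation between average and factor score: {r:.3f}")
```

The correlation between the two summary measures comes out very close to one, which is the point: nothing about choosing factor analysis over an average is inherently dodgy.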
The first thing I'd thought when I saw the controversy was that OK Cupid recently put up data noting that black women get far fewer messages from other OK Cupid members than they ought to; this was potentially consistent with Kanazawa's story (or with other equally plausible ones). But Campbell calls Hontas Farmer a racist for citing that data.
Now, Huliq reports Kanazawa's lost his blog slot at Psychology Today (one wonders how long Walter Block will last).
Scientific American wonders whether Add Health should be collecting data on interviewer-rated attractiveness:
"I am disturbed by the fact that the Add Health study's adult researchers even answered the question of how attractive they rated these youth."

Never mind that a ton of research on kids' social capital would draw on measured attractiveness as a potential explanatory variable; apparently it's better to make things unknowable than to risk disturbing findings.
The Daily Mail insinuates that Kanazawa's a racist for his prior work suggesting IQ might be responsible for some poor health outcomes in Africa, and cites an LSE colleague calling for his firing.
"It is not the first time that Dr Kanazawa, 48, a lecturer within the department of management at the LSE, has been accused of peddling racist theories."

Here's Linda Gottfredson on IQ and health; here's Garrett Jones on IQ and economic outcomes.
In 2006 he published a paper suggesting the poor health of some sub-Saharan Africans is the result of low IQ, not poverty.
Professor Paul Gilroy, a sociology lecturer at the LSE, said: ‘Kanazawa’s persistent provocations raise the issue of whether he can do his job effectively in a multi-ethnic, diverse and international institution.
‘If he announces that he thinks sub-Saharan Africans are less intelligent than other people, what happens when they arrive in his classroom?’
He added: ‘The LSE risks disrepute if it fails to take a view of these problems.’
Britton at Scientific American, linked above, raises a lot of better questions about whether Kanazawa's findings would stand up to more thorough investigation; so does Robert Kurzban. But it was a freaking blog post! Blog posts are where you put up initial data exploration and speculation to bat things around and see whether it's worth more thorough investigation. If you disagree with the analysis on a blog post, you write up your own post on why you think it was wrong or how it could be done better (like Kurzban); calling for Kanazawa's firing borders on a witch hunt.
The LSE beclowns itself if it sanctions Kanazawa for this particular blog post.
Michael Mills's "Seven Things Satoshi Kanazawa Cannot Blog About" is a must-read.
Update: I've read Gelman's critique in more depth. Gelman's an excellent statistician. But some of the criticisms lodged there would apply to a reasonably high proportion of published empirical work. Endogeneity issues are everywhere; damning everyone who's ever had potential endogeneity / reverse causality problems in their published work would be a bit broad. And failing to adjust significance tests for the number of potential comparisons (as a guard against data mining) - I have a hard time thinking of many published pieces that have done that other than the metastudies that say we can't trust any empirical work.
Gelman's specific (and not at all unreasonable) worry on data mining is that Kanazawa's work on whether more attractive couples have more daughters tests whether the most attractive couples have more daughters than all others; equally plausible would be tests of whether the least attractive couples had the fewest daughters, the top two categories of attractive couples against the rest, and so on. XKCD summarized the problem here. But subsequent work with a different data set found the same result; matching the prior paper's result via data mining would then have taken mining across different datasets until finding the one that gave the best match, and I'm not sure there are all that many datasets that include attractiveness data.
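Gelman's worry is easy to demonstrate by simulation. The sketch below uses made-up data and hypothetical cut-points: trying four equally plausible splits of a five-point attractiveness scale, each at the nominal 5% level, yields a "significant" difference well over 5% of the time even when no real difference exists at all:

```python
import numpy as np

rng = np.random.default_rng(1)

# Under the null: attractiveness category (1-5) has no relation to the
# chance of a daughter. We try several equally plausible cut-points
# ("top category vs rest", "top two vs rest", ...) and count how often
# at least one comparison looks significant at the 5% level.
def any_significant(n=2000, n_sims=2000, z_crit=1.96):
    hits = 0
    for _ in range(n_sims):
        category = rng.integers(1, 6, n)      # interviewer rating, 1-5
        daughter = rng.random(n) < 0.5        # null: p = 0.5 everywhere
        for cut in (1, 2, 3, 4):              # four possible splits
            hi = daughter[category > cut]
            lo = daughter[category <= cut]
            p1, p2 = hi.mean(), lo.mean()
            se = np.sqrt(p1*(1-p1)/len(hi) + p2*(1-p2)/len(lo))
            if abs(p1 - p2) / se > z_crit:
                hits += 1
                break
    # A Bonferroni guard would instead demand the critical value for
    # alpha/4 (about 2.50) before declaring any split significant.
    return hits / n_sims

rate = any_significant()
print(f"family-wise false-positive rate with 4 cut-points: {rate:.3f}")
```

The family-wise false-positive rate lands noticeably above the nominal 5%, which is exactly the data-mining hazard Gelman flags and exactly what an adjusted significance test is meant to guard against.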
Finally, Gelman (2007) critiques Kanazawa's earlier (2005) work for missing that there are potential problems in using number of daughters on the left hand side of a regression equation and number of sons on the right if some couples use a stopping rule that aims at particular ratios. But Kanazawa's 2007 piece recognizes that issue. I'm not sure whether the prior pieces' results were sensitive to this specification issue, but I'm also not sure it's right to say, as Gelman implies, that Kanazawa then went on to do other work without taking due account of critics' views of prior work.
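The stopping-rule problem is also easy to see in a toy simulation; the rule below ("stop at the first son, or at four children") is my own illustration, not Kanazawa's actual specification. Even when every birth is a fair coin flip, the rule mechanically generates a strong negative relationship between counts of sons and daughters:

```python
import numpy as np

rng = np.random.default_rng(2)

# Each birth is a fair coin flip: no real link between parents and
# child sex. Couples follow a stopping rule: keep having children
# until the first son arrives, up to a maximum of four children.
def simulate_family():
    sons = daughters = 0
    while sons == 0 and sons + daughters < 4:
        if rng.random() < 0.5:
            sons += 1
        else:
            daughters += 1
    return sons, daughters

data = np.array([simulate_family() for _ in range(5000)])
sons, daughters = data[:, 0], data[:, 1]

# OLS slope of daughters on sons. Biology says zero; the stopping
# rule alone manufactures a strongly negative association.
slope = np.cov(sons, daughters)[0, 1] / np.var(sons)
print(f"slope of daughters on sons under a pure stopping rule: {slope:.2f}")
```

The slope comes out large and negative despite sex being a fair coin at every birth, which is why putting sons on the right-hand side of a daughters regression is hazardous when family composition is partly chosen.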