Showing posts with label quality of policy reports.

Tuesday, 24 November 2009

First rule of social policy advocacy


The first rule of social policy advocacy is to get yourself a really Big Number. The bigger and badder your problem appears, the better. Whether it makes any sense is beside the point.

This explains why some poverty groups still cling to the deliberate deception of before-tax poverty rates. Any poverty rate that ignores the role of taxes and transfers in redistributing income, as before-tax calculations do, only tells half the story. But advocates use it rather than the after-tax figure because it makes poverty look bigger.
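A toy example makes the gap concrete. The numbers below are entirely invented for illustration; the point is only that transfers lift some households above the poverty line, and a before-tax measure never sees that:

```python
# Hypothetical households: (market income, net taxes paid, transfers
# received). All figures are invented for illustration only.
poverty_line = 20_000

households = [
    (12_000, 0, 9_000),    # transfers lift this household over the line
    (18_000, 500, 4_000),  # and this one
    (35_000, 6_000, 0),
    (60_000, 15_000, 0),
    (15_000, 0, 3_000),    # still poor on either measure
]

def poverty_rate(incomes):
    """Share of households below the poverty line."""
    return sum(1 for y in incomes if y < poverty_line) / len(incomes)

before_tax = poverty_rate([m for m, t, tr in households])
after_tax = poverty_rate([m - t + tr for m, t, tr in households])

print(f"before-tax poverty rate: {before_tax:.0%}")  # 60%
print(f"after-tax poverty rate:  {after_tax:.0%}")   # 20%
```

Same households, same poverty line; the before-tax figure is simply the Bigger Number.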

The same statistical obfuscation appears to be going on in housing policy. See Peter Shawn Taylor's article on problems in the advocacy stats for social housing in Canada. It turns out that the vast majority of households counted in the advocacy stats are simply those spending more than 30% of their income on housing. By that standard, I have a housing affordability problem too! Somebody subsidize me!
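To see why the 30% threshold is such a blunt screen, here is a minimal sketch with invented numbers: the rule flags any household spending over 30% of income on housing, regardless of how much income remains afterwards:

```python
# Toy illustration (invented figures) of the 30%-of-income rule used in
# the advocacy stats: it flags the ratio alone, not actual hardship.
def flagged(income, housing_cost, threshold=0.30):
    """True if housing eats more than `threshold` of income."""
    return housing_cost / income > threshold

# A high earner choosing an expensive home is "unaffordable" right
# alongside a genuinely stretched low-income renter.
print(flagged(200_000, 70_000))  # True -- yet $130k remains after housing
print(flagged(25_000, 10_000))   # True -- only $15k remains
```

Both households land in the Big Number, which is exactly the problem Taylor identifies.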

Friday, 1 May 2009

External refereeing and policy reports

In the academic world, papers submitted to journals go to external referees. The journal editor will usually pick somebody familiar with the general methodology, but will try to avoid picking folks with obvious conflicts with the author. If your paper builds on someone else's work, that person might be asked to referee, but a second referee would also be asked whether the methodology chosen is appropriate to the task at hand. So, if your paper builds on the seminal model by Professor X, referees will also generally include someone other than Professor X who can reasonably comment on the appropriateness of the X model in this context.
"The report was upfront that it was commissioned research, what its aims were, that it was conducted independently, and that it was externally peer reviewed by leading academics in this field."

So, who then were the external referees on the BERL report on the social costs of alcohol use? The acknowledgments thank Professor David Collins and Professor Helen Lapsley for their work as external referees. Who does BERL cite as providing their basic methodological approach? Professor David Collins and Professor Helen Lapsley. Some relevant quotes from the report:
Aside from medical drug use, other drug consumption is routinely presented in the literature as misuse or having harmful impacts only. This tends to reflect the absence of evidence for the non-medical health benefits from the consumption of other drugs (Ridolfo and Stevenson 2001). Collins and Lapsley (2008), for example, has “no problem in using the term ‘abuse’ when referring to the consumption of… illegal drugs”. The authors argue, “in the case of illegal drugs, by definition, society has decided to proscribe their consumption, with the implication that any consumption is abuse.”(p.8)
Any illegal drug use is assumed to be harmful, reflecting the absence of evidence for the non-medical health benefits from the consumption of illegal drugs (Ridolfo and Stevenson 2001). This approach is also consistent with the approach used in recent Australian social cost estimates (Collins and Lapsley 2002, 2008). (p.9)
This study focuses on a broad range of costs covering personal, economic, and wider social impacts. These costs are collectively denoted by the term ‘social costs’ in this report. This focus is consistent with that presented in Collins and Lapsley (2008). Collins and Lapsley gives a “comprehensive economic definition” of harmful drug use costs:
The value of the net resources which in a given year are unavailable to the community for consumption or investment purposes as a result of the effects of past and present drug abuse, plus the intangible costs imposed by this abuse.
Our definition assumes a counterfactual situation in which no harmful drug use has occurred. The range of costs included in this study is detailed in Appendix Table 1. The inclusions and exclusions are compared to the range of costs found in Collins and Lapsley (2008), BERL (2008a) and other drug misuse cost studies. (p.10)
I won't bore you by continuing further. In short, one could reasonably view the BERL report as a NZ application of a methodology applied in Australia by Collins and Lapsley. Not that there's anything necessarily wrong with that, but then claiming that the paper is externally refereed when those referees are Collins and Lapsley...well, that's not external refereeing. Maybe that's standard practice over in the consulting world, but it sure isn't the same thing as academic notions of external refereeing. Over on the academic side, a journal using that kind of refereeing practice would be deemed a mutual admiration society.

Update: None of the Collins and Lapsley papers cited by BERL are themselves published in refereed journals; as best I can tell from the bibliography, they're all government reports. This just keeps getting better, doesn't it?