I presented to the Ministerial Forum on Alcohol Advertising and Sponsorship on Monday; the Forum put me in with the industry folks, though I'd have had a bit more fun had I been in the session with the public health presenters earlier in the morning.
Tuari Potiki, of the NZ Drug Foundation, opened by asking me a really good question. I didn't have a great answer on the spot, so I sent more considered thoughts on it to the Forum on getting home on Tuesday. Since blogging will continue to be light while I work up to, and past, a couple of deadlines, and so that no bit of writing goes to waste, I'll post it here as well.
Hi Anne,
Tuari asked me in Monday’s session why economists and public
health researchers can reach different conclusions about the same data or the
same studies. It’s been rolling around in my head since then; it was a damned
good question.
Here’s the start of an answer; feel free to share it with the
Forum.
The training for applied empirical economists is pretty
different from that in public health.
In health research, you do get a lot more randomised trials
where it’s easy to sort out causality. That is definitely far from being the
whole world of public health research, but it’s a pretty important part of it.
And, it forms the core of the background training, as I understand things, in
that discipline. If you have a large group of people and split them into two or
three treatment groups and a control using randomisation, differences across
groups are almost certainly going to be due to the treatment you apply. There
are good standard statistical tools for that kind of problem.
In economics, nobody really lets us play with experimental
treatments of that sort on national economies: we don’t get to run clinical
trials on economic policy. And, in microeconomic analysis, it’s pretty rare
that somebody would let us figure out the benefits of, say, schooling by
setting a control group to get no schooling. Instead, in pretty much every
case, we have to make inferences from messy data where we have absolutely no
guarantee of which way causality runs. Every bit of our training, then,
from intermediate level upwards, focuses on the really tricky problems in
trying to infer the effects of X on Y when people’s choices on X will, in part,
be determined by their underlying characteristics and where those underlying
characteristics also strongly affect Y. If we observe that higher X correlates
with higher Y, we always have to check what part of the effect is due to X, and
what part is due to the kinds of underlying characteristics that correlate with
both X and Y.
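To make the mechanics concrete, here's a tiny simulated sketch (in Python, with entirely made-up coefficients, not estimates from any real dataset): an unobserved trait drives both X and Y, and the naive estimate of the effect of X then lands well above the true value.

```python
import numpy as np

# Minimal sketch of confounding: an unobserved trait ("ability") drives both
# the choice of X and the outcome Y, so a naive regression of Y on X
# overstates the true effect. All coefficients here are made up.
rng = np.random.default_rng(0)
n = 100_000

ability = rng.normal(size=n)                       # unobserved characteristic
x = 0.8 * ability + rng.normal(size=n)             # X chosen partly because of it
y = 0.3 * x + 1.0 * ability + rng.normal(size=n)   # true effect of X on Y is 0.3

naive_slope = np.cov(x, y)[0, 1] / np.var(x)       # simple OLS slope of Y on X
print(f"naive estimate: {naive_slope:.2f} (true effect: 0.30)")
# Prints roughly 0.79: most of the measured "effect" is really the shared trait.
```

The naive estimate attributes to X what is really the work of the shared underlying trait; that is exactly the trap with observational data.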
Here’s a more concrete example of the kind of thing each and
every applied economist will be familiar with. Suppose we wanted to know how
much benefit there is from a kid’s getting an additional year of schooling.
Well, we could just look at differences in wages between kids completing only
NCEA Level 1 and those completing Level 2, Level 3, or various levels of tertiary
study. But if we said that difference were just due to the schooling, we’d be
entirely wrong: the kids who stop at NCEA Level 1 are different from the kids
who go on to graduate school for reasons other than just education. And so we
need things like differences across US States in the age at which you’re
allowed to drop out of school, or differences in the amount of school you get
by that age because of differences in dates of birth in combination with the
school calendar, to try to see what proportion of the effect is due to
schooling and what proportion is due to other underlying differences.
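Continuing the sketch above (same made-up setup, plus a hypothetical rule that shifts schooling but has no direct path to wages), here's roughly how that kind of instrument recovers the true effect where the naive comparison does not.

```python
import numpy as np

# Sketch of the instrumental-variables idea: a rule (like a compulsory
# schooling-age law) shifts X but has no direct channel to Y, so comparing
# Y and X through the rule isolates the schooling effect. Numbers are made up.
rng = np.random.default_rng(1)
n = 100_000

z = rng.binomial(1, 0.5, size=n)                   # e.g. stricter leaving-age rule
ability = rng.normal(size=n)                       # unobserved, as before
x = 0.5 * z + 0.8 * ability + rng.normal(size=n)   # schooling responds to the rule
y = 0.3 * x + 1.0 * ability + rng.normal(size=n)   # true return to schooling: 0.3

naive = np.cov(x, y)[0, 1] / np.var(x)             # biased upward by "ability"
iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]       # simple Wald/IV estimator
print(f"naive: {naive:.2f}   IV: {iv:.2f}   (true: 0.30)")
# The naive estimate lands near 0.77; the IV estimate lands near 0.30.
```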
Every reasonably-trained applied economist will have taken a
couple of courses at undergraduate level and a couple more at graduate level
that go through these complications in excruciating detail and the techniques
appropriate for dealing with non-experimental data. I’m sure there’s some
similar coursework in this stuff in public health, but it’s far less at the
core of the public health toolbox. It’s an add-on there; here in econ, it’s
baked-in.
So that would explain some of the differences in how economists
and public health people might read studies like Jones and Magee (2011). When I
read it, I see something potentially useful as a “how not to” example in
econometrics classes; public health people seem to like this kind of approach, though.
I’d be happy to list all the problems in that paper, but you might not want to
take my word for it. If you’d like, send a copy of it instead to whoever
teaches graduate applied econometrics at Otago (Steve Stillman), Canterbury
(Bob Reed or Andrea Menclova), or elsewhere (I don’t know who teaches Metrics
at each of the other departments). They’ll likely say the same thing I would:
failure to adjust their standard errors for the multiple comparisons problem,
strong potential for reverse causality given recall data, odd choices on
control variables (why do they control for friends’ drinking in the subsamples
but not in the aggregate sample?), and a strong likelihood that a fuller
set of controls would soak up much of the remaining effect currently attributed
to advertising. If they tell you the same general thing I do, you should put
less weight on submissions citing that paper as authority; if they don’t, put
less weight on mine (but please tell me if economists I respect disagree with
me on this, as I’d then need to check where I went wrong).
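For the multiple comparisons point in particular, a small simulated illustration may help (again Python, with made-up data that has nothing to do with the actual Jones and Magee variables): test enough unrelated outcomes at the 5% level and some will come up “significant” by chance alone, which is why unadjusted standard errors overstate the findings.

```python
import numpy as np
from scipy import stats

# Illustration of the multiple comparisons problem: one exposure measure tested
# against many outcomes that are, by construction, unrelated to it. Testing each
# at the 5% level still produces "significant" hits; a Bonferroni-style
# adjustment mostly removes them. Purely simulated data.
rng = np.random.default_rng(2)
n, n_outcomes = 500, 20

exposure = rng.normal(size=n)                   # e.g. recalled ad exposure
outcomes = rng.normal(size=(n_outcomes, n))     # outcomes with NO true link

pvals = np.array([stats.pearsonr(exposure, o)[1] for o in outcomes])
print("unadjusted 'significant' findings:", int((pvals < 0.05).sum()))
print("after Bonferroni adjustment:      ", int((pvals < 0.05 / n_outcomes).sum()))
# On average, about one in twenty unrelated outcomes clears the naive threshold.
```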
So that addresses some of the differences in how we’d approach
empirical studies.
But there is a broader difference. For the past several years,
I’ve been watching the data on alcohol consumption and harmful effects of
alcohol, seeing the continued decline in problems, and been utterly perplexed
by what seems to be a never-ending sequence of media hits by public health
scholars saying that New Zealand has some binge drinking crisis. To me, crisis
means that things are substantially worse than they’ve been previously, or
maybe that we’re way out of line relative to other comparable countries. And
neither of those is the case. So what explains the rhetoric? It’s a question I’ve thought about
for a few years and haven’t come to any great answer on, but here’s my current
thinking on it.
A lot of the people who work in the public health area have a
lot of front-line exposure to individuals and families that have just had
horrible times with alcohol abuse. It would be really really hard to spend a
lot of time with people who have very serious problems and not feel that
there’s a crisis, even if the overall statistics show reasonable improvement:
it’s a crisis for those families. The same laudable and wonderful empathy that
brings these researchers to work with these families also drives them to look
for policy fixes, however tenuous the link might be between the policy proposal
and any real outcome. Worse, some will come to view the alcohol industry as the enemy, so if a policy has some small chance of helping the families they deal with, any cost that might fall on either industry or moderate drinkers just seems trivial by comparison. And I worry that “Big Alcohol” also provides a convenient scapegoat, sparing them from having to work through the precise dysfunctions affecting the families they’re meeting. Finally, I
worry a lot that some of the constant “Binge Drinking Crisis” talk has a bit of
the Noble Lie to it: that pushing a line saying there’s a crisis builds public
support for the kinds of policies that they think might help the families
they’re trying to help, even if there really isn’t a crisis. And that last
one’s really risky: what happens when people start ignoring the really really
important and true things that doctors say about health because they’ve been
stretching things a bit on alcohol?
The above on the broader difference is just impressionistic; I
could easily be wrong. But it’s my current best guess, informed by a few years of interaction with the folks over in public health departments.
A final, and reasonably substantial, difference between
economists and public health researchers is that economists typically assess
policies relative to a broader conception of costs and benefits, weighing the benefits of a policy to those it helps against both the implementation costs and the costs to those it harms. So, on a policy like alcohol minimum pricing, economists would
consider the potential health benefits among alcoholics while also weighing the
losses imposed on moderate drinkers through higher prices. Public health
researchers instead tend to assess policy against a harm-minimisation standard
with perhaps some accounting for implementation costs but with no particular
consideration of the harms the policy might impose on others. Again, this will
come down to some differences in background. Traditionally, when public health
dealt mostly with communicable diseases, it would be pretty tough to point to
the benefits of, say, smallpox: we don’t need to consider the losses imposed on
people who enjoy smallpox if we’re thinking about vaccination policies. Nobody
enjoys smallpox. So measures of cost-effectiveness tend only to look at
how much the vaccine might cost and whether it saves enough disease burden to
be worthwhile. In economics, we more typically deal with policies that have
real trade-offs, and so are far more used to thinking in that kind of
framework, where we’d have to add to a measure of cost-effectiveness the costs
imposed on those who do not like the policy.
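If it helps to see the two evaluation standards laid out mechanically, here's a stylised sketch with entirely hypothetical dollar figures for a minimum-pricing-style policy; the point is only the structure of the accounting, not the magnitudes.

```python
# Stylised comparison of the two evaluation standards, with hypothetical figures
# (in $m) for a minimum-pricing-style policy. Only the structure matters here.
harm_reduction_benefit = 40.0   # hypothetical: health gains among heavy drinkers
implementation_cost = 5.0       # hypothetical: administration and enforcement
moderate_drinker_cost = 50.0    # hypothetical: losses to moderate drinkers from higher prices

# Harm-minimisation style: benefits weighed against implementation costs only.
harm_min = harm_reduction_benefit - implementation_cost

# Broader cost-benefit style: also counts the costs falling on those the policy harms.
cost_benefit = harm_reduction_benefit - implementation_cost - moderate_drinker_cost

for label, score in [("harm-minimisation view", harm_min),
                     ("full cost-benefit view", cost_benefit)]:
    verdict = "net gain" if score > 0 else "net loss"
    print(f"{label}: {score:+.0f} ({verdict})")
```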
I’m of course available should you have other questions. It just
bugged me that I hadn’t answered Tuari’s question adequately. It’s something
that I’ve mulled over for a while, but that I’d not had to pull together
succinctly before.
Best,
Eric Crampton
I also strongly recommended that they pay attention to
Jon Nelson's metastudy of results in this area.
My full submission is below.
Update: Yes, there are epidemiologists who know how to do this stuff properly. I still don't get how things like Jones and Magee pass peer review in their journals though.