Andrew Gelman argues:

If your vote is decisive, it will make a difference for 300 million people. If you think your preferred candidate could bring the equivalent of a $50 improvement in the quality of life to the average American--not an implausible hope, given the size of the Federal budget and the impact of decisions in foreign policy, health, the courts, and other areas--you're now buying a $1.5 billion lottery ticket. With this payoff, a 1 in 10 million chance of being decisive isn't bad odds.

And many people do see it that way. Surveys show that voters choose based on who they think will do better for the country as a whole, rather than their personal betterment. Indeed, when it comes to voting, it is irrational to be selfish, but if you care how others are affected, it's a smart calculation to cast your ballot, because the returns to voting are so high for everyone if you are decisive. Voting and vote choice (including related actions such as the decision to gather information in order to make an informed vote) are rational in large elections only to the extent that voters are not selfish.

I'm totally on board with Andrew that voters are more inclined to vote sociotropically than egocentrically - altruism at the ballot box seems very likely. But I'm not sure that irrationality hasn't come in through the back door here.
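Gelman's arithmetic is a quick back-of-the-envelope expected-value calculation. A minimal sketch of it, where the $30 cost-of-voting figure is my own illustrative assumption, not a number from Gelman:

```python
# Back-of-the-envelope expected value of an altruistic vote, using Gelman's numbers.
population = 300_000_000        # people affected by the election outcome
per_capita_benefit = 50.0       # assumed $ improvement per person if your side wins
p_decisive = 1 / 10_000_000     # rough chance that your single vote is decisive

# Expected altruistic payoff of casting the vote:
expected_payoff = p_decisive * per_capita_benefit * population
print(f"${expected_payoff:,.0f}")  # roughly $1,500 in expectation

# Compare with an illustrative cost of voting (my assumption, not Gelman's):
cost_of_voting = 30.0
print("worth voting:", expected_payoff > cost_of_voting)
```

On these numbers the expected altruistic payoff dwarfs any plausible cost of showing up at the polls, which is the whole force of Gelman's lottery analogy.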
Gelman has turned voting into a positive expected value lottery by assuming that your vote changes the outcome for the good, providing benefits for everybody in the country on average. Or, at least, that each voter estimates as much.
But surely, if you're the decisive voter, half of all other voters think that you're changing things for the worse! Put in uncertainty over whether you're providing a $50 benefit for everyone else or imposing a $50 cost, and the case for rational voting disappears: the altruistic benefits shrink to zero if you're as likely to prevent benefits as to provide them.
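The symmetry argument is just the same expected-value calculation with the sign of the benefit made uncertain. A minimal sketch of the post's own ±$50 stylization:

```python
# If you're as likely to be imposing a $50 per-capita cost as providing a
# $50 per-capita benefit, the expected altruistic payoff nets to zero.
per_capita = 50.0
p_right = 0.5   # probability your side actually is the good one

expected_per_capita = p_right * per_capita + (1 - p_right) * (-per_capita)
print(expected_per_capita)  # 0.0
```

Any expected benefit from voting then has to come from putting p_right above one half, which is exactly the epistemic claim at issue.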
Now maybe it's the case that you're the one who's rational and has private knowledge about the great benefits that will be achieved if only your preferred party is elected. But that's only possible because half of all voters are imposing large probabilistic costs through their votes. And how can you be so sure that you're not one of the bad half? After all, the folks in the bad half aren't chortling about how they're going to make the country worse off - any more than your team is. They're saying your team's guy will make things worse. Isn't it as likely that you're deceiving yourself about the merits of your vote as that they are? Voting is then rational only because of your irrationally high assessment of the quality of your vote!
The first footnote in Gelman's paper:
A failure to update [based on others' expected votes] reflects that the voter feels strongly enough about which candidate is best for the country that his or her mind will not be changed simply because the majority of voters disagree. In this framework, the two groups of voters in an election do not represent competing interests but rather competing perspectives about what is best for the country.
Can you have rational voters each of whose priors are sufficiently strong that they discount all information contained in the numbers lining up for other candidates? Could I consider myself rational for discounting all the "round earth" proponents if my priors on "flat earth" are sufficiently strong?
I think that Gelman's mechanism requires that the voter place himself in an epistemically privileged position.
Gelman disagrees in the comments to his original post:
I think you're overthinking things here. In an election with two options, some people will think candidate A is better for the country, others will think candidate B is better. And of course others won't give a damn at all. If you prefer A or B, sure, if you're sane you'll realize you might be wrong, but your preference is still there in expectation. For example, maybe I'm pretty sure that A is better than B, but I think there's a 20% chance I'm wrong. That's like any decision problem: the existence of uncertainty does not imply indifference. Nowhere did I say that I know I'm right, and in a decision problem there's no need to assume certainty. Not at all.

Regarding heterogeneous preferences: most political issues are not like abortion, where people have completely opposing goals. A vast majority of Americans want peace and prosperity, but people have different ideas about how to get there.

Finally, my argument applies to all voters who have a preference. Nobody is privileged; it's just that people disagree about who should represent them in public office. There's a disagreement, so we have an election. And the same argument applies to any sort of political participation, including campaign contributions, letters to your congressmember, etc.

Let's reduce it entirely to disagreement about whether A or B is more likely to achieve our shared goal. That half the population disagrees with me (if I'm the pivotal voter) about which party is best requires that I place myself in an epistemically superior position relative to other voters in order to tally up these really big expected net benefit numbers. Suppose that before I look at polling data, I figure that Party A will make the country $100 per capita better off relative to Party B. After I look at polling data, I see that half the population reckons a $100 per capita net benefit of A and the other half a $100 per capita net benefit of B. I can only maintain my position if I discount all the information in the other half's preferences. And I can only do that if I think I (and the folks on my side) am far better informed than the folks on the other side. Every voter on each side has to think that. They can't all be right.
Suppose that I do reckon, rationally, that I'm in an epistemically superior position relative to the median voter. In that case, I should vote if I'm altruistic and have a decent enough chance of being decisive. But what if I see a whole pile of other smart people lined up on the other side? We're then back to the same problem. I have to fail to update based on their beliefs if I want to estimate large per capita net benefits from my choice. And I'm not sure such failure to update is consistent with rationality. There's a theorem about that (Aumann's agreement theorem). But that theorem also says I shouldn't disagree with Andrew Gelman. I'll put it this way, then: I agree with him that altruistic voters who do not update their beliefs based on observing others' vote preferences can vote rationally, if such failure to update is itself rational. But any updating based on observing other intelligent voters' disagreement about which choice is best ought to erode the expected net benefits of voting enough to make voting again a loser on a cost-benefit assessment - or at least no more rational than buying lotto tickets in hopes of being able to make a large charitable contribution.
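The erosion argument can be sketched with a toy peer-disagreement model. This is my own stylization, not anything in Gelman's paper: treat every voter's preference, your own included, as an independent signal about which party is better, each correct with probability q. If you are pivotal, the other voters split exactly evenly, their signals cancel, and the expected per-capita benefit of your vote rests entirely on how good you think your own signal is:

```python
# Toy peer-disagreement model (illustrative assumption, not from the post).
# Each voter receives an independent signal about which party is better,
# correct with probability q. When you're pivotal the others split 50/50,
# so their signals cancel and your posterior that you're right is just q.
def expected_per_capita_benefit(q: float, stakes: float = 100.0) -> float:
    """Expected per-capita benefit: q * (+stakes) + (1 - q) * (-stakes)."""
    return (2 * q - 1) * stakes

# Granting that you're no better informed than the median voter (q = 0.5),
# the big expected-benefit numbers collapse:
print(expected_per_capita_benefit(0.5))   # 0.0
# Sustaining a large benefit requires an epistemically privileged signal:
print(expected_per_capita_benefit(0.95))  # close to the full $90 per capita
```

On this stylization, the only way to keep Gelman's large expected-value numbers after observing an even split is to claim a q well above that of the median voter - which is the epistemically privileged position at issue.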
I'll stick with my assessment of voters' tendencies and my reasons for not voting.