
Friday, 24 December 2010

The lesbian pay gap

Says BoingBoing:
Lesbians make more money than straight women (And nobody really knows why)
Really? Nobody? I can think of a couple of explanations, pretty easily testable. But we'll get to that in a minute. BoingBoing points to Big Think:
The wage premium paid to lesbian workers is a bit of a mystery. Sure, lesbian women are better-educated on average, are more likely to be white, live predominantly in cities, have fewer children, and are significantly more likely to be a professional. But even when you control for these differences, the wage premium is still on the order of 6%.

It is fascinating when the data starts looking like the majority is being discriminated against. Is it wage discrimination, though, or is there an economic argument for why lesbians are getting paid more?

Well, a possible explanation has to do with the division of labor in a heterosexual union.

...

This theory is cleverly tested in a paper which calculates the wage premium paid to lesbians in two distinct groups—those who were once in a heterosexual marriage and those who have never been married.* The assumption made is reasonable; lesbian women who were once married to men (about 44% of the lesbians in the sample) presumably have in the past had the expectation that they would have a marriage partner with a higher income. The never-married women might also have had this expectation, but it is much more likely that, on average, women in that group expected to be in a relationship with another woman with a comparable income.

Does the evidence support the theory that the wage premium can be explained by greater investment in more market-oriented skills by lesbian women? Well the premium does not disappear completely for the subset of previously married women but is reduced by about 17%, providing some support for the idea. At 5.2% though, the once-married lesbian premium is still high enough that I don’t think we can consider the case closed.
Here are my candidate explanations.

First, and most importantly, maternity risk. If an employer expects a lesbian employee to be less likely to take maternity leave, and if maternity leave imposes costs on an employer, then the employer will be more likely to hire and to promote the lesbian over the straight woman. What evidence do we have? Petit's field experiment showing that maternity risk is responsible for a fair bit of women's lower average salaries.

How could this be tested in the data presumably available in the original study? Test whether the wage gap between lesbian and straight women is larger for younger women than for post-menopausal women. That will confound with age cohort effects, but there may be a way around it: use state insurance mandates on assisted reproduction, or state policies with respect to same-sex adoption. If some states require that insurers cover fertility treatments as part of an employer's insurance package and others don't, or if some states make it easier for lesbians to adopt kids, then we'd expect the wage gap between lesbians and straights to be smallest in those states that make it easiest for lesbians to have kids.
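Something like the following is the kind of regression I have in mind. It's only a sketch: the data frame and every column name (lesbian, fertility_mandate, and so on) are made up for illustration, not taken from the paper.

```python
# Sketch of a test of the maternity-risk story, not the paper's actual
# specification. Assumes a DataFrame `df` with hypothetical columns:
#   log_wage, lesbian (0/1), fertility_mandate (0/1, state policy),
#   state, plus the usual controls.
import statsmodels.formula.api as smf

model = smf.ols(
    "log_wage ~ lesbian * fertility_mandate"
    " + education + white + urban + professional + age + I(age**2)",
    data=df,
)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["state"]})

# If maternity risk drives the premium, the lesbian:fertility_mandate
# interaction should be negative: the premium shrinks in states that
# make it easier for lesbians to have kids.
print(result.summary())
```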

Second, testosterone and negotiation strategies. Women, on average, are less aggressive in wage negotiations. If testosterone correlates with aggressiveness in salary negotiations, and some evidence suggests higher than average testosterone levels among lesbians as compared to heterosexual women (though that evidence is contested), then we've another candidate explanation.

I'd put money on the maternity risk variable. I'd only put money on the negotiations one at decent odds.

But really, if correcting for the observables reduces the wage gap between lesbians and heterosexual women from around 40% [the paper cites average hourly wages of $18.70 for lesbians and $13.34 for cohabiting non-lesbian females] to around 5%, odds are pretty high that there are a bunch of unobservables also correlated with job performance that aren't captured in the wage regression.

More broadly: if you think it's implausible that employers love lesbians so much that they pay them extra for no good reason, then shouldn't you expect the same when looking at the male-female pay gap?

Thursday, 30 September 2010

Crowdsourcing the Seismograph

@adzebill took the 11 Twitter guesses of the last aftershock's magnitude, which ran from 3.5 to 6, and got an average of 4.37. The actual result? 4.5.

I'd guessed 4.8; I'd not adjusted sufficiently for being up on the 5th floor, where the initial rolling was followed by a sharp jolt and the building groaned. It was definitely bigger than the 4.3 of the other night (experienced at home, on the ground floor) and smaller than the 5.2 (also experienced at home, on the ground floor).

What were the estimates on this one?

@hamishduff: 3.5 (retweeted)
@ericcrampton: 4.8
@malclocke: 4.4
@rafmanji: 4.0 (retweeted)
@HerrSchnapps: 4.5
@beazer: 4.2
@lightweight: 6+
@90_second_fall: 4.2
@heabe: at least 4.5

So we have nine independent estimates; two of them were retweeted by the same person, and we probably ought not count the retweets as separate observations. Taking the independent guesses of these nine folks, and counting the "at least" folks as giving only a point estimate, the average for the group is 4.456. The median, which is more robust to the "at least an X" estimates, gives 4.4. Both are seriously good estimates of the actual magnitude.
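For the record, the arithmetic, treating "6+" as 6 and "at least 4.5" as 4.5:

```python
from statistics import mean, median

# The nine independent guesses, with "6+" and "at least 4.5"
# treated as point estimates of 6.0 and 4.5.
guesses = [3.5, 4.8, 4.4, 4.0, 4.5, 4.2, 6.0, 4.2, 4.5]

print(round(mean(guesses), 3))  # 4.456
print(median(guesses))          # 4.4
```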

I can't find a Twitter archive of #eqnz tweets older than 26 September. If anybody knows how to find the older #eqnz tweets, though, there would be a pretty interesting research project in this.
  • Is the median or the mean Twitter estimate more accurate?
  • Does mean accuracy improve over time with more exposure to quakes?
  • Do individuals who make estimates get more accurate with repeated quakes?
  • How does the variance of estimates move with the magnitude of the quake?
Somebody could have an awful lot of fun with this, if they had access to the full Twitter history on the #eqnz channel.

Update: Friday's 5:06 PM shake, 3.7.
The guesses:
4.0
"mid-high 3's?" (I'll count as 3.75)
4.0
"late 3s, early 4s" (will count as 4.0)
3.6
Average: 3.87. A bit high, but not bad.

Update 2: 9 twitter guesses on this one. Average: 4.211. Actual: 4.2. We only actually need the geologists for the initial calibration exercises. Once that's done, we only need the seismographs to make sure we don't stray.

Thursday, 22 July 2010

Ladders and measurement error

I'd noted yesterday a rather nice piece looking at SES and health outcomes, showing that in one experiment self-perceived status does more to drive health outcomes than do objective markers of socioeconomic status. The result could reflect measurement error in the objective markers, or it could reflect that folks place different weights on different aspects of status when deciding on their self-perceived status.

LemmusLemmus pointed out in the comments there that the self-perceived status question was prefaced by a primer having respondents think about their income, education, and occupation: "where they stand compared to other persons in the United States in terms of income, education, and occupation". So it's relatively weak support for the multiple ladders hypothesis: if respondents took the priming seriously, self-perceived status would only reflect different weightings across those three potential status components. The test isn't strong enough to distinguish much between measurement error and multiple status ladders (or, rather, differentially weighted status ladders).

So an interesting test of measurement error versus multiple ladders would be to repeat the experiment, priming different respondents with different versions of the question above. For some, no primer would be given; for others, the three components above; and for a third group, a much broader set. If the no-primer or broad-primer treatments strengthened the effect of self-perceived status relative to the objective markers, that would support the multiple ladders hypothesis. If self-perceived status does best when the primer relates directly to the objective measures (as opposed to the no-primer or broad-primer versions), then it's probably measurement error.
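The comparison itself would be simple enough. A sketch, with entirely made-up variable names (health, self_rank, objective_ses, primer) and an assumed data frame:

```python
# Sketch of the comparison across primer treatments; all column names
# are hypothetical and the data frame `df` is assumed.
import statsmodels.formula.api as smf

for arm in ["none", "narrow", "broad"]:
    sub = df[df["primer"] == arm]
    fit = smf.ols("health ~ self_rank + objective_ses", data=sub).fit()
    print(arm, fit.params["self_rank"], fit.params["objective_ses"])

# Multiple-ladders story: self_rank matters relatively more in the
# no-primer and broad-primer arms. Measurement-error story: self_rank
# does best in the narrow-primer arm, where the primer tracks the
# objective markers.
```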

Little chance the study would be repeated in this way, but at least it's testable in principle.

Wednesday, 12 August 2009

Self-enforcing protocols: property tax edition

The excellent Bruce Schneier posts today on Self-Enforcing Protocols and points to one I'd not heard of before:
Here’s a self-enforcing protocol for determining property tax: the homeowner decides the value of the property and calculates the resultant tax, and the government can either accept the tax or buy the home for that price. Sounds unrealistic, but the Greek government implemented exactly that system for the taxation of antiquities. It was the easiest way to motivate people to accurately report the value of antiquities.
This makes a lot of sense to me for things like Greek antiquities, where the owner of the object has a lot of private and expensive knowledge about the true market value of the good. I'm not sure that it makes sense for housing. Right now, city councils hire assessors to make their tax assessments; it's unclear that distributing the burden of assessment across homeowners extracts information about market prices any more efficiently. I'd expect that the administrative-cost-minimizing solution is the one where the council hires assessors rather than having every household privately either hire assessors or otherwise come up with a figure.

I wonder if anyone's ever done formal modeling of this arrangement. In contrast to the case with antiquities, the valuer/owner faces massive transaction costs if the government unexpectedly decides to exercise its option to buy. Consequently, folks would have an incentive to bid willingness to accept rather than expected market price, and that means we'd then be taxing folks' sentimental attachment to pieces of property. However, if market value (presumably the threshold at which the government would exercise its call option) is below willingness to accept, folks will shade their valuations down toward market value. If there's uncertainty about what the government thinks actual market value is, risk-neutral folks will shade their bids so that the probability-weighted sum of utility across keeping the house with a lower tax bill and losing the house is maximized; risk-averse folks might overvalue their house for fear that the government will exercise the call option, though they shouldn't go above willingness to accept.
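To make the risk-neutral case concrete, here's a toy version of the owner's problem. Everything about it is an assumption for illustration: the tax rate, the dollar figures, and especially the functional form of the government's exercise probability.

```python
# Toy model of the self-assessment game; purely illustrative numbers.
# The owner declares a value v. With probability p(v) the government
# exercises its call option and buys at v; otherwise the owner keeps
# the house and pays tax t*v.
import numpy as np

TAX_RATE = 0.01          # annual property tax rate (assumed)
MARKET_VALUE = 500_000   # the government's view of market value (assumed)
WTA = 650_000            # owner's willingness to accept, sentiment included

def buyout_prob(v, market=MARKET_VALUE, sharpness=1e-4):
    """High when v is well below market value, near zero well above it."""
    return 1.0 / (1.0 + np.exp(sharpness * (v - market)))

def expected_payoff(v):
    p = buyout_prob(v)
    keep = -TAX_RATE * v   # keep the house, pay the tax
    lose = v - WTA         # forced sale: receive v, give up WTA in value
    return (1 - p) * keep + p * lose

declared = np.linspace(300_000, 700_000, 401)
best = declared[np.argmax([expected_payoff(v) for v in declared])]
print(f"Risk-neutral optimal declaration: about ${best:,.0f}")
```

In this toy setup the optimal declaration lands between the government's view of market value and the owner's willingness to accept, which is the shading story above.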

Another complication is whether the government would really ever be able to exercise its call option. Could they really evict the little old lady who came up with her valuation number using the recommended rule-of-thumb but whose property experienced idiosyncratic appreciation? If not, the system falls apart.

Leaving aside those problems, I can see two reasons for wanting to move to such a system, but I'm not sure that they're sufficient bases for such a move.

First, if the government wanted to base property taxes on the owner's willingness to accept rather than on market value, then this system would be preferable. In a world where official assessments are updated infrequently but are always updated in the case of a market sale, tax assessments can induce inefficiently low turnover in housing: if your property has appreciated since the last assessment, you'll be more reluctant to move than if your property were taxed based on a current assessment. Moving to regular owner-assessment could solve this problem, but so too could more frequent official assessments.

Second, if the system did induce truthful revelation of willingness to accept, with a call option for government based on that price, eminent domain would be a heck of a lot simpler. But again, I'm not sure that these benefits would beat the costs. You could argue that giving the government a call option on all our property is an erosion of property rights; it's unclear to me that this system has that much effect at the margin. Kelo v. City of New London is the current precedent in the US, and New Zealand seems plenty able to expropriate property owners for public works as well. At least this system would force payment closer to actual willingness to accept.

Finally, if valuations were in the public domain and folks were posting their real willingness to accept, the real estate market would be a heck of a lot more interesting. If you always fancied that house down the road, you'd know what you'd need to pay for it.

A more worrying downside is that it would allow an unscrupulous government to force its opponents to pay higher property taxes. If your property tax is based on how far your valuation sits from the average, and the government's usual rule of thumb is to exercise its call option only when the reported valuation is, say, 5% or more below market value, you might rightly fear that the rule of thumb in your case would be actual market value, or some higher number. You would then have to report a valuation closer to your true willingness to accept than others do; in equilibrium, you pay slightly higher property taxes.

Working out the formal modeling and likely effects of such a rule would be an interesting project for someone, if it hasn't already been done. I have a hard time seeing it being a worthwhile move in property taxation though.

Thursday, 25 June 2009

Motorcycles

Patri Friedman will think less of you if you ride a motorbike: they're just too risky. He cites fatality rates twenty times those of automobiles, corrected for miles driven.

But those numbers don't correct for agent type. What he really needs, and what I don't think exists, is data on relative fatality rates for risk-averse drivers in both types of vehicles. I'm sure motorcycles are still riskier, but twenty times riskier, correcting for agent type?

Specify that there's an underlying distribution of risk aversion running from highly risk-averse to highly risk- or thrill-seeking, and suppose agents sort across vehicle classes by underlying risk aversion. So the most risk-averse agents buy a Volvo, the median agent buys a Toyota, and the most risk-preferring agents buy a motorbike. If we then find that motorcycles have higher fatality rates than cars, I don't know how much of the difference comes from agent heterogeneity and how much comes from motorcycles being more dangerous.

How could you tell? Well, one way would be to check the proportion of motorcycle riders with health insurance as compared to car drivers. David Hemenway's propitious selection story suggests that risk preference is correlated across different types of behaviour, so adverse selection stories in insurance are overstated: he finds that motorcycle riders in accidents who weren't wearing helmets are more likely to be uninsured than those who wore helmets. In other words, folks who like risk take more risks. So, get some measure of risk preference derived from health insurance status (or life insurance, or credit rating, etc.), use it in a probit estimation of the likelihood of being in an accident, then adjust the motorcycle stats for underlying agent type. It would be a big job, and I'm not going to do it, but it would be a cool paper for somebody who had ready access to the data.
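If somebody did have the data, the mechanics would look something like this. All the column names are hypothetical, and it's a sketch of the idea rather than a serious specification:

```python
# Sketch of the adjustment; assumes individual-level data in `df` with
# hypothetical columns: accident (0/1), motorcycle (0/1), uninsured (0/1),
# miles, age.
import statsmodels.formula.api as smf

# Raw gap: motorcycle coefficient without any agent-type proxy.
raw = smf.probit("accident ~ motorcycle + miles + age", data=df).fit()

# Adjusted gap: add insurance status (a la Hemenway's propitious
# selection) as a crude proxy for underlying risk preference.
adjusted = smf.probit(
    "accident ~ motorcycle + uninsured + miles + age", data=df
).fit()

print("motorcycle coefficient, raw:     ", raw.params["motorcycle"])
print("motorcycle coefficient, adjusted:", adjusted.params["motorcycle"])
# The drop from raw to adjusted is the part of the motorcycle risk
# premium that looks like rider sorting rather than the machine itself.
```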

So, to the extent that Patri is right to cast aspersions on motorcycle riders, it's because riding one might be an efficient signal of underlying agent type. But Patri, if you already have all kinds of other signals about somebody's underlying type, perhaps don't downgrade an otherwise risk-averse person quite as much as you otherwise would: it's highly unlikely that they're facing 20X the average risk.