In my last post, I talked about what political ignorance is. In essence, it's the generalized inability to explain how one's political system works.
Because I later want to see whether ignorance matters in the real world, I need a survey that includes measures of ignorance, adequate demographic controls, and measures of party and policy preferences. The 2005 New Zealand Election Survey is just about perfect for my task. The survey puts a battery of useful questions to about 3700 potential voters. Among these are several that can be used to benchmark a respondent's level of political ignorance.
The first set of questions addressing ignorance asks respondents to place political parties on a left-right spectrum. Now, it's of course going to be hard to place some parties: the Greens are to the left, but really are sitting on an orthogonal "environmental" dimension; the Maori Party are centrist, but really are sitting on an orthogonal "racial issues" dimension. New Zealand First seems to sit to the right on social issues but to the left on economic issues. So, I won't hold it against respondents if they don't do well in placing those parties. Instead, I look at the obvious ones in 2005. Don Brash's National Party was about as right wing as National gets. Helen Clark's Labour Party was a classical labour-left party. United Future marketed itself as the sensible centre, with Peter Dunne constantly reminding voters that his centrist party could work with either side. So, I score respondents on whether they correctly identify National to the right of United Future, National to the right of Labour, and United Future to the right of Labour. A "don't know" response on any pairwise comparison counts as a wrong answer. Respondents then could score zero to three on my simple measure of ideological ignorance. 60% of survey respondents correctly placed the three parties in the right order; it would have been much, much worse had I scored their ability to place the more difficult parties.
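For the programmatically inclined, the scoring works something like the sketch below. It's only a sketch: the column names (place_national and friends) are made up for illustration rather than being the survey's actual variable names, and missing placements count as wrong, as above.

```python
import pandas as pd

# A minimal sketch of the pairwise ideology score, assuming hypothetical
# columns place_national, place_united, place_labour that hold each
# respondent's left-right placement of the party (NaN for "don't know").
def ideology_ignorance(df: pd.DataFrame) -> pd.Series:
    """Count of wrong (or missing) pairwise orderings, zero to three."""
    pairs = [
        ("place_national", "place_united"),  # National should sit right of United Future
        ("place_national", "place_labour"),  # National should sit right of Labour
        ("place_united", "place_labour"),    # United Future should sit right of Labour
    ]
    wrong = pd.Series(0, index=df.index)
    for righter, lefter in pairs:
        correct = df[righter] > df[lefter]   # comparisons against NaN come out False,
        wrong += (~correct).astype(int)      # so "don't know" is tallied as wrong
    return wrong
```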
The second set of questions addresses a respondent's ignorance about how the Mixed Member Proportional electoral system works. I tallied up the number of wrong answers to the following set of questions: is the electorate vote more important than the party vote in determining the composition of Parliament; what are the conditions for a party's entry into Parliament; is the party with the higher fraction of the vote more likely to get a higher fraction of seats under First Past the Post or under MMP. I added to that number a tally of inconsistent answers about MMP. It's not necessarily wrong to prefer MMP over First Past the Post...ok, at least it's something we can argue about over beer. But it's certainly wrong to say you prefer MMP but ALSO prefer single-party government: the two just don't go together. Similarly, it's wrong to prefer First Past the Post while preferring coalition governments, or to prefer the current number of parties in Parliament while still preferring First Past the Post. Each of those combinations of preferences shows a basic lack of understanding of the mechanics of how the system works. I then had a tally of wrong answers to questions about MMP that ranged from zero to six. Folks really didn't seem to understand how their electoral system works. Only a little over half of respondents knew that the party vote is most important, that winning either 5% of the party vote OR an electorate is sufficient for entry into Parliament, or that MMP is more likely to generate a proportional outcome.
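The tally, in the same sketchy style: three factual items plus three inconsistency checks. Again, every column name and coding here is an assumption for illustration (boolean columns, True for the correct or stated preference), not the survey's own variables.

```python
import pandas as pd

# A hedged sketch of the MMP ignorance tally, zero to six.
def mmp_ignorance(df: pd.DataFrame) -> pd.Series:
    wrong = pd.Series(0, index=df.index)
    # Factual items: anything but the right answer, including NaN for
    # "don't know", counts as wrong.
    wrong += df["party_vote_matters_most"].ne(True).astype(int)
    wrong += df["knows_entry_conditions"].ne(True).astype(int)  # 5% OR an electorate seat
    wrong += df["mmp_more_proportional"].ne(True).astype(int)
    # Inconsistent preference pairs, each tallied as one wrong answer.
    wrong += (df["prefers_mmp"] & df["prefers_single_party_govt"]).astype(int)
    wrong += (df["prefers_fpp"] & df["prefers_coalition_govt"]).astype(int)
    wrong += (df["prefers_fpp"] & df["prefers_current_party_count"]).astype(int)
    return wrong
```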
The third set of questions is a simple quiz about Parliament. True or false: the term of Parliament is four years; enrolling to vote is compulsory; permanent residents are allowed to vote. While 83% of respondents knew that the term of Parliament is three years, only 28% knew that non-citizen permanent residents can vote. Scores ranged from zero to three.
The fourth set of questions asked respondents to identify the parties that formed government after the 2002 election. Again, there's a hard case: the Green Party wasn't part of the governing coalition but abstained on matters of confidence and supply, so I didn't penalize respondents for answering either way on what seems to me to be a judgement call. Instead, I tallied up as wrong each of the following: not indicating Labour as part of the government; not indicating Progressive as part of the government; identifying any of National, New Zealand First, Act, or the Maori Party as part of the government. Scores ranged from zero to five. 83% of respondents knew that Labour was part of the government; 84 people thought that National was.
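Sketched the same way, with a hypothetical named_* column set to True when a respondent listed that party as part of the government. The code allows up to six wrong answers; as noted above, observed scores ran zero to five.

```python
import pandas as pd

# Illustrative column names, not the survey's. The Greens are deliberately
# absent from both lists: answering either way isn't penalized.
IN_GOVT = ["named_labour", "named_progressive"]
NOT_IN_GOVT = ["named_national", "named_nzfirst", "named_act", "named_maori"]

def govt_ignorance(df: pd.DataFrame) -> pd.Series:
    wrong = pd.Series(0, index=df.index)
    for col in IN_GOVT:      # failing to name a governing party is wrong
        wrong += (~df[col].fillna(False).astype(bool)).astype(int)
    for col in NOT_IN_GOVT:  # naming a non-governing party is wrong
        wrong += df[col].fillna(False).astype(bool).astype(int)
    return wrong
```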
The final set of questions asked whether respondents could identify their district MP and that MP's party affiliation. Respondents could score from zero to three on this measure. How could you score three? By giving the name of a list MP instead of your district MP and getting that MP's party wrong. 44% of respondents could not name both their MP and that MP's party.
So, I have five different measures of ignorance, shown in the graph below, which plots the number of incorrect answers on each measure. A score of 5 or 6 on the "MMP" measure is pretty bad.
How to combine them into a single measure of overall ignorance? I tried a few different mechanisms, but they all wind up correlating very strongly with each other. I could just take the simple sum across all measures, but the measures run on different scales: the simple sum then accords more weight to the measures that run to a maximum of six than to those running to three, and we have no reason to think those measures deserve the extra weight. I could instead normalize each score to mean zero and standard deviation one and sum the standardized scores, but that forces all measures to carry identical weight, which may not be appropriate if one measure really is more important than the others in identifying underlying ignorance.
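For concreteness, here's what those two rejected aggregates look like, with `scores` standing in for a data frame holding one column per ignorance measure:

```python
import pandas as pd

def simple_sum(scores: pd.DataFrame) -> pd.Series:
    # Over-weights the measures with wider ranges.
    return scores.sum(axis=1)

def zscore_sum(scores: pd.DataFrame) -> pd.Series:
    # Standardizing first forces every measure to carry identical weight.
    z = (scores - scores.mean()) / scores.std()
    return z.sum(axis=1)
```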
Instead, I take a principal component measure. What principal component analysis does is examine the correlations between the different measures and extract common factors. In this case, all of the ignorance measures "load" onto a single principal component. That means that most of the variance across the different measures can be explained by variance in this one underlying component, which I'll call ignorance. The method automatically decides how important each of the individual scores is in explaining overall ignorance: if a high score on one measure doesn't correlate much with high scores on the other measures, it probably isn't picking up the same thing the other measures are picking up, and so it loads mostly onto another factor. The method is pretty similar to what's used in the intelligence literature: individuals' scores on different intelligence tests correlate pretty strongly with each other and load onto a single factor called g. Intelligence tests can then be evaluated by how strongly they load onto g. All reasonable intelligence tests correlate with g. Similarly here, all reasonable measures of ignorance load onto my single factor: the anti-g.
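A bare-bones version of that step, done as plain PCA on the correlation matrix of the five scores, looks like the sketch below. The actual estimation was presumably run in a stats package, and principal-factor routines differ from textbook PCA in detail, so take this as illustrative; it also assumes complete cases, with no missing scores.

```python
import numpy as np
import pandas as pd

def ignorance_factor(scores: pd.DataFrame) -> pd.Series:
    """First principal component of the five scores, standardized."""
    z = (scores - scores.mean()) / scores.std()  # work off correlations, not covariances
    corr = np.cov(z, rowvar=False)               # ~ correlation matrix of the raw scores
    eigvals, eigvecs = np.linalg.eigh(corr)      # eigenvalues arrive in ascending order
    loading = eigvecs[:, -1]                     # loadings on the largest component
    if loading.sum() < 0:                        # the sign is arbitrary: flip it so that
        loading = -loading                       # higher scores mean more ignorance
    component = z.to_numpy() @ loading
    component = (component - component.mean()) / component.std()
    return pd.Series(component, index=scores.index, name="ignorance")
```

Feeding this a data frame whose columns are the five tallies above gives each respondent a standardized ignorance score, which is the object the next post works with.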
So, that's my measure of ignorance: the principal factor onto which scores on the five different measures load. By construction, this has a mean of zero and a standard deviation of one. So, it's an ordinal measure: it tells us how ignorant individual respondents are as compared to one another. In my next post, I'll discuss what sorts of demographic variables are associated with my measure of ignorance.
(Footnote: I've tried all of the later analysis with the other aggregate measures of ignorance, and it doesn't really affect the results.)