Friday, 14 December 2018

Suppressed

The murder of Grace Millane is a tragedy.

As much as a number of commentators and one justice minister like to paint it thus, the apparent flouting of a suppression order that has revealed the identity of her alleged killer is not.

Frustrating? Yes. A challenge to our slow-moving justice system? Maybe. But certainly no tragedy.

...

Many will struggle to remember both by the time of the trial, probably at least 12 months away. Some may even have forgotten by the time the 26-year-old alleged killer makes his next appearance, when suppression is likely to be dropped.

It's worth remembering, too, that the murder of tourists in this country, and associated overseas interest, is still rare. Suppression is observed in the great majority of cases that make their way through our courts.

Those relaxed about the impact of such indiscretions on justice also have evidence to back their ambivalence.

Law expert Warren Young and others researched such influences on juries in 2001, on behalf of the Law Commission.

They concluded that "publicity both before and during the trial currently has little, if any, effect on jurors".

So there is every reason to believe that, despite the level of interest in this case, justice will be served.

Another Law Commission report, 2009's Suppressing Names and Evidence, suggested the issuing of orders to force internet providers to remove information in breach of suppression orders.

But if the internet is a new frontier, then social media is the wild, wild west: once the horse has bolted, it's next to impossible to bring it under control. Even after a person is named, officially, people may be able to track their footprints through Facebook, Google and other sites.

So the challenge is significant, perhaps even insurmountable.

One that our justice system may have to live with, but one we are confident it will survive.

Not only does the editorial make sense, it also links through to the cited work. Many kudos.

Law and regulation always has to be able to respond to large cost shifts in the underlying environment. Suppression orders were pretty easy in the 80s. Anyone who might report on the trial would know that the order was in place. The number of media outlets was limited. And when you needed a permission note from the Reserve Bank to get the foreign currency to subscribe to a foreign newspaper or magazine that might show up a few weeks after publication, the risks on that side were pretty trivial.

All of that would lean toward relatively liberal use of suppression orders. If the judge thought that there was at least some benefit in it, enforcement costs weren't much worth worrying about. Enforcement was easy. So the orders could be used in a broader range of cases.

Susie Ferguson's interview this morning with the Bar Association's Jonathan Eaton QC had the Bar Association wanting strong enforcement of the existing rules without regard to the tech change that's happened. Susie's questions were great. But Eaton seemed to be expecting the impossible. Google has said that it was never notified about the order; Eaton imagined a world in which Google would somehow back-check, in every jurisdiction in the world, every court case as it came up, and keep checking in case the situation changed, for whether a suppression order was in place so it could take the appropriate measures. That seems ... nuts.

Maybe there's some tech way around it, where courts would put suppression orders up into a central repository that was machine-readable, and Google (and others) could run a continuous check against that list.
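As a very rough sketch of the idea (every field name, case identifier, and matching rule here is invented for illustration; a real system would need far more care with fuzzy matching and jurisdiction scope), a platform-side check against such a repository might look like:

```python
from datetime import date

# Purely hypothetical record format - the fields, case identifier and
# suppressed term below are all invented for illustration.
ORDERS = [
    {"jurisdiction": "NZ",
     "case_id": "CRI-2018-XXXX",
     "suppressed_terms": ["accused person x"],
     "expires": None},  # None = order still in force
]

def is_suppressed(text, jurisdiction, orders=ORDERS, today=None):
    """True if `text` mentions a term under an active order in `jurisdiction`."""
    today = today or date.today()
    lowered = text.lower()
    for order in orders:
        if order["jurisdiction"] != jurisdiction:
            continue
        if order["expires"] is not None and order["expires"] < today:
            continue
        if any(term in lowered for term in order["suppressed_terms"]):
            return True
    return False
```

The lookup itself is trivial; the hard part would be getting every court publishing to one standard feed, which is why international coordination on a common format would matter.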

Australia currently has a suppression order out on the verdict in the trial of a high-level Australian Catholic official (he's guilty). New Zealand media have reported broadly on it; I even got a push notification about it from the Washington Post. There'll always be a way for Australians to read that stuff. And are newspaper apps supposed to run a GPS check on where the phone's owner is currently located before sending a push notification? It's just dumb to expect it. It would be completely unreasonable to expect Google's Blogger to be able to tell which trial is being referred to at the start of this paragraph and block it for Australian readers too. And given that mess, it is an absolute nonsense that Australian media have to censor the verdict. Some folks just aren't living in the real world.

Some bottom lines then:
  • Tech change means the costs of implementing a suppression order in high profile cases are very high. Courts should then be more reluctant to issue them than they were in a prior era when those costs were lower.
  • If they want these things to have half a chance of working, they need to figure out the tech of getting a repository of decisions rather than expecting every platform to be watching every court case for every change in whether a suppression order is in place or not. And that's something that should be set through international cooperation so there's one repository for the things using a common standard rather than a pile of them. 
  • And even if the order doesn't work, it's not much of a worry given the Law Commission's work on whether jurors are prejudiced by publicity. That's good, because it is impossible for an order to really work unless it is enforced globally. VPNs exist. And even if platforms like Twitter or Facebook tried to geoblock particular key words, there are a billion ways around it. China has an army of internet censors trying to keep up with the ways that social media users develop euphemisms for things they're not supposed to say.

Micro costs of macro prudence

A few months ago, I was at one of the typical Productivity Hub roundtable sessions where Wellington-types mull over why something in markets isn't working the way they'd like and how they could nudge markets around to improve things.

That time, it was about why there seemed to be problems in small business access to credit. After folks went round for a while about the purported failures, I asked whether anybody had looked at whether the macroprudential rules that came in during the GFC had killed off the financing tier that some small businesses had relied on. Remember how, before the South Canterbury Finance collapse and bailout, there used to be lots of ads for finance companies in the papers inviting retail investment? They'd take the money, invest it across a range of things, and pay a return to the bond investors so long as they didn't tank in the interim. Doesn't much happen now.

There was an awkward silence while folks considered whether Wellington might have caused the problem they were mulling over. Then after a "Nah, nobody's checked that", folks went back to more comfortable worthy suggestions for Making Markets Work Better Through Government.

I was reminded of it by this week's VOX CEPR Policy Portal piece. And here's the underlying IMF paper:
Combining balance sheet data on 900,000 firms from 48 countries with information on the adoption of macroprudential policies during 2003-2011, we find that these policies are associated with lower credit growth. These effects are especially significant for micro, small and medium enterprises (MSMEs) and young firms that, according to the literature, are more financially constrained and bank dependent. Among MSMEs and young firms, those with weaker balance sheets exhibit lower credit growth in conjunction with the adoption of macroprudential policies, suggesting that these policies can enhance financial stability. Finally, our results show that macroprudential policies have real effects, as they are associated with lower investment and sales growth.
So macroprudential regs reduce systemic risk but also make it tough for weaker firms to get credit.

Update: And remember that RBNZ is looking for funding for more people to do prudential regulation. They need more expertise in that area. Hopefully they're looking to improve the quality rather than the quantity margin on prudential regulation.

Thursday, 13 December 2018

Back to the sweet sweet bog

For the past several years, the public health crowd has brushed off John Gibson's work on sugar taxes by saying that they don't worry about things that aren't in refereed journals.

It takes a lot longer to get things published in economics journals than in public health. Inaccuracies in public health work can then go around the world's newspapers several times while economics is still testing the more robust work in a series of departmental seminars before sending the revised and improved draft off to a journal.

Gibson's paper with Bonggeun Kim is now up in the Journal of Development Economics and is ungated for the next month via this link.

Here's the abstract, which won't be unfamiliar to those who've been following the debate here.
Estimating potential effects of price reforms is a key issue for many developing countries. Demand studies increasingly use household survey data on budget shares, which vary with quantity, price, and quality. If quality response to price is ignored, estimated price elasticities of quantity demand conflate responses on quantity and quality margins. Our review finds over 80% of published studies using budget shares from household survey data have this error. We use survey data from Vietnam, with prices and qualities observed over space, to directly estimate the price elasticity of quality. This is much larger than what is derived from the income elasticity of quality, based on the Deaton (1988) separability restrictions. Across the 45 items we study, the own-price elasticity of quantity demand is overstated by a factor of four, on average, if the response of quality to price is ignored.
The paper covers complicated technical issues in simple language. Economists are sometimes stuck with household expenditure data that only says how much a household spent in total on a product category over the past week, fortnight or month. If they want to know how demand responds to price changes, they have a problem because that data does not say how much was purchased or the price at which things were purchased. The data only says what total expenditure was. Total spending could change because quantities changed, or because the quality (and hence the price per unit) of what was bought changed.
These issues were recognized, and potentially solved, thirty years ago in a set of papers by Angus Deaton (1987, 1988, 1990). Deaton derived the response of quality to price so as to isolate quantity demand elasticities, without needing price data. He assumed weakly separable preferences so that unobserved effects of price on quality could be derived from income elasticities of quality and quantity. Intuitively, by forcing the effect of price on quality to operate as an income effect, Deaton leveraged off what household surveys are good at – measuring incomes or expenditures – to get at what they are bad at or rarely do, which is measuring local prices. If one had a household survey with good measures of local prices, and with the usual data on food group expenditures and quantities, one could directly estimate the effect of price on quality by using unit values to indicate consumer quality choice (because the unit value is the product of price and quality). Indeed, Deaton (1990, p.302) concluded that it “would be extremely desirable to have direct measures of market prices against which this method could be tested.”
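The arithmetic of the conflation can be shown with invented numbers (illustrative only, not estimates from Gibson and Kim). Because the unit value is the product of price and quality, quality downgrading in response to a price rise means the unit value moves by less than the price does, and an analyst who treats unit values as prices overstates the elasticity:

```python
import math

# Invented numbers: a 10% price rise cuts quantity by 3% (true price
# elasticity of quantity = -0.3) and also prompts quality downgrading
# (price elasticity of quality = -0.5).
dlog_p = math.log(1.10)
eps_quantity, eps_quality = -0.3, -0.5

dlog_x = eps_quantity * dlog_p  # quantity response
dlog_q = eps_quality * dlog_p   # quality response
dlog_v = dlog_p + dlog_q        # unit value = price * quality, so logs add

# An analyst who treats unit values as prices estimates:
naive_elasticity = dlog_x / dlog_v
print(round(naive_elasticity, 2))  # -0.6: double the true -0.3
```

Here quality downgrading absorbs half of the price change before it ever shows up in the unit value, so the naive estimate is twice the true elasticity; with larger quality responses the overstatement grows.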
Gibson and Kim have data allowing this testing. And the testing shows us that weak separability fails, so back to the bog:
Our main title deliberately copies Deaton (1988) because in our view unidentified quality responses are still biasing price elasticities of quantity demand estimated from survey data. Our sub-title is from Gordon Tullock (1985, p.262) describing his role in an intellectual debate:
“… my role in this controversy is to watch people trying to get out of the swamp and then push them back in. Clearly, my role is not a constructive one, but nevertheless, I feel it necessary.”

Our contribution may be viewed similarly; before Deaton, economists used unit values as if they were prices when estimating elasticities of quantity demand. They were in a bog where quality and quantity effects intermixed. Deaton found a way out, pulling himself up just by the bootstraps of separability restrictions, with no firm ground (good price data) in sight. Standing on firm ground now, with good data on local prices and on consumer's choice of quality, we are pushing people back into the bog by showing that these separability restrictions do not hold. More generally, any model that assumes that the price and income elasticity of quality are closely related is unlikely to hold. The necessary role we play, even if not a constructive one, is to show that we are still bogged down; many estimates of the effect of price on quantity are instead some murky mix of quality and quantity responses. Our defence for our role is that it is only by realizing that we are still bogged that the value of firm ground (good data on local prices and on quality) becomes clear. In our opinion, there will be little headway in using household survey data to accurately estimate quantity responses to spatial price variation until better data on local prices and qualities are collected, so that responses on both the quantity and quality margins can be directly estimated.
I love Tullock references - this one's to Tullock's pushing people back into the bog by showing that explanations around the 'why so little rent-seeking' problem were wanting. Gibson and Kim show that Deaton's path out of the problem doesn't work.

Public health critiques around hierarchies of evidence - that this is just one paper compared to dozens of other papers, so we should look to metastudies - completely misunderstand the nature of Gibson's contribution here, whether through ignorance or stubbornness, I don't know. Gibson and Kim show that the dozens of other papers that use the Deaton method for estimating price elasticities out of household survey measures are systematically incorrect. Doing a metastudy of papers that share a systematic error in method will not get you a correct answer.
These large biases, from either ignoring quality response to price, or from restricting that response to be what weak separability allows, are in line with the few prior studies on the quality response issue. In the first of these, quantity demand elasticities were inflated to an average of 250% of their unrestricted value, if quality response is ignored (McKelvey, 2011). While that evidence was just for six broad food groups, our results are much the same if based on broad consumption groups or narrower price survey items, so this magnitude of bias may hold more widely. A similar level of bias is seen in studies of soft drinks demand using the unrestricted method and the standard price method. In Melanesia, where spatial price variation is high and product differentiation of soft drinks more limited, the quantity demand elasticities are overstated two- to three-fold if quality response to price is ignored (Gibson and Romeo, 2017). In Mexico, where there is less spatial price variation and more quality variation the bias is four-fold (Andalón and Gibson, 2017).

These quality responses may undermine price policies that aim to reduce consumption of unhealthy items. For example, quantity demand elasticities that ignore the quality response to price are used by Grogger (2017), to forecast that Mexico's peso per liter tax on soft drinks will reduce steady state body-mass index (BMI) of Mexicans by up to 1.8%, which is enough to provide some health benefit. However, if quality downgrading in response to tax-induced price rises is accounted for, the average BMI will fall by just 1/200th (Andalón and Gibson, 2017). This is salient to Vietnam, where a special consumption tax of 10% on soft drinks, instant tea and flavored milk has been proposed by the finance ministry as a way to reduce the health burden of high sugar intakes. However, if adjustment to higher prices is on the quality margin and quantity consumed falls only a little, as in Mexico, this proposed tax will be largely ineffective in achieving health objectives.
The implications are rather broader than sugar tax.
Extrapolating from income effects to price effects may be unwise, as seen with failure of the weak separability assumptions, but a prior debate in development economics about effects of income on nutrition is germane to this discussion. Early studies derived indirect estimates of the income elasticity of calories from food expenditure data, assuming that higher spending on a food group meant proportionately more nutrients. This ignored within-group quality substitution, and later studies showed that extra spending on food as incomes rose went on attributes other than food quantity. Writing about poor people with rising income, Behrman et al. (1988, p.308) noted that:
“… at the margin they concentrate on food attributes other than nutrients – taste, appearance, odor, degree of processing, variety, status – that are not necessarily highly positively correlated with nutritive value.”

In perhaps the same way that policy makers learnt that effects of income changes on nutrients are mediated by within-group quality substitution, so too may they need to learn that effects of price changes on quantities can likewise be mediated by responses on the quality margin.
While Gibson and Kim don't cover it here, because it is obvious, the analysis applies whether a soda tax or sugar tax is ad valorem or a per-unit excise. Whatever you thought the demand response to your price increase was going to be, it'll be smaller to the extent that your elasticity estimates conflate quantity and quality adjustments.

The only case where this won't be true is for people who are already only consuming product that is at the lowest quality point - and you'd probably then want to have an elasticity estimate for that specific cohort anyway rather than assuming that the estimates that apply elsewhere also apply to that cohort. 

Wednesday, 12 December 2018

Hard Spirits

I write a lot of columns. I rather like how this one turned out. It was up at Newsroom Pro yesterday; it's ungated here, and copied below.
Hard Spirits

It was during the discussions of measuring spiritual capital that the ghost of Sir John James Cowperthwaite hovered near. The shade whispered in my ear, “When I was Financial Secretary of Hong Kong, I refused to collect economic statistics for London. Why? For fear that I might be forced to do something about them.”

I wish more people at last week’s Indicators Aotearoa New Zealand indicator selection event had been able to hear him.

Statistics New Zealand’s Indicators Aotearoa project aims to build a comprehensive suite of (you guessed it) indicators measuring wellbeing. Where SNZ’s 2014 Progress Indicators Tool, now discontinued, included 16 measures ranging from adult educational attainment to distribution of selected native species, the new framework aims to include about 100.

The project is linked to Treasury’s Living Standards Framework and to the Government’s desire for more measures of wellbeing. Treasury has been developing its own suite of wellbeing indicators as part of the Living Standards Framework; whose indicators will reign supreme remains a bit up in the air. If Treasury’s framework and indicators wind up satisfying the Government’s wellbeing needs, then Statistics New Zealand’s indicators project – or at least those indicators that do not make it into Treasury’s framework – might yet follow its predecessor into the bin of discontinued data series.

So the bureaucratic stakes are high, as far as these things go.

Statistics New Zealand took a reasonable approach in trying to figure out which measures to include: they asked New Zealanders what wellbeing means to them in a series of workshops around the country, along with an online campaign. Then data experts tried to figure out which of those might possibly be meaningfully quantified among those that are not already measured.

That led to a long list of indicators to be pared down at last week’s indicator selection event, where I was visited by the ghost of Sir John.

Inputs, outcomes and A3 posters

There’s a management truism that what gets measured gets managed, and what doesn’t get measured doesn’t get managed. Choosing the right measures then matters, especially if these kinds of indicators get targeted by a Government desperate to be seen to be improving our wellbeing.

The big headline measures should reflect final outcomes (like healthy life expectancy) rather than intermediate outcomes (like cancer rates) or inputs (like the proportion of people using sunscreen appropriately).

If there are lots of intermediate outcomes that are all captured by the final outcome, promoting intermediate outcomes into being final outcomes is a very bad idea. It invites paying more attention to those selected measures rather than to other ones that Government might more usefully target. In the example above, if an intervention targeting diabetes did more to improve healthy life expectancy than an intervention targeting cancer, the latter might nevertheless be preferred if cancer is a headline statistic and diabetes is not.

Turning inputs into headline statistics is at least as bad, for similar reasons. Inputs might be detrimental to some measures of wellbeing, but useful for others. An index of health-related behaviours, counting exercise, diet, and various consumption choices would be a poor one to include in headline statistics on health. To the extent that those behaviours improve health, they are already captured in the healthy life expectancy statistic. But including them as a headline measure risks targeting them for improvement in ways that may hurt other measures of wellbeing.

And, finally, the outcome measures chosen should be things that the Government might plausibly have any business doing something about. We might wish to be careful about what gets measured lest we be managed into improving our wellbeing in ways we might not welcome.

To take an obvious example, I have yet to see a survey of wellbeing or happiness that, if it included satisfaction with one’s sex life as a measure, failed to find that it mattered considerably. If any participant in Statistics New Zealand’s workshops had been honest and reported that sexual fulfilment mattered greatly in their wellbeing, their input did not make it into the workshop. Really, if the number proved to be a bit softer than we might have wanted, what on earth should policy seek to do about it? All options seem atrocious.

Unfortunately, the process of outcome selection at the Statistics New Zealand event was less fruitful than it should have been.

About 150 people attended at the Michael Fowler Centre.

Two hours of introductory remarks included a welcome from the Government statistician, a hand-clapping game, an invitation to check our privilege with the help of a bingo-sheet of privilege indicators like not being red-haired, and an extensive discussion from a PhD candidate who warned (among other things) about consulting with Ngāi Tahu on issues Māori because they are too corporate.

The two hours of introductory remarks did not include any discussion of the difference between final outcomes, intermediate outcomes, and inputs. Nor did it point out that current measures of inputs and intermediate outcomes would continue to be collected regardless of whether they became headline statistics.

After being encouraged to reflect on my privileges in being able-bodied, not hard of hearing, having a loud speaking voice, and not being red-headed, I joined a group of about 50 people who all had to stand for an hour around an A3 poster to discuss the proposed health indicators – before moving on to stand and talk around other A3 posters. Those unable to stand for extended periods were denied effective participation, as were those even of normal hearing if they were not very close to some of the very soft-spoken participants. Microphones and chairs may not have gone amiss, but I am probably in too privileged a position to have standing to comment on that. Our facilitator did his best but had a difficult task.

Because nobody had explained the difference between inputs and outcomes, or that failure to select particular measures did not mean they would cease to be measured, conversations had to keep coming back to those basic points. Every worthy-sounding input measure had its proponent for inclusion in the mix.

And many of the indicators seemed, well, problematic. Some outcomes were listed as “contributing unambiguously to progress” when they seemed manifestly contestable. For example, the response to the Initiative’s report last year lauding the merits of diversity and immigration suggested not everyone agrees diversity is a good thing. But appreciation of diversity was included as a potential outcome variable deemed to contribute unambiguously to progress. It seems a political argument to win rather than an outcome variable to technocratically maximise. If it were unambiguously associated with progress, we would have received less hate mail insisting on the demerits of diversity and that New Zealand needs fewer immigrants from culturally dissimilar places.

A dangerous thing to attempt to measure

Most contentious in the health discussion was the inclusion of spiritual health as a desirable outcome. And it is there that the spirit of Sir John James Cowperthwaite visited me – as I am a very spiritual person in my own utterly non-spiritual way.

Sir John is generally credited with Hong Kong’s transformation from a desolate place at the end of the Second World War into about the richest and most economically free city in the world. He credited some of his success to his refusal to provide Whitehall with the statistics they might attempt to use in applying Attlee-style management to the Hong Kong economy. His policy of positive non-interventionism certainly outperformed the UK’s more scientific-looking approach built on a mountain of economic statistics.

His spirit warned me that spiritual health is a dangerous thing to even attempt to measure as a headline wellbeing outcome, even if it is deeply meaningful to many communities. There is the obvious problem that atheists like me would not have a clue how to begin answering a survey question on whether we are spiritually fulfilled as the area, to those who share my views, is meaningless. But there is the far worse problem that what gets measured may be managed. If a headline outcome measure of spiritual fulfilment were half of what people expected, or double(!), what on earth should the Government ever do about it? The measure invites management. And any attempt at management would be worse than the measured problem.

I do not envy Statistics New Zealand the task ahead of them. The workshop will not have been as useful as it could have been in selecting an appropriate set of indicators. Will they weigh more heavily what makes sense as outcome measures, or what participants claimed to want? Either way, they will be stuck with trying to measure the things – while being unsure whether much of the project will be superseded by Treasury’s work. All this while battling with what seems an ever-worsening problem in getting the Census out.

I wish Statistics New Zealand good spiritual health in the months ahead, as they will need it.
I feel bad for Stats.

They took a fair bit of stick about having us fill out a Privilege Bingo form. I think that we wound up doing the privilege bingo exercise by accident. They had time to fill when a video screwed up, and rather than fill the time with something useful (see the column), the team scheduled to give that presentation scrambled for something tangentially related to their having asked lots of different communities for input into their data exercise.

They were utterly, utterly oblivious to the fact that they were stepping very firmly onto one side of the culture wars, perhaps because those involved were so ensconced in that side of it that they couldn't anticipate that it was problematic. If it hadn't been a late fill-in, maybe somebody with some nous would have headed it off. But it wasn't completely last minute. They had had time to put photocopied Privilege Bingo sheets on everyone's seats along with the day's agenda.

And today, the Census problems alluded to in my penultimate paragraph hit the front page of the Dom and Radio NZ.

I wonder about the usefulness of the community consultation exercise. It is very very easy for those things to make it very hard for people with potentially dissenting views to say anything. Remember the kinds of preference falsification exercises that Timur Kuran talked about, and information cascades? I wonder if anybody at Stats reads this kind of stuff.

If you're in a room where a couple people have just made very clear that the inclusion of spiritual wellbeing is more than just important to them, that they'd take it as a slight to their entire way of being and sense of self if anyone challenged that measure's inclusion (like, not explicitly, but in the way they make the argument), how many would be happy to step in and challenge it? At the poster session, we could put stickers on particular measures. Before discussion, there were about 10 red stickers on spiritual health, suggesting ten people wanted it demoted, and 2 green stickers, suggesting two people disagreed with the red stickers. I spoke against the inclusion of the measure, then a couple of people spoke about how sacred the measure was to them, and not a single other person who had put up a red sticker was willing to say a word.

If there's just been strong discussion of the importance of kaitiakitanga in land management, would you be the one willing to say "You know, my family settled our farm a hundred years ago and has worked and loved that land for four generations. Shouldn't we count too?" Like, the two can work together, but it is hard to get to that kind of discussion.

A friend gave an analogy that seems to fit. Suppose you've got a juggler who's really really good at juggling five balls. The business-as-usual stuff at Census is the five balls - there is a pile of it. In a Census year, they have to juggle six or seven balls. And they do a great job of that too, because they can manage six or seven for a short period. But throw another ball in the mix and everything starts falling down because eight is way harder than five. I wonder whether the Indicators project isn't that deadly eighth ball.

Census woes

I don't know what normal people talk about at drinks. For the past few months, whenever I'd catch up with other economists over drinks, it's been rumours about just how bad things are at Census. 

The mess hit the front page of the Dom Post today.
Documents released under the Official Information Act show the department is planning for an August 2019 deadline, and continues to stare down a high risk it may not produce viable results.

A low turnout during Census 2018 has caused considerable pain for Stats NZ, which last week announced it would not make the third deadline it set - July 2019 - and promised to announce a release date in April.

Stats NZ failed to count an estimated one in 10 New Zealanders in Census 2018, previously providing an interim response rate of at least 90 per cent of individuals providing full or partial information.
Some tales have it that the incomplete census responses are on top of the 10% non-response rate rather than part of the 10%.
It was announced in June that other sources of Government data would be used to fill gaps in the results.

Both the actual number of responses received and the number of additional data for 2018 were redacted from documents, and Stats NZ declined to release them.

The documents show "high risks" Stats NZ continues to face include: the failure of methods to patch census data; census data failing to be fit for purpose, leading to "less than ideal" decisions being made; and a failure to provide information for the re-drawing of electorate boundaries.
I really hope that they're flagging interpolated data in the back-end systems so that analysts know whether they're dealing with generated regressors. So, for example, suppose you're trying to estimate the relationship between a bunch of admin back-end variables and some Census result. If the "Census" result was really generated by those back-end variables in the first place for some of the respondents, you're going to botch your analysis unless you know that that's gone on. You could just be rediscovering the algorithm used to generate the data rather than any real relationship.
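A toy simulation makes the worry concrete (all numbers invented, nothing here is from Stats NZ): if an admin variable only weakly predicts the true census outcome, but non-respondents get their "census" value imputed directly from that admin variable, an analyst who can't distinguish real from imputed responses will find an inflated relationship:

```python
import random

random.seed(0)
n = 1000
admin = [random.gauss(0, 1) for _ in range(n)]
# True census outcome only weakly related to the admin variable:
census = [0.2 * a + random.gauss(0, 1) for a in admin]

# Suppose non-respondents (here, half the sample) get their value
# imputed deterministically from the admin variable:
observed = [census[i] if i < n // 2 else 0.2 * admin[i] for i in range(n)]

def corr(xs, ys):
    """Pearson correlation, no external libraries needed."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

true_corr = corr(admin, census)      # the real, modest relationship
naive_corr = corr(admin, observed)   # inflated once imputed rows are mixed in
```

The second correlation is larger not because the relationship is stronger, but because half of it is the imputation rule being rediscovered; flagging imputed records is what lets analysts avoid exactly this.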
Also described are a "severe incident", where some data fields on the census forms failed to cross over to another system and contributed to a month-long setback.
... BERL chief economist Dr Ganesh Nana said Stats NZ needed to "draw a line" under Census 2018.

"We all might have to put our hands together and say, 'Okay, the data won't be as rigorous and robust as previous census'. We might just have to admit that."

A standard census release would have BERL using the data intensively over a six-month period from about now, he said.

"That's clearly been put out of kilter this time around. The biggest concern for us is the timing of the data … the data is already starting to become stale."
And drinks chatter has speculation over whether the whole thing will yet require a do-over.

It isn't the first year that they've combined online and paper - I filled in my Census online in Christchurch last time. Quite why everything's gone wrong this time - I'd love to know.

I'd made brief allusion to the problems at Census in my column at Newsroom yesterday ($). That column wasn't about Census but about the fun they're having with the Indicators Aotearoa project. Will post on that one soon.

Update: Ganesh Nana talks with Kathryn Ryan about it on Nine to Noon.

Update 2: Hoisted from the comments (on the mobile version, where I've not been able to get Disqus to work):
One big difference between this Census and the last one was that last time the online form was opt-in, but this time the paper form was opt-in. In 2013 everyone received a paper form, but people who registered could ignore that and do it online instead. In 2018 you had to register to receive a paper form. If you didn't get one in time then it was do it online or not at all. That's a big difference in practice and could be part of the reason for the problems. 

Tuesday, 11 December 2018

The Globish solution

A couple of weeks ago, I'd suggested NCEA ought to make things simpler for students who do not know basic words like "trivial" by restricting itself to the 3000 most commonly used words.

Globish takes it one step further by reducing the language to the 1500 most commonly used words and simplified grammatical structures. It is intended to be the minimal vocabulary necessary to conduct global business among people who are not native English speakers.

Just think about how New Zealand's measured literacy scores could improve if we restricted assessment to only those words that the students really need to know, instead of all those extra ones.

HT: Sam Hammond, who imagines a fun movie based on it. You should follow him on Twitter.



Thursday, 6 December 2018

Morning roundup

The morning worthies: