Sunday, 21 October 2012

Quirrell Points

I continue to enjoy Harry Potter and the Methods of Rationality. In it, Eliezer Yudkowsky proposes a rather interesting pedagogic technique via the excellent Professor Quirrell:
"And on the rare occasions I offer you a written test, it will mark itself as you go along, and if you get too many related questions wrong, your test will show the names of students who got those questions right, and those students will be able to earn Quirrell points by helping you."

...wow. Why didn't the other professors use a system like that?
If this could be set up, boy would it take a load off the lecturer.

Alas, in our Muggle world, we're constrained by normal physics and have no access to magic.

First, I can't see how you could set up a self-scoring test that wasn't multiple choice, true/false, or restricted to questions with a numeric or algebraic solution. That makes it a lot less appealing right from the start. There are some courses in economics for which this could work, just not the ones I'm teaching.

Second, the logistics wouldn't be pretty. Here's the best way I can imagine it working; maybe you can help me out if you're more imaginative.

In Phase One of the test, everyone sits the test at a computer terminal. Each student is assigned a score for Phase One; the system's back-end also keeps track of competence across the test's subcomponents.
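To make that back-end concrete, here's a minimal sketch in Python of the record it might keep per student. Everything in it is an assumption for illustration (the topic names, the 0.7 pass threshold, the class itself), not any real course-management system's API.

```python
# A minimal sketch of a Phase One record; the threshold and topic
# names are invented for illustration.
from dataclasses import dataclass, field

PASS_THRESHOLD = 0.7  # assumed cutoff between "did well" and "did poorly"

@dataclass
class PhaseOneResult:
    student: str
    # fraction correct on each subcomponent, e.g. {"elasticity": 0.4}
    subscores: dict[str, float] = field(default_factory=dict)

    def overall(self) -> float:
        """Phase One score: the mean across subcomponents."""
        return sum(self.subscores.values()) / len(self.subscores)

    def weak_topics(self) -> list[str]:
        """Subcomponents the student would re-sit in Phase Two."""
        return [t for t, s in self.subscores.items() if s < PASS_THRESHOLD]

    def strong_topics(self) -> list[str]:
        """Subcomponents on which the student could plausibly tutor."""
        return [t for t, s in self.subscores.items() if s >= PASS_THRESHOLD]
```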

After Phase One, students who did well on subcomponents are given the option to have their names released for tutorial assistance. Students who did poorly are told which of those students would be suitable tutors on the subcomponents they missed, perhaps along with a reputation score indicating how well each tutor has done in the past at improving others' grades.
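Continuing the sketch, the matching step might look something like this. The reputation number is hypothetical (say, the average improvement a tutor's past tutees managed), and computing it honestly is exactly the mechanism design problem discussed below.

```python
# A sketch of the matching step, building on PhaseOneResult above.
# `reputation` maps tutor name -> hypothetical past-improvement score;
# `opted_in` holds students who agreed to have their names released.
def suggest_tutors(student: PhaseOneResult,
                   results: list[PhaseOneResult],
                   reputation: dict[str, float],
                   opted_in: set[str]) -> dict[str, list[str]]:
    """For each weak topic, list willing students who did well on it,
    best-reputed tutors first."""
    suggestions = {}
    for topic in student.weak_topics():
        candidates = [r.student for r in results
                      if r.student in opted_in
                      and r.student != student.student
                      and topic in r.strong_topics()]
        candidates.sort(key=lambda name: reputation.get(name, 0.0),
                        reverse=True)
        suggestions[topic] = candidates
    return suggestions
```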

The students who did poorly are then given the chance to sit Phase Two, in which they can re-do the parts they did badly on for partial credit. The student tutors receive a share of their tutees' score improvement as bonus points.
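The bonus rule itself is just arithmetic. A sketch, with the tutor's share set arbitrarily at 25% (choosing the right share is itself a design question):

```python
TUTOR_SHARE = 0.25  # assumed share of the tutee's improvement

def tutor_bonus(phase_one: float, phase_two: float) -> float:
    """Tutor's bonus: a fixed share of the tutee's improvement,
    floored at zero so a worse retest can't cost the tutor points."""
    return TUTOR_SHARE * max(0.0, phase_two - phase_one)

# e.g. a tutee who climbs from 55 to 75 earns the tutor 5 bonus points
assert tutor_bonus(55, 75) == 5.0
```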

There's a Muggle-world mechanism design hindrance: you need some way of ensuring the system can tell which tutors actually helped which students. Tutors will have an incentive to disavow students who are likely to do poorly, to keep their averages up. Some poorer students may claim a tutor relationship where none existed, just to drag down the average of too-smug high performers like Ms. Granger. And if tutors can't disavow students, they get blamed for the ones who never showed up for the tutorial sessions. There's probably a way around the problem, but it doesn't seem easy. Even if you could sort it out, you'd need a way of programming it into the various online course management systems.
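One partial fix, sketched loosely below: make a tutorial session count only if both parties confirm it at the time, say by the tutor handing the tutee a one-time code to enter. Fabricated relationships fail for want of a code, and no-shows generate no confirmed sessions, so the tutor isn't blamed for them. It does nothing, though, about tutors cherry-picking promising tutees, or about a tutor handing out codes without actually tutoring.

```python
# A sketch of session-level mutual confirmation; the one-time-code
# mechanism is invented for illustration.
import secrets

class SessionLedger:
    def __init__(self):
        self._pending = {}    # code -> (tutor, tutee, topic)
        self.confirmed = []   # sessions that count toward bonuses

    def open_session(self, tutor: str, tutee: str, topic: str) -> str:
        """Tutor opens a session and hands the one-time code to the tutee."""
        code = secrets.token_hex(4)
        self._pending[code] = (tutor, tutee, topic)
        return code

    def confirm(self, tutee: str, code: str) -> bool:
        """Tutee enters the code; the session counts only on a match."""
        session = self._pending.pop(code, None)
        if session is not None and session[1] == tutee:
            self.confirmed.append(session)
            return True
        return False
```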

If I ever figure out a way of having online-only self-grading tests, I'll start thinking harder about these mechanism design issues.

4 comments:

  1. I don't see why this couldn't be set up manually and ex post. After the original test and subcomponent ranking, you'd assign every willing student to a willing tutor in each topic, and let the rest take care of itself. Willing students who made no improvement would be penalised, as this would be taken as evidence that they made no use of their tutors.

  2. One step on the way: automated essay grading. http://www.nytimes.com/2012/06/10/business/essay-grading-software-as-teachers-aide-digital-domain.html?_r=0 (Disclosure: I work for Kaggle, a platform that hosted the contest to design essay-grading algorithms.)

  3. Possible, but I need much speedier grading techniques for this to be effective.

  4. I can see this working for straight English usage, but I've a hard time seeing how it would parse an economic argument to see if it makes sense.
