Wednesday 3 August 2011

Critical mass ... or not

Imagine a population, 90% of whom are truth-seekers who generally believe B to be true but have weak priors, and 10% of whom are committed to the view that A is true. The 90% cannot distinguish other truth-seekers from advocates. Equilibrium then has to be that everyone converts to believing A is true. If you're a truth-seeker and you meet someone claiming better knowledge that A is true, and you believe his knowledge claims, you upweight A.
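The upweighting story can be sketched in a few lines. This is my own toy illustration, not anything from the paper: the likelihood ratio is an assumed number standing in for "believing his knowledge claims", and the loop is a truth-seeker credulously meeting advocate after advocate.

```python
# Toy sketch (an assumption, not the paper's model): a truth-seeker with a
# weak prior on A repeatedly meets advocates confidently claiming A is true.
# If she treats each claim as weak evidence, her posterior drifts toward A.
def update(p_a, likelihood_ratio=1.5):
    """One Bayesian update after hearing a confident claim that A is true.

    p_a: current probability assigned to A.
    likelihood_ratio: assumed P(claim | A true) / P(claim | B true) > 1.
    """
    odds = p_a / (1 - p_a)
    odds *= likelihood_ratio      # multiply prior odds by the likelihood ratio
    return odds / (1 + odds)      # convert odds back to a probability

p = 0.1  # weak prior: B looks more likely than A
for _ in range(10):
    p = update(p)
print(p > 0.5)  # after enough credulous meetings, A dominates
```

Each meeting multiplies the odds on A by the same factor, so any likelihood ratio above one eventually pushes the posterior past one half; that's all the "equilibrium" amounts to here.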

Pretty trivial. But a few folks who I'd thought otherwise sensible have read a bit too much into this kind of result.

Here's the original paper by a couple of physicists showing that in a world similar (but not identical*) to the one characterised above, the transition to everyone believing A is really fast if 10% are committed A-believers. Fair enough. But it has nothing to say about anything interesting in the world, like how beliefs might be updated if there are also a similar proportion of committed B-believers. Or if the truth-seekers can identify the committed.

Folks seem to be taking the result as saying something like "If only me and the few folks like me keep advocating really hard, eventually everyone will agree with us!" Give your heads a shake.

*It's not quite a Bayesian framework. Agents meet at random and voice an opinion from a list. If you hold opinion B and meet an A agent, you then hold AB. If you then meet another A agent who says A, you drop to holding A; if you instead meet another B agent who says B, you revert to holding B. If you hold AB, you're equally likely to voice A or B at your next meeting. But if you are committed, you only ever voice A and never update. Repeat interactions until everyone believes A. This is the nonsense that happens when physicists try social science.
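The footnote's dynamics are simple enough to simulate directly. A minimal sketch, assuming well-mixed pairwise meetings (speaker voices an opinion, listener updates) and my own parameter names; nothing here is from the paper's code:

```python
import random

def simulate(n=100, committed_frac=0.10, steps=200_000, seed=0):
    """Sketch of the footnote's opinion dynamics with a committed minority.

    n agents; a committed_frac fraction hold A and never change; the rest
    start at B. Each step, a random speaker voices an opinion to a random
    listener. Returns the fraction holding pure A at the end.
    """
    rng = random.Random(seed)
    n_committed = int(n * committed_frac)
    opinions = ["A"] * n_committed + ["B"] * (n - n_committed)
    committed = [True] * n_committed + [False] * (n - n_committed)

    def voiced(i):
        # The committed only ever say A; AB holders voice A or B at random.
        if committed[i] or opinions[i] == "A":
            return "A"
        if opinions[i] == "B":
            return "B"
        return rng.choice("AB")

    def listen(i, heard):
        if committed[i] or opinions[i] == heard:
            return                    # the committed never update
        if opinions[i] == "AB":
            opinions[i] = heard       # AB collapses to whatever it just heard
        else:
            opinions[i] = "AB"        # pure A hears B (or B hears A) -> AB

    for _ in range(steps):
        speaker, listener = rng.sample(range(n), 2)
        listen(listener, voiced(speaker))

    return opinions.count("A") / n
```

With 10% committed, `simulate()` tends toward 1.0, since all-A is the only absorbing state; shrink `committed_frac` well below the paper's ~10% threshold and consensus takes vastly longer. And, per the complaint above, adding a committed B minority would remove the absorbing state entirely.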


  1. How does the model change if A (e.g., 'No, really, I CAN fly!') produces objectively negative results? Or, what if B-people can profit from the errors of A-people? With only a few A-people, the opportunities for profit are few. As more A-people develop, B-people gain more profits, increasing their fitness.

    More generally, this is the problem of drawing one curve on a diagram. Economics is about drawing the second one and figuring out what happens.

  2. @Bill: Exactly. Why Freakonomics thought that worth posting....

  3. "More generally, this is the problem of drawing one curve on a diagram. Economics is about drawing the second one and figuring out what happens."

    I like this quote, I like it a lot.

  4. Believing that this scenario could actually play out in real life seems ludicrous in the extreme. Any issue or idea which so firmly entrenches itself into its proponents' minds will likely also have a similar 10% who are equally adamantly opposed to said meme.

    Just one topical example is the debate over cannabis and its synthetic derivatives. I'd guess roughly 10-15% of the population are firmly in favour of legalising and/or decriminalising cannabis, there would be a big chunk who don't particularly care either way, and a similarly rabid 10-15% of nutjobs on talkback radio who vehemently oppose any relaxation of the law, and who in fact would be quite happy to see the other 10-15% stoned to death rather than stoned in life.

  5. Then there's the equal and opposite group that immediately emerges when the first group looks to be getting traction.

    It's almost a natural law: the more certain the first group seems to be, and the more publicity it gets, the more another group will oppose it, often (usually, even) with success.