The Allais Paradox is a result in behavioural psychology that shows that humans apparently do not behave in accordance with expected utility theory. Specifically, according to the von Neumann-Morgenstern Independence Axiom, the following two choices must have the same decision outcome, or the deciding agent is not using a consistent utility function, and is thus irrational:
Gamble 1:
1a) Certainty (34/34 chance) of $24000, or
1b) 33/34 chance of $27000 (1/34 chance of $0).

Gamble 2:
2a) 34% chance of $24000 (66% chance of $0), or
2b) 33% chance of $27000 (67% chance of $0).
But when actually presented with these choices by a human experimenter, human experimental subjects consistently choose 1a and 2b, a pair of choices that cannot be produced by any consistent assignment of utility to $0, $24000, and $27000 under the given probabilities.
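To see the inconsistency concretely, here is a minimal brute-force check (my own illustration, not from any experiment): normalize U($0) = 0 and search for utilities of $24000 and $27000 that reproduce both observed choices. Preferring 1a requires U($24000) > (33/34)·U($27000), while preferring 2b requires the exact reverse, so the search necessarily comes up empty.

```python
from fractions import Fraction

# Normalize U($0) = 0; the contradiction holds for any value of U($0),
# so this loses no generality for the search below.
def prefers_1a_and_2b(u24, u27):
    """True iff this utility assignment reproduces the observed choices."""
    eu_1a = u24                                   # certain $24000
    eu_1b = Fraction(33, 34) * u27                # 33/34 chance of $27000
    eu_2a = Fraction(34, 100) * u24               # 34% chance of $24000
    eu_2b = Fraction(33, 100) * u27               # 33% chance of $27000
    return eu_1a > eu_1b and eu_2b > eu_2a

# Brute-force grid search over candidate utilities: nothing works, because
# 1a > 1b demands u24 > (33/34)*u27 while 2b > 2a demands the reverse.
found = [(a, b) for a in range(101) for b in range(101)
         if prefers_1a_and_2b(Fraction(a), Fraction(b))]
print(found)  # -> []
```

Exact rational arithmetic via `Fraction` avoids any floating-point edge cases near the boundary.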
Would-be rationalists are fond of holding this up as an example of the inherent flaws in human intuitive reasoning, and of using it to argue for more explicitly calculating methods involving probability and decision theory. They claim, for example, that one cannot lightly disagree with the theorems proving VNM agents optimal, and that refusing to have a consistent utility function opens one up to preference reversal scams and money pumps. Maybe they are right, but let's see if we can't get to the bottom of this.
First of all, good luck constructing a money pump out of the above "inconsistency" that a real human will fall for. (Charging a human to switch from 1b to 1a is not a pump.) Second, such scams rely, among other assumptions, on people being persistently consistent in their inconsistency. They require that the human victim not notice that you are presenting them with a preference reversal scam and tell you to piss off. Such a simple vulnerability would not have lasted this long in the genome, I think. So if simple, transparent preference reversal scams aren't going to work on real humans, we'll need a better way to motivate the math as something that should override our evolved intuition.
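For reference, the classic money pump the theorems warn about targets cyclically inconsistent preferences. A toy sketch (my own illustration, with a hypothetical victim): someone who prefers A to B, B to C, and C to A will pay a small fee for every "upgrade" around the cycle, indefinitely, without ever ending up better off.

```python
# A victim with cyclic preferences A > B > C > A pays a small fee for
# every "upgrade" around the cycle and never improves their position.
prefs = {("A", "B"), ("B", "C"), ("C", "A")}   # (x, y) means x preferred to y

def will_trade(holding, offered):
    """The victim trades (and pays a fee) whenever offered a preferred item."""
    return (offered, holding) in prefs

holding, fees_paid = "A", 0
for offered in ["C", "B", "A"] * 3:            # walk the cycle three times
    if will_trade(holding, offered):
        holding, fees_paid = offered, fees_paid + 1

print(holding, fees_paid)  # -> A 9: back where they started, nine fees poorer
```

Note that the loop only turns a profit because `will_trade` never updates: the pump assumes the victim stays mechanically consistent in their inconsistency, which is exactly the assumption questioned above.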
Maybe humans are vulnerable to a big and complicated enough preference reversal scam that we don't notice it as such? Surely the only way to not be vulnerable to that is to have a consistent utility function and live by it? This may be the case, but once you approximate that fancy new math in something actually computable and actually integrable with human senses and motor functions, a few things become apparent:
- You've designed an Artificial Intelligence; the problem is AI-complete. (This is extremely hard, and has not been done.)
- Your approximation scheme has to be all kinds of complex to efficiently handle scams and the real scenarios that actually come up, because the raw math is not feasibly computable. Your approximation is probably a) worse than what evolution came up with, and b) vulnerable to scams and full of exploits anyway.
- Even if it's better, you can't just flash the human brain with a new ROM image. Oops.
So any real rationality advice for humans has to take the current decision system as given, and critique specific decisions with arguments that present a concrete better way to decide and that actually apply to the case at hand. With that in mind, let's look again at the Allais Paradox, the fundamental issue of which is whether the VNM Independence Axiom really applies to such gambles.
I accept the mathematical truth of the Independence Axiom, but does a normal human's preference for certainty in this gamble really violate it? Here's my analysis:
The Allais Paradox isn't actually about money or utility, expected or otherwise. It's about the sense that anyone offering you a 97% chance of winning lots of money might be running a swindle in which you have no recourse, because you cannot tell whether the "no prize" outcome showed up because you were unlucky or because the dealer cheated. Charity happens, and lotteries happen, and both give away money, but the former gives it away with certainty and the latter at low probability and net-negative expectation. A 97% chance of being given money looks like neither of those, and 'smells' dishonest.
The much-derided "value of certainty" in the first preference is the value of knowing that if the dealer says "no prize" and refuses to hand over the $24000, you have grounds for recourse, e.g. beating up the dealer and taking his (your) money because he's definitely a lying cheating bastard who scammed you out of $24000. Whereas if you pick the 33/34 chance of receiving $27000 and are told "no prize", your grounds are not so obvious.
That is, if we interpret the choices as implicit contracts, a contract that says "There is a 97% chance I will give you your money" is a lot less than 97% as enforceable as one that says "I will definitely give you your money". If we were dealing with real contracts, we would want to be much more explicit about the source of randomness used. With a cryptographically secure source of randomness that the participant could independently verify, the argument for the applicability of the Independence Axiom becomes much tighter.
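As a sketch of what "independently verifiable randomness" could mean here (a standard commit-reveal construction, my own illustration rather than anything from the original experiment): the dealer commits to a secret nonce before the gamble, the subject contributes a nonce of their own, and the 33/34 draw is derived from both, so neither party can bias or retroactively rig the outcome.

```python
import hashlib
import secrets

# Blum-style commit-reveal: the dealer commits to a nonce before the gamble,
# the subject contributes their own, and the draw is derived from both.

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# 1. Dealer commits before anything else happens.
dealer_nonce = secrets.token_bytes(32)
commitment = sha256_hex(dealer_nonce)          # published to the subject

# 2. Subject contributes their own randomness.
subject_nonce = secrets.token_bytes(32)

# 3. Dealer reveals the nonce; subject verifies it matches the commitment.
assert sha256_hex(dealer_nonce) == commitment

# 4. Both parties derive the draw from the combined nonces.
#    (Taking a 256-bit hash mod 34 introduces only negligible bias.)
digest = hashlib.sha256(dealer_nonce + subject_nonce).digest()
draw = int.from_bytes(digest, "big") % 34
outcome = "prize" if draw < 33 else "no prize"
```

A dealer who tried to swap nonces after seeing the subject's contribution would fail the commitment check, which is precisely the enforceability the certain option otherwise provides for free.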
But you know what? When the experiment is rearranged to give the subject better intuitive assurance that the randomness isn't rigged, for example by letting them examine the mechanism and test it out until they are satisfied with it, I'll bet the paradox at least partially goes away. But suppose normal people never reach that level of intuitive certainty, and the paradox doesn't go away.
So in that case, here's another explanation of the Allais "paradox": The experimental subjects are not expressing an irrational preference for certainty, but a preference to avoid having to confidently audit the security of a cryptosystem (entropy source) in a possibly adversarial scenario without personal expertise. In case you hadn't noticed, this is basically the zeroth rule of cryptography as taught by actual security professionals.
One might object that the possibility of lying, cheating dealers shouldn't be part of the analysis; that we're trying to characterize and critique human decision making in simple cases first. But in that case, why not define away all the complexity around "money" and "probability" as well? If you want people to answer the strictly mathematical question, ask them the strictly mathematical question: which is larger, 27000*33/34 or 24000? That would be silly, of course, but once you start talking about how people evaluate real scenarios involving randomness sources, other human agents, money, implicit contracts, and so on, you should expect things to get complicated fast, and your nice clean proof of human irrationality starts to look shaky and naive. That the subjects consider things like lying dealers in their intuitive evaluation of the scenario, even though no such thing was mentioned in its description, really isn't surprising; what kind of lying, cheating dealer would advertise the possibility of a scam, and what kind of fool wouldn't intuitively take that possibility into account anyway?
Contrary to the usual would-be rationalist narrative of naive human irrationality contradicting the math, it looks to me like what we really have here is naive rationalists taking the nominal scenario as presented at face value, trying to "get the right answer", and posturing about how smart they are for doing so, instead of actually maximizing expected utility in real multi-agent scenarios. If we are going to talk about who, exactly, is vulnerable to being scammed because of their decision framework -- well, someone who takes what is given at face value, disregards the possibility of dishonest counterparties where there is massive incentive for one to exist, ignores their gut feelings, and calculates from there would be a good specimen to start with.
So when we critique would-be rationalists for being naively loyal to the math, and say that we've moved beyond that to taking intuitive judgement a lot more seriously, this is the kind of thing we're talking about. Real reality is messy, and an intuitive judgement system evolved and tuned by experience to deal with that reality shouldn't be lightly second-guessed.