Suppose you wanted to find out someone’s subjective belief in the likelihood of a given event. Your first instinct might be to offer them betting odds on the event. If they would accept odds of 2:1 but no lower, you might conclude that they believe the event to have a 33.3% chance of occurring.
This would be correct if the person you're offering the bet to is risk neutral. But if they're risk averse, they might actually believe the event has a 50% chance of occurring and simply require more favourable odds to compensate them for bearing the risk. In that case, the odds they accept understate their true belief.
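The odds-to-probability arithmetic can be sketched as follows (a minimal illustration assuming a risk-neutral bettor; the function name is my own):

```python
def implied_probability(odds: float) -> float:
    """Belief at which a risk-neutral bettor is indifferent about a bet
    paying `odds` units per unit staked if the event occurs (odds:1).
    Break-even condition: p * odds - (1 - p) * 1 = 0  =>  p = 1 / (odds + 1).
    """
    return 1.0 / (odds + 1.0)

# Accepting 2:1 but no lower implies a belief of 1/3, i.e. about 33.3%.
print(implied_probability(2))  # 0.3333...
```

For a risk-averse bettor this number is only a lower bound on their true belief, which is exactly the problem the Karni mechanism addresses.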
Luckily, there’s a better way to elicit subjective probabilities. It comes from Karni (2009), a short paper published in Econometrica.*

Continue reading Understanding the Karni Belief-Revelation Mechanism
I’ve read a lot of LessWrong recently, and I learned about a particular paradox known as Newcomb’s problem. In the problem, an alien superintelligence called Omega presents you with a choice between two boxes, box A and box B. He puts $1,000 in box A and either $0 or $1,000,000 in box B. You can then choose whether to take both boxes or just box B, but the catch is that Omega will only put $1,000,000 in B if he predicts that you will take only B. And given that Omega is able to perfectly predict the future, everyone who chooses only B will get $1,000,000 while everyone who chooses both will get $1,000.
The maddening thing about this problem is that, as you sit puzzling over whether to take both boxes, the monetary amounts have already been decided and placed in the boxes. There’s either $1,000 or $1,001,000 sitting in front of you, so taking both boxes is strictly better than taking only one, by exactly $1,000, whatever the boxes contain. And yet everyone who takes only one box gets $999,000 more than everyone who takes both. How could it be that the “right” answer gives less money than the “wrong” answer?

Continue reading Newcomb’s Problem and Order in Game Theory
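The tension between the dominance argument and the predictor’s track record can be made concrete with a small payoff calculation (a sketch using the amounts from the problem statement; the function name is my own):

```python
def payoff(take_both: bool, predicted_both: bool) -> int:
    """Newcomb payoffs: box A always holds $1,000; Omega puts
    $1,000,000 in box B only if he predicts you take B alone."""
    box_a = 1_000
    box_b = 0 if predicted_both else 1_000_000
    return (box_a + box_b) if take_both else box_b

# Dominance: whatever the boxes already contain, two-boxing
# yields exactly $1,000 more than one-boxing.
for predicted_both in (False, True):
    assert payoff(True, predicted_both) == payoff(False, predicted_both) + 1_000

# But with a perfect predictor, the prediction always matches the choice:
print(payoff(take_both=False, predicted_both=False))  # one-boxers get 1000000
print(payoff(take_both=True, predicted_both=True))    # two-boxers get 1000
```

The two print statements show the $999,000 gap from the text: conditional on the prediction being correct, one-boxers end up far richer even though two-boxing is better in every fixed state of the boxes.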