My friend Stuart has two posts on Less Wrong, "Let's split the cake, lengthwise, upwise and slantwise" and "If you don't know the name of the game, just tell me what I mean to you", that bring up an interesting problem: how do we bargain in coordination games when we do not know beforehand exactly what the game will be? When we know the game in advance we can agree on how to go about things so that everyone is satisfied, but when we face iterated games of uncertain content, bargaining gets complex. Stuart shows that one can handle this by fixing a common relative measure of value and then maximizing it, ignoring fairness considerations.
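A toy sketch of what "fix a common measure and maximize it" looks like over uncertain games. Everything here is an illustrative assumption of mine, not Stuart's construction: two agents agree in advance on weights for a combined measure, then in each randomly drawn game simply pick the action maximizing that measure, with no per-game fairness check.

```python
import random

random.seed(0)

# The agents fix a common relative measure in advance: a weighted sum of
# their utilities. The weights and the toy game distribution are
# illustrative assumptions, not from the posts.
w1, w2 = 0.5, 0.5

def random_game():
    """Sample a one-shot game: three actions, each giving a random
    payoff pair (agent 1's utility, agent 2's utility)."""
    return [(random.random(), random.random()) for _ in range(3)]

def play(game):
    """Pick the action maximizing the fixed common measure,
    ignoring how fair this particular game's outcome is."""
    return max(game, key=lambda payoff: w1 * payoff[0] + w2 * payoff[1])

totals = [0.0, 0.0]
for _ in range(10_000):
    a, b = play(random_game())
    totals[0] += a
    totals[1] += b

print(totals)
```

Any single game may come out lopsided, but over many draws the fixed measure tends to even out between the agents, which is roughly why pre-committing to it can beat renegotiating fairness game by game.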
This seems to be a different take on fairness from Rawls, who assumes that people behind the veil of ignorance do not even know their conceptions of the good (their utilities). In that case perhaps the rational choice is simply to select a random relative measure, or to use the Nash bargaining solution or the Kalai-Smorodinsky bargaining solution with an arbitrary disagreement point.
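For concreteness, here is a minimal sketch of the two bargaining solutions on a toy cake-splitting game. The utility functions and the zero disagreement point are my own illustrative assumptions: player 1 values cake linearly, player 2 with diminishing returns. The Nash solution maximizes the product of gains over the disagreement point; the Kalai-Smorodinsky solution equalizes each player's fraction of their ideal attainable gain.

```python
import math

# Player 1 receives a share x of the cake, player 2 receives 1 - x.
# Illustrative utility functions (an assumption for this sketch):
u1 = lambda x: x                  # linear valuation
u2 = lambda x: math.sqrt(1 - x)   # diminishing returns

d1, d2 = 0.0, 0.0                 # disagreement point: nobody gets anything
grid = [i / 100_000 for i in range(100_001)]

# Nash bargaining: maximize the product of gains over the disagreement point.
nash_x = max(grid, key=lambda x: (u1(x) - d1) * (u2(x) - d2))

# Kalai-Smorodinsky: find the split where each player's gain is the same
# fraction of their ideal (maximum feasible) gain.
ideal1 = max(u1(x) - d1 for x in grid)
ideal2 = max(u2(x) - d2 for x in grid)
ks_x = min(grid, key=lambda x: abs((u1(x) - d1) / ideal1 - (u2(x) - d2) / ideal2))

print(f"Nash split for player 1:              {nash_x:.3f}")  # ~2/3
print(f"Kalai-Smorodinsky split for player 1: {ks_x:.3f}")    # ~0.618
```

The two solutions give different splits here, which is the point: with no prior knowledge of the game, neither the disagreement point nor the choice between solution concepts is pinned down, so the choice feels arbitrary.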
But this seems too arbitrary. Rational agents will know that, being rational agents, their utility functions will *not* be entirely random: they will tend to coherently maximize certain things in the world. Hence they would have some form of prior over their utility functions even behind the veil of ignorance, simply from being rational agents, and this might help fix the initial bargaining agreement. But in that case the priors over future utility functions will be identical, so bargaining would seem simple: on average they expect to have similar utilities.
In the real-world case of Stuart and me meeting and deciding how we will bargain in the future, we are of course helped by actually having utility functions *and* priors over what games will be played.
Posted by Anders3 at October 27, 2010 03:27 PM