I have an entry on Overcoming Bias, Tell Me Your Politics and I Can Tell You What You Think About Nanotechnology, where I bring up the rather worrying results of Kahan et al. about how people decide whether a new technology is risky or not. They do not base the decision on facts but on emotion and cultural assumptions.
This is hardly news, as anybody who has been involved in debates about emerging technologies can attest. But it suggests that public deliberation and political decision-making in these areas are always going to be fundamentally non-rational.
One obvious response is to assume that we should not bring emerging technologies to the attention of the public, since the public will not contribute any rational risk analysis. But that assumes the experts themselves are not self-selected for particular cultural traits that got them interested in the field in the first place, and hence already biased.
Jef Allbright points out that debate 'has never been about facts and truth, but about subjective awareness of how to promote one's values, whatever they might be. We can be thankful that the universe provides a consistent ratchet effect, selecting for "what works."' That may be true, but in some areas we want as much rationality as possible as early as possible: waiting for 'what works' to do its job can take a surprisingly long time (it took 70 years from the realisation that asbestos was unhealthy until we stopped using it). Potentially dangerous things like AI and replicators in particular might require sane policies early on. Jef has a good point that the discussions should emphasise principles rather than ends, which echoes much of what I and the other Eudoxa people have said about the need for thicker debates on emerging technologies.
When dealing with such value-laden conflicts, it might be interesting to consider this paper, which I think may become one of the truly important papers in conflict resolution: Sacred bounds on rational resolution of violent political conflict by Jeremy Ginges, Scott Atran, Douglas Medin and Khalil Shikaki (PNAS, May 1, 2007, vol. 104, no. 18, pp. 7357-7360).
They show that opposition to compromise over sacred issues is increased by offering material incentives to compromise (trade something sacred for money? Outrageous!), while it is decreased when the adversary makes a symbolic concession over their own sacred values (if they are willing to trade that, they are serious about peace).
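To make the logic concrete, here is a minimal toy sketch of that finding (my own illustration, not the model in the paper; the function and all the weights are invented). The only point is the sign of the effects: for someone who holds the value sacred, a monetary sweetener flips from a discount on opposition to a penalty, while a symbolic concession from the other side reduces it.

```python
# Toy illustration of the Ginges et al. finding (not their model):
# a sacred value is treated as lexically protected, so adding money to
# a proposed compromise reads as a taboo trade-off and *raises*
# opposition, while a symbolic concession by the adversary lowers it.
# All numbers are made up for illustration.

def opposition_score(holds_value_sacred: bool,
                     money_offered: bool,
                     symbolic_concession: bool) -> float:
    """Illustrative opposition level in [0, 1] to a compromise that
    touches the respondent's (possibly sacred) value."""
    score = 0.5  # baseline resistance to compromising the value
    if holds_value_sacred:
        if money_offered:
            score += 0.3  # taboo trade-off: the incentive backfires
        if symbolic_concession:
            score -= 0.3  # reciprocity over sacred values signals sincerity
    else:
        if money_offered:
            score -= 0.2  # for non-absolutists, incentives work as usual
    return min(1.0, max(0.0, score))

if __name__ == "__main__":
    for sacred in (True, False):
        for money in (True, False):
            for symbolic in (True, False):
                print(f"sacred={sacred!s:5} money={money!s:5} "
                      f"symbolic={symbolic!s:5} -> "
                      f"{opposition_score(sacred, money, symbolic):.1f}")
```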
Maybe this is the way towards constructive compromises on GMOs, nanotech and other emerging technologies. Identify the standard stakeholders who will get involved anyway (ETC will automatically oppose most new technologies because they perceive a bad risk/benefit ratio due to their communitarian views, while the individualists will see the risk/benefit ratio as acceptable) and set up the compromising right away. What would ETC give up to get the libertarians to give space to more societal control? What would transhumanists give bioconservatives in exchange for enhancement development? (let's dope everybody with coffee at the meeting too)
Who knows, maybe we will get nanomachines and peace in the Middle East from the same ideas.
Posted by Anders3 at June 15, 2007 09:00 PM