Personalised weapons of mass destruction: governments and strategic emerging technologies - I blog about personalised biowarfare, extrajudicial killings, cyberwarfare, EMP weaponry and other thriller-fodder. Except that these are real issues now and in the near future. My main argument is that the US (and other governments) are pursuing technologies that are potentially most effective at destabilizing their own societies.
Yesterday I had a very stimulating talk with a knowledgeable expert on cybersecurity and strategy. One of the main conclusions from our discussion - filled with historical overviews and interesting anecdotes about the Pentagon lifestyle - was that governments can be amazingly naive and biased about matters of national security, the one area where you would assume they would be coldly rational. The problem is not just individual folly, but institutionalized, endemic folly that does cost nations wars and megadeaths. Oops.
As technology advances, individual destructive power goes up (strictly speaking, it is the tail of the power distribution that goes up - the median person is still pretty meek) and this leads to increasing risks when individuals or groups go bad.
None of these approaches is good enough. In theory the monitoring and control approach could be extended to a global surveillance regime with AI support that kept everybody under watch and prevented certain bad behaviours. But such a singleton would be best at handling well-defined risks rather than entirely new ones (so the only really safe approach would be to stop free innovation), and it would lend itself to totalitarian uses. Moral enhancement might make some groups even more motivated to bring about drastic moral change, or more willing to risk everything for an ideal. Resiliency is expensive and likely too local to handle certain global catastrophic or existential risks - while the approach makes things better, it doesn't provide the coordination needed to meet them.
Most likely we will have to muddle through with a mixture of approaches, imperfect implementations and occasional failures. That is OK. What is not OK is when strategic technology policy (or global disaster policy) is irrational: it does not matter whether intentions are good if they are not applied rationally to the world. And that suggests an interesting meta-problem: how do we overcome institutionalized, endemic folly? Figuring out better ways of making large-scale decisions is probably one of the most high-impact research topics imaginable right now.
Posted by Anders3 at November 2, 2012 10:07 PM