When speculating about technological advances there is an almost unavoidable tendency to extrapolate too narrowly, with insufficient weight given to the uncertain but considerable context within which a specific technology will be embedded and interact.
In the case of AI, it seems likely that non-sentient artificially intelligent tools will be developed and gain widespread popularity as means to augment human intelligence and human collaborative efforts – before, and on the path to – the development of sentient or recursively self-improving AI.
As the augmented collective intelligence of human organizations increases, and local interests are increasingly superseded by more global interests, a metaethics of cooperation can be expected to evolve and be applied at many scales of human endeavor.
This evolution of human collective intelligence and morality will throw a much different light on currently intractable and paradoxical problems such as the tragedy of the commons and prisoners’ dilemma situations.
The challenge is to raise ourselves to that level of awareness and wisdom before our bad-ass ancestral tendencies do too much damage.
- Jef
I agree that the thing to look for is collective augmented intelligence rather than individual superintelligences. They might be around, but unless extreme hard takeoff scenarios hold, they will be surrounded by vast numbers of nearly-as-smart entities whose combined power far exceeds theirs. A swell rather than a spike, so to speak.
I wonder about the collective ethical growth. Miller and Drexler point out in "Comparative Ecology: A Computational Perspective" (http://www.agorics.com/Library/agoricpapers/ce/ce4.html#section4.5)
that ecosystems are far more coercive and violent than markets. To a large extent this seems to be due to the higher proportion of win-win interactions, compared to zero-sum interactions, found in markets; this helps establish and sustain a high degree of cooperation. Given that markets represent a sizeable share of human interactions, and the goods traded are anything humans can conceive ways of trading, it seems reasonable to suspect that they are indeed representative of complex cultural systems. As AI enables us to add culture and smarts to nearly any object or interaction, it seems plausible that these win-win interactions will multiply and we will tend towards greater (on average) cooperation. In many ways this change does not require individual awareness or wisdom, just as the stable altruistic strategies in the iterated prisoners' dilemma do not require altruistic agents. The wisdom is a collective phenomenon rather than an individual one.
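The iterated prisoners' dilemma point can be shown with a short simulation (a minimal sketch; the payoff values and strategy names are the standard textbook conventions, not anything from the discussion above): tit-for-tat sustains mutual cooperation without either agent valuing the other's welfare — it merely reciprocates.

```python
# Minimal sketch: iterated prisoner's dilemma with the standard payoff
# matrix, showing that a reciprocal (not altruistic) strategy sustains
# cooperation while resisting exploitation by a pure defector.

PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_moves):
    """Cooperate first, then simply copy the opponent's last move."""
    return opponent_moves[-1] if opponent_moves else "C"

def always_defect(opponent_moves):
    return "D"

def play(strat_a, strat_b, rounds=100):
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(moves_b)  # each strategy sees only the other's history
        b = strat_b(moves_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (300, 300)
print(play(tit_for_tat, always_defect))  # defection gains little: (99, 104)
```

Neither strategy contains any notion of the other's good; the cooperative outcome is a property of the interaction structure, which is the sense in which the "wisdom" is collective.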
What can (and likely will) upset this cozy drift towards niceness is that complex systems have plenty of degrees of freedom, and hence many ways of going wrong and many arbitrary reasons for conflict (say, over different ethical systems or aesthetics). The defectors are probably a smaller problem than the occasional nutcases with enhanced destructive abilities, and the statists seeking to impose a 'rational' (i.e. simplistic) order on parts of the system. While I believe large and diverse systems will handle such problems well (likely in some kind of self-organized criticality state, with power-law distributed disasters followed by restorative transients), it is not nice to be near one of the disasters, and we had better make sure the system is large enough to handle even the biggest conceivable disasters.
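The power-law point above can be made concrete with a toy sample (all parameters here are arbitrary assumptions, not anything from the text): with Pareto-distributed "disaster sizes", the single largest event tends to account for a disproportionate share of total damage, which is why a system must be sized for the biggest conceivable disaster rather than the average one.

```python
import random

# Toy sketch with arbitrary parameters: Pareto-distributed "disaster
# sizes" via inverse-CDF sampling, using P(X > x) = (x_min / x)**alpha.
random.seed(0)  # deterministic for reproducibility

def disaster_size(alpha=1.5, x_min=1.0):
    u = random.random()  # u in [0, 1)
    return x_min / (1 - u) ** (1 / alpha)

sizes = [disaster_size() for _ in range(10_000)]

# Heavy tail: the largest event dominates, unlike a thin-tailed average.
print(f"largest / total damage: {max(sizes) / sum(sizes):.3f}")
```

With alpha below 2 the distribution has infinite variance, so "restorative transients" after typical events say little about the damage from the rare extreme ones.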
Posted by Anders at July 15, 2004 11:14 PM