More writeups of the Cloud Intelligence symposium at Ars Electronica are now available, with videos and presentations; David Sasaki has posted one covering the symposium.
My presentation on cloud superintelligence can be seen here.
Looking back at this event and the Singularity Summit, I have become more and more convinced that we need to examine an alternative to I. J. Good's concept of an "intelligence explosion": an "intelligence flood". As he wrote:
"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind."
Now consider a collective intelligence: a large group of humans connected by suitable software and social networks. Such a collective intelligence is clearly ultraintelligent in most respects. It might not be much better than its smartest members at inventing quantum gravity, but it can manage enormous projects and handle distributed knowledge far beyond individual human capacity. If it can be expanded without losing capability, then we have a form of intelligence explosion. This is what drives the radical growth in Robin Hanson's upload economy model, as well as his AI growth paper: not supreme intelligence, just a lot more of it.

The real issues here are 1) how hard it is to expand a collective intelligence without losing capability, 2) what domains are easy or hard for collective intelligences, and 3) whether expanding collective intelligences falls in the easy or the hard category. I would guess it is easy, but I am not 100% sure.
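To make the "a lot more of it" point concrete, here is a minimal toy sketch in Python of flood-style growth: capability grows only by adding members, never by making any member smarter. The parameter names and numbers (output_per_member, reinvestment, cost_per_member) are illustrative assumptions of mine, not Hanson's actual model.

```python
# Toy sketch of "intelligence flood" growth: all capability gains come from
# adding more members, not from making any individual member smarter.
# Parameters are made-up illustrative assumptions, not Hanson's model.

def simulate(members=1_000.0, years=10, steps_per_year=12,
             output_per_member=1.0,   # output per member per year (arbitrary units)
             reinvestment=0.3,        # fraction of output spent on adding members
             cost_per_member=2.0):    # output needed to add and run one new member
    """Track the size of a collective whose output buys new members."""
    dt = 1.0 / steps_per_year
    history = [members]
    for _ in range(years * steps_per_year):
        output = members * output_per_member * dt
        members += reinvestment * output / cost_per_member
        history.append(members)
    return history

history = simulate()
# Growth rate r = reinvestment * output_per_member / cost_per_member = 0.15/yr,
# so the collective roughly follows N(t) = N0 * exp(r*t): a doubling time of
# ln(2)/0.15 ~ 4.6 years, with no member ever becoming individually smarter.
print(f"after 10 years: {history[-1]:.0f} members "
      f"(x{history[-1] / history[0]:.1f})")
```

The sketch just shows the qualitative shape: once new members are themselves a produced good, as copied uploads would be, output compounds exponentially even though every individual stays exactly as smart as before.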
Posted by Anders3 at October 21, 2009 03:39 PM