Kevin Kelly speculates about a possible taxonomy of minds (and in another post discusses different kinds of self-improving intelligences). His aim is to consider how a mind might be superior to ours. Nice to see him rediscover/reinvent classic transhumanist ideas.
His list is rather random: a mixture of different implementations, different abilities, different properties, and different abilities to improve themselves or other minds. Still, it is an interesting start and a good way for me to check what ought to go into my (still unpublished) paper on the varieties of superintelligence. Here is Kelly's list, ordered by me and with comments in parentheses.
Implementation
I find it interesting that he left out pure AI, a mind created de novo. In my paper we also include biologically enhanced humans.
Abilities
A rather mixed bag of abilities.
Properties
Improvement abilities
I think one can clearly improve on this, and it would be both fun and useful.
What we really need is a better understanding of what would go into self-improving intelligence, since if there is any hint that it could indeed lead to a hard take-off scenario, then a lot of existential risk concerns become policy relevant. At the same time, being open to the diversity of possible minds is important, since there are likely plenty of choices - and some might be closer than we think.
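To see why the take-off question has teeth, here is a minimal toy model (my own sketch in Python, not anything from Kelly or our paper): assume the rate of intelligence improvement scales as a power alpha of current intelligence. For alpha > 1 the growth blows up in finite time - a hard take-off - while alpha <= 1 gives only exponential or slower growth, so everything hinges on that exponent.

```python
# Toy model of recursive self-improvement (my own illustration).
# Assume intelligence I improves at a rate proportional to I**alpha:
# alpha > 1 blows up in finite time (a "hard take-off"),
# alpha == 1 gives ordinary exponential growth, alpha < 1 a slow ramp.

def takeoff(alpha, i0=1.0, rate=0.1, dt=0.01, steps=5000, cap=1e9):
    """Euler-integrate dI/dt = rate * I**alpha, stopping once I exceeds cap."""
    i = i0
    for step in range(1, steps + 1):
        i += rate * i**alpha * dt
        if i > cap:
            return step, i  # take-off: cap crossed within the window
    return steps, i         # no take-off within the simulated window

for alpha in (0.5, 1.0, 1.5):
    steps, final = takeoff(alpha)
    print(f"alpha={alpha}: I={final:.3g} after {steps} steps")
```

Running it, alpha = 1.5 crosses the (arbitrary) cap well within the simulated window while the other exponents plod along - the hard/soft take-off distinction in miniature.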
In my and Toby's paper we argued that there are a few basic dimensions of superintelligence: speed, multiplicity (the ability to run copies in parallel), memory (which includes a sub-hierarchy of kinds of better working memory that likely relates closely to intelligence), I/O abilities, and the ability to reorganize. These allow some superhuman abilities even using fairly simple "tricks": running a group of subminds, creating internal organisations with division of labor just as organisation managers do, and "superknowledge", where the "thinking" is actually data-driven and due to the production of society at large.

Different kinds of base minds are differently easy to upgrade along these dimensions: biology has strong speed limitations, AI and brain emulations are suited for multiplicity, cyborgs are by definition better at interfacing with new systems, and so on. Different kinds of problems would also benefit from different kinds of expansion. Even if the overall result is an increase in effective general intelligence, in practice each particular problem has advantages and disadvantages that make certain mind designs better suited to it. Ultra-fluid minds could of course adapt to this, but there is a cost to being fluid too.
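To make the dimensional picture concrete, here is a purely illustrative Python sketch: each base mind type gets a made-up profile along the five dimensions, and a problem is scored by weighting the dimensions it cares about. The `MindDesign` class, the `fit` weighting and every score below are my invention for illustration, not results from the paper.

```python
# A purely illustrative toy model of the paper's dimensions of
# superintelligence. All scores (0-5) and weightings are invented
# for illustration; nothing here comes from the paper itself.
from dataclasses import dataclass

@dataclass
class MindDesign:
    name: str
    speed: int           # how far serial thinking speed can be pushed
    multiplicity: int    # ease of running parallel copies/subminds
    memory: int          # upgradability of working and long-term memory
    io: int              # ease of interfacing with new sensors/effectors
    reorganization: int  # freedom to restructure the architecture itself

    def fit(self, weights):
        """Weighted suitability of this design for a problem profile."""
        return sum(getattr(self, dim) * w for dim, w in weights.items())

designs = [
    MindDesign("biological", speed=1, multiplicity=1, memory=2, io=2, reorganization=1),
    MindDesign("brain emulation", speed=4, multiplicity=5, memory=3, io=3, reorganization=3),
    MindDesign("de novo AI", speed=5, multiplicity=5, memory=5, io=4, reorganization=5),
    MindDesign("cyborg", speed=2, multiplicity=1, memory=3, io=5, reorganization=2),
]

# Two hypothetical problem profiles: one rewarding parallel search,
# one rewarding only interfacing with external systems.
profiles = {
    "search-heavy": {"multiplicity": 3, "speed": 1, "io": 1},
    "interface-heavy": {"io": 3},
}

for label, weights in profiles.items():
    best = max(designs, key=lambda d: d.fit(weights))
    print(f"{label}: best suited design is {best.name}")
```

The point of the toy is only that the ranking flips as the weights change: the search-heavy profile favors designs strong in multiplicity, while the purely interface-weighted one ranks the cyborg profile first.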
Given that the collective minds formed by alternate reality game (ARG) players already constitute cheap kinds of superintelligence (or maybe supercompetence is a better word?), understanding the space of minds can be quite crucial and profitable.
Posted by Anders3 at September 12, 2008 10:05 PM