This is an essay based on the speech I gave yesterday at my friend David Wood’s birthday party, and on discussions I had with the other guests.
One of the tricks of the trade among futurologists is to look as far back as one is trying to look forward, and to learn from the things that have changed in the meantime. So when I look back 25 years to 1984 I see myself playing with my Sinclair ZX Spectrum (48 kilobytes of memory! 256×192 screen resolution! 15 colors!). On Swedish state television (only two channels) someone was demonstrating that you could use a home computer to balance your checkbook or store recipes. That year Apple launched the famous “1984” advert, announcing a Macintosh rebelling against the authoritarian power of IBM.
What could we have predicted in 1984 about the computers of 2009, and what would we have missed? I think most people already knew that computers were getting cheaper, smaller and better. So it would have made sense to predict that there would be a computer in every home, and that it would be much better than the ones we had then. But we would have had a far harder time predicting what they would be used for. It was not obvious that the computer would eat the stereo.
The biggest challenge to the imagination was the Internet – already existing in the background, foreshadowed by the spread of bulletin board systems run by amateurs (who were already pirating software and spreading *very* low-resolution naughty pictures). The Internet changed the rules of the game, making individual computers far more powerful (and subversive) by networking them. From what I remember of early-’80s computer speculation, the experts were making the same mistake as the television program: they could see uses in communication, data sharing and online shopping, but not Wikipedia, propaganda warfare, spam, virtual telescopes or the blogosphere.
Moore’s law exists in various forms, but the one I prefer is that it takes about 5.6 years for commodity computer performance per dollar to become ten times better. That means 25 years is worth about 30,000 times as much computer power per dollar. This means not just more and more powerful systems, but also smaller and more ubiquitous ones. Most of us do not need supercomputers (except for computer games, one of the big drivers of computer power), but we get more and more computers in more and more places. Some current trends are rebelling against the simple “more power” paradigm and instead looking for more energy-efficient or environmentally friendly computers. That way they can become even more ubiquitous.
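The back-of-the-envelope arithmetic behind that 30,000× figure is easy to check: a tenfold price-performance improvement every 5.6 years, compounded over 25 years.

```python
# Compounding the Moore's law rule of thumb used above:
# tenfold price-performance improvement every 5.6 years.

def moore_factor(years, years_per_tenfold=5.6):
    """Price-performance multiplier after `years` of Moore's law."""
    return 10 ** (years / years_per_tenfold)

print(moore_factor(25))   # roughly 29,000, i.e. ~30,000x per dollar
```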
Maybe the biggest innovation over the next 25 years will be the biodegradable microprocessor, perhaps printed on conductive and bendable plastic. Like RFID tags (which will also be everywhere, and possibly merged into these computers), they can be put anywhere. Every object will be able to network, and to know what it is, where it is, what it is doing, who owns it and how it is to be recycled.
A world of smart objects is a strange place. Thanks to the environment (and of course one’s personal appliances) one will, in the words of Charles Stross, never be lost, never alone and never forget. It will be a googleable reality. Privacy will be utterly changed, as will concepts of property and security. But just as it was hard in 1984 to imagine Wikipedia, we cannot really imagine how such a world will be used. The human drivers will be the same, and most of the things and services we have today will of course be around in some form, but the truly revolutionary part is what gets built on top of these capabilities. It would be a world of technological animism, where every little thing has its own techno-spirit. The freecycling grassroots movements that currently avoid waste by giving away unneeded things to other members became possible and effective thanks to the Internet. In a smart-object world a request for a widget may trigger all nearby widgets to check how often they have been used, and the unused widgets to ask their owners whether they would not rather give them away. Less waste, a more efficient market and possibly also stronger social interactions would be the result.
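The widget-request scenario could be sketched as follows. This is a speculative toy model, not an existing protocol; all class and function names are hypothetical illustrations.

```python
# Toy sketch of the smart-object freecycling scenario: each widget
# tracks its own usage, and a request gathers the long-idle ones so
# their owners can be asked about giving them away. Hypothetical API.

import time

class SmartWidget:
    def __init__(self, kind, owner):
        self.kind = kind
        self.owner = owner
        self.last_used = 0.0   # timestamp of last use (never used yet)

    def use(self):
        self.last_used = time.time()

    def idle_seconds(self):
        return time.time() - self.last_used

def find_candidates(request_kind, widgets, idle_threshold=90 * 86400):
    """Return widgets of the requested kind unused for ~3 months."""
    return [w for w in widgets
            if w.kind == request_kind
            and w.idle_seconds() > idle_threshold]
```

In the imagined world the networking, discovery and owner-notification layers would do the real work; the point here is only that the matching logic itself is trivial once objects know their own history.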
The real wildcard in the world of smarts over the next 25 years is of course artificial intelligence. Since day one the field has made overblown claims about the imminence of human-level intelligence, only to have them thrown back in its face shortly afterwards. Yet the field has made enormous and broad progress; we just seldom notice it, since AI is rarely embodied in a beeping kitchen robot.
The most impressive AI application right now might not be Deep Blue’s win over Kasparov in 1997 but the rapid improvement of driving in the DARPA grand challenges for driverless cars. Having vehicles get around in a real environment with other vehicles while obeying traffic rules is both a very hard and a very useful problem. If the current improvements continue we will see not just military unmanned ground vehicles before 2034, but likely automated traffic systems. Given that deaths in traffic accidents outnumber all other accidental deaths, and that cars are already platforms with rapidly growing computer capacity, car automation has a good chance of becoming a high priority. The individual car smarts become amplified by networking: when the system of cars “knows” where they are going, where they are and what their individual conditions are, they can produce emergent improvements such as forming groups that move together, detecting accidents or road faults, learning how to navigate treacherous stretches – or emergently inventing better ways to plan their traffic. Whether humans will give up power over their cars remains to be seen (there are many issues of trust, legal responsibility and the desire for control), but sufficiently good smart traffic systems could become a strong incentive.
I do not see any reason why we could not create software “as smart as” humans or smarter, however ill defined that proposition is. Human thought processes occur embedded in a neural matrix that did not evolve for solving philosophy problems or trading on stock markets – such abilities are the froth on top of a deep system of survival-oriented solutions. That is why it might be easier to make intelligent machines than machines that actually survive well in a real, messy physical environment (most current robots cheat by working in simple environments, by being designed for particular environments, or by just being rugged enough not to care – but you would not want that for your kitchen robot). But the environment of human thought, communication and information processing might be ideal for artificial intelligences.
I would give the emergence of “real” AI a decent chance of happening over the next 25 years. I also see no reason why it would be limited to merely human levels of intelligence. But I also believe that intelligence without knowledge and experience is completely useless. So the AI systems are going to have to learn about the world, essentially undergoing a rapid childhood. That will slow things down a bit. But we should not be complacent: once an AI program has learned enough to actually work in the real world, solving real problems, it can be copied. The total amount of smarts can be multiplied extremely rapidly this way. Just as networking makes smart objects much more powerful, copying can make even expensive-to-develop smarts very cheap and ubiquitous.
I do not think the AI revolution is likely to be heralded by a strong superintelligence calling all phones simultaneously to tell mankind that there is a new boss. I think it will rather be heralded by the appearance of a reasonably cheap personal secretary program: one that understands people well enough to work as a decent assistant on everyday information tasks – keeping track of projects, filling in details in texts, gathering and ordering information, suggesting useful things to do. If it works, it will improve the efficiency of individual people, in turn improving the efficiency of the economy worldwide. It might appear less dramatic than the phone call from the AI god, but I suspect it will be much more profound. Individuals will be able to do much more than they could before, and they will be able to work better together. As intelligence proliferates everywhere, things will speed up – including, of course, the development of better AI (especially if the secretary software is learning and sharing skills). Rather than a spike of AI divinity we get a swell of a massively empowered mankind.
Intelligence isn’t everything, in any case. One can do just as well with a lot of information. As the saying goes, data is not information, information is not knowledge, and knowledge is not wisdom. But one can often extract one of the later stages from one of the earlier ones. If you have enough information and a clever way of extracting new information from it, you can get what *looks like* superintelligence. Google is a good example: using a relatively simple algorithm it extracts very useful information from the text and link structure of web pages. The “80 Million Tiny Images” project demonstrated that having enough examples and some semantic knowledge was enough to do very good image recognition. Interface the AI secretary with Google 2.0 and it would appear much smarter, since it would be drawing on the accumulated knowledge of the whole net.
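The “relatively simple algorithm” can be illustrated by PageRank, the published idea behind Google’s early ranking (the production system is, of course, far more elaborate): a page matters if pages that matter link to it.

```python
# Minimal PageRank sketch: iterate "a page's rank is the teleport
# probability plus the damped rank flowing in from pages linking to it".

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for p, outgoing in links.items():
            if not outgoing:              # dangling page: spread evenly
                for q in pages:
                    new_rank[q] += damping * rank[p] / n
            else:
                for q in outgoing:
                    new_rank[q] += damping * rank[p] / len(outgoing)
        rank = new_rank
    return rank

# Toy web: everything links to "hub", so it ends up ranked highest.
web = {"a": ["hub"], "b": ["hub"], "hub": ["a"]}
ranks = pagerank(web)
assert max(ranks, key=ranks.get) == "hub"
```

The point of the example is how little machinery is needed: no understanding of the pages’ content, just the structure of who links to whom.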
The classical centralized “mind in a box” is intensive intelligence, while distributed networks of intelligence are extensive intelligence. Intensive intelligence thrives in a world of powerful processors and centralized information. It is separated from the surrounding world by some border, appearing as an individual being. Extensive intelligence thrives in a world of omnipresent computing power, smart objects and fluid networks. It is not being-like, but more of a process. We might have instances already in the form of companies, nations and distributed problem solving such as human teams playing alternate-reality games. As time goes on I expect the extensive intelligences to become smarter and more powerful, not only because of the technological factors mentioned earlier but also because we will work hard on designing better extensive intelligences. Figuring out how to make a company or nation smart is worth a lot.
The real challenge for this scenario might be the right interface for all the smart and intelligent stuff. As the Microsoft paperclip demonstrated, if a device does not interact with us in the way we want, it will be obnoxious. On my trip to London the nagging of the car GPS was a constant source of joking conversation: it kept interrupting our discussions, it had a tone suggesting an annoyed schoolmarm, and it was simply not a team player. Smart objects need to become team players, both technologically and socially. That is very hard, as anybody who has tried to program distributed systems or design something really simple and useful knows. Designing social skills into software might be one of the biggest businesses of the next 25 years. Another approach is to make smart objects mute and predictable: rather than displaying unsettling autonomy and trying to engage with us, they just quietly do their jobs. But under the hood the smarts would be churning, occasionally surprising us anyway.
Can we humans keep up with this kind of development? We are stretched but stimulated by our technological environment. It is not similar enough to our environment of evolutionary adaptedness, so we suffer obesity, back pain and information stress. But we also seem to adapt rapidly to new media, at least if we grow up with them or have a personal use for them. The Flynn effect, the rise of scores on IQ tests in most countries over time, could in part be due to a more stimulating environment where the important tasks are ever more like IQ-test questions. Television plots are apparently growing more and more complex and rapid, driven by viewer demands and in turn stimulating viewers to want more complexity. Computer games train attention in certain ways, making players better at certain kinds of hand-eye coordination and rapid visual search. Whether these changes are good is beside the point; they have their benefits and drawbacks. What they do suggest is that we will think differently in a smarter environment.
We can of course enhance ourselves directly. Current cognition-enhancing drugs seem able to improve aspects of memory, attention and alertness, and possibly also affect creativity, emotional bonding or even willpower. These effects have their limits and are probably task-specific: the kind of learning ability that is useful for cramming a textbook might not be useful for learning a practical skill, and the narrow attention that is great in the office is dangerous on the road. I believe that by 2034 cognition enhancers will be widespread, but their use will be surrounded by a culture of norms, rules of thumb and practical knowledge about which enhancer to take in which situation – not unlike how we today use and misuse caffeine and alcohol. If we are clever we can make use of the smart environment to support good decision-making about our own bodies and minds.
While brains can be trained (for example by computer games intended to stimulate working memory), we might wish to extend them. We can already make crude neurointerfaces that link brains to computers; it is just that the bandwidth is very small and signals are usually sent one way. More advanced interfaces are on their way, and by 2034 I expect them to have enormously improved the quality of life of many profoundly disabled people. But even if neurointerfaces are worthwhile for someone lacking limbs or a sense, or suffering from paralysis or other brain disorders, that does not mean they are worth the cost, risk and effort for healthy people. To be viable a neurointerface has to be as cheap, safe and easy to use as LASIK surgery. That will likely take a long while. But once the “killer application” is there (be it obesity or attention control), other applications will follow, and before long the neurointerface might become a platform for various useful and life-transforming applications, just like the cellphone. I’d rather not be the guinea pig, though.
Since I am blondish I think I am allowed to tell a blonde joke:
- How do you confuse a blonde?
- You put him in a circular room and tell him to sit in the corner.
- How does a blonde confuse you?
- He comes out and says he did it.
Intelligence is surprising. That is the nature of truly intelligent thought: it cannot be predicted without having the same knowledge and mental capacity. Since there are so many different kinds of possible minds and situations, we should expect the unexpected. We *know* we are going to be surprised in the future. So we should adapt to that, and try to make sure the surprises will be predominantly nice ones.
A world with thousands or millions of times as much smarts as the present will be very strange. Raw intelligence may prove less important there than other traits, just as physical strength is no longer essential in a mechanized world. Instead, soft things like social skills, moral compass, kindness and adaptability may become the truly valued traits.
Posted by Anders3 at February 23, 2009 02:17 PM