OK, what about the Monday part of the World Futures Meeting?
I started with John Smart's talk about understanding evolutionary development. The core idea is that evolution has no real direction; it is a random exploration of possibilities. Development, on the other hand, has a direction, moving from an undifferentiated or random state towards a certain outcome. The interplay between these two processes produces a complex history. There was plenty of interesting history of the concept of progress and a look at broad patterns in the history of life and technology. The core pattern seems to be an ongoing acceleration, one that appears to be very robust against outside disruptions (asteroid impacts, the Black Death) and, more surprisingly, inside disruptions (the fall of the Roman Empire, the dotcom crash). John's suggestion is that there is indeed a strong developmental factor of some kind here, smoothing out the random evolutionary explorations into an amazing exponential. Part of it is definitely driven by MEST (mass-energy-space-time) optimization among competing agents, but is that enough to explain it?
I found the talk enormously stimulating, because it avoided the teleology it could so easily have become (there were far too many new age talks about the impending spiritual evolution of mankind at the conference for my taste). Instead it was posed as an intriguing research issue: does this pattern really exist (after all, it could just be sampling bias), what causes it, and what are the implications?
I'm not sure I buy the whole idea, at least not in the fairly strong form John seems to like. Part of it is just general scepticism: I want better data, and I want a way to measure the strength of the convergent and divergent factors of history. Part of it is my faith in freedom: if history is largely convergent, we might be individually free but all end up in the same place anyway due to the overall development. Also, I cherish the vision of unbounded growth of complexity and the creativity of going in new directions. However, it is not impossible to have the cake and eat it too: we might have strongly convergent factors in some directions (MEST optimization) while this enables strong divergence in others (cultural expression, whose range is increased by the technological efficiencies). In the end we might all end up physically as Matrioshka Brains while on the inside culture is diverging wildly; this would explain the Fermi paradox, but the developmental factors had better be extremely strong to prevent wildfires (this is especially interesting since John's talk discussed how this kind of resource-depleting expansion on Earth might actually be the bootstrap phase for the singularity, but then contended that it was unlikely that we would see off-world expansion).
Sonia Miller talked about converging NBIC and the social implications, although the talk was mostly about the legal challenges. Perhaps not even challenges, because right now most of the issues brought up remain unasked in the legal system. We have hardly begun to look at brain fingerprinting, genetic discrimination, nanotech employment law or financing of enhancing medicine. Overall a very good talk that brought up the need for the legal profession to be part of the public and technical debate, as well as the reverse.
Steven C. Bankes talked about the method of doing long-term policy analysis presented in the RAND report Shaping the Next One Hundred Years: New Methods for Quantitative, Long-Term Policy Analysis (Robert J. Lempert, Steven Popper, and Steven C. Bankes). The basic problem with quantitative predictions is that they tend to be quite wrong, which is embarrassing and undermines confidence in the assumptions. But qualitative models like scenarios are often just as wrong, mere recastings of the present into the future. Can one make a policy analysis ("what should we do?") without making a prediction? The model from the study group approaches this by looking at entire ensembles of future scenarios (generated by a computer varying different parameters) and then looking at how well different strategies do. These strategies can be quite complex and change during the simulation. But what seems great to us in the present might be bad in the future (our stone-age ancestors would probably regard the obesity epidemic as a world of beautiful, rich people, a real heaven), so instead of looking at how good the outcomes are in terms of absolute utility, they are compared with the best possible strategy from the perspective of the final future state: one minimizes the regret. This makes the strategies robust, especially since the final steps of the method involve changing assumptions, stress testing strategies and building new strategies from the previously best ones. The methodology is a combination of quantitative simulation, experimentation and scenario planning to create policies that are likely to minimize future regret. They used the Wonderland model (a toy Club of Rome-style model of economy, pollution and population) to demonstrate how it could be used to set up more clever policies than just fixed Kyoto-like goals.
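The ensemble-and-regret logic can be sketched in a few lines. To be clear, the scenarios, strategies and payoff numbers below are entirely made up for illustration, not taken from the RAND study or the Wonderland model:

```python
# Toy sketch of regret-minimizing strategy choice over a scenario ensemble.
# All strategies, scenarios and utilities are hypothetical.

# utility[strategy][scenario]: how well each strategy scores in each future
utility = {
    "fixed_targets":   {"boom": 7, "stagnation": 2, "green_shift": 4},
    "adaptive_policy": {"boom": 6, "stagnation": 5, "green_shift": 5},
    "do_nothing":      {"boom": 8, "stagnation": 1, "green_shift": 2},
}

scenarios = ["boom", "stagnation", "green_shift"]

def regret(strategy, scenario):
    """Regret = best achievable utility in this scenario minus what we got."""
    best = max(utility[s][scenario] for s in utility)
    return best - utility[strategy][scenario]

def minimax_regret_choice():
    """Pick the strategy whose worst-case regret across the ensemble is smallest."""
    return min(utility, key=lambda s: max(regret(s, sc) for sc in scenarios))

print(minimax_regret_choice())  # → adaptive_policy
```

Note how "do_nothing" wins outright in the boom scenario but is never chosen: its regret in the other futures is too large, while the adaptive strategy is never far from the best achievable outcome in any of them.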
One of the nicest things was that one could look at outcomes using several different utility evaluations, such as pure "only the environment counts" and "only the economy of the North counts" views as well as all possible mixtures, and use the combined information to build policies as well as convince mixed, multigoal audiences about the possibilities.
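Sweeping over the mixtures can itself be sketched; the two policies and their scores below are hypothetical, chosen only to show how the winner can flip as the weighting shifts from one pure goal to the other:

```python
# Hypothetical outcome scores for two policies under two pure goals.
outcomes = {
    "policy_A": {"environment": 8.0, "north_economy": 3.0},
    "policy_B": {"environment": 5.0, "north_economy": 6.0},
}

def mixed_utility(policy, w):
    """w = weight on the environment, (1 - w) on the northern economy."""
    o = outcomes[policy]
    return w * o["environment"] + (1 - w) * o["north_economy"]

# Sweep from "only the economy counts" (w=0) to
# "only the environment counts" (w=1) and see which policy wins.
for w in [i / 10 for i in range(11)]:
    winner = max(outcomes, key=lambda p: mixed_utility(p, w))
    print(f"w={w:.1f}: {winner}")
```

With these numbers policy_B wins for economy-heavy mixtures and policy_A takes over once the environment weight passes 0.75, which is exactly the kind of information a mixed, multi-goal audience can act on.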
This is the kind of future studies that I like: creative, but attempting stringency and making the most out of our computational abilities. The main drawback is likely that many interesting issues are hard to turn into the kind of quantitative models this approach needs.
Raj Bawa talked about nanomedicine. Again, no more surprises than usual... i.e., the field is advancing really, really fast, and the stuff we can already do with current "bulk nanotechnology" is amazing. The problem is more about uncertainty over regulation, public opinion and business structure than about technology. If things advance as fast as many of us think, then these factors will be the main costs of development.
Timothy C. Mack talked about proactive computing. It is the next step beyond interactive computing, where the user or computer is waiting for the other. In proactive computing the system appreciates needs and acts on them. Lots of fun applications were demonstrated, ranging from motes with accelerometers that formed ad hoc networks for seismic detection to applications in smart agriculture. The key enablers are machine learning (based on various forms of soft computing and statistics; Mack said he didn't like neural networks, so he was glad the ANN conferences have all become statistics conferences) and ubiquitous computing and networking. One of the best points was that wasting labor is wrong in an ageing society: much of what these systems do is replace humans walking around with notebooks, taking down numbers to be entered into office computers for further processing. It is much better to have them do something more creative with the numbers instead. Another point is that this kind of technology enables massive data collection: instead of looking at an earthquake from a few stations, every building (or part of a building) will be measuring it and responding to it. The idea of every bolt in a spaceship having its own little processor (which I used in a past sf game) seems to be getting closer.
Finally Richard C. Lamm talked about the brave new world of health care. This was a talk very much rooted in the present administrative and economic system, looking forward towards foreseeable challenges and trying to deal with them. We all know the litany about the elderly boom and the rising expense of the health care system. His approach was based on the assumption that any new technology would not change anything except make people live longer and hence be even more expensive (both in treatment and through the new chronic diseases they would get instead of cancer or Alzheimer's). He advocated a "new moral geography" based on contributive justice: one should suboptimize the health care given to individuals in order to optimize it for the population. If people don't get quite as much intensive but eventually futile treatment at the end of life, or quite as many expensive treatments during life, more people will get treatment; hence this is a good thing and should be accepted. This is a good example of the logic of public health care systems; they become a struggle between all the different groups. Lamm didn't seem concerned with the possibility that the large greying population might not find this system in their best interest and might vote against adding age considerations to health care. Still, his proposed system is far freer than the current Swedish system: a tax-based health care system for everybody (likely running strict rationing), group-based systems with their own internal rules, and finally freedom for individuals to pay for their own treatments if they desire and can afford it.
The discussion of the same issue in Peter Schwartz's Inevitable Surprises is much more optimistic, especially since he also takes into account that people will not just passively accept a handed down solution but take it into their own hands to get health. But maybe the risk is that the health altruism and paternalism that is so widespread (maybe for evolutionary reasons) will try to inhibit these ways out of the challenge.
That concluded the conference. Overall very stimulating, but maybe people agreed with each other too much. There were the transhumanist/technoacceleration crowd, the personal growth/consciousness/spiritual evolution crowd, the management/corporate adaptation crowd and the energy/environment crowd. Overlap seemed to be small, and it was too easy to just go to talks that fitted one's preconceptions (this is why I hate parallel sessions - better a week-long conference with some challenging surprises than a weekend of rushing between one's friends' talks). Maybe this is a case of experts coordinating to give coordinated answers to their customers (as some readers can probably tell, I have spent the last two weeks reading and pondering Robin Hanson's marvellous ideas - staying at his place for two nights was a futurist conference on its own).
Monday evening was concluded by a dinner with friends, where wolf sociology, the gender/personality correlates of cat and dog ownership, new forms of vegetarianism and other matters were discussed. But that is material for its own blog.
Posted by Anders at August 4, 2004 05:23 PM