This Saturday I spoke at Humanity+ UK2010. Great fun, as always when you put a lot of future-oriented big thinkers in close proximity.
I think the morning session was all about complexity and adaptation: Max More explained what he was sceptical about in singularity scenarios (the core of it being that most scenarios do not take into account the complexity of real socio-techno-economic systems). I talked about cognition enhancement as a way to become more adaptive, and Rachel Armstrong described how synthetic life might allow a very different way of doing "biotechnology" based on carefully tuned emergent properties. The evening session covered everything from radical hedonism with David Pearce to the internet of things (David Orban) to DIY enhancement art (Natasha Vita-More, of course) to posthuman perception (Amon Twyman).
One of the key ideas I took with me from the afternoon session was the need to have a sense of proportion.
When thinking about existential risks it is important to have a sense of what the stakes are, and not just think "that is bad" - some things can be many orders of magnitude worse than others. At the same time, as Nick Bostrom pointed out, the research literature on how to prevent human extinction is rather minimal - about the same size as the literature on dung beetle reproduction. Toby Ord has pointed out that some charities can be up to 10,000 times more efficient than others at providing health (in terms of years of life per dollar donated), just because they focus on particular, very effective interventions. Aubrey de Grey showed a pretty minor advance in biogerontology that was hailed in the media as "the secret of ageing", while rattling off a series of papers with far more profound implications that nobody outside the field has heard of. A graph of the cost and size of carbon abatement methods clearly shows that some fix a vastly bigger chunk of the problem than others.
It seems to me that the vast differences in effectiveness noticed in charities, news coverage, research focus and abatement methods probably have counterparts everywhere: there are *vast* differences in importance between different things, yet this is often overlooked.
When two things can be put side by side it often appears as if they are roughly equal, even when one of them is far more important than the other. Substitute fly ash for clinker in cement, or replace incandescent light bulbs with LEDs? Sounds about the same. Distribute condoms, or educate people about AIDS through mass media? Sounds about the same. Research end-of-the-world scenarios or beetles? Sounds about the same. Research arteriosclerosis or ageing? Sounds about the same. Yet in each pair one thing is at least an order of magnitude more important than the other.
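To see how stark such comparisons can be, here is a tiny Python sketch. The two interventions and their cost figures are made up for illustration (they are not Ord's actual numbers); the point is just the arithmetic: divide the same budget by very different costs per life-year and the gap becomes hard to ignore.

```python
# Illustrative only: "A" and "B" and their costs are hypothetical stand-ins,
# chosen to show how an order-of-magnitude comparison of charities works.
interventions = {
    "intervention A": 50_000.0,  # assumed cost (USD) per healthy life-year gained
    "intervention B": 5.0,       # assumed cost (USD) per healthy life-year gained
}

budget = 1_000_000.0  # USD donated to each

for name, cost_per_life_year in interventions.items():
    life_years = budget / cost_per_life_year
    print(f"{name}: ~{life_years:,.0f} healthy life-years per ${budget:,.0f}")

ratio = interventions["intervention A"] / interventions["intervention B"]
print(f"B is {ratio:,.0f} times more effective than A")
```

Under these assumed costs the same million dollars buys 20 life-years in one case and 200,000 in the other - the kind of 10,000-fold gap Ord describes.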
One reason is of course that we do not have the data. It takes a bit of statistics to see that the clinker substitution and the mass media education are vastly more efficient, and this kind of statistics is not always easy to get or interpret. This is why getting a better "internet of things" is so important: we can actually query a lot of the things in the world to get information about their state. It will not solve the whole problem, since much of the really important raw data needs plenty of deeper analysis before it becomes reliable information (just consider the climate data issues).
Another problem is that even with data many of us do not have a sense of scale. This is where information visualisation and education are important. A visualisation of length scales (wow! gold leaf is amazingly thin!) is helpful, and if it is visually appealing it makes it easier to recall the details. I think we should ensure that there are plenty of good visualisations that give us a sense of scale, not just of length but of time, energy and value. And we should ensure that we and everybody else regularly look at them and learn from them.
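As a very small example of what I mean, here is a matplotlib sketch that puts a handful of lengths on a logarithmic axis. The objects and values are my own rough, rounded choices, purely for illustration; the same trick works just as well for time, energy or money.

```python
# A toy log-scale visualisation of lengths; values are rough, rounded figures.
import matplotlib.pyplot as plt

lengths_m = {
    "hydrogen atom": 1e-10,
    "gold leaf thickness": 1e-7,   # roughly 100 nanometres
    "width of a human hair": 1e-4,
    "human height": 1.7,
    "diameter of the Earth": 1.3e7,
    "Earth-Sun distance": 1.5e11,
}

fig, ax = plt.subplots(figsize=(8, 2.5))
ax.scatter(list(lengths_m.values()), [0] * len(lengths_m))
for name, x in lengths_m.items():
    ax.annotate(name, (x, 0), xytext=(0, 10), textcoords="offset points", rotation=45)
ax.set_xscale("log")
ax.set_yticks([])
ax.set_xlabel("length (metres, logarithmic scale)")
plt.tight_layout()
plt.show()
```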
As David MacKay notes, the myth that "every little helps" is quite pernicious: If everyone does a little, we’ll achieve only a little. Sometimes small individual actions do have big effects, like vaccination programs or participating in writing Wikipedia, but this is not the rule. Small efforts tend to be symbolic and make us feel we have done our part.
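The arithmetic behind MacKay's point is worth spelling out. Using rough, approximate figures of the kind he uses (on the order of 0.01 kWh/day saved by unplugging an idle phone charger, against a total energy use of very roughly 125 kWh/day per person), the sketch below shows why the sum of many tiny savings is still tiny.

```python
# Back-of-the-envelope in MacKay's spirit; all figures are rough approximations.
population = 60e6                   # people in a UK-sized country
charger_saving_kwh_per_day = 0.01   # ~ what unplugging an idle phone charger saves
per_capita_use_kwh_per_day = 125.0  # ~ total energy use per person per day

national_saving = population * charger_saving_kwh_per_day
national_use = population * per_capita_use_kwh_per_day
print(f"If everyone unplugs their charger: {national_saving:,.0f} kWh/day saved,")
print(f"which is {national_saving / national_use:.3%} of total energy use.")
```

Under these rough numbers the heroic national charger-unplugging campaign saves well under a hundredth of a percent of total energy use.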
But there are deeper problems. How do we even judge the importance of new scientific papers? Recently there was a meeting on hard problems in social science where (among other issues) Nick suggested that one key hard problem is indeed to figure out the real worth of academic contributions. Any small improvement here would clearly have a huge impact, not only on which papers we should prioritize reading but also on the allocation of money and other resources, and presumably on the rate of development.
If there are vast differences in importance between things, it also means that even small advances on an important goal may matter much more than big advances on unimportant goals. Right now designers are having great fun coming up with gadgets that show consumers how much energy they are using or warn them about "vampire power". Yet the biggest energy losses occur in producing and distributing power, and the big energy users are not individual consumers but sectors like transport and industry. If designers instead focused on making these sectors slightly more efficient, they would probably have a much bigger impact.
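A back-of-the-envelope comparison makes the point. The fractions below are my own rough assumptions (thermal plants wasting something like 60% of their input, grid losses of a few percent, standby power a few percent of household electricity), not measured data, but even under generous assumptions the "vampire power" term comes out tiny next to the upstream losses.

```python
# Rough, illustrative comparison of where electricity losses sit; all
# fractions below are assumptions made for the sake of the comparison.
primary_energy_in = 100.0      # arbitrary units entering power stations
generation_efficiency = 0.38   # typical thermal plant, roughly
grid_loss_fraction = 0.06      # transmission and distribution, roughly
household_share = 0.30         # households' share of delivered electricity, roughly
standby_fraction = 0.05        # "vampire power" share of household use, roughly

delivered = primary_energy_in * generation_efficiency * (1 - grid_loss_fraction)
lost_in_generation = primary_energy_in * (1 - generation_efficiency)
lost_in_grid = primary_energy_in * generation_efficiency * grid_loss_fraction
vampire_power = delivered * household_share * standby_fraction

print(f"lost in generation: {lost_in_generation:5.1f}")
print(f"lost in the grid:   {lost_in_grid:5.1f}")
print(f"vampire power:      {vampire_power:5.1f}")
```

On these assumptions the generation losses are roughly two orders of magnitude larger than everything the vampire-power gadgets could ever recover.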
Except of course that you get cred as a designer by making a gadget people can see and admire. The designers of a better coal furnace or a clever process control interface won't get invited to cool parties and art events as often, despite their greater importance. This is similar to the observation that most people do not care much about how effective the charity they donate to is, or that science news stories prefer the new and dramatic items readers might like over the important ones (case in point: the extreme media bias in the MMR scare). These social and economic biases hide the true importance of things, or direct us towards fixing less important things because they give us other, tangible rewards.
Of course, one might argue that we shouldn't be tempted by invitations to art events when we could instead make a small but significant dent in the climate problem: the latter is so much more important than the former. Unfortunately relatively few people are able to act consistently on what they consider to be important. Social rewards and money are often more powerful motivators than the intellectual realization that A is a thousand times more important than B.
So where does this lead us? I think the conclusion is that we must really chip away at the problems of:

1. Getting reliable data on how important different things actually are.
2. Building visualisations and education that give us a genuine sense of scale.
3. Judging the real worth of contributions, whether papers, projects or policies.
4. Countering the social and economic incentives that reward visible but unimportant efforts over invisible but important ones.
This is a set of tremendously hard, complex and *important* problems.
But they are by no means impossible to deal with - some of the above links show information visualisations that help with problem 2 and data that attack problem 1 in different domains; there are economic incentives for solving several of them; and new findings in cognitive and social psychology have a direct bearing on the rest. And given the importance of the issue - since it affects nearly every domain and by definition leads us to make very costly misallocations of effort - even small advances can be tremendously valuable. Besides, many of the sub-problems are quite interesting and rewarding in their own right.
Did we have a sense of proportion at the Humanity+ meeting? Maybe, maybe not. But I think we were more aware than most people that the "amplitude" of the future - whether it will go well or badly - may be much bigger than most people think. We should seriously consider both the risk of going extinct and the possibilities of enhanced immortality. How good or bad could things be? How likely are different scenarios, and how can we tell whether our estimates are right? How do these estimates compare with other things we care about? Answering such questions constitutes the first few steps towards getting a *real* sense of proportion.
Posted by Anders3 at April 26, 2010 01:25 PM