Here is the transcript of my talk Global Catastrophic Risks: An Overview
Some other talks from the meeting are emerging on Accelerating Future, including JoSH's scary talk on weather control. While this particular application may not be the most dangerous, I think he is spot on about the risky dynamics of dual-use, global technologies with enormous first-mover advantages.
My personal list of indicators that something could be *really* dangerous: no size cut-off, self-amplification, and resistance to study.
Conversely, risks that have size cut-offs (fires, earthquakes), are not self-amplifying, or can be studied are, if not safe, at least manageable. If we can find tools to introduce cut-offs, stop self-amplification, or bound unknowability, we can reduce threats quite a lot.
One of the sad things about the Web is that some brilliant old resources disappear. The Xmorphia website at Caltech was a great way of exploring the patterns a reaction-diffusion system could produce, an early landmark of web science visualization.
Fortunately Jonathan Lidbeck at the University of Oregon has created this Java demo: 2D Gray-Scott Reaction-Diffusion. It is a great successor, with new features. After all, these days you don't need a supercomputer to run the simulations; they can be done in an applet in real time.
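Just how cheap these simulations have become is easy to demonstrate. Below is a minimal sketch of the Gray-Scott system integrated with explicit Euler steps on a periodic grid; the parameter values are a classic "spots" regime commonly used in demos, not necessarily the ones Xmorphia or Lidbeck's applet uses.

```python
import numpy as np

def laplacian(Z):
    # Five-point stencil with periodic (wrap-around) boundaries
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0)
            + np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

def gray_scott(n=100, steps=2000, Du=0.16, Dv=0.08, F=0.035, k=0.060):
    """Integrate du/dt = Du*lap(u) - u*v^2 + F*(1-u),
                 dv/dt = Dv*lap(v) + u*v^2 - (F+k)*v
    on an n-by-n grid with unit time step."""
    u = np.ones((n, n))
    v = np.zeros((n, n))
    # Seed a small square perturbation in the middle of the domain
    r = n // 10
    u[n//2-r:n//2+r, n//2-r:n//2+r] = 0.50
    v[n//2-r:n//2+r, n//2-r:n//2+r] = 0.25
    for _ in range(steps):
        uvv = u * v * v          # the autocatalytic reaction term
        u += Du * laplacian(u) - uvv + F * (1 - u)
        v += Dv * laplacian(v) + uvv - (F + k) * v
    return u, v

u, v = gray_scott()
```

Plotting `v` (e.g. with matplotlib's `imshow`) after a few thousand steps shows the spot patterns forming; varying F and k moves the system between spots, stripes, and chaos, which is exactly what the Xmorphia parameter-space maps charted.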
My talk at the GCR08 conference is now online: Global Catastrophic Risks: An Overview, and Caution about Risk Assessments on Vimeo and as audio. (Some liveblogging of the event by John Carter McKnight.)
Transcript likely coming soon.
I think my most quotable line is “Very smart people make very stupid mistakes and do so with surprising regularity.” (Well, it has been quoted twice so far.) Fortunately I'm not very smart, so I make mistakes with either unsurprising regularity or surprising irregularity.
Practical Ethics: Why boost brains? is my comment on the Nature article Towards responsible use of cognitive-enhancing drugs by the healthy by Greely et al. Overall, I agree with them. Enhancement is here and it should be accepted, but it needs to be studied much more so that we can avoid spending too much on placebo, minimize risks, and figure out how it can fit into our social norms. That will require professional societies to consider their own policies, and maybe some legislation to avoid, for example, discrimination.
A fun reference I found last week is IQ in early adulthood and later risk of death by homicide: cohort study of 1 million men, which finds that among Swedish males, having above-average intelligence reduces the risk of being murdered to 27% of the risk among the lowest-scoring group. Why this is so is a bit unclear, but clearly intelligence is health-promoting. It reduces injuries and bad driving too.
Of course, it remains to be seen whether enhancers can achieve the kind of long-term improvements that would affect average intelligence. Things like avoiding deficiencies during childhood, a stimulating environment, and good nutrition certainly help (and should be aimed for). But given the accumulating evidence that low intelligence is a really bad thing, it seems we morally ought to research ways of improving it far more than is currently done. Maybe the impetus will instead come from expanded research on enhancement of the healthy and smart, but the biggest real benefits of an "IQ pill" are likely to accrue to the cognitively worst off. Most enhancements seem to give bigger boosts to the worst performers than to the best, and societies that gain overall from enhancement will have more resources to spend on helping.
Practical Ethics: Open source censorship - I blog about the UK censorship-of-Wikipedia debacle. My conclusion is that to the extent censorship is legitimate (I have my doubts about that), it must be handled in a transparent, accountable way. That does not seem to be the case for the IWF, and this raises serious problems with making their blacklist mandatory. Overall, the real story here isn't a mistaken or excessive attempt to control Wikipedia but the steady creep towards private, non-neutral enforcement on the Internet.
Ken MacLeod writes about the future (of IT security) in All Your Firewall Are Belong to Us. Plenty of interesting points, largely based on a sketched scenario where we see a big lurch to the left, towards New Deal-style infrastructure projects.
However, one claim looks pretty problematic:
Now the problem of IT security will not go away, but the very nature of the problem changes if the education system has to adapt to preparing people for manufacture instead of McJobs (or finance), and if there are big technology-heavy projects to soak up the script kiddies and hackers and spammers and scammers into doing something more productive and useful and indeed profitable.
Did organized crime disappear due to the New Deal? From what I can see, it thrived during the Depression despite having lost the cash from Prohibition (it simply moved into new areas). The main factors blunting it were the eventually successful anti-corruption and anti-racketeering campaigns. Similarly, non-organized crime was rife.
It seems very optimistic to assume that once schools start to promote "real engineering" and Big Green starts hiring, the con-men, spammers, hackers, and script kiddies will decide to straighten up and become good little cubicle workers. Identity theft is apparently linked to meth abuse; should we assume the drivers of this kind of behavior are easily affected by a change in emphasis on infrastructure?
In fact, MacLeod doesn't seem to have taken his scenario seriously enough. In a world where governments outlaw bad things like trans-fats top-down, why don't they mandate IT security? It seems entirely logical: an information society focused on building its way out of a crisis and avoiding foreign and internal threats would be stupid not to do something about the vulnerabilities in its IT infrastructure.
But macro-managed IT security is probably the last thing many current IT security companies would like, since much of the market consists of selling to individual companies and consumers. Macro IT security could end up as mandated central systems on the ISP level, giving lots of money to a few companies, as well as fixed standards for what security needs to be on a computer for it to access the net - bad news for competition and innovation, even if the overall security becomes better.
And macro-managed security is likely to make surveillance much, much easier and harder to avoid.
MacLeod notes that
"lots and lots of things will go horribly wrong, fortunes unimaginable today will be squandered on gigantic schemes that never pay off, and conflicts and contradictions will build up"

To me that sounds like a very good reason to demand transparency and accountability in any big, government-funded projects. Especially if the spammers, con-men, hackers, and script kiddies actually do join the projects.
Practical Ethics: The perfect cognition enhancer - it is iodine. Iodine deficiency affects up to 2 billion people, mentally impairing at least 18 million children each year and most likely costing mankind several billion IQ points. Iodized salt is cheap, safe, and acceptable even under fairly strong libertarian anti-forced-medication views.
The scary part was that I, a researcher in cognition enhancement, had not thought about it before. I have been looking at lead and other heavy metals, even folate and other nutrients, but I had completely overlooked this key micronutrient. A good reminder to read the literature more carefully, and a demonstration that sometimes massive improvements can be achieved surprisingly simply.
A few things I just ran across:
Ask Nature - the Biomimicry Design Portal: biomimetics, architecture, biology, innovation inspired by nature, industrial design - a portal aiming at organising cool and useful solutions to problems where nature already has workable solutions. Slick interface, already some useful information. I'm just concerned with how easy it will be to fill it: while people with particular interests might curate some pages, the ideal process would be to have someone or something scan every new (and old!) biology paper for a solution that fits into the taxonomy and then add it.
The FDA-approved handgun makes Spider Jerusalem's "prescription truncheon" a reality. If one believes the right to bear arms is important, clearly people with arthritis should be helped. But as a commenter on MedGadget points out, it does not look enough like a gun to be useful as a deterrent, and this might reduce its utility quite a lot. I also have a suspicion that it is hard to aim.
Also on MedGadget: their 2008 medical SF contest. "Different Day, Same Chip" may not be too exciting, but it does caricature an all-too-likely possibility. "30 Minutes of Clinical Ethics" suggests where all the bioethicists will end up. Maybe having deciders around is not a bad idea, except that I see a lot of potential for blame-shifting. "APA 4000" is the kind of story that makes singularitarians happy, without even having any superintelligence around.
And then there is Mathematical undecidability and quantum randomness. A cute little paper that theoretically and experimentally demonstrates that you can tell decidable and undecidable propositions apart by making quantum mechanical measurements on a corresponding system. Except that the undecidability isn't the full Gödel kind, but a more garden-variety kind where information-limited axioms cannot constrain the formal system enough to handle even fairly everyday questions. Still, it is an interesting idea.