# The end of the worlds

George Dvorsky has a piece on Io9 about ways we could wreck the solar system, where he cites me in a few places. This is mostly for fun, but I think it links to an important existential risk issue: what conceivable threats have a big enough spatial reach to threaten an interplanetary or even star-faring civilization?

This matters, since most existential risks we worry about today (like nuclear war, bioweapons, or global ecological/societal crashes) only affect one planet. But if existential risk is the answer to the Fermi question, then the peril has to strike reliably. If it is one of the local risks it has to strike early: a multi-planet civilization is largely immune to them. It will not just be distributed, but it will almost by necessity have fairly self-sufficient habitats that could act as seeds for a new civilization if they survive. Since it is entirely conceivable that we could have invented rockets and spaceflight long before discovering anything odd about uranium or how genetics works, it seems unlikely that any of these local risks is “it”. That means that the risks have to be spatially bigger (or, of course, that xrisk is not the answer to the Fermi question).

Of the risks mentioned by George, physics disasters are intriguing, since they might irradiate solar systems efficiently. But the reliability of their being triggered before interstellar spread seems problematic. Stellar engineering, stellification and orbit manipulation may be issues, but they hardly happen early – there is lots of time to escape. Warp drives and wormholes are also likely late activities, and do not seem reliable as extinctors. These are all still relatively localized: while able to irradiate a largish volume, they are not fine-tuned to cause damage and do not follow fleeing people. Dangers from self-replicating or self-improving machines seem to be a plausible, spatially unbound risk that could pursue (but they are also problematic for the Fermi question, since now the machines are the aliens). Attracting malevolent aliens may actually be a relevant risk: assuming von Neumann probes, one can set up global warning systems or “police probes” that maintain whatever rules the original programmers desire, and it is not too hard to imagine ruthless or uncaring systems that could enforce the great silence. Since early civilizations have the chance to spread to enormous volumes given a certain level of technology, this might matter more than one might a priori believe.

So, in the end, it seems that anything releasing a dangerous burst of energy will only affect a fixed volume. If it has energy $E$ and one survives wherever the deposited energy per unit area stays below $e$, then for isotropic radiation the radius beyond which one is safe is $r = \sqrt{E/(4 \pi e)} \propto \sqrt{E}$ – one needs to get into supernova ranges to sterilize interstellar volumes. If it is directional, only a fraction $f$ of the sky is hit but the range goes up as $\propto \sqrt{1/f}$, and the total volume reached scales as $\propto f (1/f)^{3/2} = \sqrt{1/f}$ – narrower beams carve out longer but thinner lethal cones, at the price of missing most directions.
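A minimal numerical sketch of this scaling (my own illustration; the supernova-scale energy and the 100 kJ/m² lethal fluence, the threshold also used later in this post, are round assumptions):

```python
import numpy as np

PC = 3.086e16                      # metres per parsec

def lethal_range(E, e, f=1.0):
    """Distance out to which the deposited energy per area exceeds e,
    for an event of energy E spread over a fraction f of the sky."""
    return np.sqrt(E / (4.0 * np.pi * e * f))

E_supernova = 1e44                 # J, round supernova-scale energy (assumption)
e_lethal = 1e5                     # J/m^2, i.e. the 100 kJ/m^2 extinction-level fluence

r = lethal_range(E_supernova, e_lethal)
print(f"isotropic lethal range: {r / PC:.0f} pc")   # a few hundred parsecs

# Beaming into a fraction f of the sky: range grows as 1/sqrt(f),
# and the lethal volume f*(4*pi/3)*r^3 also grows as 1/sqrt(f).
for f in (1.0, 0.1, 0.01):
    rf = lethal_range(E_supernova, e_lethal, f)
    volume = f * (4.0 / 3.0) * np.pi * rf**3
    print(f"f={f:<5} range={rf / PC:8.0f} pc   lethal volume={volume:.2e} m^3")
```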

Self-sustaining effects are worse, but they need to cross space: if their range is smaller than interplanetary distances they may destroy a planet but not anything more. For example, a black hole merely absorbs a planet or star (releasing a nasty energy blast) but does not continue sucking up stuff. Vacuum decay, on the other hand, has indefinite range in space and moves at lightspeed. Accidental self-replication is unlikely to be spaceworthy unless it starts among space-moving machinery; here deliberate design is the more serious problem.

The speed at which a threat spreads also matters. If it is fast enough, no escape is possible. However, many of the replicating threats will have sublight speed and could hence be escaped by sufficiently paranoid aliens. The issue here is whether lightweight and hence faster replicators can always outrun larger aliens; given the accelerating expansion of the universe it might be possible to outrun them by being early enough, but our calculations do suggest that the margins look very slim.

The more information you have about a target, the better you can in general harm it. If you have no information, merely randomizing it with enough energy/entropy is the only option (and if you have no information about where it is, you need to radiate in all directions). As you learn more, you can focus resources to do more harm per unit expended, up to the extreme limit of solving the optimization problem of finding the informational/environmental inputs that cause the desired harm (=hacking). This suggests that mindless threats will nearly always have shorter range and do less harm than threats designed by (or constituted by) intelligent minds.

In the end, the most likely type of actual civilization-ending threat for an interplanetary civilization looks like it needs to be self-replicating/self-sustaining, able to spread through space, and to have at least a tropism towards escaping entities. The smarter it is, the more effective it can be. This includes both nasty AI and replicators, but also predecessor civilizations that have infrastructure in place. Civilizations cannot be expected to reliably do foolish things with planetary orbits or risky physics.

Stuart Armstrong sent me this email:

I have a new expression: “a lump of cadmium”.

Background: in WW2, Heisenberg was working on the German atomic reactor project (was he bad? see the fascinating play “Copenhagen” to find out!). His team almost finished a nuclear reactor. He thought that a reaction with natural uranium would be self-limiting (spoiler: it wouldn’t), so had no cadmium control rods or other means of stopping a chain reaction.

But, no worries: his team had “a lump of cadmium” that they could toss into the reactor if things got out of hand. So, now, if someone has a level of precaution woefully inadequate to the risk at hand, I will call it a lump of cadmium.

(Based on German Nuclear Program Before and During World War II by Andrew Wendorff)

It reminds me of the story that SCRAM (an emergency nuclear reactor shutdown) stands for “Safety Control Rod Axe Man”: a guy standing next to the rope suspending the control rods with an axe, ready to cut it. It has been argued that the actual backup was a liquid cadmium solution instead. Still, in the US project they did not assume the reaction was self-stabilizing.

Going back to the primary citation, we read:

To understand it we must say something about Heisenberg’s concept of reactor design. He persuaded himself that a reactor designed with natural uranium and, say, a heavy water moderator would be self-stabilizing and could not run away. He noted that U(238) has absorption resonances in the 1-eV region, which means that a neutron with this kind of energy has a good chance of being absorbed and thus removed from the chain reaction. This is one of the challenges in reactor design—slowing the neutrons with the moderator without losing them all to absorption. Conversely, if the reactor begins to run away (become supercritical), these resonances would broaden and neutrons would be more readily absorbed. Moreover, the expanding material would lengthen the mean free paths by decreasing the density and this expansion would also stop the chain reaction. In short, we might experience a nasty chemical explosion but not a nuclear holocaust. Whether Heisenberg realized the consequences of such a chemical explosion is not clear. In any event, no safety elements like cadmium rods were built into Heisenberg’s reactors. At best, a lump of cadmium was kept on hand in case things threatened to get out of control. He also never considered delayed neutrons, which, as we know, play an essential role in reactor safety. Because none of Heisenberg’s reactors went critical, this dubious strategy was never put to the test.
(Jeremy Bernstein, Heisenberg and the critical mass. Am. J. Phys. 70, 911 (2002); http://dx.doi.org/10.1119/1.1495409)

This reminds me a lot of the modelling errors we discuss in the “Probing the improbable” paper, especially of course the (ahem) energetic error that gave Castle Bravo a yield of 15 megatons instead of the predicted 4–8 megatons. Leaving Li(7) out of the calculations turned out to leave out the major contributor of energy.

Note that Heisenberg did have an argument for his safety claim, in fact two independent ones! The problem might have been that he was thinking in terms of mostly U(238), where getting any kind of chain reaction going would be hard, so he was biased against the model of explosive chain reactions (but as the Bernstein paper notes, somebody in the project had correct calculations for explosive critical masses). Both arguments were flawed when dealing with reactors enriched in U(235). Coming at nuclear power from the perspective of nuclear explosions, on the other hand, makes it natural to consider how to keep things from blowing up.

We may hence end up with lumps of cadmium because we approach a risk from the wrong perspective. The antidote should always be to consider the risks from multiple angles, ideally a few adversarial ones. The more energy, speed or transformative power we expect something to produce, the more we should scrutinize the existing safeguards for being lumps of cadmium. If we think our project does not have that kind of power, we should both question why we are even doing it, and ask whether it might actually have some hidden critical mass.

# The 12 threats of xrisk

The Global Challenges Foundation has (together with FHI) produced a report on the 12 risks that threaten civilization.

And, yes, the use of “infinite impact” grates on me – it must be interpreted as “so bad that it is never acceptable”, a ruin probability, or something similar, not as disvalue that actually diverges. But the overall report is a great start on comparing and analysing the big risks. It is worth comparing it with the WEF global risk report, which focuses on people’s perceptions of risk; this one aims at what risks are actually most likely and impactful. Both try to give reasons and ideas for how to reduce the risks. Hopefully they will also motivate others to make even sharper analyses – this is a first sketch of the domain rather than a perfect roadmap. Given the importance of the issues, it is a bit worrying that it has taken us this long.

# Canine mechanics and banking

There are some texts that are worth reading, even if you are outside the group they are intended for. Here is one that I think everybody should read at least the first half of:

Andrew G Haldane and Vasileios Madouros: The dog and the frisbee

Haldane, the Executive Director for Financial Stability at the Bank of England, takes up the question of how to act in situations of uncertainty, and the role our models of reality play in making the right decision. How complex should they be in the face of a complex reality? The answer, based on the literature on heuristics, biases and modelling, and on the practical world of financial disasters, is simple: they should be simple.

Overly complex models tend to overfit scarce data, assign weights to the data that are little better than random, require significant effort to set up – and tend to promote overconfidence. Haldane then moves on to his main topic, banking regulation. Complex regulations – which are in a sense models of how banks ought to act – have the same problems, and also act as incentives for gaming the rules to gain advantage. The end result is an enormous waste of everybody’s time and effort that does not deliver the desired reduction of banking risk.

It is striking how many people have been seduced by the siren call of complex regulation or models, thinking that the ability to include every conceivable special case is a sign of strength. Finance and financial regulation are full of smart people who make this mistake, as is science. If there is one thing I learned in computational biology, it is that your model had better produce more nontrivial results than it has parameters.
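A toy illustration of the overfitting point (entirely my own, not from Haldane’s speech): with only a handful of noisy observations, a model with nearly one parameter per data point fits the sample perfectly and typically predicts far worse than a simple rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# A simple underlying relationship, observed with noise at only eight points.
x_train = np.linspace(0.0, 1.0, 8)
y_train = 2.0 * x_train + rng.normal(0.0, 0.2, size=x_train.size)
x_test = np.linspace(0.0, 1.0, 200)
y_test = 2.0 * x_test

for degree in (1, 7):     # a simple rule vs. nearly one parameter per data point
    coeffs = np.polyfit(x_train, y_train, degree)
    in_sample = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    out_of_sample = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: in-sample MSE {in_sample:.4f}, out-of-sample MSE {out_of_sample:.4f}")
```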

But coming up with simple rules or models is not easy: knowing what to include and what to leave out requires expertise and effort. In many ways this may be why people like complex models: they demand no tricky judgement calls.

# Threat reduction Thursday

Today seems to have been “doing something about risk”-day. Or at least, “let’s investigate risk so we know what we ought to do”-day.
First, the World Economic Forum launched their 2015 risk perception report. (Full disclosure: I am on the advisory committee)
Second, Elon Musk donated \$10M to AI safety research. Yes, this is quite related to the FLI open letter.
Today has been a good day. Of course, it will be an even better day if and when we get actual results in risk mitigation.

# Existential risk and hope

Toby and Owen started 2015 by defining existential hope, the opposite of existential risk.

In their report “Existential Risk and Existential Hope: Definitions” they look at definitions of existential risk. The initial definition was just the extinction of humanity, but that leaves out horrible scenarios where humanity suffers indefinitely, or situations where there is only a tiny chance of humanity escaping. Chisholming their way through successive definitions, they end up with:

An existential catastrophe is an event which causes the loss of most expected value.

They also get the opposite:

An existential eucatastrophe is an event which causes there to be much more expected value after the event than before.

So besides existential risk, where the value of our future can be lost, there is existential hope: the chance that our future is much greater than we expect. Just as we should work hard to avoid existential threats, we should explore to find potential eucatastrophes that vastly enlarge our future.

## Infinite hope or fear

One problem I can see with the definitions is that expectations can be undefined or infinite, making “loss of most expected value” undefined. That would require potentially unbounded value, and that the probability of reaching any given level of value declines slowly enough (a sufficiently heavy tail). I guess most people would suspect the unbounded potential to be problematic, but at least some do think there could be infinite value somewhere in existence (I think this is what David Deutsch believes). The definition ought to work regardless of what kind of value structure exists in the universe.

There are a few approaches in Nick’s “Infinite ethics” paper. However, there might be simpler approaches based on stochastic dominance. Cutting off the upper half of a Cauchy distribution does change the situation even though the expectation remains undefined (and in this case, it changes the balance between catastrophe and eucatastrophe completely). It is clear that there is now more probability on the negative side: one can give a (first-order) stochastic ordering of the distributions, even though the expectations do not exist.
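A small numerical sketch of this point (my illustration, not from the paper): compare the CDF of a standard Cauchy with the CDF obtained by cutting off its upper half. The cut version has its CDF on or above the original everywhere, which is exactly first-order stochastic dominance, yet neither has a finite mean.

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.linspace(-20.0, 20.0, 2001)
F = 0.5 + np.arctan(x) / np.pi          # CDF of a standard Cauchy distribution
F_cut = np.minimum(2.0 * F, 1.0)        # CDF after cutting off everything above the median (0)

# First-order stochastic dominance: the cut CDF lies on or above the original everywhere,
# so the original distribution dominates the cut one, even though neither has a finite mean.
print(bool(np.all(F_cut >= F)))         # True

# The expectations are still not finite: sample averages refuse to settle down.
u = rng.random(1_000_000)
s = np.tan(np.pi * (u - 0.5))           # standard Cauchy samples via inverse CDF
neg = s[s <= 0.0]                       # the "upper half removed" version
print(np.mean(s[:1_000]), np.mean(s))       # wanders with sample size
print(np.mean(neg[:1_000]), np.mean(neg))   # drifts toward ever more negative values
```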

There are many kinds of stochastic orderings; which ones make sense likely depends on the kind of value one uses to evaluate the world. Toby and Owen point out that this is what actually does the work in the definitions: without a somewhat precise value theory, existential risk and hope are not well defined. Just as there may be unknown threats and opportunities, there might be surprise twists in what is valuable – we might in the fullness of time discover that some things that looked innocuous or worthless were actually far more weighty than we thought, perhaps so much so that they were worth the world.

# Cool risks outside the envelope of nature

How do we apply the precautionary principle to exotic, low-probability risks?

The CUORE collaboration at the INFN Gran Sasso National Laboratory recently set a world record by cooling a cubic-meter, 400 kg copper vessel down to 6 millikelvin: it was the coldest cubic meter in the universe for over 15 days. Yay! Applause! (And the rest of this post should in no way be construed as a criticism of the experiment.)

## Cold and weird risks

I have not been able to dig up the project documentation, but I would be astonished if there was any discussion of risk due to the experiment itself. After all, cooling things is rarely dangerous. We do not have any physical theories saying there could be anything risky here. No doubt there are risk assessments of the practical hazards of liquid nitrogen or helium somewhere, but no analysis of any fundamental physics risks.

Compare this to the debates around the LHC, where critics at least could point to papers suggesting that strangelets, small black holes and vacuum decay were theoretically possible. Yet the LHC side could argue back that particle processes like those occurring in the accelerator were already naturally occurring almost everywhere: if the LHC were risky, we ought to see plenty of explosions in the sky. Leaving aside the complications of correcting for anthropic bias, this kind of argument seems reasonably solid: if you do something that is within the envelope of what happens in the universe normally, and there are no observed super-dangerous processes linked to it, then this activity is likely fine. We might wish for careful risk assessment, but given that the activity is already happening it can be viewed as just as benign as the normal activity of the universe.

However, the CUORE experiment is actually going outside the envelope of what we think is going on in the universe. In the past the universe was hotter, so there would not have been any large masses at 6 millikelvin. And with a 3 kelvin background temperature, there are no natural objects this cold today. (Since 1995 there have been small Bose-Einstein condensates in the hundred-nanokelvin range on Earth, but the argument is the same.)

How risky is it to generate such an outside-of-the-envelope phenomenon? There is no evidence from the past. There is no cause for alarm given the known laws of physics. Yet this lack of evidence does not argue against risk either. Maybe there is an ice-nine-like phase transition of matter below a certain temperature. Maybe it implodes into a black hole because of some macroscale quantum(-gravity) effect. Maybe the alien spacegods get angry. There is an endless number of possible hypotheses that cannot be ruled out.

We might think that such “small theories” can safely be ignored. But we have some potential evidence that the universe may be riskier than it looks: the Fermi paradox, the apparent absence of alien intelligence. If we are alone, it is either because one or more steps in the evolution of life and intelligence are very unlikely (the “great filter” is behind us), or because there is a high likelihood that intelligence disappears without a trace (a future great filter). Now, we might freely assign our probabilities to (1) there being aliens around, (2) the filter being behind us, and (3) it being ahead. However, given our ignorance we cannot rationally give zero probability to any of these possibilities, and probably not even give any of them less than 1% (since that is about the lowest natural error rate of humans on anything). Anybody saying one of them is less likely than one in a million is likely very overconfident. Yet a 1% risk of a future great filter implies a huge threat. It is a threat that not only reliably wipes out intelligent life, but does so even to civilizations aware of its potential existence!

We then have a slightly odd reason to be slightly concerned with experiments like CUORE. We know there is some probability that intelligence gets reliably wiped out. We know intelligence is likely to explore conditions not found in the natural universe. So a potential explanation could be that there is some threat lurking in this exploration. The probability is not enormous – we might think the filter is behind us or that the universe is teeming with aliens, and even if there is a future filter there are many possibilities for what it could be besides low-temperature physics – but nearly any non-infinitesimal probability multiplied by the value of our species (at least 7 billion lives) tends to lead to an unacceptably large risk.

## Precaution?

At this point the precautionary principle rears its stupid head (the ugly head is asleep). The stupid head argues that we should hence never do anything that is outside the natural envelope.

The ugly head would argue we should investigate before doing anything risky; but since in this case the empirical investigation is itself what causes the risk, the head would hence advise just trying out theoretical risk scenarios – not very useful given that we are dealing with something where all the potential risk comes from scenarios unconstrained by evidence!

We cannot obey the stupid head much, since most human activity is about pushing the envelope. We are trying to have more and happier people than have ever existed in the universe before. Maybe that is risky (compare Stapledon’s Last and First Men, where it turned out to be dangerous to have too much intelligence in one spot), but it is practically hard to prevent, and this kind of open-ended “let’s not do anything that has not happened in the past” rule seems unreasonable given that most events are new ones and generally do not lead to disasters. But pushing the envelope in radically new directions does carry undefinable risk. We cannot avoid that. What we can do is discuss whether we are willing to take on such hard-to-pin-down risk.

However, this example also shows a way precaution can break down. Nobody has, to my knowledge, worried about cooling down matter besides me. There is no concerned group urging precaution, since there is no empirical or normative reason to think there is anything wrong specifically with CUORE: we only have a general, Fermi-paradox-induced, inchoate worry. Yet proper precaution requires considering weak possibilities. I suspect that most future big new disasters will turn out to have escaped precautionary consideration just because there was no obvious reason to invoke the principle.

## Conclusion?

Many people are scared more by uncertainty than by actual risk. But we cannot escape it. Especially if we want to reduce existential risk, which tends to be more uncertain than most. This little essay is about some of the really tricky limits to what we can know about new risks. We should expect them to be unexpected. And we should expect that the standard decision methods will not behave sensibly.

As for the CUORE team, I wish them the best of luck to find neutrinoless double beta decay. But they should keep an eye open for weird anomalies too – they have a chance to peek outside the envelope of the natural in a well controlled setting, and that is valuable.

# Anthropic negatives

Stuart Armstrong has come up with another twist on the anthropic shadow phenomenon. If existential risk needs two kinds of disasters to coincide in order to kill everybody, then observers will notice the disaster types to be anticorrelated.

The minimal example: each risk has an independent 50% chance of happening. Observers then see a correlation coefficient of -0.5 between the risks (not -1, since conditional on anybody surviving there is a 1/3 chance of each possible outcome: no event, risk A alone, or risk B alone). More generally, if the probability of no disaster is N/(N+2) and each risk alone has probability 1/(N+2), the observed correlation will be -1/(N+1).
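A quick Monte Carlo check of these numbers (my sketch; conditioning on the two risks not both occurring stands in for “observers survive”):

```python
import numpy as np

rng = np.random.default_rng(1)

def observed_correlation(q, n=1_000_000):
    """Correlation between two independent risks of probability q each,
    as seen by observers, who only exist if the risks do not both occur."""
    a = rng.random(n) < q
    b = rng.random(n) < q
    alive = ~(a & b)                    # both risks together = extinction, no observers
    return np.corrcoef(a[alive], b[alive])[0, 1]

print(observed_correlation(0.5))        # ~ -0.5, the minimal example

# General case: among observers, P(no disaster) = N/(N+2) and each single risk 1/(N+2).
# Independent risks of probability q give P(A only | not both) = q/(1+q),
# so q = 1/(N+1) reproduces that setup; the predicted correlation is -1/(N+1).
for N in (1, 2, 5, 10):
    q = 1.0 / (N + 1)
    print(N, round(observed_correlation(q), 3), -1.0 / (N + 1))
```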

I tried a slightly more elaborate model. Assume X and Y are independent power-law-distributed disasters (say, wars and pestilence outbreaks), and that if X+Y is larger than seven billion no observers will remain to see the outcome. We can then ramp up their size by multiplying X and Y by some constant (using exponent alpha=3).

As the situation gets more deadly, the correlation becomes more negative. The same thing happens when the exponent is allowed to run from very fat tails (alpha=1) to thinner ones (alpha=3).

The same thing also happens if we multiply X and Y together rather than adding them.
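A minimal Python reconstruction in the spirit of the simulation described (not the original Matlab script; the Pareto parameterization, sample sizes and scaling constants are my stand-ins):

```python
import numpy as np

rng = np.random.default_rng(2)

def survivor_correlation(scale, alpha=3.0, n=200_000, threshold=7e9):
    """Correlation between two independent Pareto-distributed disasters X and Y,
    as seen by observers, i.e. conditional on X + Y staying below the threshold."""
    x = scale * (rng.pareto(alpha, n) + 1.0)
    y = scale * (rng.pareto(alpha, n) + 1.0)
    alive = (x + y) < threshold
    return np.corrcoef(x[alive], y[alive])[0, 1]

# Ramping up the disaster scale pushes the observed correlation more negative.
for scale in (1e6, 1e8, 1e9, 3e9):
    print(f"scale {scale:.0e}: corr {survivor_correlation(scale):+.3f}")

# The tail exponent can be swept as well (the post runs alpha from 1 to 3).
for alpha in (1.0, 2.0, 3.0):
    print(f"alpha {alpha}: corr {survivor_correlation(1e9, alpha=alpha):+.3f}")
```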

I like the phenomenon: it gives us a way to look for anthropic effects by looking for suspicious anticorrelations. In particular, for the same variable the correlation ought to shift from near zero for small cases to negative for large cases. One prediction might be that periods of high superpower tension would be anticorrelated with mishaps in nuclear weapon control systems. Of course, getting the data might be another matter. We might start by looking at extant companies with multiple risk factors, like insurance companies, and seeing whether capital risk becomes anticorrelated with insurance risk at the high end.

# Galactic duck and cover

To what extent do gamma-ray bursts (GRBs) carve out a “galactic habitable zone”? Recently the preprint “On the role of GRBs on life extinction in the Universe” by Piran and Jimenez has made the rounds, arguing that we are near (in fact, inside) the inner edge of the zone, because plentiful GRBs cause mass extinctions too often for intelligence to arise.

This is somewhat similar to James Annis and Milan Cirkovic’s phase transition argument, where a declining rate of supernovae and GRBs causes global temporal synchronization of the emergence of intelligence. However, that argument has a problem: energetic explosions are random, and the difference in extinctions between lucky and unlucky parts of the galaxy can be large – intelligence might well erupt in a lucky corner long before the rest of the galaxy is ready.

I suspect the same problem is true for the Piran and Jimenez paper, but spatially. GRBs are believed to be highly directional, with beams typically a few degrees across. If we have random GRBs with narrow beams, how much of the center of the galaxy do they miss?

I made a simple model of the galaxy, with a thin disk, thick disk and bar population. The model used cubical cells 250 parsecs on a side; somewhat crude, but likely good enough. Sampling random points based on star density, I generated GRBs. Based on Frail et al. 2001 I gave them lognormal energies and power-law-distributed jet angles, directed randomly. Like Piran and Jimenez, I assumed that a fluence above 100 kJ/m^2 would be extinction level. The rate of GRBs in the Milky Way is uncertain, but a high estimate seems to be one every 100,000 years. Running 1000 GRBs hence corresponds to about 100 million years.
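For illustration, here is a stripped-down sketch of this kind of Monte Carlo (not the original model: a single toy exponential disk stands in for the thin disk/thick disk/bar populations, and the jet-angle and energy parameters are assumed round numbers rather than the Frail et al. fits):

```python
import numpy as np

rng = np.random.default_rng(3)
PC = 3.086e16                      # metres per parsec

def sample_disk(n, scale_radius=3000.0, scale_height=300.0):
    """Toy stellar density: a single exponential disk (positions in parsecs)."""
    r = rng.exponential(scale_radius, n)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    z = rng.laplace(0.0, scale_height, n)
    return np.column_stack([r * np.cos(phi), r * np.sin(phi), z])

stars = sample_disk(50_000)
sterilized = np.zeros(len(stars), dtype=bool)

n_grbs = 1000                      # roughly 100 Myr at one burst per 10^5 years
for pos in sample_disk(n_grbs):    # GRB sites traced by the same toy star density
    E = rng.lognormal(np.log(1e45), 1.0)        # isotropic-equivalent energy in J (assumed scale)
    half_angle = np.radians(min(90.0, 2.0 / np.sqrt(rng.random())))  # toy power-law jet width, >= 2 degrees
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)

    d = stars - pos                              # burst-to-star vectors in parsecs
    r_pc = np.maximum(np.linalg.norm(d, axis=1), 1e-3)
    in_beam = np.abs(d @ axis) / r_pc > np.cos(half_angle)   # inside either cone of the double jet
    fluence = E / (4.0 * np.pi * (r_pc * PC) ** 2)
    sterilized |= in_beam & (fluence > 1e5)      # 100 kJ/m^2 extinction threshold

print(f"fraction of stars hit at least once: {sterilized.mean():.3f}")
```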

If we look at the galactic plane we find that the variability close to the galactic centre is big: there are plenty of lucky regions with many stars.

Integrating around the entire galaxy to get a measure of risk at different radii and altitudes shows a rather messy structure.

One interesting finding is that the most dangerous place may be above the galactic plane along the axis: few GRBs happen there, but those in the disk and bar can reach it (the chance of being inside a double cone is independent of the distance to its apex, but along the axis one is within reach of the maximum number of GRBs).

Integrating the density of stars that are not affected as a function of radius and altitude shows that there is a mild galactic habitable zone hole within 4 kpc. That we are close to the peak is neat, but there is still a significant number of unaffected stars very close to the center.

This is of course not a professional model; it is a slapdash Matlab script done in an evening to respond to some online debate. But I think it shows that directionality may matter a lot by increasing the variance of star fates. Nearby systems may be irradiated very differently, and merely averaging them will miss this.

If I understood Piran and Jimenez right, they do not use directionality; instead they employ a scaled rate of observed GRBs, so they do not have to deal with the iffy issue of jet widths. This might be sound, but I suspect one should check the spatial statistics: correlations are tricky things (and were GRB axes even mildly aligned with the galactic axis, the risk reduction would be huge). Another way of getting closer to their result is of course to bump up the number of GRBs: with enough of them, the centre of the galaxy will naturally be inhospitable. I did not do their more careful modelling of the link between metallicity and GRBs, nor of the different burst sizes.

In any case, I suspect that GRBs are weak constraints on where life can persist and too erratic to act as a good answer to the Fermi question – even a mass extinction is forgotten within 10 million years.