Thinking long-term, vast and slow

John Fowler “Long Way Down” https://www.flickr.com/photos/snowpeak/10935459325

This spring Richard Fisher at BBC Future commissioned a series of essays about long-termism: Deep Civilisation. I really like this effort (and not just because I get the last word):

“Deep history” is fascinating because it gives us a feeling of the vastness of our roots – not just the last few millennia, but a connection to our forgotten stone-age ancestors, their hominin ancestors, the biosphere evolving over hundreds of millions and billions of years, the planet, and the universe. We are standing on top of a massive sedimentary cliff of past, stretching down to an origin unimaginably deep below.

Yet the sky above, the future, is even more vast and deep. Looking down the 1,857 m into the Grand Canyon is vertiginous. Yet above us the troposphere stretches more than five times further up, followed by an even vaster stratosphere and mesosphere, in turn dwarfed by the thermosphere… and beyond the exosphere fades into the endlessness of deep space. The deep future is in many ways far more disturbing since it is moving and indefinite.

That also means there is a fair bit of freedom in shaping it. It is not easy to shape, but if we want to be more than just some fossils buried inside the rocks we had better try.

Existential risk in Gothenburg

This fall I have been chairing a programme at the Gothenburg Centre for Advanced Studies on existential risk, thanks to Olle Häggström. Visiting researchers come and participate in seminars and discussions on existential risk, ranging from the very theoretical (how do future people count?) to the very applied (should we put existential risk on the school curriculum? How?). I gave a Petrov Day talk about how to calculate risks of nuclear war and how observer selection might mess this up, besides seminars on everything from the Fermi paradox to differential technology development. In short, I have been very busy.

To open the programme we had a workshop on existential risk September 7-8 2017. Now we have the videos up of our talks:

So far, in my opinion, a few key realisations and themes have been:

(1) The pronatalist/maximiser assumptions underlying some of the motivations for existential risk reduction were challenged; there is an interesting issue of how “modest futures” rather than “grand futures” play a role, and whether non-maximising goals still imply existential risk reduction.

(2) The importance of figuring out how “suffering risks”, potential states of astronomical amounts of suffering, relate to existential risks. Allocating effort between them rationally touches on some profound problems.

(3) The under-determination problem of inferring human values from observed behaviour (a talk by Stuart) resonated with the under-determination of AI goals in Olle’s critique of the convergent instrumental goal thesis and other discussions. Basically, complex agent-like systems might be harder to succinctly describe than we often think.

(4) Stability of complex adaptive systems – brains, economies, trajectories of human history, AI. Why are some systems so resilient in a reliable way, and can we copy it?

(5) The importance of estimating force projection abilities in space and as the limits of physics are approached. I am starting to suspect there is a deep physics answer to the question of attacker advantage, and a trade-off between information and energy in attacks.

We will produce an edited journal issue with papers inspired by our programme, stay tuned. Avancez!

 

Catastrophizing for not-so-fun and non-profit

Oren Cass has an article in Foreign Affairs about the problem of climate catastrophizing. It is basically about how catastrophizing becomes driven by motivated reasoning while also driving motivated reasoning, in a vicious circle. Regardless of whether he himself is engaging in motivated reasoning too, I think the text is relevant beyond the climate domain.

Some FHI research and reports are mentioned in passing. Their role is mainly to show that there could be very bright futures or other existential risks, which undercuts the climate catastrophists he is really criticising:

Several factors may help to explain why catastrophists sometimes view extreme climate change as more likely than other worst cases. Catastrophists confuse expected and extreme forecasts and thus view climate catastrophe as something we know will happen. But while the expected scenarios of manageable climate change derive from an accumulation of scientific evidence, the extreme ones do not. Catastrophists likewise interpret the present-day effects of climate change as the onset of their worst fears, but those effects are no more proof of existential catastrophes to come than is the 2015 Ebola epidemic a sign of a future civilization-destroying pandemic, or Siri of a coming Singularity

I think this is an important point for the existential risk community to be aware of. We are mostly interested in existential risks and global catastrophes that look possible but could be impossible (or avoided), rather than trying to predict risks that are going to happen. We deal in extreme cases that are intrinsically uncertain, and leave the more certain things to others (unless maybe they happen to be very under-researched). Siri gives us some singularity-evidence, but we think it is weak evidence, not proof (a hypothetical AI catastrophist would instead say “so, it begins”).

Confirmation bias is easy to fall for. If you are looking for signs of your favourite disaster emerging you will see them, and presumably loudly point at them in order to forestall the disaster. That suggests extra value in checking what might not be xrisks and shouldn’t be emphasised too much.

Catastrophizing is not very effective

The nuclear disarmament movement also used a lot of catastrophizing, with plenty of archetypal cartoons showing Earth blowing up as a result of nuclear war, or commonly claiming it would end humanity. The fact that the likely outcome would merely be mega- or gigadeath and untold suffering was apparently not regarded as rhetorically punchy enough. Ironically, Threads, The Day After or the Charlottesville scenario in Effects of Nuclear War may have been far more effective in driving home the horror and undesirability of nuclear war, largely by giving smaller-scale, more relatable scenarios. Scope insensitivity, psychic numbing, compassion fade and related effects make catastrophizing a weak, perhaps even counterproductive, tool.

Defending bad ideas

Another take-home message: when arguing for the importance of xrisk we should make sure we do not end up in the stupid loop he describes. If something is the most important thing ever, we had better argue for it well, backed up with as much evidence and reason as can possibly be mustered. Turning it all into a game of overcoming cognitive bias through marketing or attributing psychological explanations to opposing views is risky.

The catastrophizing problem for very important risks is related to Janet Radcliffe-Richards’ analysis of what is wrong with political correctness (in an extended sense). A community argues for some high-minded ideal X using some arguments or facts Y. Someone points out a problem with Y. The rational response would be to drop Y and replace it with better arguments or facts Z (or, if it is really bad, drop X). The typical human response is to (implicitly or explicitly) assume that since Y is used to argue for X, then criticising Y is intended to reduce support for X. Since X is good (or at least of central tribal importance) the critic must be evil or at least a tribal enemy – get him! This way bad arguments or unlikely scenarios get embedded in a discourse.

Standard groupthink, where people with doubts figure out that they had better keep their heads down if they want to remain in the group, strengthens the effect and makes criticism even less common (and hence more salient and out-groupish when it happens).

Reasons to be cheerful?

An interesting detail about the opening: the GCR/Xrisk community seems to be way more optimistic than the climate community as described. I mentioned Warren Ellis’ little novel Normal earlier on this blog, which is about a mental asylum for futurists affected by looking into the abyss. I suspect he was modelling them on the moody climate people, adding an overlay of other futurist ideas/tropes for the story.

Assuming climate people really are that moody.

My adventures in demonology

Wired has an article about the CSER Existential Risk Conference in December 2016, rather flatteringly comparing us to superheroes. Plus a list of more or less likely risks we discussed. Calling them the “10 biggest threats” is perhaps exaggerating a fair bit: nobody is seriously worried about simulation shutdowns. But some of the others are worth working a lot more on.

High-energy demons

I am cited as talking about existential risk from demon summoning. Since this is bound to be misunderstood, here is the full story:

As noted in the Wired list, we wrote a paper looking at the risk from the LHC, finding that there is a problem with analysing very unlikely (but high impact) risks: the probability of a mistake in the analysis overshadows the risk itself, making the analysis bad at bounding the risk. This can be handled by doing multiple independent risk bounds, which is a hassle, but it is the only (?) way to reliably conclude that things are safe.
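To make the logic concrete, here is a minimal numerical sketch of that argument; all three numbers are made up for illustration (a claimed risk bound, a probability that a safety analysis is flawed, and a prior on the risk if it is flawed), not taken from the paper.

```python
# Minimal sketch (made-up numbers) of why the chance of a flawed analysis
# dominates a tiny estimated risk, and why independent analyses help.
p_risk_if_correct = 1e-12   # risk claimed by the safety argument (assumed)
p_analysis_flawed = 1e-4    # chance a given analysis is wrong (assumed)
p_risk_if_flawed = 1e-3     # prior risk if the argument does not hold (assumed)

def risk_bound(n_independent_analyses: int) -> float:
    """Overall risk bound when n independent analyses all conclude 'safe'."""
    p_all_flawed = p_analysis_flawed ** n_independent_analyses
    return (1 - p_all_flawed) * p_risk_if_correct + p_all_flawed * p_risk_if_flawed

for n in (1, 2, 3):
    print(n, risk_bound(n))
# With one analysis the bound is ~1e-7, dominated by the flaw term;
# each extra independent analysis shrinks that term by another factor of 1e-4.
```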

I blogged a bit about the LHC issue before we wrote the paper, bringing up the problem of estimating probabilities for unprecedented experiments through the case of Taleb’s demon (which properly should be Taylor’s demon, but Stigler’s law of eponymy strikes again). That probably primed me to associate demons with the wider physics risk issues.

The issue of how to think about unprecedented risks without succumbing to precautionary paralysis is important: we cannot avoid doing new things, yet we should not be stupid about it. This is extra tricky when considering experiments that create things or conditions that are not found in nature.

Not so serious?

A closely related issue is when it is reasonable to regard a proposed risk as non-serious. Predicted risks from strangelets, black holes, vacuum decay and other “theoretical noise” thrown up by theoretical physics are at least triggered by some serious physics thinking, even if they are far out. Physicists have generally tended to ignore such risks, but when forced by anxious acceleratorphobes the arguments had to be nontrivial: the initial dismissal was not really well founded. Yet it seems totally reasonable to dismiss some risks. If somebody worries that the alien spacegods will take exception to the accelerator we generally look for a psychiatrist rather than take them seriously. Some theories have such low prior probability that it seems rational to ignore them.

But what is the proper de minimis boundary here? One crude way of estimating it is to say that risks of destroying the world with a lower probability than one in 10 billion can safely be ignored – they correspond to a risk of less than one person in expectation. But we would not accept that for an individual chemistry experiment: if the experimenter’s chance of being blown up was “less than 100%” but still far above some tiny number, they would presumably want to avoid risking their neck. And in the physics risk case the same risk is borne by every living human. Worse, by Bostrom’s astronomical waste argument, existential risk puts more than 10^{46} possible future lives at risk. So maybe we should put the boundary at less than 10^{-46}: any risk more likely must be investigated in detail. That will be a lot of work. Still, there are risks far below this level: the probability that all humans were to die from natural causes within a year is around 10^{-7.2 \times 10^{11}}, which is OK.
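For what it is worth, both boundaries are just “less than one expected life lost” calculations; a tiny sketch (the current population figure is a round assumed number, not an estimate):

```python
# De minimis boundaries as "less than one expected life lost" (rough sketch).
population = 8e9            # present humans, assumed round figure
future_lives = 1e46         # Bostrom-style astronomical waste estimate
print(1 / population)       # ~1.3e-10: roughly the "one in 10 billion" boundary
print(1 / future_lives)     # 1e-46: the boundary if future lives count equally
```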

One can argue that the boundary does not really exist: Martin Peterson argues that the standard proposals – setting it at some fixed low probability, requiring that realisations of the risk cannot be ascertained, or placing it below natural background risks – do not truly work: the boundary will be vague.

Demons lurking in the priors

Be that as it may with the boundary, the real problem is that estimating prior probabilities is not always easy. They can vault over the vague boundary.

Hence my demon summoning example (from a blog post near Halloween I cannot find right now): what about the risk of somebody summoning a demon army? It might cause the end of the world. The theory “Demons are real and threatening” is not a hugely likely theory: atheists and modern Christians may assign it zero probability. But that breaks Cromwell’s rule: once you assign a probability of 0%, no amount of evidence – including a demon army parading in front of you – will make you change your mind (or you are not applying probability theory correctly). The proper response is to assume some tiny probability \epsilon, conveniently below the boundary.

…except that there are a lot of old-fashioned believers who do think the theory “Demons are real and threatening” is a totally fine theory. Sure, most academic readers of this blog will not belong to this group and instead to the \epsilon probability group. But knowing that there are people out there that think something different from your view should make you want to update your view in their direction a bit – after all, you could be wrong and they might know something you don’t. (Yes, they ought to move a bit in your direction too.) But now suppose you move 1% in the direction of the believers from your \epsilon belief. You will now believe in the theory to \epsilon + 1\% \approx 1\%. That is, now you have a fairly good reason not to disregard the demon theory automatically. At least you should spend effort on checking it out. And once you are done with that you better start with the next crazy New Age theory, and the next conspiracy theory…
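As a toy version of that 1% update, here is a linear opinion pooling sketch; the numbers are purely illustrative.

```python
# Linear opinion pooling sketch: shifting 1% of the weight to a believer.
eps = 1e-20          # your own near-zero credence (not exactly zero, per Cromwell)
p_believer = 1.0     # the believer's credence (assumed fully convinced)
weight = 0.01        # how far you move in their direction
pooled = (1 - weight) * eps + weight * p_believer
print(pooled)        # ~0.01: no longer comfortably below any de minimis boundary
```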

Reverend Bayes doesn’t help the unbeliever (or believer)

One way out is to argue that the probability of believers being right is so low that it can be disregarded. If they have probability \epsilon of being right, then the actual demon risk is of size \epsilon and we can ignore it – updates due to the others do not move us. But that is a pretty bold statement about human beliefs about anything: humans can surely be wrong about things, but being that certain that a common belief is wrong seems to require better evidence.

The believer will doubtlessly claim seeing a lot of evidence for the divine, giving some big update \Pr[belief|evidence]=\Pr[evidence|belief]\Pr[belief]/\Pr[evidence], but the non-believer will notice that the evidence is also pretty compatible with non-belief: \frac{\Pr[evidence|belief]}{\Pr[evidence|nonbelief]}\approx 1 – most believers seem to have strong priors for their belief that they then strengthen by selective evidence or interpretation without taking into account the more relevant ratio \Pr[belief|evidence] / \Pr[nonbelief|evidence]. And the believers counter that the same is true for the non-believers…
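The stalemate is easy to see in the odds form of Bayes’ rule: if the likelihood ratio is close to one, the posterior odds stay close to the prior odds, for believer and unbeliever alike. A small sketch with illustrative numbers:

```python
# Posterior odds = prior odds x likelihood ratio (odds form of Bayes' rule).
def posterior_odds(prior_odds, p_evidence_given_belief, p_evidence_given_nonbelief):
    return prior_odds * (p_evidence_given_belief / p_evidence_given_nonbelief)

# Evidence about equally likely under both hypotheses barely moves anyone;
# the prior (and hence the disagreement) survives. Numbers are illustrative.
print(posterior_odds(1e-6, 0.5, 0.45))   # unbeliever: still ~1e-6
print(posterior_odds(1e3, 0.5, 0.45))    # believer: still ~1e3
```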

Insofar as we are just messing around with our own evidence-free priors, we should assume that others might know something we don’t know (maybe even in a way that we do not even recognise epistemically) and update in their direction. Which again forces us to spend time investigating demon risk.

OK, let’s give in…

Another way of reasoning is to say that maybe we should investigate all risks somebody can make us guess a non-negligible prior for. It is just that we should allocate our efforts in proportion to our current probability guesstimates. Start with the big risks, and work our way down towards the crazier ones. This is a bit like the post about the best problems to work on: setting priorities is important, and we want to go for the ones where we chew off the most uninvestigated risk.
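A caricature of that allocation rule, with placeholder guesstimates rather than anyone’s actual numbers:

```python
# Toy prioritisation: split a fixed effort budget in proportion to current
# probability guesstimates (all values below are placeholders).
guesstimates = {"nuclear war": 1e-2, "engineered pandemic": 5e-3,
                "unaligned AI": 1e-3, "demon summoning": 1e-20}
budget = 100.0   # researcher-years, say
total = sum(guesstimates.values())
for risk, p in sorted(guesstimates.items(), key=lambda kv: -kv[1]):
    print(risk, round(budget * p / total, 6))
# The crazier entries get vanishingly little effort - but never exactly zero.
```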

If we work our way down the list this way it seems that demon risk will be analysed relatively early, but also dismissed quickly: within the religious framework it is not a likely existential risk in most religions. In reality few if any religious people hold the view that demon summoning is an existential risk, since they tend to think that the end of the world is a religious drama and hence not intended to be triggered by humans – only divine powers or fate get to start it, not curious demonologists.

That wasn’t too painful?

Have we defeated the demon summoning problem? Not quite. There is no reason for all those priors to sum to 1 – they are suggested by people with very different and even crazy views – and even if we normalise them we get a very long and heavy tail of weird small risks. We can easily use up any amount of effort on this, effort we might want to spend on doing other useful things like actually reducing real risks or doing fun particle physics.

There might be solutions to this issue by reasoning backwards: instead of looking at how X could cause Y that could cause Z that destroys the world we ask “If the world would be destroyed by Z, what would need to have happened to cause it?” Working backwards to Y, Y’, Y” and other possibilities covers a larger space than our initial chain from X. If we are successful we can now state what conditions are needed to get to dangerous Y-like states and how likely they are. This is a way of removing entire chunks of the risk landscape in efficient ways.

This is how I think we can actually handle these small, awkward and likely non-existent risks. We develop mental tools to efficiently get rid of lots of them in one fell swoop, leaving the stuff that needs to be investigated further. But doing this right… well, the devil lurks in the details. Especially the thicket of totally idiosyncratic risks that cannot be handled in a general way. Which is no reason not to push forward, armed with epsilons and Bayes’ rule.

Addendum (2017-02-14)

That the unbeliever may have to update a bit in the believer direction may look like a win for the believers. But they, if they are rational, should do a small update in the unbeliever direction too. The most important consequence is that now they need to consider existential risks due to non-supernatural causes like nuclear war, AI or particle physics. They would assign them a lower credence than the unbeliever, but as per the usual arguments for the super-importance of existential risk this still means they may have to spend effort on thinking about and mitigating these risks, which they otherwise would have dismissed as something God would have prevented. This may be far more annoying to them than unbelievers having to think a bit about demonology.

Emlyn O’Regan makes some great points over at Google+, which I think are worth analyzing:

  1. “Should you somehow incorporate the fact that the world has avoided destruction until now into your probabilities?”
  2. “Ideas without a tech angle might be shelved by saying there is no reason to expect them to happen soon.” (since they depend on world properties that have remained unchanged.)
  3. “Ideas like demon summoning might be limited also by being shown to be likely to be the product of cognitive biases, rather than being coherent free-standing ideas about the universe.”

In the case of (1), observer selection effects can come into play. If there are no observers in a post-demon world (demons maybe don’t count) then we cannot expect to see instances of demon apocalypses in the past. This is why the cosmic ray argument for the safety of the LHC needs to point to the survival of the Moon or other remote objects, rather than the Earth, to argue that being hit by cosmic rays over long periods proves that it is safe. Also, as noted by Emlyn, the Doomsday argument might imply that we should expect a relatively near-term end, given the length of our past: whether this matters or not depends a lot on how one handles observer selection theory.
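A quick way to see the problem (illustrative numbers only): a clean historical record is weak evidence against a substantial per-year hazard, because only the surviving worlds contain observers to inspect the record.

```python
# Probability of a catastrophe-free record under different per-year hazards.
# Observers, by construction, always find themselves in such a world.
for hazard in (0.001, 0.01, 0.05):
    p_clean_70y = (1 - hazard) ** 70
    print(hazard, round(p_clean_70y, 3))
# Even a 1%/year extinction hazard leaves a ~50% chance of a spotless
# 70-year record, so the record alone cannot rule the hazard out.
```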

In the case of (2), there might be development in summoning methods. Maybe medieval methods could not work, but modern computer-aided chaos magick is up to it. Or there could be rare “the stars are right” situations that made past disasters impossible. Still, if you understand the risk domain you may be able to show that the risk is constant and hence must have been low (or that we are otherwise living in a very unlikely world). Traditions that do not believe in a growth of esoteric knowledge would presumably accept that past failures are evidence of future inability.

(3) is an error theory: believers in the risk are believers not because of proper evidence but from faulty reasoning of some kind, so they are not our epistemic peers and we do not need to update in their direction. If somebody is trying to blow up a building with a bomb we call the police, but if they try to do it by cursing we may just watch with amusement: past evidence of the efficacy of magic at causing big effects is nonexistent. So we have one set of evidence-supported theories (physics) and another set lacking evidence (magic), and we make the judgement that people believing in magic are just deluded and can be ignored.

(Real practitioners may argue that there sure is evidence for magic, it is just that magic is subtle and might act through convenient coincidences that look like they could have happened naturally but occur too often or too meaningfully to be just chance. However, the skeptic will want to actually see some statistics for this, and in any case demon apocalypses look like they are way out of the league for this kind of coincidental magic).

Emlyn suggests that maybe we could scoop all the non-physics like human ideas due to brain architecture into one bundle, and assign them one epsilon of probability as a group. But now we have the problem of assigning an idea to this group or not: if we are a bit uncertain about whether it should have \epsilon probability or a big one, then it will get at least some fraction of the big probability and be outside the group. We can only do this if we are really certain that we can assign ideas accurately, and looking at how many people psychoanalyse, sociologise or historicise topics in engineering and physics to “debunk” them without looking at actual empirical content, we should be wary of our own ability to do it.

So, in short, (1) and (2) do not reduce our credence in the risk enough to make it irrelevant unless we get a lot of extra information. (3) is decent at making us sceptical, but our own fallibility at judging cognitive bias and mistakes (which follows from claiming others are making mistakes!) makes error theories weaker than they look. Still, there is a really consistent lack of evidence that anything resembling the risk is real, and claims internal to the systems of ideas that accept the possibility imply that there should be smaller, non-existential instances that should be observable (e.g. individual Fausts getting caught on camera visibly succeeding in summoning demons). Hence we can discount these systems strongly in favor of more boring but safe physics, or hard-to-disprove but safe coincidental magic.

Best problems to work on?

80,000 Hours has a lovely overview of “What are the biggest problems in the world?” The best part is that each problem gets its own profile with a description, arguments in favor and against, and what already exists. I couldn’t resist plotting the table in 3D:

Most important problems according to 80,000 Hours, plotted by scale, neglectedness, and solvability. Color denotes the sum of the values.
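For anyone who wants to reproduce something like the figure, here is a minimal matplotlib sketch; the scores below are placeholders, not the actual 80,000 Hours numbers.

```python
# Minimal 3D scatter of problem profiles (placeholder scores, not real data).
import matplotlib.pyplot as plt

problems = {"Risks from AI": (15, 8, 4), "Biosecurity": (14, 6, 4),
            "Nuclear security": (13, 5, 4), "Global health": (12, 2, 6)}
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
for name, (scale, neglectedness, solvability) in problems.items():
    total = scale + neglectedness + solvability
    ax.scatter(scale, neglectedness, solvability, c=[total], cmap="viridis",
               vmin=15, vmax=30)                 # colour encodes the sum of values
    ax.text(scale, neglectedness, solvability, name)
ax.set_xlabel("Scale"); ax.set_ylabel("Neglectedness"); ax.set_zlabel("Solvability")
plt.show()
```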

There are of course plenty of problems not listed; even if these are truly the most important there will be a cloud of smaller scale problems to the right. They list a few potential ones like cheap green energy, peace, human rights, reducing migration restrictions, etc.

I recently got the same question, and here are my rough answers:

  • Fixing our collective epistemic systems. Societies work as cognitive systems: acquiring information, storing, filtering and transmitting it, synthesising it, making decisions, and implementing actions. This is done through individual minds, media and institutions. Recently we have massively improved some aspects through technology, but it looks like our ability to filter, organise and jointly coordinate has not improved – in fact, many feel it has become worse. Networked media means that information can bounce around multiple times, acquiring heavy bias, while filtering mechanisms relying on authority have lost credibility (rightly or wrongly). We are seeing all sorts of problems in coordinating diverse, polarised, globalised or confused societies. Decision-making that is not reality-tracking due to (rational or irrational) ignorance, bias or misaligned incentives is at best useless, at worst deadly. Figuring out how to improve these systems seems to be something with tremendous scale (good coordination and governance helps solve most problems above), it is fairly neglected (people tend to work on small parts rather than figuring out better systems), and it looks decently solvable (again, many small pieces may be useful together rather than requiring a total perfect solution).
  • Ageing. Ageing kills 100,000 people per day. It is a massive cause of suffering, from chronic diseases to loss of life quality. It causes loss of human capital at nearly the same rate as all education and individual development together. A reduction in the health toll from ageing would not just save life-years, it would have massive economic benefits. While this would necessitate changes in society – most plausibly shifts in pensions, the concepts of work and life-course, how families are constituted, some fertility reduction and institutional reform – the cost and trouble of such changes is pretty microscopic compared to the ongoing death toll and losses. The solvability is improving: 20 years ago it was possible to claim that there were no anti-ageing interventions, while today there exist enough lab examples to make this untenable. Transferring these results into human clinical practice will however be a lot of hard work. It is also fairly neglected: far more work is being spent on symptoms and age-related illness and infirmity than on root causes, partially for cultural reasons.
  • Existential risk reduction: I lumped together all the work to secure humanity’s future into one category. Right now I think reducing nuclear war risk is pretty urgent (not because of the current incumbent of the White House, but simply because the state risk probability seems to dominate the other current risks), followed by biotechnological risks (where we still have some time to invent solutions before the Collingridge dilemma really bites; I think it is also somewhat neglected) and AI risk (I put it as #3 for humanity, but it may be #1 for research groups like FHI that can do something about the neglectedness while we figure out better how much priority it truly deserves). But a lot of the effort might be on the mitigation side: alternative food to make the world food system more resilient and sun-independent, distributed and more robust infrastructure (whether better software security, geomagnetic storm/EMP-safe power grids, local energy production, distributed internet solutions etc.), refuges and backup solutions. The scale is big, most are neglected and many are solvable.

Another interesting set of problems is Robin Hanson’s post about neglected big problems. They are in a sense even more fundamental than mine: they are problems with the human condition.

As a transhumanist I do think the human condition entails some rather severe problems – ageing and stupidity are just two of them – and that we should work to fix them. Robin’s list may not be the easiest to solve, though there might be piecemeal solutions worth doing. Many enhancements, like moral capacity and well-being, have great scope and are very neglected but lose out to ageing because of the currently low solvability level and the higher urgency of coordination and risk reduction. As I see it, if we can ensure that we survive (individually and collectively) and are better at solving problems, then we will have better chances at fixing the tougher problems of the human condition.

Survivorship curves and existential risk

In a discussion Dennis Pamlin suggested that one could make a mortality table/survival curve for our species subject to existential risk, just as one can do for individuals. This also allows demonstrations of how changes in risk affect the expected future lifespan. This post is a small internal FHI paper I did just playing around with survivorship curves and other tools of survival analysis to see what they add to considerations of existential risk. The outcome was more qualitative than quantitative: I do not think we know enough to make a sensible mortality table. But it does tell us a few useful things:

  • We should try to reduce ongoing “state risks” as early as possible
  • Discrete “transition risks” that do not affect state risks matter less; we may want to put them off indefinitely.
  • Indefinite survival is possible if we make hazard decrease fast enough.

Simple model

Survivorship curve with constant risk.

A first, very simple model: assume a fixed population and power-law sized disasters that randomly kill a number of people proportional to their size every unit of time (if there are survivors, then they repopulate until next timestep). Then the expected survival curve is an exponential decay.

This is in fact independent of the distribution, and just depends on the chance of exceedance. If disasters happen at a rate \lambda and the probability of extinction \Pr(X>\mathrm{population}) = p, then the curve is S(t) = \exp(-p \lambda t).

This can be viewed as a simple model of state risks, the ongoing background of risk to our species from e.g. asteroids and supernovas.
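A small simulation sketch (with arbitrary parameter values) reproduces the exponential decay:

```python
# Monte Carlo check of the constant state-risk model: disasters arrive with
# probability lam per step and are extinction-level with probability p, so
# survival should follow S(t) = exp(-p*lam*t). Parameters are arbitrary.
import math, random

def empirical_survival(p=0.01, lam=0.5, t=100, trials=50_000):
    alive = 0
    for _ in range(trials):
        if all(not (random.random() < lam and random.random() < p)
               for _ in range(t)):
            alive += 1
    return alive / trials

print(empirical_survival())           # ≈ 0.61
print(math.exp(-0.01 * 0.5 * 100))    # analytic: exp(-0.5) ≈ 0.61
```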

Correlations

Survivorship curve with gradual rebound from disasters.

What if the population rebound is slower than the typical inter-disaster interval? During the rebound the population is more vulnerable to smaller disasters. However, if we average over longer time than the rebound time constant we end up with the same situation as before: an adjusted, slightly higher hazard, but still an exponential.

In ecology there have been a fair number of papers analyzing how correlated environmental noise affects extinction probability, generally concluding that correlated (“red”) noise is bad (e.g. (Ripa and Lundberg 1996), (Ovaskainen and Meerson 2010)) since the adverse conditions can last longer than the rebound time.

If events behave in a sufficiently correlated manner, then the basic survival curve may be misleading since it only shows the mean ensemble effect rather than the tail risks. Human societies are also highly path dependent over long timescales: our responses can create long memory effects, both positive and negative, and this can affect the risk autocorrelation.

Population growth

Survivorship curve with population increase.

If population increases exponentially at a rate G and is reduced by disasters, then initially some instances will be wiped out, but many realizations achieve takeoff where they grow essentially forever. As the population becomes larger, risk declines as \exp(- \alpha G t).

This is somewhat similar to Stuart’s and my paper on indefinite survival using backups: when we grow fast enough there is a finite chance of surviving indefinitely. The growth may be in terms of individuals (making humanity more resilient to larger and larger disasters), or in terms of independent groups (making humanity more resilient to disasters affecting a location). If risks change in size in proportion to population or occur in different locations in a correlated manner this basic analysis may not apply.

General cases

Survivorship curve with increased state risk.

Overall, if there is a constant rate of risk, then we should expect exponential survival curves. If the rate grows or declines as a power of time, we get a Weibull distribution of time to extinction, which has a “stretched exponential” survival curve: S(t) = \exp(-(t/\lambda)^k).

If we think of risk increasing from some original level to a new higher level, then the survival curve will essentially be piece-wise exponential with a more or less softly interpolating “knee”.

Transition risks

Survivorship curve with transition risk.

A transition risk is essentially an impulse of hazard. We can treat it as a Dirac delta function with some weight w at a certain time t, in which case it just reduces the survival curve so \frac{S(\mathrm{after}\ t)}{S(\mathrm{before}\ t)}=w. If t is randomly distributed it produces a softer decline, but with the same magnitude.

Rectangular survival curves

Human individual survival curves are rectangularish because of exponentially increasing hazard plus some constant hazard (the Gompertz-Makeham law of mortality). The increasing hazard is due to ageing: old people are more vulnerable than young people.

Do we have any reason to believe a similar increasing hazard for humanity? Considering the invention of new dangerous technologies as adding more state risk we should expect at least enough of an increase to get a more convex shape of the survival curve in the present era, possibly with transition risk steps added in the future. This was counteracted by the exponential growth of human population until recently.
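To illustrate what the “rectangular” shape means, here is a sketch comparing a constant hazard with a Gompertz-Makeham-style hazard; the parameter values are illustrative, not fitted to anything.

```python
# Constant hazard versus Gompertz-Makeham hazard (lambda(t) = a + b*exp(g*t)).
# Parameters below are illustrative only.
import math

def S_constant(t, lam=0.02):
    return math.exp(-lam * t)

def S_gompertz_makeham(t, a=0.001, b=0.0002, g=0.09):
    # survival = exp(-cumulative hazard), integrating a + b*exp(g*t)
    return math.exp(-(a * t + (b / g) * (math.exp(g * t) - 1)))

for t in (20, 60, 100):
    print(t, f"{S_constant(t):.3f}", f"{S_gompertz_makeham(t):.3g}")
# The constant-hazard curve decays smoothly; the Gompertz-Makeham curve stays
# above it at first and then collapses sharply - the "rectangular" shape
# produced by exponentially increasing hazard.
```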

How do species survival curves look in nature?

There is “van Valen’s law of extinction”, which claims that the normal extinction rate remains constant at least within families, giving exponential survivorship curves (van Valen 1973). It is worth noting that the extinction rate is different for different ecological niches and types of organisms.

However, fits with Weibull distributions seem to work better for Cenozoic foraminifera than exponentials (Arnold, Parker and Hansard 1995), suggesting the probability of extinction increases with species age. The difference in shape is however relatively small (k≈1.2), making the probability increase from 0.08/Myr at 1 Myr to 0.17/Myr at 40 Myr. Other data hint at slightly slowing extinction rates for marine plankton (Cermeno 2011).
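As a quick sanity check on those numbers: the Weibull hazard scales as t^{k-1}, so with k ≈ 1.2 the extinction rate should rise by a factor of 40^{0.2} between 1 and 40 Myr.

```python
# Consistency check of the quoted Weibull shape parameter (k ≈ 1.2):
# hazard ∝ t**(k-1), so the extinction rate should grow by 40**0.2.
k = 1.2
ratio = 40 ** (k - 1)
print(ratio)          # ≈ 2.09
print(0.08 * ratio)   # ≈ 0.17 per Myr, matching the 40 Myr figure above
```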

In practice there are problems associated with speciation and time-varying extinction rates, not to mention biased data (Pease 1988). In the end, the best we can say at present appears to be that natural species survival is roughly exponentially distributed.

Conclusions for xrisk research

Survival curves contain a lot of useful information. The median lifespan is easy to read off by checking the intersection with the 50% survival line. The life expectancy is the area under the curve.

Survivorship curve with changed constant risk, semilog plot.

In a semilog diagram an exponentially declining survival probability is a line with negative slope. The slope is set by the hazard rate, and changes in the hazard rate make the curve a series of line segments.
An early reduction in hazard (i.e. the line becoming flatter) clearly improves the outlook at a later time more than an equal improvement made later: to have the same effect, the late improvement needs to reduce the hazard significantly more.

A transition risk causes a vertical displacement of the line (or curve) downwards: the weight determines the distance. From a given future time, it does not matter when the transition risk occurs as long as the subsequent hazard rate is not dependent on it. If the weight changes depending on when it occurs (hardware overhang, technology ordering, population) then the position does matter. If there is a risky transition that reduces state risk we should want it earlier if it does not become worse.

Acknowledgments

Thanks to Toby Ord for pointing out a mistake in an earlier version.

Appendix: survival analysis

The main object of interest is the survival function S(t)=\Pr(T>t), where T is a random variable denoting the time of death. In engineering it is commonly called the reliability function. It is declining over time, and will approach zero unless indefinite survival is possible with a finite probability.

The event density f(t)=\frac{d}{dt}(1-S(t)) denotes the rate of death per unit time.

The hazard function \lambda(t) is the event rate at time t conditional on survival until time t or later. It is \lambda(t) = - S'(t)/S(t). Note that unlike the event density function this does not have to decline as the number of survivors gets low: this is the overall force of mortality at a given time.

The expected future lifetime given survival to time t_0 is \frac{1}{S(t_0)}\int_{t_0}^\infty S(t)dt. Note that for exponential survival curves (i.e. constant hazard) it remains constant.
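A quick numerical sanity check of these definitions for the constant-hazard case (the values are arbitrary): the hazard comes out flat and the expected future lifetime stays at 1/\lambda regardless of t_0.

```python
# Numerical check of the appendix formulas for an exponential survival curve.
import math

lam = 0.1
S = lambda t: math.exp(-lam * t)            # survival function, constant hazard

def hazard(t, h=1e-6):
    return -(S(t + h) - S(t)) / h / S(t)    # lambda(t) = -S'(t)/S(t)

def expected_future_lifetime(t0, t_max=1000.0, dt=0.01):
    n = int((t_max - t0) / dt)
    integral = sum(S(t0 + i * dt) * dt for i in range(n))
    return integral / S(t0)                 # (1/S(t0)) * integral of S from t0

print(hazard(5.0), hazard(50.0))                                       # both ≈ 0.1
print(expected_future_lifetime(0.0), expected_future_lifetime(30.0))   # both ≈ 10
```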

The case for Mars

Over at Practical Ethics I have a post about the goodness of being multi-planetary: is it rational to try to settle Mars as a hedge against existential risk?

The problem is not that it is absurd to care about existential risks or the far future (which was the Economist’s unfortunate claim), nor that it is morally wrong to have a separate colony, but that there might be better risk reduction strategies with more bang for the buck.

One interesting aspect is that making space more accessible makes space refuges a better option. At some point in the future, even if space refuges are currently not the best choice, they may well become that. There are of course other reasons to do this too (science, business, even technological art).

So while existential risk mitigation right now might rationally aim at putting out the current brushfires and trying to set the long-term strategy right, doing the groundwork for eventual space colonisation seems to be rational.

The end of the worlds

George Dvorsky has a piece on Io9 about ways we could wreck the solar system, where he cites me in a few places. This is mostly for fun, but I think it links to an important existential risk issue: what conceivable threats have big enough spatial reach to threaten an interplanetary or even star-faring civilization?

This matters, since most existential risks we worry about today (like nuclear war, bioweapons, global ecological/societal crashes) only affect one planet. But if existential risk is the answer to the Fermi question, then the peril has to strike reliably. If it is one of the local ones it has to strike early: a multi-planet civilization is largely immune to the local risks. It will not just be distributed, but it will almost by necessity have fairly self-sufficient habitats that could act as seeds for a new civilization if they survive. Since it is entirely conceivable that we could have invented rockets and spaceflight long before discovering anything odd about uranium or how genetics work it seems unlikely that any of these local risks are “it”. That means that the risks have to be spatially bigger (or, of course, that xrisk is not the answer to the Fermi question).

Of the risks mentioned by George, physics disasters are intriguing, since they might irradiate solar systems efficiently. But the reliability of them being triggered before interstellar spread seems problematic. Stellar engineering, stellification and orbit manipulation may be issues, but they hardly happen early – lots of time to escape. Warp drives and wormholes are also likely late activities, and do not seem reliable as extinctors. These are all still relatively localized: while able to irradiate a largish volume, they are not fine-tuned to cause damage and do not follow fleeing people. Dangers from self-replicating or self-improving machines seem to be a plausible, spatially unbound risk that could pursue fleeing civilizations (but they are also problematic for the Fermi question, since now the machines are the aliens). Attracting malevolent aliens may actually be a relevant risk: assuming von Neumann probes, one can set up global warning systems or “police probes” that maintain whatever rules the original programmers desire, and it is not too hard to imagine ruthless or uncaring systems that could enforce the great silence. Since early civilizations have the chance to spread to enormous volumes given a certain level of technology, this might matter more than one might a priori believe.

So, in the end, it seems that anything releasing a dangerous energy effect will only affect a fixed volume. If it has energy E and one can survive it below a deposited energy e, then if it just radiates in all directions the safe range is r = \sqrt{E/(4 \pi e)} \propto \sqrt{E} – one needs to get into supernova ranges to sterilize interstellar volumes. If it is directional the range goes up, but smaller volumes are affected: if a fraction f of the sky is affected, the range increases as \propto \sqrt{1/f} but the total volume affected scales as \propto f\sqrt{1/f}=\sqrt{f}.
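As a back-of-the-envelope illustration of the isotropic formula, with assumed placeholder numbers (a supernova-scale release of about 10^{44} J and a survivable fluence of 10^7 J/m^2, neither taken from the text):

```python
# Safe range r = sqrt(E / (4*pi*e)) for an isotropic energy release.
import math

E = 1e44       # released energy in joules (assumed, roughly supernova-scale)
e = 1e7        # survivable deposited energy per unit area in J/m^2 (assumed)
r = math.sqrt(E / (4 * math.pi * e))
print(r / 9.46e15, "light years")   # on the order of a hundred light years
```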

Self-sustaining effects are worse, but they need to cross space: if their space range is smaller than interplanetary distances they may destroy a planet but not anything more. For example, a black hole merely absorbs a planet or star (releasing a nasty energy blast) but does not continue sucking up stuff. Vacuum decay on the other hand has indefinite range in space and moves at lightspeed. Accidental self-replication is unlikely to be spaceworthy unless it starts among space-moving machinery; here deliberate design is a more serious problem.

The speed of threat spread also matters. If it is fast enough no escape is possible. However, many of the replicating threats will have sublight speed and could hence be escaped by sufficiently paranoid aliens. The issue here is whether lightweight and hence faster replicators can always outrun larger aliens; given the accelerating expansion of the universe it might be possible to outrun them by being early enough, but our calculations do suggest that the margins look very slim.

The more information you have about a target, the better you can in general harm it. If you have no information, merely randomizing it with enough energy/entropy is the only option (and if you have no information of where it is, you need to radiate in all directions). As you learn more, you can focus resources to make more harm per unit expended, up to the extreme limits of solving the optimization problem of finding the informational/environmental inputs that cause desired harm (=hacking). This suggests that mindless threats will nearly always have shorter range and smaller harms than threats designed by (or constituted by) intelligent minds.

In the end, the most likely type of actual civilization-ending threat for an interplanetary civilization looks like it needs to be self-replicating/self-sustaining, able to spread through space, and have at least a tropism towards escaping entities. The smarter, the more effective it can be. This includes both nasty AI and replicators, but also predecessor civilizations that have infrastructure in place. Civilizations cannot be expected to reliably do foolish things with planetary orbits or risky physics.

[Addendum: Charles Stross has written an interesting essay on the risk of griefers as a threat explanation. ]

[Addendum II: Robin Hanson has a response to the rest of us, where he outlines another nasty scenario. ]

 

The 12 threats of xrisk

The Global Challenges Foundation has (together with FHI) produced a report on the 12 risks that threaten civilization.


And, yes, the use of “infinite impact” grates on me – it must be interpreted as “so bad that it is never acceptable”, a ruin probability, or something similar, not that the disvalue diverges. But the overall report is a great start on comparing and analysing the big risks. It is worth comparing it with the WEF global risk report, which focuses on people’s perceptions of risk; this one aims at looking at which risks are most likely and impactful. Both try to give reasons and ideas for how to reduce the risks. Hopefully they will also motivate others to make even sharper analyses – this is a first sketch of the domain, rather than a perfect roadmap. Given the importance of the issues, it is a bit worrying that it has taken us this long.