What kinds of grand futures are there?

I have been working for about a year on a book on “Grand Futures” – the future of humanity, starting to sketch a picture of what we could eventually achieve were we to survive, get our act together, and reach our full potential. Part of this is an attempt to outline what we know is and isn’t physically possible to achieve, part of it is an exploration of what makes a future good.

Here are some things that appear to be physically possible (not necessarily easy, but doable):

  • Societies of very high standards of sustainable material wealth. At least as rich as (and likely far above) the current rich-nation level in terms of what objects, services, entertainment and other aspects of lifestyle ordinary people can access.
  • Human enhancement allowing far greater health, longevity, well-being and mental capacity, again at least up to current optimal levels and likely far, far beyond evolved limits.
  • Sustainable existence on Earth with a relatively unchanged biosphere indefinitely.
  • Expansion into space:
    • Settling habitats in the solar system, enabling populations of at least 10 trillion (and likely many orders of magnitude more)
    • Settling other stars in the Milky Way, enabling populations of at least 10^{29} people
    • Settling over intergalactic distances, enabling populations of at least 10^{38} people.
  • Survival of human civilisation and the species for a long time.
    • As long as other mammalian species – on the order of a million years.
    • As long as Earth’s biosphere remains – on the order of a billion years.
    • Settling the solar system – on the order of 5 billion years
    • Settling the Milky Way or elsewhere – on the order of trillions of years if dependent on sunlight
    • Using artificial energy sources – up to proton decay, somewhere beyond 10^{32} years.
  • Constructing Dyson spheres around stars, gaining energy resources corresponding to the entire stellar output, habitable space millions of times Earth’s surface, telescope, signalling and energy projection abilities that can reach over intergalactic distances.
  • Moving matter and objects up to galactic size, using their material resources for meaningful projects.
  • Performing more than a googol (10^{100}) computations, likely far more thanks to reversible and quantum computing.

While this might read as a fairly overwhelming list, it is worth noticing that it does not include gaining access to an infinite amount of matter, energy, or computation. Nor indefinite survival. I also think faster-than-light travel is unlikely to become possible. If we do not try to settle remote galaxies within 100 billion years, accelerating expansion will move them beyond our reach. This is a finite but very large possible future.

What kinds of really good futures may be possible? Here are some (not mutually exclusive):

  • Survival: humanity survives as long as it can, in some form.
  • “Modest futures”: humanity survives for as long as is appropriate without doing anything really weird. People have idyllic lives with meaningful social relations. This may include achieving close to perfect justice, sustainability, or other social goals.
  • Gardening: humanity maintains the biosphere of Earth (and possibly other planets), preventing them from crashing or going extinct. This might include artificially protecting them from a brightening sun and astrophysical disasters, as well as spreading life across the universe.
  • Happiness: humanity finds ways of achieving extreme states of bliss or other positive emotions. This might include local enjoyment, or actively spreading minds enjoying happiness far and wide.
  • Abolishing suffering: humanity finds ways of curing negative emotions and suffering without precluding good states. This might include merely saving humanity, or actively helping all suffering beings in the universe.
  • Posthumanity: humanity deliberately evolves or upgrades itself into forms that are better, more diverse or otherwise useful, gaining access to modes of existence currently not possible to humans but equally or more valuable.
  • Deep thought: humanity develops cognitive abilities or artificial intelligence able to pursue intellectual pursuits far beyond what we can conceive of in science, philosophy, culture, spirituality and similar but as yet uninvented domains.
  • Creativity: humanity plays creatively with the universe, making new things and changing the world for its own sake.

I have no doubt I have missed many plausible good futures.

Note that there might be moral trades, where stay-at-homes agree with expansionists to keep Earth an idyllic world for modest futures and gardening while the others go off to do other things, or where long-term oriented groups agree to give short-term oriented groups the universe during the stelliferous era in exchange for getting it during the cold degenerate era trillions of years in the future. Real civilisations may also have mixtures of motivations and sub-groups.

Note that the goals and the physical possibilities play out very differently: modest futures do not reach very far, while gardener civilisations may seek to engage in megascale engineering to support the biosphere but not settle space. Meanwhile the happiness-maximizers may want to race to convert as much matter as possible to hedonium, while the deep thought-maximizers may want to move galaxies together to create permanent hyperclusters filled with computation to pursue their cultural goals.

I don’t know what goals are right, but we can examine what they entail. If we see a remote civilisation doing certain things we can make some inferences about what goals are compatible with the behaviour. And we can examine what we need to do today to have the best chance of getting on a trajectory towards some of these goals: avoiding extinction, improving our coordination ability, and figuring out whether there is some global coordination we need to agree on in the long run before spreading to the stars.

Dissolving the Fermi Paradox

The Universe Today wrote an article about a paper by me, Toby and Eric about the Fermi Paradox. The preprint can be found on arXiv (see also our supplements: 1, 2, 3 and 4). Here is a quick popular overview/FAQ.

TL;DR

  • The Fermi question is not a paradox: it just looks like one if one is overconfident in how well we know the Drake equation parameters.
  • Our distribution model shows that there is a large probability of little-to-no alien life, even if we use the optimistic estimates of the existing literature (and even more so if we use more defensible estimates).
  • The Fermi observation makes the most uncertain priors move strongly, reinforcing the rare life guess and an early great filter.
  • Getting even a little bit more information can update our belief state a lot!


So, do you claim we are alone in the universe?

No. We claim we could be alone, and the probability is non-negligible given what we know… even if we are very optimistic about alien intelligence.

What is the paper about?

The Fermi Paradox – or rather the Fermi Question – is “where are the aliens?” The universe is immense and old, and intelligent life ought to be able to spread or signal over vast distances, so if it has some modest probability of emerging we ought to see some signs of intelligence. Yet we do not. What is going on? The reason it is called a paradox is that there is a tension between one plausible theory ([lots of sites] x [some probability] = [aliens]) and an observation ([no aliens]).

Dissolving the Fermi paradox: there is not much tension

We argue that people have been accidentally misled into feeling there is a problem by being overconfident about the probability.

N=R_*\cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L

The problem lies in how we estimate probabilities from a product of uncertain parameters (as the Drake equation above). The typical way people informally do this with the equation is to admit that some guesses are very uncertain, give a “representative value” and end up with some estimated number of alien civilisations in the galaxy – which is admitted to be uncertain, yet there is a single number.

Some authors have argued for very low probabilities, typically concluding that there is just one civilisation per galaxy (“the N\approx 1 school”). This may actually still be too much, since it implies we should expect signs of activity from nearly any galaxy. Others give slightly higher guesstimates and end up with many civilisations, typically a number comparable to how long civilisations are expected to last (“the N\approx L school”). But the proper thing to do is to give a range of estimates based on how uncertain we actually are, and get an output that shows the implied probability distribution of the number of alien civilisations.
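To make the contrast concrete, here is a minimal Monte Carlo sketch (with purely illustrative log-uniform ranges, not the paper's actual estimates) comparing a single point estimate of N with the distribution implied by the very same ranges:

```python
import random

random.seed(0)

# Illustrative log10 ranges for the Drake parameters (toy values,
# not the paper's estimates): (low, high) exponents of 10.
ranges = {
    "R_star": (0, 2),    # star formation rate, stars/year
    "f_p":    (-1, 0),   # fraction of stars with planets
    "n_e":    (-1, 0),   # habitable planets per system
    "f_l":    (-30, 0),  # fraction developing life -- hugely uncertain
    "f_i":    (-3, 0),   # fraction developing intelligence
    "f_c":    (-2, 0),   # fraction communicating
    "L":      (2, 10),   # civilisation lifetime, years
}

def sample_N():
    """Draw one Monte Carlo sample of N from log-uniform parameter ranges."""
    log_N = sum(random.uniform(lo, hi) for lo, hi in ranges.values())
    return 10 ** log_N

samples = sorted(sample_N() for _ in range(100_000))

mean = sum(samples) / len(samples)
median = samples[len(samples) // 2]
p_alone = sum(s < 1 for s in samples) / len(samples)

# The "representative value" approach: multiply the midpoints instead.
point = 10 ** sum((lo + hi) / 2 for lo, hi in ranges.values())

print(f"mean N     = {mean:.3g}")
print(f"median N   = {median:.3g}")
print(f"P(N < 1)   = {p_alone:.2f}")
print(f"point est. = {point:.3g}")
```

Typical runs give a tiny point estimate and median, a huge mean, and a large probability of an empty galaxy, all at once: the single number hides the whole story.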

If one combines either published estimates or ranges compatible with current scientific uncertainty we get a distribution that makes observing an empty sky unsurprising – yet is also compatible with us not being alone. 

The reason is that even if one takes a pretty optimistic view (the published estimates are after all biased towards SETI optimism since the sceptics do not write as many papers on the topic) it is impossible to rule out a very sparsely inhabited universe, yet the mean value may be a pretty full galaxy. And current scientific uncertainties of the rates of life and intelligence emergence are more than enough to create a long tail of uncertainty that puts a fair credence on extremely low probability – probabilities much smaller than what one normally likes to state in papers. We get a model where there is 30% chance we are alone in the visible universe, 53% chance in the Milky Way… and yet the mean number is 27 million and the median about 1! (see figure below)

This is a statement about knowledge and priors, not a measurement: armchair astrobiology.

(A) A probability density function for N, the number of civilisations in the Milky Way, generated by Monte Carlo simulation based on the authors’ best estimates of our current uncertainty for each parameter. (B) The corresponding cumulative density function. (C) A cumulative density function for the distance to the nearest detectable civilisation.

The Great Filter: lack of obvious aliens is not strong evidence for our doom

After this result, we look at the Great Filter. We have reason to think at least one term in the Drake equation is small – either one of the early ones indicating how much life or intelligence emerges, or one of the late ones indicating how long technological civilisations survive. The small term is “the Filter”. If the Filter is early, we are rare or unique but have a potentially unbounded future. If it is a late term, in our future, we are doomed – just like all the other civilisations whose remains would litter the universe. This is worrying. Nick Bostrom argued that we should hope we do not find any alien life.

Our paper gets a somewhat surprising result: updating our uncertainties in the light of no visible aliens reduces our estimate of the rate of life and intelligence emergence (the early filters) much more than the longevity factor (the future filter).

The reason is that excluding the cases where our galaxy is crammed with alien civilisations – something like the Star Wars galaxy where every planet has its own aliens – leads to an update of the parameters of the Drake equation. All of them become smaller, since we now expect a more empty universe. But the early-filter terms – life and intelligence emergence – shift downwards much more than the expected lifespan of civilisations, since they are far more uncertain (at least 100 orders of magnitude!) than the merely uncertain future lifespan (just 7 orders of magnitude).
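A toy version of this update can be run directly. The 100 and 7 order-of-magnitude widths come from the text; the placement of the ranges and the lumped constant for the other factors are illustrative assumptions:

```python
import random

random.seed(1)

# Toy log10 priors: the early-filter term spans ~100 orders of magnitude,
# the lifetime L only ~7 (widths from the text; placement is illustrative).
def sample():
    log_fl = random.uniform(-100, 0)   # life emergence, hugely uncertain
    log_L = random.uniform(2, 9)       # civilisation lifetime, years
    log_rest = 2.0                     # all other Drake factors lumped into a constant
    return log_fl, log_L, log_fl + log_L + log_rest

prior, posterior = [], []
for _ in range(200_000):
    log_fl, log_L, log_N = sample()
    prior.append((log_fl, log_L))
    if log_N < 0:  # update: condition on a galaxy with fewer than one civilisation
        posterior.append((log_fl, log_L))

def mean(xs):
    return sum(xs) / len(xs)

shift_fl = mean([p[0] for p in prior]) - mean([p[0] for p in posterior])
shift_L = mean([p[1] for p in prior]) - mean([p[1] for p in posterior])

print(f"downward shift in E[log10 f_l]: {shift_fl:.2f} orders of magnitude")
print(f"downward shift in E[log10 L]:   {shift_L:.2f} orders of magnitude")
```

Conditioning on an empty galaxy drags the wide early-filter parameter down by several orders of magnitude while barely moving L.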

So this is good news: the stars are not foretelling our doom!

Note that a past great filter does not imply our safety.

The conclusion can be changed if we reduce the uncertainty of the past terms to less than 7 orders of magnitude, or if the involved probability distributions have weird shapes. (The mathematical proof is in supplement IV, which applies to uniform and normal distributions. It is possible to add tails and other features that break this effect – yet believing such distributions of uncertainty requires believing rather strange things.)

Isn’t this armchair astrobiology?

Yes. We are after all from the philosophy department.

The point of the paper is how to handle uncertainties, especially when you multiply them together or combine them in different ways. It is also about how to take lack of knowledge into account. Our point is that we need to make knowledge claims explicit – if you claim you know a parameter to have the value 0.1 you had better show a confidence interval or an argument about why it must have exactly that value (and in the latter case, take your own fallibility into account). Combining overconfident knowledge claims can produce biased results since they do not include the full uncertainty range: multiplying point estimates together produces a very different result from looking at the full distribution.

All of this is epistemology and statistics rather than astrobiology or SETI proper. But SETI makes a great example since it is a field where people have been learning more and more about (some of) the factors.

The same approach as we used in this paper can be used in other fields. For example, when estimating risk chains in systems (like the risk of a pathogen escaping a biosafety lab), taking uncertainties in knowledge into account will sometimes produce important heavy tails that are irreducible even when you think the likely risk is acceptable. This is one reason risk estimates tend to be overconfident.

Probability?

What kind of distributions are we talking about here? Surely we cannot speak of the probability of alien intelligence given the lack of data?

There is a classic debate in probability between frequentists, claiming probability is the frequency of events that we converge to when an experiment is repeated indefinitely often, and Bayesians, claiming probability represents states of knowledge that get updated when we get evidence. We are pretty Bayesian.

The distributions we are talking about are distributions of “credences”: how much you believe certain things. We start out with a prior credence based on current uncertainty, and then discuss how this gets updated if new evidence arrives. While the original prior beliefs may come from shaky guesses they have to be updated rigorously according to evidence, and typically this washes out the guesswork pretty quickly when there is actual data. However, even before getting data we can analyse how conclusions must look if different kinds of information arrives and updates our uncertainty; see supplement II for a bunch of scenarios like “what if we find alien ruins?”, “what if we find a dark biosphere on Earth?” or “what if we actually see aliens at some distance?”

Correlations?

Our use of the Drake equation assumes the terms are independent of each other. This is of course a result of how Drake sliced things into naturally independent factors. But there could be correlations between them. Häggström and Verendel showed that in worlds where the priors are strongly correlated, updates about the Great Filter can get non-intuitive.

We deal with this in supplement II, and see also this blog post. Basically, it doesn’t look like correlations are likely showstoppers.

You can’t resample guesses from the literature!

Sure can. As long as we agree that this is not so much a statement about what is actually true out there, but rather the range of opinions among people who have studied the question a bit. If people give answers to a question in the range from ten to a hundred, that tells you something about their beliefs, at least.

What the resampling does is break up the possibly unconscious correlation between answers (“the N\approx 1 school” and “the N\approx L school” come to mind). We use the ranges of answers as a crude approximation to what people of good will think are reasonable numbers.

You may say “yeah, but nobody is really an expert on these things anyway”. We think that is wrong. People have improved their estimates as new data arrives, there are reasons for the estimates, and sometimes vigorous debate about them. We warmly recommend Vakoch, D. A., Dowd, M. F., & Drake, F. (2015). The Drake Equation. Cambridge, UK: Cambridge University Press for a historical overview. But at the same time these estimates are wildly uncertain, and this is what we really care about. Good experts qualify the certainty of their predictions.

But doesn’t resampling from admittedly overconfident literature constitute “garbage in, garbage out”?

Were we trying to get the true uncertainties (or even more hubristically, the true values) this would not work: we have after all good reasons to suspect these ranges are both biased and overconfidently narrow. But our point is not that the literature is right, but that even if one were to use the overly narrow and likely overly optimistic estimates as estimates of actual uncertainty the broad distribution will lead to our conclusions. Using the literature is the most conservative case.

Note that we do not base our later estimates on the literature estimate but our own estimates of scientific uncertainty. If they are GIGO it is at least our own garbage, not recycled garbage. (This reading mistake seems to have been made on Starts With a Bang).

What did the literature resampling show?

An overview can be found in Supplement III. The most important point is that even estimates of super-uncertain things like the probability of life lie in a surprisingly narrow range of values, far more narrow than is scientifically defensible. For example, f_l has five estimates ranging from 10^{-30} to 10^{-5}, while all the rest are in the range 10^{-3} to 1. f_i is even worse, with one microscopic estimate and nearly all the rest between one in a thousand and one.

(A) The uncertainty in the research community represented via a synthetic probability density function over N — the expected number of detectable civilizations in our galaxy. The curve is generated by random sampling from the literature estimates of each parameter. Direct literature estimates of the number of detectable civilisations are marked with red rings. (B) The corresponding synthetic cumulative density function. (C) A cumulative density function for the distance to the nearest detectable civilisation, estimated via a mixture model of the nearest neighbour functions.

It also shows that estimates that are likely biased towards optimism (because of publication bias) can be used to get a credence distribution that dissolves the paradox once they are interpreted as ranges. See the above figure, where we get about 30% chance of being alone in the Milky Way and 8% chance of being alone in the visible universe… but a mean corresponding to 27 million civilisations in the galaxy and a median of about a hundred.

There are interesting patterns in the data. When plotting the expected number of civilisations in the Milky Way based on estimates from different eras the number goes down with time: the community has clearly gradually become more pessimistic. There are some very pessimistic estimates, but even removing them doesn’t change the overall structure.

What are our assumed uncertainties?

A key point in the paper is trying to quantify our uncertainties somewhat rigorously. Here is a quick overview of where I think we are, with the values we used in our synthetic model:

  • R_*: the star formation rate in the Milky Way per year is fairly well constrained. The actual current uncertainty is likely less than 1 order of magnitude (it can vary over 5 orders of magnitude in other galaxies). In our synthetic model we put this parameter as log-uniform from 1 to 100.
  • f_p: the fraction of systems with planets is increasingly clear ≈1. We used log-uniform from 0.1 to 1.
  • n_e: the number of Earth-like planets in systems with planets.
    • This ranges from rare-earth arguments (<10^{-12}) to >1. We used log-uniform from 0.1 to 1 since recent arguments have shifted away from rare Earths, but we checked that including the rare-earth range did not change the conclusions much.
  • f_l: Fraction of Earthlike planets with life.
    • This is very uncertain; see below for our arguments that the uncertainty ranges over perhaps 100 orders of magnitude.
    • There is an absolute lower limit due to ergodic repetition: f_l >10^{-10^{115}} – in an infinite universe there will eventually be randomly generated copies of Earth and even the entire galaxy (at huge distances from each other). Observer selection effects make using the earliness of life on Earth problematic.
    • We used a log-normal rate of abiogenesis that was transformed to a fraction distribution.
  • f_i: Fraction of lifebearing planets with intelligence/complex life.
    • This is very uncertain; see below for our arguments that the uncertainty ranges over perhaps 100 orders of magnitude.
    • One could argue that there have been about 5 billion species so far and only one intelligent, so we know f_i>2\cdot 10^{-10}. But one could also argue that we should count assemblages of 10 million species, which gives a fraction of 1/500 per assemblage. Observer selection effects may be distorting this kind of argument.
    • We could have used a log-normal rate of complex life emergence that was transformed to a fraction distribution or a broad log-linear distribution. Since this would have made many graphs hard to interpret we used log-uniform from 0.001 to 1, not because we think this likely but just as a simple illustration (the effect of the full uncertainty is shown in Supplement II).
  • f_c: Fraction of time when it is communicating.
    • Very uncertain; humanity's fraction is about 0.000615 so far. We used log-uniform from 0.01 to 1.
  • L: Average lifespan of a civilisation.
    • Fairly uncertain; 50(?) < L < 10^9-10^{10} years (the upper limit comes from the Drake equation's applicability: it assumes the galaxy is in a steady state, and if civilisations are long-lived enough they will still be accumulating since the universe is too young).
    • We used log-uniform from 100 to 10,000,000,000.

Note that this is to some degree a caricature of current knowledge, rather than an attempt to represent it perfectly. Fortunately our argument and conclusions are pretty insensitive to the details – it is the vast ranges of uncertainty that are doing the heavy lifting.
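For concreteness, here is a sketch of the synthetic model using the log-uniform ranges listed above. The 50-order-of-magnitude spread of the abiogenesis rate is an illustrative stand-in for the paper's log-normal, not its fitted value:

```python
import random
import math

random.seed(2)

def log_uniform(lo, hi):
    """Sample log-uniformly between lo and hi."""
    return 10 ** random.uniform(math.log10(lo), math.log10(hi))

def sample_N():
    R = log_uniform(1, 100)        # star formation rate (stars/year)
    f_p = log_uniform(0.1, 1)      # fraction of stars with planets
    n_e = log_uniform(0.1, 1)      # habitable planets per system
    # Abiogenesis: sample a log-normal *rate* and turn it into a fraction.
    # The 50-OOM spread is an illustrative assumption, not the paper's fit;
    # the exponent is clamped to avoid float overflow.
    lam = 10 ** min(random.gauss(0, 50), 300)
    f_l = 1 - math.exp(-lam)
    f_i = log_uniform(0.001, 1)    # fraction evolving intelligence
    f_c = log_uniform(0.01, 1)     # fraction communicating
    L = log_uniform(100, 1e10)     # civilisation lifetime (years)
    return R * f_p * n_e * f_l * f_i * f_c * L

samples = [sample_N() for _ in range(100_000)]
p_alone_galaxy = sum(N < 1 for N in samples) / len(samples)
print(f"P(alone in the Milky Way) ≈ {p_alone_galaxy:.2f}")
```

With these assumptions the chance of an empty Milky Way lands near the ~50% ballpark quoted earlier, though the exact number depends on the stand-in abiogenesis distribution.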

Abiogenesis

Why do we think the fraction of planets with life parameters could have a huge range?

First, instead of thinking in terms of the fraction of planets having life, consider a rate of life formation in suitable environments: what is the induced probability distribution? The emergence is a physical/chemical transition in some kind of primordial soup, and transition events occur in this medium at some rate per unit volume: f_l \approx \lambda V t where \lambda is the transition rate, V is the available volume and t is the available time. High rates would imply that almost all suitable planets originate life, while low rates would imply that almost no suitable planets originate life.

The uncertainty regarding the length of time during which life formation is possible is at least 3 orders of magnitude (10^7-10^{10} years).

The uncertainty regarding volumes spans 20+ orders of magnitude – from entire oceans to brine pockets on ice floes.

Uncertainty regarding transition rates can span 100+ orders of magnitude! The reason is that this might involve combinatoric flukes (you need to get a fairly longish sequence of parts into the right sequence to get the right kind of replicator), or that it is like the protein folding problem where Levinthal’s paradox shows that it takes literally astronomical time to get entire oceans of copies of a protein to randomly find the correctly folded position (actual biological proteins “cheat” by being evolved to fold neatly and fast). Even chemical reaction rates span 100 orders of magnitude. On the other hand, spontaneous generation could conceivably be common and fast! So we should conclude that \lambda has an uncertainty range of at least 100 orders of magnitude.
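Propagating these three uncertainties through f_l \approx \lambda V t shows the characteristic result: the induced credence for the fraction of planets with life piles up at “almost all” and “almost none”. The ranges below are illustrative placements of the orders of magnitude mentioned above:

```python
import random
import math

random.seed(4)

# Illustrative log10 ranges from the text: rate ~100+ OOM, volume ~20, time ~3.
# The absolute placement of the rate range is an assumption.
def sample_f_l():
    log_lam = random.uniform(-120, 5)   # transition rate per unit volume
    log_V = random.uniform(0, 20)       # available volume
    log_t = random.uniform(7, 10)       # available time (years)
    lam_V_t = 10 ** (log_lam + log_V + log_t)
    return 1 - math.exp(-lam_V_t)       # induced fraction of planets with life

fs = [sample_f_l() for _ in range(100_000)]
near_one = sum(f > 0.9 for f in fs) / len(fs)
near_zero = sum(f < 1e-6 for f in fs) / len(fs)
print(f"credence on f_l ≈ 1: {near_one:.2f}")
print(f"credence on f_l ≈ 0: {near_zero:.2f}")
```

Almost all of the probability mass ends up at the two extremes: life is either nearly universal or vanishingly rare, with little credence in between.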

Actual abiogenesis will involve several steps. Some are easy, like generating simple organic compounds (plentiful in asteroids, comets and the Miller-Urey experiment). Some are likely tough. People often overlook that even getting proteins and nucleic acids to form in a watery environment is somewhat of a mystery, since these chains tend to hydrolyze; the standard explanation is to look for environments that have a wet-dry cycle allowing complexity to grow. But this means V is much smaller than an ocean.

That we have tremendous uncertainty about abiogenesis does not mean we do not know anything. We know a lot. But at present we have no good scientific reasons to believe we know the rate of life formation per liter-second. That will hopefully change.

Don’t creationists argue stuff like this?

There are a fair number of examples of creationists arguing that the origin of life must be super-unlikely and hence we must believe in their particular god.

The problem(s) with this kind of argument is that it presupposes that there is only one planet, and somehow we got a one-in-a-zillion chance on that one. That is pretty unlikely. But in reality there are a zillion planets, so even if there is a one-in-a-zillion chance for each of them we should expect to see life somewhere… especially since being a living observer is a precondition for “seeing life”! Observer selection effects really matter.

We are also not arguing that life has to be super-unlikely. In the paper our distribution of life emergence rate actually makes it nearly universal 50% of the time – it includes the possibility that life will spontaneously emerge in any primordial soup puddle left alone for a few minutes. This is a possibility I doubt anybody believes in, but it could be that would-be new life is emerging right under our noses all the time, only to be outcompeted by the advanced life that already exists.

Creationists make a strong claim that they know f_l \ll 1; this is not really supported by what we know. But f_l \ll 1 is totally within possibility.

Complex life

Even if you have life, it might not be particularly good at evolving. The reasoning is that it needs to have a genetic encoding system that is both rigid enough to function efficiently and fluid enough to allow evolutionary exploration.

All life on Earth shares almost exactly the same genetic systems, showing that only rare and minor changes have occurred in \approx 10^{40} cell divisions. That is tremendously stable as a system. Nonetheless, it is fairly commonly believed that other genetic systems preceded the modern form. The transition to the modern form required major changes (think of upgrading an old computer from DOS to Windows… or worse, from CP/M to DOS!). It would be unsurprising if the rate was < 1 per 10^{100} cell divisions given the stability of our current genetic system – but of course, the previous system might have been super-easy to upgrade.

Modern genetics required >1/5 of the age of the universe to evolve intelligence. A genetic system like the one that preceded ours might both be stable over a googol cell divisions and evolve more slowly by a factor of 10, and run out the clock. Hence some genetic systems may be incapable of ever evolving intelligence.

This is related to a point made by Brandon Carter much earlier: he pointed out that the timescales of getting life, evolving intelligence and how long biospheres last are independent and could be tremendously different – that life emerged early on Earth may have been a fluke due to the extreme difficulty of also getting intelligence within this narrow interval (on all the more likely worlds there are no observers to notice). If there are more difficult transitions, you get an even stronger observer selection effect.

Evolution goes down branches without looking ahead, and we can imagine that it could have an easier time finding inflexible coding systems (“B life”) than our own nice one (“A life”). If the rate of discovering B-life is \lambda_B and the rate of discovering capable A-life is \lambda_A, then the fraction of A-life in the universe is just \lambda_A/\lambda_B – and rates can differ by many orders of magnitude, producing a life-rich but evolution/intelligence-poor universe. Multiple-step models raise the rates to integer exponents, which multiplies the order-of-magnitude differences.

So we have good reasons to think there could be a hundred orders of magnitude uncertainty on the intelligence parameter, even without trying to say something about evolution of nervous systems.

How much can we rule out aliens?

Humanity has not scanned that many stars, so obviously we have checked only a tiny part of the galaxy – and could have missed them even if we looked at the right spot. Still, we can model how this weak data updates our beliefs (see Supplement II).

The strongest argument against aliens is the Tipler-Hart argument that settling the Milky Way, even when expanding at low speed, will only take a fraction of its age. And once a civilisation is everywhere it is hard for it to go extinct everywhere – it will tend to persist even if local pieces crash. Since we do not seem to be in a galaxy paved over by an alien supercivilisation, we have a very strong argument for a low rate of intelligence emergence. Even if 99% of civilisations stay home, or we could be in an alien zoo, you still get a massive update against a really settled galaxy. In our model the probability of less than one civilisation per galaxy went from 52% to 99.6% if one includes the basic settlement argument.
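The flavour of this update can be reproduced with a toy Bayesian calculation: weight each prior sample of N by the probability that no galaxy-spanning civilisation would be visible, if each civilisation independently becomes visible with probability q. The prior below is a broad stand-in, so the numbers will not match the paper's 52% to 99.6%, but the direction and strength of the update show up:

```python
import random
import math

random.seed(3)

# Toy prior over log10 N: a broad stand-in, not the paper's synthetic model.
log_N = [random.gauss(0, 5) for _ in range(200_000)]

def posterior_p_alone(q):
    """P(N < 1 per galaxy) after observing no galaxy-spanning civilisation,
    when each civilisation independently becomes visible with probability q."""
    weights = [math.exp(-q * 10 ** x) for x in log_N]  # P(none visible | N) ≈ e^{-qN}
    num = sum(w for x, w in zip(log_N, weights) if x < 0)
    return num / sum(weights)

prior_p_alone = sum(x < 0 for x in log_N) / len(log_N)
print(f"prior  P(N<1) = {prior_p_alone:.2f}")
print(f"posterior, q=0.01 (99% stay home):  {posterior_p_alone(0.01):.2f}")
print(f"posterior, q=0.005 (G-hat-style):   {posterior_p_alone(0.005):.2f}")
```

Even a 1% per-civilisation chance of becoming visible pushes the posterior noticeably towards an empty galaxy, because the densely inhabited scenarios are heavily penalised.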

The G-hat survey of galaxies, looking for signs of K3 civilisations, did not find any. Again, maybe we missed something or most civilisations don’t want to re-engineer galaxies, but if we assume about half of them want to and have 1% chance of succeeding we get an update from 52% chance of less than one civilisation per galaxy to 66%.

Using models of us having looked at about 1,000 stars, or assuming there is no civilisation within 18 pc, gives a milder update, from 52% to 53% and 57% respectively. These just rule out super-densely inhabited scenarios.

So what? What is the use of this?

People like to invent explanations for the Fermi paradox that would all have huge implications for humanity if they were true – maybe we are in a cosmic zoo, maybe there are interstellar killing machines out there, maybe singularity is inevitable, maybe we are the first civilisation ever, maybe intelligence is a passing stage, maybe the aliens are sleeping… But if you are serious about thinking about the future of humanity you want to be rigorous about this. This paper shows that current uncertainties actually force us to be very humble about these possible explanations – we can’t draw strong conclusions from the empty sky yet.

But uncertainty can be reduced! We can learn more, and that will change our knowledge.

From a SETI perspective, this doesn’t say that SETI is unimportant or doomed to failure, but rather that if we ever see even the slightest hint of intelligence out there many parameters will move strongly. Including the all-important L.

From an astrobiology perspective, we hope we have pointed at some annoyingly uncertain factors and that this paper can get more people to work on reducing the uncertainty. Most astrobiologists we have talked with are aware of the uncertainty but do not see its weird knock-on effects. Especially figuring out how we got our fairly good coding system, and what the competing options are, seems very promising.

Even while we remain uncertain we can update our plans in the light of this. For example, in my tech report about settling the universe fast I pointed out that if one is uncertain about how much competition there might be for the universe, one can use one’s probability estimates to decide on the range to aim for.

Uncertainty matters

Perhaps the most useful insight is that uncertainty matters and we should learn to embrace it carefully rather than assume that apparently specific numbers are better.

Perhaps never in the history of science has an equation been devised yielding values differing by eight orders of magnitude. . . . each scientist seems to bring his own prejudices and assumptions to the problem.
History of Astronomy: An Encyclopedia, ed. by John Lankford, s.v. “SETI,” by Steven J. Dick, p. 458.

When Dick complained about the wide range of results from the Drake equation he likely felt it was too uncertain to give any useful result. But an 8 orders of magnitude spread is in this case just a sign of downplaying our uncertainty and overestimating our knowledge! Things get much better when we look at what we know and don’t know, and figure out the implications of both.

Jill Tarter said the Drake equation was “a wonderful way to organize our ignorance”, which we think is closer to the truth than demanding a single number as an answer.

Ah, but I already knew this!

We have encountered claims that “nobody” really is naive about using the Drake equation. Or at least not any “real” SETI and astrobiology people. Strangely enough people never seem to make this common knowledge visible, and a fair number of papers make very confident statements about “minimum” values for life probabilities that we think are far, far above the actual scientific support.

Sometimes we need to point out the obvious explicitly.

[Edit 2018-06-30: added the GIGO section]

Arguing against killer robot janissaries

Military robot being shown to families at New Scientist Live 2017.

I have a piece in Dagens Samhälle with Olle Häggström, Carin Ism, Max Tegmark and Markus Anderljung urging the Swedish parliament to consider banning lethal autonomous weapons.

This is of course mostly symbolic; the real debate is happening right now over in Geneva at the CCW. I also participated in a round-table with the Red Cross that led to their report on the issue, which is one of the working papers presented there.

I am not particularly optimistic that we will get a ban – nor that a ban would actually achieve much. However, I am much more optimistic that this debate may force a general agreement about the importance of getting meaningful human control. This is actually an area where most military and peace groups would agree: nobody wants systems that are unaccountable and impossible to control. Making sure there are international agreements that using such systems is irresponsible and maybe even a war crime would be a big win. But there are lots of devils in the details.

When it comes to arguments for why LAWs are morally bad I am personally not so convinced that the badness comes from a machine making the decision to kill a person. Clearly some possible machine decision-making does improve proportionality and reduce arbitrariness. Similarly, arguments about whether they would increase or reduce the risk of military action, and how this would play out in terms of human suffering and death, are interesting empirical arguments, but we should not be overconfident that we know the answers. Given that once LAWs are in use it will be hard to roll them back if the answers are bad, we might find it prudent to try to avoid them (but consider the opposing scenario where robots have fought our wars since time immemorial and somebody now suggests using humans too – there is a status quo bias here).

My main reason for being opposed to LAWs is not that they would be inherently immoral, nor that they would necessarily or even likely make war worse or more likely. My view is that the problem is that they give states too much power. Basically, they make the state’s monopoly on violence independent of the wishes of the citizens. Once a sufficiently potent LAW military (or police force) exists it will be able to exert coercive and lethal power as ordered, without any mediation through citizens. While having humans in the army certainly doesn’t guarantee moral behaviour, if ordered to turn against the citizenry or act in a grossly immoral way they can exert moral agency and resist (with varying levels of overtness). The LAW army will instead implement the orders as long as they are formally lawful (assuming there is at least a constraint against unlawful commands). States know that if they mistreat their population too much their army might side with the population, which is a reason why some of the nastier governments make use of mercenaries or a special separate class of soldier to reduce the risk. If LAWs become powerful enough they might make dictatorships far more stable by removing a potentially risky key component of state power from internal politics.

Bans and moral arguments are unlikely to work against despots. But building broad moral consensus on what is acceptable in war does have effects. If R&D emphasis is directed towards finding solutions for how to manage responsibility for autonomous device decisions, that will develop a lot of useful technologies for making such systems at least safer – and one can well imagine similar legal and political R&D into finding better solutions to citizen-independent state power.

In fact, far more important than LAWs is what to do about Lethal Autonomous States. Bad governance kills, many institutions/corporations/states behave just as badly as the worst AI risk visions and have a serious value alignment problem, and we do not have great mechanisms for handling responsibility in inter-state conflicts. The UN system is a first stab at the problem but obviously much, much more can be done. In the meantime, we can try avoiding going too quickly down a risky path while we try to find safe-making technologies and agreements.

Admitting blog infidelity with StackExchange

I have neglected Andart II for some time, partly for the good reason of work (The Book is growing!), partly because I got a new addiction: answering (and asking) questions on Physics and Astronomy StackExchange. Very addictive, but also very educational. Here are links to some of the stuff I have been adding, which might be of interest to some readers.

Can you be a death positive transhumanist?

Spes altera vitae

I recently came across the concept of “death positivity”, expressed as the idea that we should accept the inevitability of death and embrace the diversity of attitudes and customs surrounding it. Looking a bit deeper, I found the Order of the Good Death and their statement.

That got me thinking about transhumanist attitudes to death and how they are perceived.

While the brief Kotaku description makes it sound as if death positivity is about celebrating death, the Order of the Good Death is mainly about acknowledging death and dying. That we hide it behind closed doors and avoid public discussion (or even thinking about it) does harm to society and arguably to our own emotions. Fear and denial are not good approaches. Perhaps the best slogan-description is “Accepting that death itself is natural, but the death anxiety of modern culture is not.”

The Order aims at promoting more honest public discussion, curiosity, innovation and gatherings to discuss death-related topics. Much of this relates to the practices of the “death industry”, some of which definitely should be discussed in terms of economic costs, environmental impact, ethics and legal rights.

Denying death as a bad thing?

Queuing for eternal rest

There is an odd paradox here. Transhumanism is often described as death denying, and this description is not meant as a compliment in the public debate. Wanting to live forever is presented as immature, selfish or immoral. Yet we have an overall death denying society, so how can this particular denial be singled out as bad?

Part of it is that the typical frame of the critique is that of a “purveyor of wisdom” (a philosopher, a public intellectual, the local preacher) who no doubt would scold society too, had the transhumanist not been a more convenient target.

This critique is rarely applied to established religions that are even more radically death denying – Christianity after all teaches the immortality of the soul, and in Hinduism and Buddhism ending the self is a nearly impossible struggle through countless reincarnations: talk about denying death! You rarely hear people asking how life could have a meaning if there is an ever-lasting hereafter. (In fact, some, like Tolstoy, have argued that it is only because of such ever-lasting states that anything can have meaning.) Some of the lack of critique is due to social capital: major religions hold much of it, transhumanism less, so criticism tends to focus on the groups that have less impact. Not just because the “purveyor of wisdom” fears a response, but because they are themselves, consciously or not, embedded inside the norms and myths of these influential groups.

Another reason the immortalist position gets criticised is death denial itself. Immortalism, and its more plausible sibling longevism, directly breaks the taboo against discussing death honestly. It questions core ideas about what human existence is like, and by necessity it delves into the processes of ageing and death. It tries to bring up uncomfortable subjects and does not accept the standard homilies about why life should be like it is and why we need to accept it. This second reason actually makes transhumanism and death positivity unlikely allies.

Naïve transhumanists sometimes try to recruit people by offering the hope of immortality. Often they are surprised and shocked by the negative reactions. Leaving the appearance of a Faustian bargain aside, people typically respond by shoring up their conventional beliefs and defending their existential views. Few transhumanist ideas cause stronger reactions than life extension – I have lectured about starting new human species, uploading minds, remaking the universe, enhancing love, and many extreme topics, but I rarely get as negative comments as when discussing the feasibility and ethics of longevity.

The reason for this is in my opinion very much fear of death (with a hefty dose of status quo bias mixed in). As we grow up we have to handle our mortality and we build a defensive framework telling us how to handle it – typically by downplaying the problem of death by ignoring it, explaining or hoping via a religious framework, or finding some form of existential acceptance. But since most people rarely are exposed to dissenting views or alternatives they react very badly when this framework is challenged. This is where death positivity would be very useful.

Why strict immortalism is a non-starter

XIII: Entropy

Given our current scientific understanding death is unavoidable. The issue is not whether life extension is possible, but the basic properties of our universe. Given the accelerating expansion of the universe we can only ever gain access to a finite amount of material resources. Using these resources is subject to thermodynamic inefficiencies that cannot be avoided. Basically, the third law of thermodynamics and Landauer’s principle imply that only a finite number of information processing steps can be undertaken in our future. Eventually the second law of thermodynamics wins (helped by proton decay and black hole evaporation) and nothing that can store information or perform the operations needed for any kind of life will remain. This means that no matter what strange means any being undertakes, as far as we understand physics it will eventually dissolve.

One should also not discount plain bad luck: finite beings in a universe where quantum randomness happens will sooner or later be subjected to a life-ending coincidence.

The Heat Death of the Universe and Quantum Murphy’s Law are very high upper bounds. They are important because they force any transhumanist who does not want to dump rationality overboard (insisting that the laws of physics must allow true immortality simply because it is desired) to acknowledge that they will eventually die. Perhaps aeons hence and in a vastly changed state, but at some point it will have happened (perhaps so subtly that nobody even noticed: shifts in identity also count).

To this the reasonable transhumanist responds with a shrug: we have more pressing mortality concerns today, when ageing, disease, accidents and existential risk are so likely that we can hardly expect to survive a century. We endlessly try to explain to interviewers that transhumanism is not really seeking capital “I” Immortality but merely indefinitely long lifespans, and actually we are interested in years of health and activity rather than just watching the clock tick as desiccated mummies. The point is, a reasonable transhumanistic view will be focused on getting more and better life.

Running from death or running towards life?

Love triangle

One can strive to extend life because one is scared of dying – death as something deeply negative – or because life is worth living – remaining alive has a high value.

But if one can never avoid having death at some point in one’s lifespan then the disvalue of death will always be present. It will not affect whether living one life is better than another.

An exception may be if one believes that the disvalue can be discounted by being delayed, but this merely affects the local situation in time: at any point one prefers the longest possible life, but the overall utility as seen from the outside when evaluating a life will always suffer the total disvalue.

I believe the death-apologist thinkers have made some good points about why death is not intensely negative (e.g. the Lucretian arguments). I do not think they are convincing in showing that it is a positive property of the world. If “death gives life meaning” then presumably divorce is what makes love meaningful. If it is a good thing that old people retire from positions of power, why not have mandatory retirement rather than the equivalent of random death-squads? In fact, defences of death as a positive tend to use remarkably weak reasons as motivations, reasons that would never be taken seriously if used to motivate complacency about a chronic or epidemic disease.

Life-affirming transhumanism on the other hand is not too worried about the inevitability of death. The question is rather how much and what kind of good life is possible. One can view it as a game of seeking to maximise a “score” of meaningfulness and value under risk. Some try to minimise the risk, others to get high points, still others want to figure the rules or structure their life projects to make a meaningful structure across time.

Ending the game properly

Restart, human!

This also includes ending life when it is no longer meaningful. Were one to regard death as extremely negative, then one should hang on even if there was nothing but pain and misery in the future. If death merely has zero value, then one can be in bad states where it is better to be dead than alive.

As we have argued in a recent paper, many of the anti-euthanasia arguments turn on their head when applied to cryonics: if one regards life as too precious a gift to be thrown away, and holds that the honourable thing is to continue to struggle on, then undergoing cryothanasia (being cryonically suspended well before one would otherwise have died) when suffering a terminal disease, in the rational hope that this improves one’s chances, clearly seems better than not taking the chance or allowing the disease to reduce one’s chances.

This also shows an important point where one kind of death positivity and transhumanism may part ways. One can frame accepting death as “accept that death exists and deal with it”. Another frame, equally compatible with the statement, is not struggling too much against it. The second frame is often what philosophers suggest as a means for equanimity. While possibly psychologically beneficial it clearly has limits: the person not going to the doctor with a treatable disease when they know it will develop into something untreatable (or not stepping out of the way of an approaching truck) is not just “not struggling” but being actively unreasonable. One can and should set some limit where struggle and interventions become unreasonable, but this is always going to be both individual and technology dependent. With modern medicine many previously lethal conditions (e.g. bacterial meningitis, many cancers) have become treatable to such an extent that it is not reasonable to refuse treatment.

Transhumanism puts a greater value on longevity than is usual, partially because of its optimistic outlook (the future is likely to be good, technology is likely to advance), and this leads to a greater willingness to struggle on even when conventional wisdom says it is a good time to give up and become fatalistic. This is a reason transhumanists are far more OK with radical attempts to stave off death than most people, including cryonics.

Cryonics

Long term care

Cryonics is another surprisingly death-positive aspect of transhumanism. It forces you to confront your mortality head on, and it does not offer very strong reassurance. Quite the opposite: it requires planning for one’s (hopefully temporary) demise, considering the various treatment/burial options, likely causes of death, and the risks and uncertainties involved in medicine. I have friends who seriously struggled with their dread of death when trying to sign up.

Talking about the cryonics choice with family is one of the hardest parts of the practice and has caused significant heartbreak, yet keeping silent and springing it as a surprise guarantees even more grief (and lawsuits). This is one area where better openness about death would be extremely helpful.

It is telling that members of the cryonics community seek each other out, since it is one of the few environments where these things can be discussed openly and without stigma. It seems likely that the death-positive and the cryonics communities have more in common than they might think.

Cryonics also has to deal with the bureaucracy and logistics of death, with the added complication that it aims at something slightly different than conventional burial. To a cryonicist the patients are still patients even when they have undergone cardiac arrest, are legally declared dead, solid and immersed in liquid nitrogen: they need care and protection since they may only be temporarily dead. Or deanimated, if we want to reserve “death” as a word for irreversibly non-living. (As a philosopher, I must say I find the cryosuspended state delightfully like a thought-experiment in a philosophy paper).

Final words

Winter dawn

I have argued that transhumanism should be death-positive, at least in the sense that discussing death and accepting its long-term inevitability is both healthy and realistic. Transhumanists will not generally support a positive value of death and will tend to react badly to that kind of statement. But assigning it a vastly negative value produces a timid outlook that is unlikely to work well with the other parts of the transhumanist idea complex. Rather, death is bad because life is good, but that doesn’t mean we should not think about it.

Indeed, transhumanists may want to become better at talking about death. Respected and liked people who have been part of the movement for a long time have died and we are often awkward about how to handle it. Transhumanists need to handle grief too. Even if the subject may be only temporarily dead in a cryonic tank.

Conversely, transhumanism and cryonics may represent an interesting challenge for the death positive movement in that they certainly represent an unusual take on attitudes and customs towards death. Seeing death as an engineering problem is rather different from how most people see it. Questioning the human condition is risky when dealing with fragile situations. And were transhumanism to be successful in some of its aims there may be new and confusing forms of death.

Existential risk in Gothenburg

This fall I have been chairing a programme at the Gothenburg Centre for Advanced Studies on existential risk, thanks to Olle Häggström. Visiting researchers come and participate in seminars and discussions on existential risk, ranging from the very theoretical (how do future people count?) to the very applied (should we put existential risk on the school curriculum? How?). I gave a Petrov Day talk about how to calculate risks of nuclear war and how observer selection might mess this up, besides seminars on everything from the Fermi paradox to differential technology development. In short, I have been very busy.

To open the programme we had a workshop on existential risk September 7-8 2017. Now we have the videos up of our talks:

So far, a few key realisations and themes have in my opinion been:

(1) The pronatalist/maximiser assumptions underlying some of the motivations for existential risk reduction were challenged; there is an interesting issue of how “modest futures” rather than “grand futures” play a role, and whether non-maximising goals also imply existential risk reduction.

(2) The importance of figuring out how “suffering risks”, potential states of astronomical amounts of suffering, relate to existential risks. Allocating effort between them rationally touches on some profound problems.

(3) The under-determination problem of inferring human values from observed behaviour (a talk by Stuart) resonated with the under-determination of AI goals in Olle’s critique of the convergent instrumental goal thesis and other discussions. Basically, complex agent-like systems might be harder to succinctly describe than we often think.

(4) Stability of complex adaptive systems – brains, economies, trajectories of human history, AI. Why are some systems so resilient in a reliable way, and can we copy it?

(5) The importance of estimating force projection abilities in space and as the limits of physics are approached. I am starting to suspect there is a deep physics answer to the question of attacker advantage, and a trade-off between information and energy in attacks.

We will produce an edited journal issue with papers inspired by our programme, stay tuned. Avancez!

 

Fractals and Steiner chains

I recently came across this nice animated gif of a fractal based on a Steiner chain, due to Eric Martin Willén. I immediately wanted to replicate it.

Make Steiner chains easily

First, how do you make a Steiner chain? It is easy using inversion geometry. Just decide on the number of circles tangent to the inner circle (n). Then the ratio of the radii of the inner and outer circle will be r/R = (1-\sin(\pi/n))/(1+\sin(\pi/n)). The radii of the circles in the ring will be (R-r)/2 and their centres are located at distance (R+r)/2 from the origin. This produces a staid concentric arrangement. Now invert with relation to an arbitrary circle: all the circles are mapped to other circles, their tangencies preserved. Voila! A suitably eccentric Steiner chain to play with.
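The recipe above can be sketched in a few lines of Python. This is an illustrative reconstruction rather than the code I actually used, and the inversion circle (centre 0.5 + 0.3i, radius 1) is an arbitrary choice:

```python
import cmath
import math

def steiner_chain(n, R=1.0):
    """Concentric Steiner chain: (centre, radius) pairs for inner, outer, ring."""
    s = math.sin(math.pi / n)
    r = R * (1 - s) / (1 + s)                      # r/R = (1-sin(pi/n))/(1+sin(pi/n))
    ring = [((R + r) / 2 * cmath.exp(2j * math.pi * k / n), (R - r) / 2)
            for k in range(n)]                     # centres at (R+r)/2, radii (R-r)/2
    return (0j, r), (0j, R), ring

def invert_circle(centre, radius, a, k):
    """Image of a circle (not passing through a) under inversion in circle (a, k)."""
    d = centre - a
    s = k * k / (abs(d) ** 2 - radius ** 2)
    return a + s * d, abs(s) * radius

inner, outer, ring = steiner_chain(6)
a, k = 0.5 + 0.3j, 1.0                             # arbitrary inversion circle
chain = [invert_circle(c, r, a, k) for c, r in [inner, outer] + ring]
```

Since inversion maps circles to circles and preserves tangency, the resulting `chain` is an eccentric but perfectly valid Steiner chain.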

Since the original concentric chain obviously can be rotated continuously without losing touch with the inner and outer circle, this also generates a continuous family of circles after the inversion. This is why Steiner’s porism is true: if you can make the initial chain, you get an infinite number of other chains with the same number of circles.

Iterated function systems with circle maps

The fractal works by putting copies of the whole set of circles in the chain into each circle, recursively. I remap the circles so that the outer circle becomes the unit circle, and then it is easy to see that for a given small circle with (complex) centre z and radius r the map f(w) = rw + z maps the interior of the unit circle onto its interior. Use the ease of rotating the original concentric ring to produce an animation, and we can reconstruct the fractal.
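A quick way to sample such an attractor is the chaos game: repeatedly apply a randomly chosen circle map to a point. The sketch below (my illustrative version, assuming a concentric chain with the outer circle already scaled to the unit circle, and recursing into the inner circle as well as the ring) collects points of the limit set:

```python
import cmath
import math
import random

# Chaos-game sketch of the Steiner-chain IFS: each map f_k(w) = r_k*w + z_k
# sends the unit disk into the k-th chain circle (centre z_k, radius r_k).
random.seed(1)
n = 6
s = math.sin(math.pi / n)
r_in = (1 - s) / (1 + s)                           # inner radius for R = 1
maps = [((1 - r_in) / 2, (1 + r_in) / 2 * cmath.exp(2j * math.pi * k / n))
        for k in range(n)]                         # the ring circles
maps.append((r_in, 0j))                            # recurse into the inner circle too

points, w = [], 0j
for _ in range(20_000):
    r, z = random.choice(maps)
    w = r * w + z                                  # contract into the chosen circle
    points.append(w)
```

Rotating the ring centres z_k by a small phase per frame then gives the animation.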

Done.

Except… it feels a bit dry.

Ever since I first encountered iterated function systems in the 1980s I have felt they tend towards a geometric aesthetics that is not me, ferns notwithstanding. A lot has to do with the linearity of the transformations. One can of course add rotations, which cheers up the fractal a bit.

But still, I love the nonlinearity and harmony of conformal mappings.

Inversion makes things better!

Enter the circle inversion fractals. They are the sets of the plane that map to themselves when being inverted in any and all of a set of generating circles (or, equivalently, the limit set of points under these inversions). As a rule of thumb, when the circles do not touch the fractal will be Cantor/Fatou-style fractal dust. When the circles are tangent the fractal will pass through the point of tangency. If three circles are tangent the fractal will contain a circle passing through these points. Since Steiner chains have lots of tangencies, we should get a lot of delicious fractals by using them as generators.
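For concreteness, here is a small sampling sketch of a circle-inversion limit set. The generators are three mutually tangent unit circles – a hypothetical stand-in, not the Steiner-chain generators – and since inversions are involutions, the sampler never repeats the same generator twice in a row:

```python
import random

# Sample the limit set by repeatedly inverting a point in randomly chosen
# generator circles (never the same circle twice in a row).
random.seed(2)

def invert_point(w, a, k):
    """Inversion of the point w in the circle with centre a and radius k."""
    return a + (k * k) / (w - a).conjugate()

# three mutually tangent unit circles (pairwise centre distance 2)
generators = [(0j, 1.0), (2 + 0j, 1.0), (1 + 3 ** 0.5 * 1j, 1.0)]

w, last, limit_points = 0.5 + 0.5j, None, []
for i in range(10_000):
    g = random.choice([g for g in generators if g is not last])
    w = invert_point(w, *g)
    last = g
    if i > 100:                      # discard the transient before recording
        limit_points.append(w)
```

As the rules of thumb above suggest, this limit set passes through the three tangency points, each of which is fixed by the two circles meeting there.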

I use nearly the same code I used for the elliptic inversion fractals, mostly because I like the colours. The “real” fractal is hidden inside the nested circles, composed of an infinite Apollonian gasket of circles.

Note how the fractal extends outside the generators, forming a web of circles. Convergence is slow near tangent points, making it “fuzzy”. While it is easy to see the circles that belong to the invariant set that are empty, there are also circles going through the foci inside the coloured disks, touching the more obvious circles near those fuzzy tangent points. There is a lot going on here.

But we can complicate things by allowing the chain to slide and see how the fractal changes.

This is pretty neat.

 

Overcoming inertia

Balls

The tremendous accelerations involved in the kind of spaceflight seen on Star Trek would instantly turn the crew to chunky salsa unless there was some kind of heavy-duty protection. Hence, the inertial damping field.
— Star Trek: The Next Generation Technical Manual, page 24.

For a space opera RPG setting I am considering adding inertia manipulation technology. But can one make a self-consistent inertia dampener without breaking conservation laws? What are the physical consequences? How many cool explosions, superweapons, and other tropes can we squeeze out of it? How to avoid the worst problems brought up by the SF community?

What inertia is

As Newton put it, inertia is the resistance of an object to a change in its state of motion. Newton’s force law F=ma is a consequence of the definition of momentum, p=mv (which in a way is more fundamental, since it directly ties in with conservation laws). The mass in the formula is the inertial mass. Mass is a measure of how much matter there is, and we normally multiply it by a hidden constant of 1 to get the inertial mass – this constant is what we will want to mess with.

There are relativistic versions of the laws of motion that handle momentum and inertia at high velocities, where the kinetic energy becomes so large that it starts to add mass to the whole system. This makes the total inertia go up, as seen by an outside observer, and looks like a nice case for inertia-manipulating tech being vaguely possible.

However, Einstein threw a spanner into this: gravity also acts on mass and conveniently does so exactly as much as inertia: gravitational mass (the masses in F=Gm_1m_2/r^2) and inertial mass appear to be equal. At least in my old school physics textbook (early 1980s!) this was presented as a cool unsolved mystery, but it is a consequence of the equivalence principle in general relativity (1907): all test particles accelerate the same way in a gravitational field, and this is only possible if their gravitational mass and inertial mass are proportional to one another.

So, an inertia manipulation technology will have to imply some form of gravity manipulation technology. Which may be fine from my standpoint, since what space opera is complete without antigravity? (In fact, I already had decided to have Alcubierre warp bubble FTL anyway, so gravity manipulation is in.)

Playing with inertia

OK, let’s leave relativity to the side for the time being and just consider the classical mechanics of inertia manipulation. Let us posit that there is a magical field that allows us to dial up or down the proportionality constant for inertial mass: the momentum of a particle will be p=\mu m v, the force law F=\mu m a and the formula for kinetic energy K=(1/2) \mu m v^2. \mu is the effect of the magic field, running from 0<\mu<\infty, with 1 corresponding to it being absent.

I throw a 1 g ping-pong ball at 1 m/s into my inertics device and turn on the field. What happens? Let us assume the field is \mu=1000. Now the momentum and kinetic energy jump by a factor of 1000 if the velocity remains unchanged. Were I to catch the ball I would have gained 999 times its original kinetic energy: this looks like an excellent perpetual motion machine. Since we do not want that to be possible (a space empire powered by throwing ping-pong balls sounds silly) we must demand that energy is conserved.

Velocity shifting to preserve kinetic energy

Radiation shielding

One way of doing energy conservation is for the velocity of my heavy ping-pong ball to go down. This means that the new velocity will be v/\sqrt{\mu}. Inertia-increasing fields slow down objects, while inertia-decreasing fields speed them up.
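A quick numerical check of this convention, using the ping-pong ball numbers from above (the helper function is just my illustration, not anything canonical):

```python
import math

# Energy-conserving convention: entering a field of strength mu rescales
# the speed to v/sqrt(mu), keeping K = (1/2)*mu*m*v^2 unchanged.
def enter_field(m, v, mu):
    v_new = v / math.sqrt(mu)
    K_before, K_after = 0.5 * m * v ** 2, 0.5 * mu * m * v_new ** 2
    p_before, p_after = m * v, mu * m * v_new      # momentum is NOT conserved
    return v_new, K_before, K_after, p_before, p_after

# the 1 g ping-pong ball at 1 m/s entering a mu = 1000 field
v_new, K0, K1, p0, p1 = enter_field(m=0.001, v=1.0, mu=1000.0)
```

Note that while kinetic energy comes out unchanged, the momentum grows by a factor \sqrt{\mu}, so something – presumably the field generator – has to absorb the recoil.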

Forcefields/armour

One could have a force-field made of super-high inertia that would slow down incoming projectiles. At first this seems pointless, since once they get through to the other side they speed up and will do the same damage. But we could of course put a bunch of armour in this field and have it resist the projectile. The kinetic energy will be the same, but it will be a lower-velocity collision, which means that the strength of the armour has a better chance of stopping it (in fact, as we will see below, we can use superdense armour here too). Consider the difference between being shot with a rifle bullet and being slowly but strongly stabbed by it: in the latter case the force can be distributed by good armour over a vast surface. Definitely a good thing for a space opera.

Spacecraft

A spacecraft that wants to get somewhere fast could just project a low-\mu field around itself and boost its speed by a huge 1/\sqrt{\mu} factor. Sounds very useful. But now an impacting meteorite will both have a high relative speed and, when it enters the field, get that boosted by the same factor again: impacts will happen at velocities increased by a factor of 1/\mu as measured by the ship. So boosting your speed by a factor of 1,000 will give you dust hitting you at speeds a million times higher. Since typical interplanetary dust already moves at a few km/s, we are talking about hyperrelativistic impactors. The armour above sounds like a good thing to have…

Note that any inertia-reducing technology is going to improve rockets even if there is no reactionless drive or other shenanigans: you just reduce the inertia of the reaction mass. The rocket equation no longer bites: sure, your ship is mostly massive reaction mass in storage, but to accelerate the ship you just take a portion of that mass, restore its inertia, expel it, and enjoy the huge acceleration as the big engine pushes the overall very low-inertia ship. There is just a snag in this particular case: when restoring the inertia you somehow need to give the mass enough kinetic energy to be at rest relative to the ship…

Cannons

This kind of inertics does not make for a great cannon. I can certainly make my projectile speed up a lot in the bore by lowering its inertia, but as soon as it leaves it will slow down again. If we assume a given force F acting along a bore of length L, the projectile will pick up FL joules of kinetic energy from the work the cannon does – independent of mass or inertia! The difference may be power: if you can only supply a certain energy per second, as in a coilgun, having a slower projectile spend longer in the bore is better.
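A sketch of the work-energy argument (the force, bore length and projectile mass are assumed illustrative numbers): the muzzle energy FL is the same for every \mu, only the in-bore speed changes.

```python
import math

def muzzle_energy(F, L):
    # work-energy theorem: the projectile leaves with F*L joules regardless of mu
    return F * L

def in_bore_exit_speed(F, L, mu, m):
    # speed at the muzzle while still inside the field of strength mu
    return math.sqrt(2 * F * L / (mu * m))

F, L, m = 1e4, 2.0, 0.1   # assumed: 10 kN along a 2 m bore, 100 g projectile
K = muzzle_energy(F, L)                        # 20 kJ for every mu
v_low = in_bore_exit_speed(F, L, 0.001, m)     # fast in the bore...
v_high = in_bore_exit_speed(F, L, 1000.0, m)   # ...or slow, same energy
print(K, round(v_low), round(v_high))
```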

Physics

Note that entering and leaving an inertics field will induce stresses. A metal rod entering an inertia-increasing field will have the part in the field moving more slowly, pushing back against the not-yet-slowed part (yet another plus for the armour!). When leaving the field the lighter part outside will pull away strongly.

Another effect of shifting velocities is that gases behave differently. At first it looks like changing speeds would change temperature (since we tend to think of the temperature of a gas as how fast the molecules are bouncing around), but actually the kinetic temperature of a gas depends on (you guessed it) the average kinetic energy. So that does not change at all. However, the speed of sound should scale as \propto 1/\sqrt{\mu}: it becomes far higher in an inertia-dampening field, producing helium-voice-like effects. Air molecules inside an inertia-decreasing field would tend to leave more quickly than outside air would enter, producing a pressure difference.
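The scaling is easy to make concrete (using 343 m/s for air at room temperature):

```python
import math

def sound_speed(mu, c0=343.0):
    # c = sqrt(gamma * k * T / m); with m -> mu*m at unchanged kinetic
    # temperature, the speed of sound scales as 1/sqrt(mu)
    return c0 / math.sqrt(mu)

print(sound_speed(0.01))    # mu = 0.01: 3430 m/s, extreme helium voice
print(sound_speed(100.0))   # mu = 100: 34.3 m/s, everyone a basso profundo
```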

Momentum conservation is a headache

Changing the velocity so that energy is conserved unfortunately has a drawback: momentum is not conserved! I throw a heavy object at my inertics machine at velocity v, with momentum mv and energy (1/2)mv^2; the machine reduces its inertia and increases the speed to v/\sqrt{\mu}, keeping the kinetic energy at (1/2)mv^2, but the momentum is now \mu m \cdot v/\sqrt{\mu} = \sqrt{\mu}\, mv.

What if we assume the momentum change comes from the field or machine? When I hit the mass-M machine with an object, momentum balance requires it to change its velocity by w=mv(1-\sqrt{\mu})/M. When set to decrease inertia it is pushed forward a bit, in the direction of the throw, potentially moving at up to speed (m/M)v as \mu approaches 0. When set to increase inertia it recoils towards the thrower, and can in fact reach arbitrarily large velocities as \mu grows.

This sounds odd. Demanding momentum and energy conservation requires mv = \sqrt{\mu}\, mv + Mw (giving the above formula) and mv^2 = \mu m (v/\sqrt{\mu})^2 + Mw^2, which insists that w=0. Clearly we cannot have both.
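A toy numerical check of the clash (Newtonian throughout, with the object's momentum computed as effective mass times velocity; the machine mass is an assumed 10 kg): letting the machine soak up the momentum difference leaves a leftover (1/2)Mw^2 of energy from nowhere.

```python
import math

def recoil_and_residual(m, v, mu, M):
    # kinetic-energy-preserving field: v1 = v/sqrt(mu), effective mass mu*m
    v1 = v / math.sqrt(mu)
    # let the machine of mass M absorb the momentum difference...
    w = (m * v - mu * m * v1) / M
    # ...then audit the total energy budget
    K0 = 0.5 * m * v**2
    K1 = 0.5 * mu * m * v1**2 + 0.5 * M * w**2
    return w, K1 - K0

w, residual = recoil_and_residual(0.001, 1.0, 1000.0, 10.0)
print(w, residual)   # residual = (1/2) M w^2 > 0: energy appears from nowhere
                     # unless w = 0, so both laws cannot hold at once
```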

I don’t know about you, but I’d rather keep energy conserved: cheating with energy conservation is more obvious than cheating with momentum.

Still, as Einstein pointed out using 4-vectors, momentum and energy conservation are deeply entangled – one reason inertics is not terribly likely in the real world is that the two cannot be separated. We could of course try to conserve 4-momentum ((E/c,\gamma \mu m v_x, \gamma \mu m v_y, \gamma \mu m v_z)), which would look like changing both energy and ordinary momentum at the same time.

Energy gain/loss to preserve momentum

What about retaining the ordinary momentum rather than the kinetic energy? The new velocity would be v/\mu, and the new kinetic energy K_1=(1/2) \mu m (v/\mu)^2 = (1/2) mv^2 / \mu = K_0/\mu. Just as in the kinetic-energy-preserving case the object slows down (or speeds up), only more strongly. And there is an energy difference of K_0 (1-1/\mu) that needs to be accounted for.

One way of resolving energy conservation is to demand that the change in energy is handled by the inertia-manipulation device. My ping-pong ball does not change momentum, but its kinetic energy drops from 0.5 mJ to 0.5 μJ: the device has to absorb the missing 0.4995 mJ. When the ball leaves the field there will be a surge of energy the device needs to supply back. Some nice potential here for things blowing up in dramatic ways, a requirement for any self-respecting space opera.
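The bookkeeping for the momentum-preserving case, with the ping-pong numbers from the text:

```python
def entry_energy_balance(m, v, mu):
    # momentum-preserving field: v1 = v/mu, so K1 = K0/mu; the device must
    # absorb (mu > 1) or supply (mu < 1) the difference when the object enters
    K0 = 0.5 * m * v**2
    K1 = K0 / mu
    return K0, K1, K0 - K1

K0, K1, absorbed = entry_energy_balance(0.001, 1.0, 1000.0)  # the ping-pong ball
print(K0, K1, absorbed)   # 0.5 mJ in, 0.5 uJ kept: the device banks ~0.4995 mJ
                          # and must pay it back when the ball leaves the field
```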

Spacecraft

If I want to accelerate my spaceship in this setting, I would point my momentum vector towards the target, reduce my inertia a lot, and then have to provide a lot of kinetic energy from my inertics devices and power supply. At first this sounds like it is just as bad as normal rocketry, but in fact it is awesome: I can convert my electricity directly into velocity without having to lug around a lot of reaction mass! I will even get the energy back as a surplus when restoring my inertia to slow down, a bit like electric brake regeneration systems. The rocket equation does not apply beyond getting some initial momentum. In fact, the less velocity I have from the start, the better.

At least in this scheme inertia-reduced reaction mass can be restored to full inertia within the conceptual framework of energy addition/subtraction.

One drawback is that now when I run into interplanetary dust it will drain my batteries, as the inertics system needs to give the dust a lot of kinetic energy (which will then go on to harm me!).

Another big problem (pointed out by Erik Max Francis) is that turning energy into kinetic energy gives an energy requirement dK/dt=mva, which depends on an absolute speed. This requires a privileged reference frame, throwing out relativity theory. Oops (but not unexpected).

Forcefields/armour

Energy addition/depletion makes traditional force-fields somewhat plausible: a projectile hits the field, and we use the inertics to reduce its kinetic energy to something manageable. A rifle bullet has a few thousand Joules of energy, and if you can drain that it will now harmlessly bounce off your normal armour. Presumably shields will be depleted when the ship cannot dissipate or store the incoming kinetic energy fast enough, causing the inertics to overload and then leaving the ship unshielded.

Cannons

This kind of inertics allows us to accelerate projectiles using the inertics technology, essentially feeding them as much kinetic energy as we want. If you first make your projectile super-heavy, accelerate it strongly, and then normalise the inertia it will now speed away with a huge velocity.

Physics

A metal rod entering this kind of field will experience the same type of force as in the kinetic energy respecting model, but here the field generator will also be working on providing energy balance: in a sense it will be acting as a generator/motor. Unfortunately it does not look like it could give a net energy gain by having matter flow through.

Note that this kind of device cannot be simply turned off like the previous one: there has to be an energy accounting as everything returns to \mu=1. The really tricky case is if you are in energy-debt: you have an object of lowered inertia in the field, and cut the power. Now the object needs to get a bunch of kinetic energy from somewhere. Sudden absorption of nearby kinetic energy, freezing stuff nearby? That would break thermodynamics (I could set up a perpetual motion heat engine this way). Leaving the inertia-changed object with the changed inertia? That would mean there could be objects and particles with any effective mass – space might eventually be littered with atoms with altered inertia, becoming part of normal chemistry and physics. No such atoms have ever been found, but maybe that is because alien predecessor civilisations were careful with inertial pollution.

Other approaches

Gravity manipulation

Another approach is to say that we are manipulating spacetime so that inertial forces are cancelled by a suitable gravity force (or, for purists, that the acceleration due to something gets cancelled by a counter-acceleration due to spacetime curvature that makes the object retain the same relative momentum).

The classic is the “gravitic drive” idea, where the spacecraft generates a gravity field somehow and then free-falls towards the destination. The acceleration can be arbitrarily large but the crew will just experience freefall. Same thing for accelerating projectiles or making force-fields: they just accelerate/decelerate projectiles a lot. Since momentum is conserved there will be recoil.

The force-fields will however be wimpy: essentially the field needs to be equivalent to an acceleration bringing the projectile to a stop over a short distance. Given that normal interplanetary velocities are in the tens of kilometres per second (Earth's escape velocity, more or less), the gravity field needs to be many, many gees to work. Consider slowing a 20 km/s railgun bullet to a stop over a distance of 10 metres: it needs to happen within a millisecond and requires a 20 million m/s^2 deceleration (about 2 megaG).
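The numbers above follow from constant-deceleration kinematics:

```python
def stopping_requirements(v, d, g=9.81):
    # constant deceleration a = v^2 / (2d), stopping time t = v / a
    a = v**2 / (2.0 * d)
    t = v / a
    return a, t, a / g

a, t, gees = stopping_requirements(20e3, 10.0)  # 20 km/s bullet, 10 m field
print(a, t, gees)   # 2e7 m/s^2 over one millisecond, roughly two million g
```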

If we go with energy and momentum conservation we may still need to posit that the inertics/antigravity draws power corresponding to the work it does. Make a wheel turn because of an attracting and repulsing field, and the generator has to pay for the work (plus experience a torque). Make a spacecraft go from point A to B, and it needs to pay the potential energy difference, the momentum change, and at least temporarily the gain in kinetic energy. And if you demand momentum conservation for a gravitic drive, then you have the drive pulling back with the same “force” as the spacecraft experiences. Note that energy and momentum in general relativity are only locally conserved; at least this kind of drive can handwave some excuse for breaking local momentum conservation by positing that the momentum now resides in an extended gravity field (and maybe gravitational waves).

Unlike the previous kinds of inertics this doesn’t change the properties of matter, so the effects on objects discussed below do not apply.

One problem is edge tidal effects. Somewhere there is going to be a transition zone where there is a field gradient: an object passing through is going to experience some extreme shear forces and likely spaghettify. Conversely, this makes for a nifty weapon ripping apart targets.

One problem with gravity manipulation is that it normally has to occur through gravity, which is both very weak and only has positive “charges”. Electromagnetic technology works so well because we can play positive and negative charges against each other, getting strong effects without using (very) enormous numbers of electrons. Gravity (and gravitomagnetic effects) normally only occurs due to large mass-energy densities and momenta. So for this to work there had better be antigravitons, negative mass, or some other way of making gravity behave differently from vanilla relativity. Inertics can at least typically handwave something about the Higgs field.

Forcefield manipulation

This leaves out the gravity part and just posits that you can place force vectors wherever you want. A bit like Iain M. Banks’ effector beams. No real constraints because it is entirely made-up physics; it is not clear it respects any particular conservation laws.

Other physical effects

Here are some of the nontrivial effects of changing the inertia of matter (I will leave out gravity manipulation, which has more obvious effects).

Electromagnetism: beware the blue carrot

It is worth noting that inertia manipulation does not affect light and other electromagnetic fields: photons are massless. The overall effect is that fields will push charged objects inside the inertics field around more or less strongly than usual. A low-inertia electron subjected to a given electric field will accelerate more, a high-inertia electron less. This in turn changes the natural frequencies of many systems: a radio antenna will change tuning depending on the inertia change. A receiver inside the inertics field will experience outside signals as being stronger (if the field decreases inertia) or weaker (if it increases it).

Reducing inertia also increases the Bohr magneton, e\hbar/2 \mu m_e. This means that paramagnetic materials become more strongly affected by magnetic fields, and that ferromagnets are boosted. Conversely, higher inertia reduces magnetic effects.

Changing inertia would likely change atomic spectra (see below) and hence the optical properties of many compounds. Many pigments gain their colour from absorption by conjugated systems (think of carotene or heme) that act as antennas: inertia manipulation will change the absorbed frequencies. Carotene with increased inertia will presumably shift its absorption spectrum towards lower frequencies, becoming redder, while lowered inertia causes a green or blue shift. An interesting effect is that the rhodopsin in the eye will also be affected, and colour vision will experience the same shift (objects will appear to change colour in regions with a different \mu from the observer's, but not inside the observer's own field). Strong enough fields will cause shifts so that absorption and transmission outside the visual range start to matter, e.g. infrared or UV becomes visible.
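One way to see the red shift is the free-electron (“particle in a box”) model of a conjugated pigment, where the level gaps scale as 1/\mu because the box length is fixed by the molecular skeleton. The box length and level number below are illustrative assumptions, not measured carotene values:

```python
# Assumed free-electron model of a conjugated pigment antenna.
H = 6.62607015e-34      # Planck constant, J s
M_E = 9.1093837015e-31  # electron mass, kg
C = 2.99792458e8        # speed of light, m/s

def absorption_wavelength(L, n, mu):
    # HOMO -> LUMO gap for a 1-D box: E_n = n^2 h^2 / (8 mu m L^2),
    # so the gap scales as 1/mu and the absorbed wavelength as mu
    gap = (2 * n + 1) * H**2 / (8 * mu * M_E * L**2)
    return H * C / gap

L, n = 1.8e-9, 11   # rough beta-carotene-like numbers (assumption)
for mu in (0.9, 1.0, 1.1):
    print(mu, round(1e9 * absorption_wavelength(L, n, mu), 1))
    # mu > 1 lengthens the absorbed wavelength (red shift), mu < 1 shortens it
```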

However, the above claim that photons should not be affected by inertia manipulation may not have to hold true. Photons carry momentum, p=\hbar k where k is the wave vector. So we could assume a factor of 1/\sqrt{\mu} or 1/\mu gets in there and the field red/blueshifts photons. This would complicate things a lot, so I will leave analysis to the interested reader. But it would likely make inertics fields visible due to refractive effects.

Chemistry: toxic energy levels, plus a shrink-ray

One area inertics would mess up is chemistry. Chemistry is basically all about the behaviour of the valence electrons of atoms. Their behaviour depends on their distribution between the atomic orbitals, which in turn depends on the Schrödinger equation for the atomic potential. And this equation has a dependency on the mass of the electron and nucleus.

If we look at hydrogen-like atoms, the main effect is that the energy levels become

E_n = - \mu (M Z^2 e^4/8 \epsilon_0^2 h^2 n^2),

where M=m_e m_p/(m_e+m_p) is the reduced mass. In short, the inertia-manipulation field scales the energy levels up and down proportionally. One effect is that it becomes much easier to ionise low-inertia materials, and that materials normally held together by ionic bonds (say NaCl salt) may spontaneously dissociate in high-inertia fields.

The Bohr radius scales as a_0 \propto 1/\mu: low-inertia atoms become larger. This really messes with materials. Placed in a low-inertia field atoms expand, making objects such as metals inflate. In a high inertia-field, electrons keep closer to the nuclei and objects shrink.
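Both scalings can be summarised in a few lines (reduced-mass correction ignored for simplicity):

```python
RYDBERG_EV = 13.6057    # hydrogen ground-state binding energy, eV
BOHR_M = 5.29177e-11    # Bohr radius, m

def hydrogen_in_field(mu, n=1):
    # energy levels scale proportionally to mu, the Bohr radius as 1/mu
    return mu * RYDBERG_EV / n**2, (n**2) * BOHR_M / mu

E_low, r_low = hydrogen_in_field(0.5)   # an inertia-reducing field
print(E_low, r_low)   # half the binding energy, double the radius:
                      # atoms swell and become easier to ionise
```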

As distances change, the effects of electromagnetic forces also change: internal molecular electric forces, van der Waals forces and things like that change in strength, which will no doubt have effects on biology. Not to mention melting points: reducing the inertia will make many materials melt at far lower temperatures due to larger inter-atomic and inter-molecular distances, increasing it can make room-temperature liquids freeze because they are now more closely packed.

This size change also affects the electron-electron interactions, which among other things shield the nucleus and reduce the effective nuclear charge. The changed energy levels do not strongly affect the structure of the lightest atoms, so they will likely form the same kind of chemical bonds and have the same chemistry. However, heavier atoms such as copper, chromium and palladium already have ordering rules that are slightly off because of the quirks of the energy levels. As the field deviates from 1 we should expect lighter and lighter atoms to get alternative filling patterns and this means they will get different chemistry. Given that copper and chromium are essential for some enzymes, this does not bode well – if copper no longer works in cytochrome oxidase, the respiratory chain will lethally crash.

If we allow permanently inertia-altered particles chemistry can get extremely weird. An inertia-changed electron would orbit in a different way than a normal one, giving the atom it resided in entirely different chemical properties. Each changed electron could have its own individual inertia. Presumably such particles would randomise chemistry where they resided, causing all sorts of odd reactions and compounds not normally seen. The overall effect would likely be pretty toxic, since it would on average tend to catalyze metastable high-energy, low-entropy structures in biochemistry to fall down to lower energy, higher entropy states.

Lowering inertia in many ways looks like heating things up: particles move faster, chemicals diffuse more, and things melt. Given that much of biochemistry is tremendously temperature dependent, this suggests that even slight changes of \mu to 0.99 or 1.01 would be enough to create many of the bad effects of high fever or hypothermia, and a bit more would be directly lethal as proteins denature.

Fluids: I need a lie down

Inside a lowered-inertia field matter responds more strongly to forces, and this means that fluids flow faster for the same pressure difference. Buoyancy causes stronger convection. For a given velocity, the inertial forces are reduced compared to the viscosity, lowering the Reynolds number and making flows more laminar. Conversely, enhanced-inertia fluids are hard to get moving, but at a given speed they will be more turbulent.

This will really mess up the sense of balance and likely blood flow.

Gravity: equivalent exchange

I have ignored the equivalence of inertial and gravitational mass. One way for me to get away with it is to claim that they are still equivalent, since everything occurs within some local region where my inertics field is acting: all objects get their inertial mass multiplied by \mu and this also changes their gravitational mass. The equivalence principle still holds.

What if there is no equivalence principle? I could make a 1 kg object and a 1 gram object fall at different accelerations. If I had a massless spring between them it would be extended, and I would gain energy. Besides the work done by gravity to bring down the objects (which I could collect and use to put them back where they started) I would now have extra energy – aha, another perpetual motion machine! So we had better stick to the equivalence principle.

Given that boosting inertia makes matter both tend to shrink to denser states and have more gravitational force, an important worldbuilding issue is how far I will let this process go. Using it to help fission or fusion seems fine. Allowing it to squeeze matter into degenerate states or neutronium might be more world-changing. And easy making of black holes is likely incompatible with the survival of civilisation.

[ Still, destroying planets with small black holes is harder than it looks. The traditional “everything gets sucked down into the singularity” scenario is surprisingly slow. If you model it using spherical Bondi accretion you need an Earth-mass black hole to make the sun implode within a year or so, and a 3\cdot 10^{19} kg asteroid mass black hole to implode the Earth. And the extreme luminosity slows things a lot more. A better way may be to use an evaporating black hole to irradiate the solar system instead, or blow up something sending big fragments. ]

Another fun use of inertics is of course to mess up stars directly. This does not work with the energy addition/depletion model, but the velocity change model would allow creating a region of increased inertia where density ramps up: plasma enters the volume and may start descending below the spot. Conversely, reducing inertia may open a channel where it is easier for plasma from the interior to ascend (especially since it would be lighter). Even if one cannot turn this into a black hole or trigger surface fusion, it might enable directed flares as the plasma drags electromagnetic field lines with it.

The probe was invisible on the monitor, but its effects were obvious: titanic volumes of solar plasma were sucked together into a strangely geometric sunspot. Suddenly there was a tiny glint in the middle and a shock-wave: the telemetry screens went blank.

“Seems your doomsday weapon has failed, professor. Mad science clearly has no good concept of proper workmanship.”

“Stay your tongue. This is mad engineering: the energy ran out exactly when I had planned. Just watch.”

Without the probe sucking it together the dense plasma was now wildly expanding. As it expanded it cooled. Beyond a certain point it became too cold to remain plasma: there was a bright flash as the protons and electrons recombined and the vortex became transparent. Suddenly neutral the matter no longer constrained the tortured magnetic field lines and they snapped together at the speed of light. The monitor crashed.

“I really hope there is no civilization in this solar system sensitive to massive electromagnetic pulses” the professor gloated in the dark.

Conclusions

Model: Preserve kinetic energy
Pros: Nice armour. Fast spacecraft with no energy needs (but weird momentum changes).
Cons: Interplanetary dust is a problem. Inertics cannons inefficient. Toxic effects on biochemistry.

Model: Preserve momentum
Pros: Nice classical forcefield. Fast spacecraft with energy demands. Inertics cannons work. Potential for cool explosions due to overloads.
Cons: Interplanetary dust drains batteries. Extremely weird issues of energy-debts: either breaking thermodynamics or getting altered-inertia materials. Toxic effects on biochemistry. Breaks relativity.

Model: Gravity manipulation
Pros: No toxic chemistry effects. Fast spacecraft with energy demands. Inertics cannons work.
Cons: Forcefields wimpy. Gravitic drives are iffy due to momentum conservation (and are WMDs). Gravity is more obviously hard to manipulate than inertia. Tidal edge forces.

In both models where actual inertia is changed, inertics fields appear pretty lethal. A brief brush with a weak field will likely just be incapacitating, but prolonged exposure is definitely going to kill. And extreme fields are going to do very nasty things to most normal materials – making them expand or contract, melt, change chemical structure and whatnot. Hence spacecraft, cannons and other devices using inertics need to be designed to handle these effects. One might imagine placing the crew compartment in a counter-inertics field keeping \mu=1 while the bulk of the spacecraft is surrounded by other fields. A failure of this counter-inertics field does not just instantly turn the crew into tuna paste, but into blue toxic tuna paste.

Gravity manipulation is cleaner, but this is not necessarily a plus from the cool fiction perspective: sometimes bad side effects are exactly what world-building needs. I love the idea of inertics with potential as an anti-personnel or assassination weapon through its biochemical effects, or “forcefields” being super-dense metal with amplified inertia protecting against high-velocity or beam impact.

The Atomic Rockets page makes a big deal out of how reactionless propulsion gives space opera setting-destroying weapons of mass destruction (if every tramp freighter can be turned into a relativistic missile, how long is the Imperial Capital going to last?). This is a smaller problem here: being hit by an inertia-reduced freighter hurts less, even when it is very fast (think of being hit by a fast ping-pong ball). Gravity propulsion still enables some nasty relativistic weaponry, and if you spend time adding kinetic energy to your inertia-reduced missile it can become pretty nasty. But even if the reactionless aspect does not trivially produce WMDs, inertia manipulation will produce a fair number of other risky possibilities. However, given that even a normal space freighter is a hypervelocity missile, the problem lies more in how to conceptualise a civilisation that regularly handles high-energy objects in the vicinity of centres of civilisation.

Not discussed here are issues of how big the fields can be made. Could we reduce the inertia of an asteroid or planet, sending it careening around? That has some big effects on the setting. Similarly, how small can we make the inertics: do they require a starship to power them, or could we have them in epaulettes? Can they be counteracted by another field?

Inertia-changing devices are really tricky to get to work consistently; most space opera SF using them just conveniently ignores the mess – just as it ignores that FTL gives rise to time travel, or that talking droids ought to totally transform the global economy.

But it is fun to think through the awkward aspects, since some of them make the world-building more exciting. Plus, I would rather discover them before my players, so I can make official handwaves of why they don’t matter if they are brought up.

How much for that neutron in the window?

Zach Weinersmith asked:

That is a great question. I once came up with the answer “50 tons of neutrons are needed” to a serious problem (you don’t want to know). How cheaply could you get that?

Figuring out roughly how many neutrons there are per kilogram of pure elements is pretty easy. Get their standard atomic weights, A, and subtract the atomic number Z since that is the number of protons: N=A-Z. Now we know how many neutrons there are per atom on average (standard atomic weights include the different isotope weights, weighted by their abundance).

[ Since nucleons (protons and neutrons) are about 1830 times heavier than electrons, we can ignore the electrons for an error on order of 0.05%. There is also a binding energy error, since some of the total atomic mass is because of binding energy between nucleons, which is 0.94% or less. These errors are nothing compared to price uncertainties.]

We know that one nucleon weighs about u=1.660539040\cdot 10^{-27} kg, so the number of nucleons per kilogram is N_{\mathrm{nucl}} \approx 1/(Au) and the number of neutrons per kilo is N_n \approx N_{\mathrm{nucl}}(N/A). This ranges from 7.5\cdot 10^{25} for helium down to 1.2\cdot 10^{24} for Oganesson. Hydrogen just has 4.7\cdot 10^{24} neutrons per kilogram, despite having 5.97\cdot 10^{26} nucleons per kilogram – there isn’t that much deuterium and tritium around to contribute neutrons.
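The arithmetic above is easy to reproduce (standard atomic weights taken as given; electron mass and binding energy ignored, as noted):

```python
U = 1.660539040e-27   # atomic mass unit, kg

def neutrons_per_kg(A, Z):
    # nucleons per kg is ~1/(A*u); a fraction (A - Z)/A of them are neutrons
    return (1.0 / (A * U)) * (A - Z) / A

print(neutrons_per_kg(4.0026, 2))    # helium: ~7.5e25 neutrons/kg
print(neutrons_per_kg(294.0, 118))   # oganesson: ~1.2e24
print(neutrons_per_kg(1.008, 1))     # hydrogen: ~4.7e24
```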

Now, the price of elements is badly defined. I can get a kilogram of coal much cheaper than a kilogram of diamond, and ultra-pure elements are very expensive even if the everyday element is cheap. Plus, prices vary. And it is hard to buy plutonium on the open market. Ignoring all that and taking the numbers from Wikipedia (and ignoring that some values look odd, that some are for compounds, that the prices are unadjusted for inflation, and that they are lacking for many elements…) we can actually calculate the number of neutrons per dollar:

Neutrons per dollar if one buys one kilogram of the element.

And the winner is… aluminium! You can get 8.8\cdot 10^{24} neutrons per dollar from aluminium.

In second place, nitrogen (7.1\cdot 10^{24}) and in third, hydrogen (6.8\cdot 10^{24})! Hydrogen may be very neutron-poor, but since it is rather cheap and you get lots of nucleons per kilo, this balances the lack.

Given that these prices are dodgy, I would expect an uncertainty of an order of magnitude (at least). So the true winner, given the cheapest actual source of the element, might be hard to find without excruciating price comparisons. But we can be fairly certain it is going to be something with an atomic number below 25. Uranium is unlikely to be a cheap neutron source in this sense (and just look at poor plutonium!)

So, given that aluminium is 51.8% neutrons by weight I need 96.5 tons. The current aluminium price is $1,650.00 per ton, so I would have to pay $159,225 for the neutrons in my doomsday weapon – I mean, totally innocuous thought experiment!
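The shopping bill, using the figures from the text (the $1,650/ton price is the one quoted above):

```python
def neutron_shopping(kg_neutrons, A, Z, price_per_ton):
    # neutron mass fraction of the element, tons needed, and the total bill
    frac = (A - Z) / A
    tons = kg_neutrons / frac / 1000.0
    return frac, tons, tons * price_per_ton

frac, tons, bill = neutron_shopping(50e3, 26.9815, 13, 1650.0)  # aluminium
print(round(frac, 3), round(tons, 1), round(bill))  # ~0.518, ~96.5 t, ~$159,000
```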

The Aestivation hypothesis: popular outline and FAQ

Anders Sandberg & Milan Ćirković

Since putting up a preprint for our paper “That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi’s paradox” (Journal of the British Interplanetary Society, in press) we have heard some comments and confusion that suggest to us that it would be useful to try to outline and clarify what our idea is, what we think about it, and some of the implications.

The super-short version of the paper

Maybe we are not seeing alien civilizations because they are all rationally “sleeping” in the current early cosmological era, waiting for a remote future when it is more favourable to exploit the resources of the universe. We show that given current observations we can rule out a big chunk of possibilities like this, but not all.

A bit more unpacked explanation

Information processing requires physical resources: not just computers or brains, but energy to run them. There is a thermodynamic cost to information processing that is temperature dependent: in principle, processing becomes 10 times more energy-efficient if your computer is 10 times colder (measured in kelvin). Right now the cosmic background radiation makes nearly everything in the universe hotter than 3 kelvin, but as the universe expands this background temperature will decline exponentially. So if you want to do as much information processing as possible with the energy you have, it makes sense to wait. It becomes exponentially better. Eventually the background temperature bottoms out because of horizon radiation in a few trillion years: at that point it no longer makes sense to wait.

Hence, an advanced civilization may have explored a big chunk of the universe, done what is doable with existing nature, and now mostly have internal “cultural” things to do. These things can be regarded as information processing. If they want to maximize processing they should not do it today but wait until the cold future when they will get tremendously more done (10^{30} times more!). They should hence aestivate, leaving their domain protected by some automation until they wake up.
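The 10^{30} figure follows from the Landauer limit: erasure cost scales with temperature, so the gain is just the temperature ratio. The far-future horizon temperature of ~10^{-30} K is an assumed round number for illustration:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K

def bit_erasures_per_joule(T):
    # Landauer limit: each irreversible bit operation costs at least k*T*ln(2)
    return 1.0 / (K_B * T * math.log(2))

T_now = 3.0      # roughly the current background temperature, K
T_late = 3e-30   # assumed far-future horizon temperature, ~1e-30 K
gain = bit_erasures_per_joule(T_late) / bit_erasures_per_joule(T_now)
print(gain)   # ~1e30: waiting multiplies achievable computation enormously
```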

If this is correct, there might be old and powerful civilizations around that are hard to observe, not because they are deliberately hiding but because they are inactive for the time being.

However, were this hypothesis true, they would not want to lose their stuff. We should expect to see fewer processes that reduce resources that could be useful in the far future. In the paper we look at processes that look like they might waste resources: stars converting mass into energy that is lost, stars imploding into black holes, galactic winds losing gas into intergalactic space, galaxy collisions, and galaxy clusters getting separated by the expansion of the universe. Current observations do not seem to indicate anything preventing these processes (and most interventions would be very visible).

Hence, either:

  1. the answer to the Fermi question “where are they?!” is something else (like there being no aliens),
  2. advanced civilizations aestivate but do so with only modest hoards of resources rather than entire superclusters,
  3. they are mostly interested in spreading far and wide since this gives a lot of stuff with a much smaller effort than retaining it.

Necessary assumptions

The aestivation hypothesis makes the following assumptions:

  1. There are civilizations that mature much earlier than humanity. (not too implausible, given that Earth is somewhat late compared to other planets)
  2. These civilizations can expand over sizeable volumes, gaining power over their contents. (we have argued that this is doable)
  3. These civilizations have solved their coordination problems. (otherwise it would be hard to jointly aestivate; assumption likelihood hard to judge)
  4. A civilization can retain control over its volume against other civilizations. (otherwise it would need to actively defend its turf in the present era and cannot aestivate; likelihood hard to judge)
  5. The fraction of mature civilizations that aestivate is non-zero. (if it is rational at least some will try)
  6. Aestivation is largely invisible. (seems likely, since there would be nearly no energy release)

Have you solved the Fermi question?

We are not claiming we now know the answer to the Fermi question. Rather, we have a way of ruling out some possibilities, and a few new things worth looking for (like galaxies with inhibited heavy star formation).

Do you really believe in it?

I (Anders) personally think the likeliest reason we are not seeing aliens is not that they are aestivating, but just that they do not exist or are very far away.

We have an upcoming paper giving some reasons for this belief. The short of it is that we are very uncertain about the probability of life and intelligence given the current state of scientific knowledge. They could be exceedingly low, and this means we have to assign a fairly high credence to the empty universe hypothesis. If that hypothesis is not true, then aestivation is a pretty plausible answer in my personal opinion.

Why write about a hypothesis you do not think is the most likely one? Because we need to cover as much of possibility space as possible, and the aestivation hypothesis is neatly suggested by considerations of the thermodynamics of computation and physical eschatology. We have been looking at other unlikely Fermi hypotheses like the berserker hypothesis to see if we can give good constraints on them (in that case, our existence plus some ecological instability problems make berserkers unlikely).

What is the point?

Understanding the potential and limits of intelligence in the universe tells us things about our own chances and potential future.

At the very least, this paper shows what a future advanced human-derived civilization may try to achieve, and some of the ultimate limits on far-future information processing. It gives some new numbers to feed into Nick Bostrom’s astronomical waste argument for working very hard on reducing existential risk in the present: the potential future is huge.

In regards to alien civilizations, the paper maps a part of possibility space, showing what is required for this Fermi paradox explanation to work as an explanation. It helps cut down on the possibilities a fair bit.

What about the Great Filter?

We know there has to be at least one unlikely step between non-living matter and easily observable technological civilizations (“the Great Filter”), otherwise the sky would be full of them. If it is an early filter (life or intelligence is rare) we may be fairly alone but our future is open; were the filter a later step, we should expect to be doomed.

The aestivation hypothesis doesn’t tell us much about the filter. It allows explaining away the quiet sky as evidence for absence of aliens, so without knowing if it is true or not we do not learn anything from the silence. The lack of megascale engineering is evidence against certain kinds of alien goals and activities, but rather weak evidence.

Meaning of life

Depending on what you are trying to achieve, different long-term strategies make sense. This is another way SETI may tell us something interesting about the Big Questions by showing what advanced species are doing (or not):

If the ultimate value you aim for is local such as having as many happy minds as possible, then you want to spread very far and wide, even though the galaxy clusters you have settled will eventually drift apart and be forever separated. The total value doesn’t depend on all those happy minds talking to each other. Here the total amount of value is presumably proportional to the amount of stuff you have gathered times how long it can produce valuable thoughts. Aestivation makes sense, and you want to spread far and wide before doing it.

If the ultimate value you aim for is nonlocal, such as having your civilization produce the deepest possible philosophy, then all parts need to stay in touch with each other. This means that expanding outside a gravitationally bound supercluster is pointless: your expansion will halt at this point. We can be fairly certain there are no advanced civilizations trying to scrape together larger superclusters since it would be very visible.

If the ultimate value you aim for is finite, then at some point you may be done: you have made the perfect artwork or played all the possible chess games. Such a civilization only needs resources enough to achieve the goal, and then presumably will shut down. If the goal is small it might do this without aestivating, while if it is large it may aestivate with a finite hoard.

If the ultimate goal is modest, like enjoying your planetary utopia, then you will not affect the large-scale universe (although launching intergalactic colonization may still be good for security, leading to a nonlocal instrumental goal). Modest civilizations do not affect the overall fate of the universe.

Can we test it?

Yes! The obvious way is to carefully look for odd processes keeping the universe from losing potentially useful raw materials. The suggestions in the paper give some ideas, but there are doubtless other things to look for.

Also, aestivators would protect themselves from late-evolving species that could steal their stuff. If we were to start building self-replicating von Neumann probes in the future, any aestivators around would have good reason to stop us. This hypothesis test may of course be rather dangerous…

Isn’t there more to life than information processing?

Information is “a difference that makes a difference”: information processing is just going from one distinguishable state to another in a meaningful way. This covers not just computing with numbers and text, but having one brain state follow another, doing economic transactions, and creating art. Falling in love means that a mind goes from one state to another in a very complex way. Maybe the important subjective aspect is something very different from brain states, but unless you think it is possible to fall in love without the brain changing state there will be an information processing element to it. And that information processing is bound by the laws of thermodynamics.

Some theories of value place importance on how or that something is done rather than the consequences or intentions (which can be viewed as information states): maybe a perfect Zen action holds value on its own. If the start and end state are the same, then an infinite number of such actions can be done and an equal amount of value achieved – yet there is no way of telling if they have ever happened, since there will not be a memory of them occurring.

In short, information processing is something we instrumentally need for the mental or practical activities that truly matter.

“Aestivate”?

Like hibernate, but through summer (Latin aestus = heat; to aestivate = to spend the summer dormant). Hibernate (Latin hibernus = wintry) is more common, but since this is about avoiding heat we chose the slightly rarer term.

Can’t you put your computer in a fridge?

Yes, it is possible to cool below 3 K. But you need to do work to achieve it, spending precious energy on the cooling. If you want your computing done *now* and do not care about the total amount of computing, this is fine. But if you want as much computing as possible, then fridges are going to waste some of your energy.
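A minimal sketch of why fridges do not help with the total, assuming an ideal Carnot refrigerator: the work needed to pump the erasure heat back out to the ambient temperature exactly cancels the saving from erasing at a lower temperature, so the total cost per bit comes back to kT_ambient ln(2), and any real fridge does worse:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def erase_cost_with_fridge(T_ambient, T_fridge):
    """Total energy to erase one bit inside an ideal (Carnot) fridge.

    The erasure dissipates k*T_fridge*ln(2) of heat at the cold side;
    pumping that heat out to the ambient costs extra Carnot work.
    """
    q_cold = K_B * T_fridge * math.log(2)            # heat released by erasure
    pump_work = q_cold * (T_ambient / T_fridge - 1)  # minimal refrigeration work
    return q_cold + pump_work

T_ambient = 3.0  # cosmic background, kelvin
direct = K_B * T_ambient * math.log(2)
fridged = erase_cost_with_fridge(T_ambient, 0.3)
print(fridged / direct)  # ~1 even for an ideal fridge: no net gain
```

So refrigeration helps you compute faster or colder, but never lets you squeeze more total bit erasures out of a fixed energy budget.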

There are some cool (sorry) possibilities in using very large black holes as heat sinks, since their temperature would be lower than the background radiation. But this will only last for a few hundred billion years; after that, the background will be cooler.

Do computation costs have to be temperature dependent?

The short answer is no, but we do not think this matters for our conclusion.

The irreducible energy cost of computation is due to the Landauer limit (this limit or principle has also been ascribed to Brillouin, Shannon, von Neumann and many others): to erase one bit of information you need to pay an energy cost equal to kT ln(2) or more. Otherwise you could cheat the second law of thermodynamics.
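For a rough sense of scale, the limit can be evaluated numerically; the two temperatures below are just illustrative choices (room temperature and the present cosmic background):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_cost(T):
    """Minimum energy in joules to erase one bit at temperature T (kelvin)."""
    return K_B * T * math.log(2)

for T in (300.0, 3.0):
    print(f"{T:6.1f} K: {landauer_cost(T):.2e} J per bit erased")
```

At 300 K this comes to about 2.9 x 10^-21 joules per bit, tiny in everyday terms but an unavoidable floor once you are erasing astronomical numbers of bits.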

However, logically reversible computation can work without paying this cost by never erasing information. The problem is of course that eventually memory runs out, but Bennett showed that one can then “un-compute” the computation by running it backwards, removing the garbage. The catch is that reversible computation needs to run very close to the average energy of the system (taking a long time) and that error correction is irreversible and temperature dependent. The same is true for quantum computation.

If one has a pool of negentropy, that is, something ordered that can be randomized, then one can “pay” for bit erasure using this pool until it runs out. This is potentially temperature independent! One can imagine having access to a huge memory full of zero bits. By swapping your garbage bit for a zero bit, you can potentially run computations without paying an energy cost (if the swapping itself is free): the memory acts as a reservoir at essentially zero temperature.

If there were natural negentropy pools, aestivation would be pointless: advanced civilizations would already be dumping their entropy into them in the present era. But as far as we know, there are no such pools. We can make them by ordering matter or energy, but that has a work cost that depends on temperature (or requires yet another pool of negentropy).

Space-time as a resource?

Maybe the flatness of space-time is the ultimate negentropy pool, and by wrinkling it up we can get rid of entropy: this is in a sense how the universe has become so complex thanks to matter lumping together. The total entropy due to black holes dwarfs the entropy of normal matter by several orders of magnitude.

Were space-time lumpiness a useful resource we should expect advanced civilizations to dump matter into black holes on a vast scale; this does not seem to be going on.

Lovecraft, wasn’t he, you know… a bit racist?

Yup. Very racist. And fearful of essentially everything in the modern world: globalisation, large societies, changing traditions, technology, and how insights from science make humans look like a small part of the universe rather than the centre of creation. Part of what makes his horror stories interesting is that they are horror stories about modernity and the modern world-view. From a modernist perspective these things are not evil in themselves.

His vision of a vast universe inhabited by incomprehensible alien entities far outside the range of current humanity does fit in with Dysonian SETI and transhumanism: we should not assume we are at the pinnacle of power and understanding, we can look for signs that there are far more advanced civilizations out there (and if there are, we had better figure out how to relate to that fact), and we can aspire to become something like them – which of course would have horrified Lovecraft to no end. Poor man.