Dissolving the Fermi Paradox

The Universe Today wrote an article about a paper by me, Toby and Eric about the Fermi Paradox. The preprint can be found on arXiv (see also our supplements: 1, 2, 3 and 4). Here is a quick popular overview/FAQ.


  • The Fermi question is not a paradox: it just looks like one if one is overconfident in how well we know the Drake equation parameters.
  • Our distribution model shows that there is a large probability of little-to-no alien life, even if we use the optimistic estimates of the existing literature (and even more if we use more defensible estimates).
  • The Fermi observation makes the most uncertain priors move strongly, reinforcing the rare life guess and an early great filter.
  • Getting even a little bit more information can update our belief state a lot!


So, do you claim we are alone in the universe?

No. We claim we could be alone, and the probability is non-negligible given what we know… even if we are very optimistic about alien intelligence.

What is the paper about?

The Fermi Paradox – or rather the Fermi Question – is “where are the aliens?” The universe is immense and old, and intelligent life ought to be able to spread or signal over vast distances, so if it has some modest probability of emerging we ought to see some signs of intelligence. Yet we do not. What is going on? The reason it is called a paradox is that there is a tension between one plausible theory ([lots of sites]x[some probability]=[aliens]) and an observation ([no aliens]).

Dissolving the Fermi paradox: there is not much tension

We argue that people have accidentally been misled into believing there is a problem by being overconfident about the probability.

N=R_*\cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L

The problem lies in how we estimate probabilities from a product of uncertain parameters (such as the Drake equation above). The typical way people informally do this with the equation is to admit that some guesses are very uncertain, give a “representative value” for each, and end up with some estimated number of alien civilisations in the galaxy – which is admitted to be uncertain, yet is presented as a single number.

Some authors have argued for very low probabilities, typically concluding that there is just one civilisation per galaxy (“the N\approx 1 school”). This may actually still be too much, since then we should expect signs of activity from nearly any galaxy. Others give slightly higher guesstimates and end up with many civilisations, typically as many as one expects civilisations to last (“the N\approx L school”). But the proper thing to do is to give a range of estimates, based on how uncertain we actually are, and get an output that shows the implied probability distribution of the number of alien civilisations.
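A toy Monte Carlo sketch of the difference. The ranges below are illustrative log-uniform stand-ins loosely modelled on the synthetic model described later in this post (with f_l simplified to a log-uniform range rather than the transformed log-normal rate the paper uses), so the numbers are not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

def log_uniform(lo, hi, size):
    """Sample log-uniformly between lo and hi."""
    return 10 ** rng.uniform(np.log10(lo), np.log10(hi), size)

# Illustrative ranges only -- not the paper's calibrated choices.
R_star = log_uniform(1, 100, n)      # stars formed per year
f_p    = log_uniform(0.1, 1, n)      # fraction of stars with planets
n_e    = log_uniform(0.1, 1, n)      # habitable planets per system
f_l    = log_uniform(1e-30, 1, n)    # fraction developing life (hugely uncertain)
f_i    = log_uniform(1e-3, 1, n)     # ...developing intelligence
f_c    = log_uniform(1e-2, 1, n)     # ...that communicate
L      = log_uniform(100, 1e10, n)   # civilisation lifetime in years

N = R_star * f_p * n_e * f_l * f_i * f_c * L

# A single product of "representative values" hides the spread entirely:
point_estimate = 10 * 0.5 * 0.5 * 0.5 * 0.5 * 0.5 * 1000  # ~312 civilisations

p_alone = (N < 1).mean()  # chance the galaxy holds no other civilisation
print(f"point estimate: {point_estimate:.0f}")
print(f"mean N: {N.mean():.3g}, median N: {np.median(N):.3g}")
print(f"P(N < 1): {p_alone:.2f}")
```

The point estimate suggests hundreds of civilisations; the distribution over the same ranges puts most of its probability mass on an essentially empty galaxy while keeping a heavy upper tail, which is the whole argument in miniature.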

If one combines either published estimates or ranges compatible with current scientific uncertainty, one gets a distribution that makes observing an empty sky unsurprising – yet is also compatible with us not being alone.

The reason is that even if one takes a pretty optimistic view (the published estimates are after all biased towards SETI optimism since the sceptics do not write as many papers on the topic) it is impossible to rule out a very sparsely inhabited universe, yet the mean value may be a pretty full galaxy. And current scientific uncertainties of the rates of life and intelligence emergence are more than enough to create a long tail of uncertainty that puts a fair credence on extremely low probability – probabilities much smaller than what one normally likes to state in papers. We get a model where there is 30% chance we are alone in the visible universe, 53% chance in the Milky Way… and yet the mean number is 27 million and the median about 1! (see figure below)

This is a statement about knowledge and priors, not a measurement: armchair astrobiology.

(A) A probability density function for N, the number of civilisations in the Milky Way, generated by Monte Carlo simulation based on the authors’ best estimates of our current uncertainty for each parameter. (B) The corresponding cumulative density function. (C) A cumulative density function for the distance to the nearest detectable civilisation.

The Great Filter: lack of obvious aliens is not strong evidence for our doom

After this result, we look at the Great Filter. We have reason to think at least one term in the Drake equation is small – either one of the early ones indicating how much life or intelligence emerges, or one of the last ones that indicate how long technological civilisations survive. The small term is “the Filter”. If the Filter is early, that means we are rare or unique but have a potentially unbounded future. If it is a late term, in our future, we are doomed – just like all the other civilisations whose remains would litter the universe. This is worrying. Nick Bostrom argued that we should hope we do not find any alien life.

Our paper gets a somewhat surprising result: updating our uncertainties in the light of no visible aliens reduces our estimate of the rate of life and intelligence emergence (the early filters) much more than the longevity factor (the future filter).

The reason is that if we exclude the cases where our galaxy is crammed with alien civilisations – something like the Star Wars galaxy where every planet has its own aliens – then the parameters of the Drake equation must be updated. All of them become smaller, since we now expect a more empty universe. But the early filter terms – life and intelligence emergence – shift downwards much more than the expected lifespan of civilisations, since they are vastly more uncertain (at least 100 orders of magnitude!) than the merely uncertain future lifespan (just 7 orders of magnitude!).
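A toy two-parameter sketch of why the widest prior absorbs most of the update. The 30 and 7 order-of-magnitude ranges below are illustrative stand-ins (the paper's early-filter uncertainty is far larger), and "no visible aliens" is crudely modelled as conditioning on N < 1:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2_000_000

# Toy model: N = F * L, where F lumps the highly uncertain "early" factors
# (life + intelligence emergence; 30 orders of magnitude here for clarity)
# and L is civilisation longevity (7 orders of magnitude, as in the text).
log_F = rng.uniform(-30, 0, n)   # log10 of the early-filter product
log_L = rng.uniform(2, 9, n)     # log10 of longevity in years
log_N = log_F + log_L

# The Fermi observation, crudely: condition on a near-empty galaxy.
empty = log_N < 0

shift_F = np.median(log_F) - np.median(log_F[empty])
shift_L = np.median(log_L) - np.median(log_L[empty])

print(f"early-filter median drops by {shift_F:.2f} decades")
print(f"longevity median drops by {shift_L:.2f} decades")
```

The early-filter median drops by several decades while longevity barely moves: when one factor's prior is vastly wider, conditioning on a small product is mostly "explained" by that factor being small.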

So this is good news: the stars are not foretelling our doom!

Note that a past great filter does not imply our safety.

The conclusion can be changed if we reduce the uncertainty of the past terms to less than 7 orders of magnitude, or if the involved probability distributions have weird shapes. (The mathematical proof is in supplement IV, which applies to uniform and normal distributions. It is possible to add tails and other features that break this effect – yet believing such distributions of uncertainty requires believing rather strange things.)

Isn’t this armchair astrobiology?

Yes. We are after all from the philosophy department.

The point of the paper is how to handle uncertainties, especially when you multiply them together or combine them in different ways. It is also about how to take lack of knowledge into account. Our point is that we need to make knowledge claims explicit – if you claim you know a parameter to have the value 0.1 you better show a confidence interval or an argument about why it must have exactly that value (and in the latter case, better take your own fallibility into account). Combining overconfident knowledge claims can produce biased results since they do not include the full uncertainty range: multiplying point estimates together produces a very different result than when looking at the full distribution.

All of this is epistemology and statistics rather than astrobiology or SETI proper. But SETI makes a great example since it is a field where people have been learning more and more about (some of) the factors.

The same approach as we used in this paper can be used in other fields. For example, when estimating risk chains in systems (like the risk of a pathogen escaping a biosafety lab), taking uncertainties in knowledge into account will sometimes produce important heavy tails that are irreducible even when you think the likely risk is acceptable. This is one reason risk estimates tend to be overconfident.


What kind of distributions are we talking about here? Surely we cannot speak of the probability of alien intelligence given the lack of data?

There is a classic debate in probability between frequentists, claiming probability is the frequency of events that we converge to when an experiment is repeated indefinitely often, and Bayesians, claiming probability represents states of knowledge that get updated when we get evidence. We are pretty Bayesian.

The distributions we are talking about are distributions of “credences”: how much you believe certain things. We start out with a prior credence based on current uncertainty, and then discuss how this gets updated if new evidence arrives. While the original prior beliefs may come from shaky guesses they have to be updated rigorously according to evidence, and typically this washes out the guesswork pretty quickly when there is actual data. However, even before getting data we can analyse how conclusions must look if different kinds of information arrives and updates our uncertainty; see supplement II for a bunch of scenarios like “what if we find alien ruins?”, “what if we find a dark biosphere on Earth?” or “what if we actually see aliens at some distance?”


Our use of the Drake equation assumes the terms are independent of each other. This is of course a result of how Drake sliced things into naturally independent factors. But there could be correlations between them. Häggström and Verendel showed that in worlds where the priors are strongly correlated, updates about the Great Filter can become non-intuitive.

We deal with this in supplement II, and see also this blog post. Basically, it doesn’t look like correlations are likely showstoppers.

You can’t resample guesses from the literature!

Sure can. As long as we agree that this is not so much a statement about what is actually true out there, but rather about the range of opinions among people who have studied the question a bit. If people give answers to a question in the range from ten to a hundred, that tells you something about their beliefs, at least.

What the resampling does is break up the possibly unconscious correlation between answers (“the N\approx 1 school” and “the N\approx L school” come to mind). We use the ranges of answers as a crude approximation to what people of good will think are reasonable numbers.
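A sketch of what independent resampling looks like, with made-up numbers. The per-parameter lists below are hypothetical placeholders, NOT the actual literature values (those are in Supplement III); the grid is also small enough to check the Monte Carlo answer exactly by enumeration:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
n = 100_000

# Hypothetical published point estimates per parameter -- NOT the paper's
# literature data (see Supplement III for the real values).
estimates = {
    "R_star": [1, 3, 7, 10, 20],
    "f_p":    [0.1, 0.5, 1.0],
    "n_e":    [0.2, 1.0, 2.0],
    "f_l":    [1e-30, 1e-5, 0.1, 0.5, 1.0],
    "f_i":    [1e-9, 0.01, 0.1, 0.5, 1.0],
    "f_c":    [0.1, 0.2, 1.0],
    "L":      [100, 1000, 1e4, 1e6, 1e8],
}

# Drawing each parameter independently breaks any correlation between a
# single author's uniformly optimistic (or pessimistic) choices.
N = np.ones(n)
for values in estimates.values():
    N *= rng.choice(values, size=n)

p_alone_mc = (N < 1).mean()

# The grid is small enough to enumerate every combination exactly:
combos = [np.prod(c) for c in product(*estimates.values())]
p_alone_exact = np.mean([c < 1 for c in combos])

print(f"P(N < 1): Monte Carlo {p_alone_mc:.3f}, exact {p_alone_exact:.3f}")
```

Even with every individual estimate taken from a (hypothetical) published paper, mixing them independently yields a distribution with substantial weight on an empty galaxy alongside very crowded outcomes.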

You may say “yeah, but nobody is really an expert on these things anyway”. We think that is wrong. People have improved their estimates as new data arrived, there are reasons for the estimates, and sometimes vigorous debate about them. We warmly recommend Vakoch, D. A., & Dowd, M. F. (eds.), The Drake Equation (Cambridge, UK: Cambridge University Press, 2015) for a historical overview. But at the same time these estimates are wildly uncertain, and this is what we really care about. Good experts qualify the certainty of their predictions.

But doesn’t resampling from admittedly overconfident literature constitute “garbage in, garbage out”?

Were we trying to get the true uncertainties (or even more hubristically, the true values) this would not work: we have after all good reasons to suspect these ranges are both biased and overconfidently narrow. But our point is not that the literature is right, but that even if one uses these overly narrow and likely overly optimistic estimates as estimates of actual uncertainty, the resulting broad distribution still leads to our conclusions. Using the literature is the most conservative case.

Note that we do not base our later estimates on the literature estimate but our own estimates of scientific uncertainty. If they are GIGO it is at least our own garbage, not recycled garbage. (This reading mistake seems to have been made on Starts With a Bang).

What did the literature resampling show?

An overview can be found in Supplement III. The most important point is that even estimates of super-uncertain things like the probability of life lie in a surprisingly narrow range of values, far narrower than is scientifically defensible. For example, f_l has five estimates ranging from 10^{-30} to 10^{-5}, and all the rest are in the range 10^{-3} to 1. f_i is even worse, with one microscopic estimate and nearly all the rest between one in a thousand and one.

(A) The uncertainty in the research community represented via a synthetic probability density function over N — the expected number of detectable civilizations in our galaxy. The curve is generated by random sampling from the literature estimates of each parameter. Direct literature estimates of the number of detectable civilisations are marked with red rings. (B) The corresponding synthetic cumulative density function. (C) A cumulative density function for the distance to the nearest detectable civilisation, estimated via a mixture model of the nearest neighbour functions.

It also shows that estimates that are likely biased towards optimism (because of publication bias) can be used to get a credence distribution that dissolves the paradox once they are interpreted as ranges. See the above figure, where we get about 30% chance of being alone in the Milky Way and 8% chance of being alone in the visible universe… but a mean corresponding to 27 million civilisations in the galaxy and a median of about a hundred.

There are interesting patterns in the data. When plotting the expected number of civilisations in the Milky Way based on estimates from different eras the number goes down with time: the community has clearly gradually become more pessimistic. There are some very pessimistic estimates, but even removing them doesn’t change the overall structure.

What are our assumed uncertainties?

A key point in the paper is trying to quantify our uncertainties somewhat rigorously. Here is a quick overview of where I think we are, with the values we used in our synthetic model:

  • R_*: the star formation rate in the Milky Way per year is fairly well constrained. The actual current uncertainty is likely less than 1 order of magnitude (it can vary over 5 orders of magnitude in other galaxies). In our synthetic model we put this parameter as log-uniform from 1 to 100.
  • f_p: the fraction of systems with planets is increasingly clear ≈1. We used log-uniform from 0.1 to 1.
  • n_e: number of Earth-like planets in systems with planets.
    • This ranges from rare earth arguments (<10^{-12}) to >1. We used log-uniform from 0.1 to 1 since recent arguments have shifted away from rare Earths, but we checked that including the rare-Earth range did not change the conclusions much.
  • f_l: Fraction of Earthlike planets with life.
    • This is very uncertain; see below for our arguments that the uncertainty ranges over perhaps 100 orders of magnitude.
    • There is an absolute lower limit due to ergodic repetition: f_l >10^{-10^{115}} – in an infinite universe there will eventually be randomly generated copies of Earth and even the entire galaxy (at huge distances from each other). Observer selection effects make using the earliness of life on Earth problematic.
    • We used a log-normal rate of abiogenesis that was transformed to a fraction distribution.
  • f_i: Fraction of lifebearing planets with intelligence/complex life.
    • This is very uncertain; see below for our arguments that the uncertainty ranges over perhaps 100 orders of magnitude.
    • One could argue there have been 5 billion species so far and only 1 intelligent one, so we know f_i>2\cdot 10^{-10}. But one could argue that we should count assemblages of 10 million species, which gives a fraction 1/500 per assemblage. Observer selection effects may be distorting this kind of argument.
    • We could have used a log-normal rate of complex life emergence that was transformed to a fraction distribution or a broad log-linear distribution. Since this would have made many graphs hard to interpret we used log-uniform from 0.001 to 1, not because we think this likely but just as a simple illustration (the effect of the full uncertainty is shown in Supplement II).
  • f_c: Fraction of time when it is communicating.
    • Very uncertain; humanity is 0.000615 so far. We used log-uniform from 0.01 to 1.
  • L: Average lifespan of a civilisation.
    • Fairly uncertain: perhaps 50 < L < 10^{9}–10^{10} years (the upper limit comes from the applicability of the Drake equation: it assumes the galaxy is in a steady state, and if civilisations are long-lived enough they will still be accumulating, since the universe is too young).
    • We used log-uniform from 100 to 10,000,000,000.

Note that this is to some degree a caricature of current knowledge, rather than an attempt to represent it perfectly. Fortunately our argument and conclusions are pretty insensitive to the details – it is the vast ranges of uncertainty that are doing the heavy lifting.


Why do we think the fraction of planets with life parameters could have a huge range?

First, instead of thinking in terms of the fraction of planets having life, consider a rate of life formation in suitable environments: what is the induced probability distribution? The emergence is a physical/chemical transition in some kind of primordial soup, and transition events occur in this medium at some rate per unit volume: f_l\approx \lambda V t, where V is the available volume and t is the available time. High rates would imply that almost all suitable planets originate life, while low rates would imply that almost none do.

The uncertainty regarding the length of time when it is possible is at least 3 orders of magnitude (10^7-10^{10} years).

The uncertainty regarding volumes spans 20+ orders of magnitude – from entire oceans to brine pockets on ice floes.

Uncertainty regarding transition rates can span 100+ orders of magnitude! The reason is that this might involve combinatoric flukes (you need to get a fairly longish sequence of parts into the right order to get the right kind of replicator), or that it is like the protein folding problem, where Levinthal’s paradox shows that it would take literally astronomical time for entire oceans of copies of a protein to randomly find the correctly folded configuration (actual biological proteins “cheat” by being evolved to fold neatly and fast). Even chemical reaction rates span 100 orders of magnitude. On the other hand, spontaneous generation could conceivably be common and fast! So we should conclude that \lambda has an uncertainty range of at least 100 orders of magnitude.
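Putting the three uncertainties together, one can sketch the induced distribution of f_l. The order-of-magnitude ranges come from the text (rate ~100, volume ~20, time ~3); the centre points and the log-uniform shapes are my own illustrative choices, not the paper's calibrated model:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

# Ranges (in decades) from the text; centre points are illustrative.
log_rate = rng.uniform(-100, 0, n)     # transitions per litre-second (log10)
log_V    = rng.uniform(0, 20, n)       # litres of suitable medium (log10)
log_t    = rng.uniform(14.5, 17.5, n)  # seconds available (10^7..10^10 yr)

expected_events = 10 ** (log_rate + log_V + log_t)  # lambda * V * t
f_l = 1 - np.exp(-expected_events)  # Poisson chance of at least one origin

p_certain = (f_l > 0.99).mean()   # life essentially guaranteed
p_rare    = (f_l < 1e-20).mean()  # life exceedingly rare
print(f"P(f_l > 0.99)  = {p_certain:.2f}")
print(f"P(f_l < 1e-20) = {p_rare:.2f}")
```

The induced distribution is strongly bimodal: a big lump where nearly every suitable planet has life, a big lump where essentially none do, and little in between. This is why huge rate uncertainty translates into fair credence on both a crowded and an empty universe.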

Actual abiogenesis will involve several steps. Some are easy, like generating simple organic compounds (plentiful in asteroids, comets and Miller-Urey experiments). Some are likely tough. People often overlook that even getting proteins and nucleic acids to form in a watery environment is somewhat of a mystery, since these chains tend to hydrolyze; the standard explanation is to look for environments with a wet-dry cycle that allows complexity to grow. But this means V is much smaller than an ocean.

That we have tremendous uncertainty about abiogenesis does not mean we do not know anything. We know a lot. But at present we have no good scientific reasons to believe we know the rate of life formation per liter-second. That will hopefully change.

Don’t creationists argue stuff like this?

There is a fair number of examples of creationists arguing that the origin of life must be super-unlikely and hence we must believe in their particular god.

The problem with this kind of argument is that it presupposes that there is only one planet, and somehow we got a one-in-a-zillion chance on that one. That would be pretty unlikely. But the reality is that there are a zillion planets, so even if there is a one-in-a-zillion chance for each of them we should expect to see life somewhere… especially since being a living observer is a precondition for “seeing life”! Observer selection effects really matter.

We are also not arguing that life has to be super-unlikely. In the paper our distribution of life emergence rate actually makes it nearly universal 50% of the time – it includes the possibility that life will spontaneously emerge in any primordial soup puddle left alone for a few minutes. This is a possibility I doubt anybody believes in, but it could be that would-be new life is emerging right under our noses all the time, only to be outcompeted by the advanced life that already exists.

Creationists make a strong claim that they know f_l \ll 1; this is not really supported by what we know. But f_l \ll 1 is totally within possibility.

Complex life

Even if you have life, it might not be particularly good at evolving. The reasoning is that it needs to have a genetic encoding system that is both rigid enough to function efficiently and fluid enough to allow evolutionary exploration.

All life on Earth shares almost exactly the same genetic systems, showing that only rare and minor changes have occurred in \approx 10^{40} cell divisions. That is tremendously stable as a system. Nonetheless, it is fairly commonly believed that other genetic systems preceded the modern form. The transition to the modern form required major changes (think of upgrading an old computer from DOS to Windows… or worse, from CP/M to DOS!). It would be unsurprising if the rate was < 1 per 10^{100} cell divisions given the stability of our current genetic system – but of course, the previous system might have been super-easy to upgrade.

Modern genetics required >1/5 of the age of the universe to evolve intelligence. A genetic system like the one that preceded ours might both be stable over a googol (10^{100}) cell divisions and evolve more slowly by a factor of 10, and run out the clock. Hence some genetic systems may be incapable of ever evolving intelligence.

This is related to a point made by Brandon Carter much earlier: the timescales of getting life, evolving intelligence, and how long biospheres last are independent and could be tremendously different – that life emerged early on Earth may have been a fluke due to the extreme difficulty of also getting intelligence within this narrow interval (on all the more likely worlds there are no observers to notice). If there are more difficult transitions, you get an even stronger observer selection effect.

Evolution goes down branches without looking ahead, and we can imagine that it could have an easier time finding inflexible coding systems (“B life”) than our own nice one (“A life”). If the rate of discovering B-life is \lambda_B and the rate of discovering capable A-life is \lambda_A, then the fraction of A-life in the universe is \lambda_A/(\lambda_A+\lambda_B) \approx \lambda_A/\lambda_B when \lambda_A \ll \lambda_B – and the rates can differ by many orders of magnitude, producing a life-rich but evolution/intelligence-poor universe. Multiple-step models add integer exponents to the rates: these then multiply the order-of-magnitude differences.

So we have good reasons to think there could be a hundred orders of magnitude uncertainty on the intelligence parameter, even without trying to say something about evolution of nervous systems.

How much can we rule out aliens?

Humanity has not scanned that many stars, so we have only checked a tiny part of the galaxy – and could have missed the aliens even if we had looked at the right spot. Still, we can model how this weak data updates our beliefs (see Supplement II).

The strongest argument against aliens is the Tipler-Hart argument that settling the Milky Way, even when expanding at low speed, will only take a small fraction of its age. And once a civilisation is everywhere it is hard for it to go extinct everywhere – it will tend to persist even if local pieces crash. Since we do not seem to be in a galaxy paved over by an alien supercivilisation, we have a very strong argument for a low rate of intelligence emergence. Yes, even if 99% of civilisations stay home, or even if we are in an alien zoo, you still get a massive update against a really settled galaxy. In our model the probability of less than one civilisation per galaxy went from 52% to 99.6% if one includes the basic settlement argument.

The G-hat survey of galaxies, looking for signs of K3 civilisations, did not find any. Again, maybe we missed something or most civilisations don’t want to re-engineer galaxies, but if we assume about half of them want to and have 1% chance of succeeding we get an update from 52% chance of less than one civilisation per galaxy to 66%.

Using models of us having looked at about 1,000 stars, or of there being no civilisation within 18 pc, gives a milder update, from 52% to 53% and 57% respectively. These just rule out super-densely inhabited scenarios.
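The generic shape of such updates can be sketched with importance weighting: keep the prior samples of N and weight each by the probability that a search of the given sensitivity would have seen nothing. The prior and the per-civilisation detection probabilities below are hypothetical illustrations, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000

# Illustrative broad log-uniform prior over civilisations per galaxy --
# the paper's actual prior comes from the full synthetic model.
N = 10 ** rng.uniform(-15, 5, n)

def posterior_p_alone(q):
    """P(N < 1) after seeing nothing, if each civilisation is independently
    detectable with probability q, so an empty sky has likelihood (1-q)^N."""
    log_w = N * np.log1p(-q)        # log-likelihood, stable for tiny q
    w = np.exp(log_w - log_w.max())
    return np.average(N < 1, weights=w)

print(f"prior P(N<1):              {(N < 1).mean():.2f}")
print(f"weak search (q=1e-3):      {posterior_p_alone(1e-3):.2f}")
print(f"settlement-style (q=0.99): {posterior_p_alone(0.99):.2f}")
```

A weak search only trims the densest scenarios, while a settlement-style argument (where almost any civilisation would be visible) pushes nearly all the posterior mass into the empty-galaxy region, mirroring the 52% → 99.6% jump described above.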

So what? What is the use of this?

People like to invent explanations for the Fermi paradox that all would have huge implications for humanity if they were true – maybe we are in a cosmic zoo, maybe there are interstellar killing machines out there, maybe singularity is inevitable, maybe we are the first civilisation ever, maybe intelligence is a passing stage, maybe the aliens are sleeping… But if you are serious about thinking about the future of humanity you want to be rigorous about this. This paper shows that current uncertainties actually force us to be very humble about these possible explanations – we can’t draw strong conclusions from the empty sky yet.

But uncertainty can be reduced! We can learn more, and that will change our knowledge.

From a SETI perspective, this doesn’t say that SETI is unimportant or doomed to failure, but rather that if we ever see even the slightest hint of intelligence out there many parameters will move strongly. Including the all-important L.

From an astrobiology perspective, we hope we have pointed at some annoyingly uncertain factors, and that this paper can get more people to work on reducing the uncertainty. Most astrobiologists we have talked with are aware of the uncertainty but do not see the weird knock-on effects from it. Especially figuring out how we got our fairly good coding system, and what the competing options are, seems very promising.

Even while we are still uncertain, we can update our plans in the light of this. For example, in my tech report about settling the universe fast I pointed out that if one is uncertain about how much competition there might be for the universe, one can use one’s probability estimates to decide on the range to aim for.

Uncertainty matters

Perhaps the most useful insight is that uncertainty matters and we should learn to embrace it carefully rather than assume that apparently specific numbers are better.

Perhaps never in the history of science has an equation been devised yielding values differing by eight orders of magnitude… each scientist seems to bring his own prejudices and assumptions to the problem.
History of Astronomy: An Encyclopedia, ed. by John Lankford, s.v. “SETI,” by Steven J. Dick, p. 458.

When Dick complained about the wide range of results from the Drake equation he likely felt it was too uncertain to give any useful result. But an eight order of magnitude spread is in this case just a sign of downplaying our uncertainty and overestimating our knowledge! Things get much better when we look at what we know and don’t know, and figure out the implications of both.

Jill Tarter said the Drake equation was “a wonderful way to organize our ignorance”, which we think is closer to the truth than demanding a single number as an answer.

Ah, but I already knew this!

We have encountered claims that “nobody” really is naive about using the Drake equation. Or at least not any “real” SETI and astrobiology people. Strangely enough people never seem to make this common knowledge visible, and a fair number of papers make very confident statements about “minimum” values for life probabilities that we think are far, far above the actual scientific support.

Sometimes we need to point out the obvious explicitly.

[Edit 2018-06-30: added the GIGO section]

The Aestivation hypothesis: popular outline and FAQ

Anders Sandberg & Milan Ćirković

Since putting up a preprint for our paper “That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi’s paradox” (Journal of the British Interplanetary Society, in press) we have heard some comments and confusion that suggest to us that it would be useful to try to outline and clarify what our idea is, what we think about it, and some of the implications.


The super-short version of the paper

Maybe we are not seeing alien civilizations because they are all rationally “sleeping” in the current early cosmological era, waiting for a remote future when it is more favourable to exploit the resources of the universe. We show that given current observations we can rule out a big chunk of possibilities like this, but not all.

A bit more unpacked explanation

Information processing requires physical resources: not just computers or brains, but energy to run them. There is a thermodynamic cost to information processing that is temperature dependent: in principle, processing becomes 10 times more energy-efficient if your computer is 10 times colder (measured in kelvins). Right now the cosmic background radiation makes nearly everything in the universe hotter than 3 kelvin, but as the universe expands this background temperature declines exponentially. So if you want to do as much information processing as possible with the energy you have, it makes sense to wait. It becomes exponentially better. Eventually the background temperature bottoms out because of horizon radiation in a few trillion years: at this point it no longer makes sense to wait with the computation.
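The temperature scaling comes from Landauer's principle: erasing a bit costs at least k_B T ln 2 of energy. A back-of-envelope sketch, where the final temperature is an assumed horizon-radiation floor chosen to match the ~10^30 gain discussed in the paper:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def bits_per_joule(T):
    """Landauer limit: erasing one bit costs at least k_B * T * ln 2."""
    return 1.0 / (k_B * T * math.log(2))

T_now   = 3.0      # K, roughly the current cosmic background temperature
T_final = 3e-30    # K, assumed far-future floor set by horizon radiation

gain = bits_per_joule(T_final) / bits_per_joule(T_now)
print(f"efficiency gain from waiting: {gain:.1e}")  # ~1e30
```

Since the limit is linear in T, the gain is simply the temperature ratio: a patient civilisation gets roughly 10^30 times more bit-operations out of the same joule by waiting for the cold era.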

Hence, an advanced civilization may have explored a big chunk of the universe, done what is doable with existing nature, and now mostly have internal “cultural” things to do. These things can be regarded as information processing. If they want to maximize processing they should not do it today but wait until the cold future when they will get tremendously more done (10^{30} times more!). They should hence aestivate, leaving their domain protected by some automation until they wake up.

If this is correct, there might be old and powerful civilizations around that are hard to observe, not because they are deliberately hiding but because they are inactive for the time being.

However, were this hypothesis true, they would not want to lose their stuff. We should expect to see fewer processes that reduce resources that could be useful in the far future. In the paper we look at processes that look like they might waste resources: stars converting mass into energy that is lost, stars imploding into black holes, galactic winds losing gas into intergalactic space, galaxy collisions, and galaxy clusters getting separated by the expansion of the universe. Current observations do not seem to indicate anything preventing these processes (and most interventions would be very visible).

Hence, either:

  1. the answer to the Fermi question “where are they?!” is something else (like there being no aliens),
  2. advanced civilizations aestivate but do so with only modest hoards of resources rather than entire superclusters, or
  3. they are mostly interested in spreading far and wide, since this gives a lot of stuff with a much smaller effort than retaining it.

Necessary assumptions

The aestivation hypothesis makes the following assumptions:

  1. There are civilizations that mature much earlier than humanity. (not too implausible, given that Earth is somewhat late compared to other planets)
  2. These civilizations can expand over sizeable volumes, gaining power over their contents. (we have argued that this is doable)
  3. These civilizations have solved their coordination problems. (otherwise it would be hard to jointly aestivate; assumption likelihood hard to judge)
  4. A civilization can retain control over its volume against other civilizations. (otherwise it would need to actively defend its turf in the present era and cannot aestivate; likelihood hard to judge)
  5. The fraction of mature civilizations that aestivate is non-zero. (if it is rational at least some will try)
  6. Aestivation is largely invisible. (seems likely, since there would be nearly no energy release)

Have you solved the Fermi question?

We are not claiming we now know the answer to the Fermi question. Rather, we have a way of ruling out some possibilities, and a few new things worth looking for (like galaxies with inhibited heavy star formation).

Do you really believe in it?

I (Anders) personally think the likeliest reason we are not seeing aliens is not that they are aestivating, but just that they do not exist or are very far away.

We have an upcoming paper giving some reasons for this belief. The short of it is that we are very uncertain about the probability of life and intelligence given the current state of scientific knowledge. They could be exceedingly low, and this means we have to assign a fairly high credence to the empty universe hypothesis. If that hypothesis is not true, then aestivation is a pretty plausible answer in my personal opinion.

Why write about a hypothesis you do not think is the most likely one? Because we need to cover as much of possibility space as possible, and the aestivation hypothesis is neatly suggested by considerations of the thermodynamics of computation and physical eschatology. We have been looking at other unlikely Fermi hypotheses, like the berserker hypothesis, to see if we can give good constraints on them (in that case, our existence plus some ecological instability problems make berserkers unlikely).

What is the point?

Understanding the potential and limits of intelligence in the universe tells us things about our own chances and potential future.

At the very least, this paper shows what a future advanced human-derived civilization may try to achieve, and some of the ultimate limits on far-future information processing. It gives some new numbers to feed into Nick Bostrom’s astronomical waste argument for working very hard on reducing existential risk in the present: the potential future is huge.

In regards to alien civilizations, the paper maps a part of possibility space, showing what is required for this Fermi paradox explanation to work as an explanation. It helps cut down on the possibilities a fair bit.

What about the Great Filter?

We know there has to be at least one unlikely step between non-living matter and easily observable technological civilizations (“the Great Filter”), otherwise the sky would be full of them. If it is an early filter (life or intelligence is rare) we may be fairly alone but our future is open; were the filter a later step, we should expect to be doomed.

The aestivation hypothesis doesn’t tell us much about the filter. It undercuts the quiet sky as evidence for the absence of aliens, so without knowing whether it is true we do not learn much from the silence. The lack of megascale engineering is evidence against certain kinds of alien goals and activities, but rather weak evidence.

Meaning of life

Depending on what you are trying to achieve, different long-term strategies make sense. This is another way SETI may tell us something interesting about the Big Questions by showing what advanced species are doing (or not):

If the ultimate value you aim for is local such as having as many happy minds as possible, then you want to spread very far and wide, even though the galaxy clusters you have settled will eventually drift apart and be forever separated. The total value doesn’t depend on all those happy minds talking to each other. Here the total amount of value is presumably proportional to the amount of stuff you have gathered times how long it can produce valuable thoughts. Aestivation makes sense, and you want to spread far and wide before doing it.

If the ultimate value you aim for is nonlocal, such as having your civilization produce the deepest possible philosophy, then all parts need to stay in touch with each other. This means that expanding outside a gravitationally bound supercluster is pointless: your expansion will halt at this point. We can be fairly certain there are no advanced civilizations trying to scrape together larger superclusters since it would be very visible.

If the ultimate value you aim for is finite, then at some point you may be done: you have made the perfect artwork or played all the possible chess games. Such a civilization only needs resources enough to achieve the goal, and then presumably will shut down. If the goal is small it might do this without aestivating, while if it is large it may aestivate with a finite hoard.

If the ultimate goal is modest, like enjoying your planetary utopia, then you will not affect the large-scale universe (although launching intergalactic colonization may still be good for security, leading to a nonlocal instrumental goal). Modest civilizations do not affect the overall fate of the universe.

Can we test it?

Yes! The obvious way is to carefully look for odd processes keeping the universe from losing potentially useful raw materials. The suggestions in the paper give some ideas, but there are doubtless other things to look for.

Also, aestivators would protect themselves from late-evolving species that could steal their stuff. If we were to start building self-replicating von Neumann probes in the future, any aestivators around had better stop us. This hypothesis test may of course be rather dangerous…

Isn’t there more to life than information processing?

Information is “a difference that makes a difference”: information processing is just going from one distinguishable state to another in a meaningful way. This covers not just computing with numbers and text, but having one brain state follow another, doing economic transactions, and creating art. Falling in love means that a mind goes from one state to another in a very complex way. Maybe the important subjective aspect is something very different from brain states, but unless you think it is possible to fall in love without the brain changing state, there will be an information processing element to it. And that information processing is bound by the laws of thermodynamics.

Some theories of value place importance on how or that something is done, rather than on the consequences or intentions (which can be viewed as information states): maybe a perfect Zen action holds value on its own. If the start and end states are the same, then an infinite number of such actions could be done and an equal amount of value achieved – yet there is no way of telling whether they ever happened, since there will be no memory of them occurring.

In short, information processing is something we instrumentally need for the mental or practical activities that truly matter.


Like hibernating, but through summer (Latin aestus = heat; to aestivate = to spend the summer dormant). Hibernation (Latin hibernus = wintry) is more common, but since this is about avoiding heat we chose the slightly rarer term.

Can’t you put your computer in a fridge?

Yes, it is possible to cool below 3 K. But you need to do work to achieve it, spending precious energy on the cooling. If you want your computing done *now* and do not care about the total amount of computing, this is fine. But if you want as much computing as possible, then fridges are going to waste some of your energy.
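A sketch of why the fridge costs you, using the ideal (Carnot) refrigerator bound: pumping heat Q out of a reservoir at T_cold into the 3 K background costs work of at least Q(T_hot/T_cold - 1). The operating temperatures below are purely illustrative:

```python
def carnot_work(Q_cold, T_cold, T_hot):
    """Minimum work (J) for an ideal fridge to pump heat Q_cold (J)
    from a reservoir at T_cold (K) into surroundings at T_hot (K)."""
    return Q_cold * (T_hot / T_cold - 1.0)

T_background = 3.0  # K, cosmic background
Q = 1.0             # joule of waste heat to remove from the computer

for T_chip in (1.0, 0.3, 0.03):  # K, illustrative operating temperatures
    W = carnot_work(Q, T_chip, T_background)
    print(f"running at {T_chip} K: at least {W:.0f} J of work per J of heat")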

There are some cool (sorry) possibilities in using very large black holes as heat sinks, since their temperature would be lower than the background radiation. But this will only work for a few hundred billion years; after that the background will be cooler than the holes.

Do computation costs have to be temperature dependent?

The short answer is no, but we do not think this matters for our conclusion.

The irreducible energy cost of computation is due to the Landauer limit (this limit or principle has also been ascribed to Brillouin, Shannon, von Neumann and many others): to erase one bit of information you need to pay an energy cost equal to kT\ln(2) or more. Otherwise you could cheat the second law of thermodynamics.

However, logically reversible computation can avoid paying this cost by never erasing information. The problem is of course that eventually memory runs out, but Bennett showed that one can then “un-compute” the computation by running it backwards, removing the garbage. The catch is that reversible computation needs to run very close to the average energy of the system (taking a long time), and that error correction is irreversible and temperature dependent. The same is true for quantum computation.

If one has a pool of negentropy, that is, something ordered that can be randomized, then one can “pay” for bit erasure using this pool until it runs out. This is potentially temperature independent! One can imagine having access to a huge memory full of zero bits. By swapping your garbage bit for a zero, you can potentially run computations without paying an energy cost (if the swapping is free): it has essentially zero temperature.

If there are natural negentropy pools aestivation is pointless: advanced civilizations would be dumping their entropy there in the present. But as far as we know, there are no such pools. We can make them by ordering matter or energy, but that has a work cost that depends on temperature (or using yet another pool of negentropy).

Space-time as a resource?

Maybe the flatness of space-time is the ultimate negentropy pool, and by wrinkling it up we can get rid of entropy: this is in a sense how the universe has become so complex thanks to matter lumping together. The total entropy due to black holes dwarfs the entropy of normal matter by several orders of magnitude.

Were space-time lumpiness a useful resource we should expect advanced civilizations to dump matter into black holes on a vast scale; this does not seem to be going on.

Lovecraft, wasn’t he, you know… a bit racist?

Yup. Very racist. And fearful of essentially everything in the modern world: globalisation, large societies, changing traditions, technology, and how insights from science make humans look like a small part of the universe rather than the centre of creation. Part of what makes his horror stories interesting is that they are horror stories about modernity and the modern world-view. From a modernist perspective these things are not evil in themselves.

His vision of a vast universe inhabited by incomprehensible alien entities far outside the range of current humanity does fit in with Dysonian SETI and transhumanism: we should not assume we are at the pinnacle of power and understanding, we can look for signs that there are far more advanced civilizations out there (and if there are, we had better figure out how to relate to this fact), and we can aspire to become something like them – which of course would have horrified Lovecraft to no end. Poor man.

Likely not even a microDyson

XIX: The Dyson Sun

Right now KIC 8462852 is really hot, and not just because it is an F3 V/IV type star: the light curve, as measured by Kepler, has irregular dips that look as if something (or rather, several somethings) is obscuring the star. The shapes of the dips are odd. The system is too old and IR-clean to have a remaining protoplanetary disk, dust clumps would coalesce, and the aftermath of a giant planet impact is very unlikely (and hard to fit with the aperiodicity); maybe there is a storm of comets due to a recent stellar encounter, but comets are not very good at obscuring stars. So a lot of people on the net are quietly or not so quietly thinking that just maybe this is a Dyson sphere under construction.

I doubt it.

My basic argument is this: if a civilization builds a Dyson sphere, it is unlikely to remain partial for a long period of time. Just as planetary collisions are so rare that we should not expect to see any in the Kepler field, the time it takes to make a Dyson sphere is very short compared to a star's lifetime: seeing one during construction is very unlikely.

Fast enshrouding

In my and Stuart Armstrong’s paper “Eternity in Six Hours” we calculated that disassembling Mercury to make a partial Dyson shell could be done in 31 years. We did not try to push things here: our aim was to show that, using a small fraction of the resources in the solar system, it is possible to harness enough energy to launch a massive space colonization effort (literally reaching every reachable galaxy, eventually each solar system). Using energy from the already built solar collectors, more material is mined and launched, producing an exponential feedback loop. This was originally discussed by Robert Bradbury. The time to disassemble the terrestrial planets is not much longer than for Mercury, while the gas giants would take a few centuries.

In the history of an F-type star, 1,000 years is not much. Given the estimated mass of KIC 8462852 as 1.46 solar masses, it will have a main sequence lifespan of 4.1 billion years. The chance of seeing it while being enshrouded is about one in 4.1 million. This is the same problem as for the giant impact theory.
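The back-of-envelope behind that probability; the 1,000-year construction window is the assumption being tested:

```python
construction_time = 1_000  # years to enshroud the star (assumed window)
main_sequence = 4.1e9      # years, estimated lifespan of KIC 8462852
p_seen = construction_time / main_sequence
print(f"chance of catching the construction: 1 in {1 / p_seen:,.0f}")
```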

A ruin?

An abandoned Dyson shell would likely start clumping together; this might at first sound like a promising – if depressing – explanation of the observation. But the clumping timescale is likely shorter than planetary formation timescales of 10^5–10^6 years, since the pieces are in nearly identical orbits, so the probability problem remains.

But it is indeed more likely, by several orders of magnitude, to see the decay of a shell than its construction – just as normal ruins hang around far longer than the time it took to build the original building.

Laid-back aliens?

Maybe the aliens are not pushing things? Obviously one can build a Dyson shell very slowly – in a sense we are doing it (and disassembling Earth to a tiny extent!) by launching satellites one by one. So if an alien civilization wanted to grow at a leisurely rate or just needed a bit of Dyson shell they could of course do it.

However, if you need something like 2.87\cdot 10^{19} Watt (a 100,000 km collector at 1 AU around the star) your demands are not modest. Freeman Dyson originally proposed the concept based on the observation that human energy needs were growing exponentially, with the shell as the logical endpoint. Even at a 1% growth rate a civilization quickly – in a few millennia – needs most of the star’s energy.
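A sketch of the “few millennia” claim. The current human power use and the star's luminosity below are rough assumed values, not figures from the paper:

```python
import math

P_now = 2e13        # W, rough current human power use (assumed)
L_sun = 3.828e26    # W, solar luminosity
L_star = 5 * L_sun  # W, ballpark luminosity of an F-type star (assumed)
growth = 0.01       # 1% per year

# years until exponential growth reaches the star's full output
years = math.log(L_star / P_now) / math.log(1 + growth)
print(f"~{years:.0f} years of 1% growth to demand the whole star")
```

With these inputs the answer is on the order of three thousand years – a blink compared to the star's billions-of-years lifespan.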

In order to get a reasonably high probability of seeing an incomplete shell we need to assume growth rates that are exceedingly small (on the order of less than a millionth per year). While it is not impossible, given how the trend seems to be towards more intense energy use in many systems and that entities with higher growth rates will tend to dominate a population, it seems rather unlikely. Of course, one can argue that we currently can more easily detect the rare laid-back civilizations than the ones that aggressively enshrouded their stars, but Dyson spheres do look pretty rare.

Other uses?

Dyson shells are not the only megastructures that could cause intriguing transits.

C. R. McInnes has a suite of fun papers looking at various kinds of light-related megastructures. One can sort asteroid material using light pressure, engineer climate, adjust planetary orbits, and of course travel using solar sails. Most of these are smallish compared to stars (and in many cases dust clouds), but they show some of the utility of obscuring objects.

Duncan Forgan has a paper on detecting stellar engines (Shkadov thrusters) using light curves; unfortunately the calculated curves do not fit KIC 8462852 as far as I can tell.

Luc Arnold analysed the light curves produced by various shapes of artificial objects. He suggested that one could make a weirdly shaped mask for signalling one’s presence using transits. In principle one could make nearly any shape, but for signalling, something unusual yet simple enough to be clearly artificial would make most sense: I doubt the KIC transits fit this.

More research is needed (duh)

In the end, we need more data. I suspect we will find that it is yet another odd natural phenomenon or coincidence. But it makes sense to watch, just in case.

Were we to learn that there is (or was) a technological civilization acting on a grand scale it would be immensely reassuring: we would know intelligent life could survive for at least some sizeable time. This is the opposite side of the Great Filter argument for why we should hope not to see any extraterrestrial life: life without intelligence is evidence for intelligence either being rare or transient, but somewhat non-transient intelligence in our backyard (just 1,500 light-years away!) is evidence that it is neither rare nor transient. Which is good news, unless we fancy ourselves as unique and burdened by being stewards of the entire reachable universe.

But I think we will instead learn that the ordinary processes of astrophysics can produce weird transit curves, perhaps due to weird objects (remember when we thought hot Jupiters were exotic?). The universe is full of strange things, which makes me happy I live in it.

[An edited version of this post can be found at The Conversation: What are the odds of an alien megastructure blocking light from a distant star? ]

ET, phone for you!

I have been in the media recently, since I became the accidental spokesperson for UKSRN at the British Science Festival in Bradford:

BBC / The Telegraph / The Guardian / Iol SciTech / The Irish Times / Bt.com

(As well as BBC 5 Live, BBC Newcastle and BBC Berkshire… so my comments also get sent to space as a side effect).

My main message is that we are going to send in something for the Breakthrough Message initiative: a competition to write a good message to be sent to aliens. The total pot is a million dollars (it seems that was misunderstood in some reporting: it is likely not going to be one huge prize, but rather several smaller ones). The message will not actually be sent to the stars: this is an intellectual exercise rather than a practical one.

(I also had some comments about the link between Langsec and SETI messages – computer security is actually a bit of an issue for fun reasons. Watch this space.)

Should we?

One interesting issue is whether there are any good reasons not to signal. Stephen Hawking famously argued against it (though he is a strong advocate of SETI), as does David Brin. A recent declaration argues that we should not signal unless there is widespread agreement about it. Yet others have made the case that we should signal, perhaps a bit cautiously. In fact, an eminent astronomer just told me he could not take concerns about sending a message seriously.

Some of the arguments are (in no particular order):

Pro:

  • SETI will not work if nobody speaks.
  • ETI is likely to be far more advanced than us and could help us.
  • Knowing if there is intelligence out there is important.
  • Hard to prevent transmissions.
  • Radio transmissions are already out there.
  • Maybe they are waiting for us to make the first move.

Con:

  • Malign ETI.
  • Past meetings between different civilizations have often ended badly.
  • Giving away information about ourselves may expose us to accidental or deliberate hacking.
  • Waste of resources.
  • If the ETI is quiet, it is for a reason.
  • We should listen carefully first, then transmit.

It is actually an interesting problem: how do we judge the risks and benefits in a situation like this? Normal decision theory runs into trouble (not that it stops some of my colleagues). The problem here is that the probability and potential gain/loss are badly defined. We may have our own personal views on the likelihood of intelligence within radio reach and its nature, but we should be extremely uncertain given the paucity of evidence.

[ Even the silence in the sky is some evidence, but it is somewhat tricky to interpret, given that it is compatible with no intelligence (because of rarity or danger), intelligence not communicating or not looking in the spectra we see, cultural convergence towards quietness (the zoo hypothesis, everybody hiding, everybody becoming Jupiter brains), or even the simulation hypothesis. The first category is at least somewhat concise, while the latter categories have endless room for speculation. One could argue that since the latter categories can fit any kind of evidence they are epistemically weak and we should not trust them much.]

Existential risks also tend to take precedence over almost anything. If we can avoid doing something that could cause existential risk, the maxiPOK principle tells us not to do it: we can refrain from sending, and sending might bring down the star wolves on us, so we should refrain.

There is also a unilateralist curse issue. It is enough that one group somewhere thinks transmitting is a good idea and does it to bring about the consequences, whatever they are. So the more groups that consider transmitting – even if they are all rational, well-meaning, and consider the issue at length – the more likely it is that somebody will do it, even if it is a stupid thing to do. In situations like this we have argued it behoves us to be more conservative individually than we would otherwise have been: we should simply think twice just because sending messages is in the unilateralist curse category. We also argue in that paper that it is even better to share information and make collectively coordinated decisions.

Note that these arguments strengthen the con side largely independently of what the actual anti-message arguments are: they are general arguments for caution, not final arguments.

Conversely, Alan Penny argued that given the high existential risk to humanity we may actually have little to lose: if our risk per century is 12-40% of extinction, then adding a small ETI risk has little effect on the overall risk level, yet a small chance of friendly ETI advice (“By the way, you might want to know about this…”) that decreases existential risk may be an existential hope. Suppose we think it is 50% likely that ETI is friendly, and 1% chance it is out there. If it is friendly it might give us advice that reduces our existential risk by 50%, otherwise it will eat us with 1% probability. So if we do nothing our risk is (say) 12%. If we signal, then the risk is 0.12*0.99 + 0.01*(0.5*0.12*0.5 + 0.5*(0.12*0.99+0.01))=11.9744% – a slight improvement. Like the Drake equation one can of course plug in different numbers and get different effects.
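Penny's toy calculation can be written out explicitly; the branch structure (no ETI reached / friendly / hostile) is my reading of the formula above, with the same illustrative probabilities:

```python
p_eti      = 0.01  # chance an ETI receives our message
p_friendly = 0.5   # chance a listening ETI is friendly
p_eat      = 0.01  # chance a hostile ETI destroys us
baseline   = 0.12  # existential risk per century if we stay silent
advice     = 0.5   # friendly advice halves our baseline risk

risk_if_silent = baseline
risk_if_signal = (
    (1 - p_eti) * baseline                       # nobody hears us
    + p_eti * (
        p_friendly * (baseline * advice)         # friendly: risk halved
        + (1 - p_friendly) * (p_eat + (1 - p_eat) * baseline)  # hostile
    )
)
print(f"silent: {risk_if_silent:.4%}  signal: {risk_if_signal:.4%}")
```

This reproduces the 11.9744% in the text; as noted, plugging in different numbers can flip the conclusion.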

Truth to the stars

Considering the situation over time, sending a message now may also be irrelevant, since we could wipe ourselves out before any response arrives. That brings to mind a discussion we had at the press conference yesterday about what the point of sending messages far away would be: wouldn’t humanity be gone by then? We also discussed what to present to ETI: an honest or a whitewashed version of ourselves? (My co-panelist Dr Jill Stuart made some great points about the diversity issues in past attempts.)

My own view is that I’d rather have an honest epitaph for our species than a polished but untrue one. This is both relevant to us, since we may want to be truthful beings even if we cannot experience the consequences of the truth, and relevant to ETI, who may find the truth more useful than whatever our culture currently would like to present.