# A small step for machinekind?

(Originally published at https://qz.com/1666726/we-should-stop-sending-humans-into-space/ with a title that doesn’t quite fit my claim)

50 years ago, humans left their footprints on the moon. We have left footprints on Earth for millions of years, but the moon is the only other body with human footprints.

Yet there are track marks on Mars. There is a deliberately made crater on the comet 9P/Tempel. There are landers on the Moon, Venus, Mars, Titan, the asteroid Eros, and the comet Churyumov–Gerasimenko. Not to mention a number of probes of varying levels of function across and outside the solar system.

As people say, Mars is the only planet in the solar system solely inhabited by robots. In 50 years, will there be a human on Mars… or just even more robots?

There are of course entirely normal reasons to go to space – communication satellites, GPS, espionage, ICBMs – and massive scientific reasons. But were they the only reasons to explore space it would be about as glorious as marine exploration. Worth spending taxpayer and private money on, but hardly to the extent we have done it.

Space is inconceivably harsher than any terrestrial environment, but also fundamentally different. It is vast beyond imagination. It contains things that have no counterpart on Earth. In many ways it has replaced our supernatural realms and gods with a futuristic realm of exotic planets and – maybe – extra-terrestrial life and intelligence. It is fundamentally The Future.

Again, there are good objective reasons for this focus. In the long run we are not going to survive as a species unless we are distributed across different biospheres or can leave this one when the sun turns into a red giant.

Is space a suitable place for a squishy species?

Humans are adapted to a narrow range of conditions. A bit too much or too little pressure, oxygen, water, temperature, radiation or acceleration and we die. In fact, most of the Earth’s surface is largely uninhabitable unless we surround ourselves with protective clothing and technology. In going to space we not only need to bring a controlled environment with us, hermit-crab style, but must also function in conditions we have not evolved for at all. All our ancestors lived with gravity. All our ancestors had reflexes and intuitions that were adequate for Earth’s environment. This means that in space our reflexes and intuitions are likely to be wrong in deadly ways without extensive retraining.

Meanwhile robots can be designed not to require life support, to have reactions suited to the space environment, and to avoid the whole mortality thing. Current robotic explorers are rare and hence extremely expensive, motivating endless pre-mission modelling and careful actions. But robotics is becoming cheaper and more adaptable, and if space access becomes cheaper we should expect a more ruthless use of robots. Machine learning allows robots to learn from their experiences, and if a body breaks down or is lost, another copy of the latest robot software can be downloaded.

Our relations to robots and artificial intelligence are complicated. Since time immemorial we have imagined making artificial servants or artificial minds, yet such ideas invariably become mirrors for ourselves. When we consider the possibility we begin to think about humanity’s place in the world (if man was made in God’s image, whose image is the robot?), our own failings (endless stories about unwise creators and rebellious machines), and mysteries about what we are (what is intelligence, consciousness, emotion, dignity…?). In trying to build them we have learned that tasks that are simple for a five-year-old can be hard for machines while tasks that stump PhDs can be done easily, and that our concepts of ethics may be in for a very practical stress test in the near future.

In space robots have so far not been seen as very threatening. Few astronauts have worried about their job security. Instead people seem to adopt their favourite space probes and rovers, becoming sentimental about their fate.

(Full disclosure: I did not weep for the end of Opportunity, but I did shed a tear for Cassini)

What kind of exploration do we wish for?

So, should we leave space to tele-operated or autonomous robots reporting back their findings for our thrills and education while patiently building useful installations for our benefit?

My thesis is: we want to explore space. Space is unsuitable for humans. Robots and telepresence may be better for exploration. Yet what we want is not just exploration in the thin sense of knowing stuff. We want exploration in the thick sense of being there.

There is a reason Mars One got volunteers despite planning a one-way trip to Mars. There is a reason we keep astronauts at fabulous expense on the ISS doing experiments (besides the fact that their medical state is, in a sense, the most valuable experiment): getting glimpses of our planet from above and touching the fringe of the Overview Effect is actually healthy for our culture.

Were we only interested in the utilitarian and scientific use of space we would be happy to automate it. The value from having people present is deeper: it is aspirational, not just in the sense that maybe one day we or our grandchildren could go there, but in the sense that at least some humans are present in the higher spheres. It literally represents the “giant leap for mankind” Neil Armstrong referred to.

A sceptic may wonder if it is worth it. But humanity seldom performs grand projects based on a practical utility calculation. Maybe it should. But the benefits of building giant telescopes, particle accelerators, the early Internet, or cathedrals were never objective and clear. A saner species might not perform these projects and would also abstain from countless vanity projects, white elephants and overinvestments, saving substantial resources for other useful things… yet this species would likely never have discovered much astronomy or physics, or the peculiarities of masonry and of managing internetworks. It might well have far slower technological advancement, becoming poorer in the long run despite the reasonableness of its actions.

This is why so many are unenthusiastic about robotic exploration. We merely send tools when we want to send heroes.

Maybe future telepresence will be so excellent that we can feel and smell the Martian environment through our robots, but as evidenced by the queues in front of the Mona Lisa or towards the top of Mount Everest we put a premium on authenticity. Not just because it is rare and expensive but because we often think it is worthwhile.

As artificial intelligence advances those tools may become more like us, but it will always be a hard sell to argue that they represent us in the same way a human would. I can imagine future AI having just as vivid an awareness of its environment as we have, or even better, and in a sense being a better explorer. But to many people this would not be a human exploring space, just another (human-made) species exploring space: it is not us. I think this might be a mistake if the AI actually is a proper continuation of our species in terms of culture, perception, and values, but I have no doubt it will be a hard sell.

What kind of settlement do we wish for?

We may also want to go to space to settle it. If we could get it prepared by automation, that is great.

While exploration is about establishing a human presence – relating to an environment from the peculiar human perspective and maybe having that perspective changed – settlement is about making a home. By its nature it involves changing the environment into a human environment.

A common fear in science fiction and environmental literature is that humans would transform everything into more of the same: a suburbia among the stars. Against this another vision is contrasted: to adapt and allow the alien to change us to a suitable extent. Utopian visions of living in space not only deal with the instrumental freedom of a post-scarcity environment but the hope that new forms of culture can thrive in the radically different environment.

Some fear/hope we may have to become cyborgs to do it. Again, there is the issue of who “we” are. Are we talking about us personally, humanity-as-we-know-it, transhumanity, or the extension of humanity in the form of our robotic mind children? We might have some profound disagreements about this. But to adapt to space we will likely have to adapt more than ever before as a species, and that will include technological changes to our lifestyle, bodies and minds that will call into question who we are on an even more intimate level than the mirror of robotics.

A small step

If a time traveller told me that in 50 years’ time only robots had visited the moon, I would be disappointed. It might be the rational thing to do, but it would show a lack of drive on the part of our species that would be frankly worrying – we need to get out of our planetary cradle.

If the time traveller told me that in 50 years’ time humans but no robots had visited the moon, I would also be disappointed. Because that implies that we either fail to develop automation into something useful – a vast loss of technological potential – or that we make space all about showing off rather than a place we are serious about learning from, visiting and inhabiting.

# What kinds of grand futures are there?

I have been working for about a year on a book on “Grand Futures” – the future of humanity, starting to sketch a picture of what we could eventually achieve were we to survive, get our act together, and reach our full potential. Part of this is an attempt to outline what we know is and isn’t physically possible to achieve, part of it is an exploration of what makes a future good.

Here are some things that appear to be physically possible (not necessarily easy, but doable):

• Societies of very high standards of sustainable material wealth. At least as rich as (and likely far above) the current rich-nation level in terms of what objects, services, entertainment and other lifestyle goods ordinary people can access.
• Human enhancement allowing far greater health, longevity, well-being and mental capacity, again at least up to current optimal levels and likely far, far beyond evolved limits.
• Sustainable existence on Earth with a relatively unchanged biosphere indefinitely.
• Expansion into space:
• Settling habitats in the solar system, enabling populations of at least 10 trillion (and likely many orders of magnitude more)
• Settling other stars in the Milky Way, enabling populations of at least $10^{29}$ people
• Settling over intergalactic distances, enabling populations of at least $10^{38}$ people.
• Survival of human civilisation and the species for a long time.
• As long as other mammalian species – on the order of a million years.
• As long as Earth’s biosphere remains – on the order of a billion years.
• Settling the solar system – on the order of 5 billion years.
• Settling the Milky Way or elsewhere – on the order of trillions of years if dependent on sunlight.
• Using artificial energy sources – up to proton decay, somewhere beyond $10^{32}$ years.
• Constructing Dyson spheres around stars, gaining energy resources corresponding to the entire stellar output, habitable space millions of times Earth’s surface, telescope, signalling and energy projection abilities that can reach over intergalactic distances.
• Moving matter and objects up to galactic size, using their material resources for meaningful projects.
• Performing more than a googol ($10^{100}$) computations, likely far more thanks to reversible and quantum computing.

While this might read as a fairly overwhelming list, it is worth noticing that it does not include gaining access to an infinite amount of matter, energy, or computation. Nor indefinite survival. I also think faster-than-light travel is unlikely to become possible. If we do not try to settle remote galaxies within 100 billion years, accelerating expansion will move them beyond our reach. This is a finite but very large possible future.

What kinds of really good futures may be possible? Here are some (not mutually exclusive):

• Survival: humanity survives as long as it can, in some form.
• “Modest futures”: humanity survives for as long as is appropriate without doing anything really weird. People have idyllic lives with meaningful social relations. This may include achieving close to perfect justice, sustainability, or other social goals.
• Gardening: humanity maintains the biosphere of Earth (and possibly other planets), preventing them from crashing or going extinct. This might include artificially protecting them from a brightening sun and astrophysical disasters, as well as spreading life across the universe.
• Happiness: humanity finds ways of achieving extreme states of bliss or other positive emotions. This might include local enjoyment, or actively spreading minds enjoying happiness far and wide.
• Abolishing suffering: humanity finds ways of curing negative emotions and suffering without precluding good states. This might include merely saving humanity, or actively helping all suffering beings in the universe.
• Posthumanity: humanity deliberately evolves or upgrades itself into forms that are better, more diverse or otherwise useful, gaining access to modes of existence currently not possible to humans but equally or more valuable.
• Deep thought: humanity develops cognitive abilities or artificial intelligence able to pursue intellectual pursuits far beyond what we can conceive of in science, philosophy, culture, spirituality and similar but as yet uninvented domains.
• Creativity: humanity plays creatively with the universe, making new things and changing the world for its own sake.

I have no doubt I have missed many plausible good futures.

Note that there might be moral trades, where stay-at-homes agree with expansionists to keep Earth an idyllic world for modest futures and gardening while the others go off to do other things, or long-term oriented groups agreeing to give short-term oriented groups the universe during the stelliferous era in exchange for getting it during the cold degenerate era trillions of years in the future. Real civilisations may also have mixtures of motivations and sub-groups.

Note that the goals and the physical possibilities play out very differently: modest futures do not reach very far, while gardener civilisations may seek to engage in megascale engineering to support the biosphere but not settle space. Meanwhile the happiness-maximizers may want to race to convert as much matter as possible to hedonium, while the deep thought-maximizers may want to move galaxies together to create permanent hyperclusters filled with computation to pursue their cultural goals.

I don’t know which goals are right, but we can examine what they entail. If we see a remote civilisation doing certain things we can make some inferences about what is compatible with that behaviour. And we can examine what we need to do today to have the best chance of getting onto a trajectory towards some of these goals: avoiding extinction, improving our coordination ability, and figuring out whether there is some global coordination we need to agree on before spreading to the stars.

Universe Today wrote an article about a paper by me, Toby and Eric about the Fermi Paradox. The preprint can be found on arXiv (see also our supplements: 1, 2, 3 and 4). Here is a quick popular overview/FAQ.

# TL;DR

• The Fermi question is not a paradox: it just looks like one if one is overconfident in how well we know the Drake equation parameters.
• Our distribution model shows that there is a large probability of little-to-no alien life, even if we use the optimistic estimates of the existing literature (and even more if we use more defensible estimates).
• The Fermi observation makes the most uncertain priors move strongly, reinforcing the rare life guess and an early great filter.
• Getting even a little bit more information can update our belief state a lot!

# So, do you claim we are alone in the universe?

No. We claim we could be alone, and the probability is non-negligible given what we know… even if we are very optimistic about alien intelligence.

# What is the paper about?

The Fermi Paradox – or rather the Fermi Question – is “where are the aliens?” The universe is immense and old, and intelligent life ought to be able to spread or signal over vast distances, so if it has some modest probability of emerging we ought to see some signs of it. Yet we do not. What is going on? The reason it is called a paradox is that there is a tension between one plausible theory ([lots of sites] × [some probability] = [aliens]) and an observation ([no aliens]).

## Dissolving the Fermi paradox: there is not much tension

We argue that people have been accidentally misled to feel there is a problem by being overconfident about the probability.

$N=R_*\cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L$

The problem lies in how we estimate probabilities from a product of uncertain parameters (such as in the Drake equation above). The typical informal way people do this is to admit that some guesses are very uncertain, give a “representative value” for each, and end up with some estimated number of alien civilisations in the galaxy – which is admitted to be uncertain, yet is stated as a single number.

Some authors have argued for very low probabilities, typically concluding that there is just one civilisation per galaxy (“the $N\approx 1$ school”). This may actually still be too much, since it would mean we should expect signs of activity from nearly any galaxy. Others give somewhat higher guesstimates and end up with many civilisations, typically about as many as the number of years civilisations are expected to last (“the $N\approx L$ school”). But the proper thing to do is to give a range of estimates based on how uncertain we actually are, and get an output that shows the implied probability distribution of the number of alien civilisations.

If one combines either published estimates or ranges compatible with current scientific uncertainty we get a distribution that makes observing an empty sky unsurprising – yet is also compatible with us not being alone.

The reason is that even if one takes a pretty optimistic view (the published estimates are after all biased towards SETI optimism, since the sceptics do not write as many papers on the topic), it is impossible to rule out a very sparsely inhabited universe, yet the mean value may be a pretty full galaxy. And current scientific uncertainty about the rates of life and intelligence emergence is more than enough to create a long tail of uncertainty that puts fair credence on extremely low probabilities – much smaller than what one normally likes to state in papers. We get a model where there is a 30% chance we are alone in the visible universe and a 53% chance we are alone in the Milky Way… and yet the mean number is 27 million and the median about 1! (see figure below)
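This contrast between a single representative number and a full distribution is easy to reproduce. Below is a minimal Monte Carlo sketch in Python – not the paper’s actual model (which uses log-normal distributions for some terms); the ranges are crude log-uniform stand-ins of my own, so the exact percentages will differ, but the qualitative pattern (huge mean, tiny median, substantial probability of an empty galaxy) is the same:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

def log_uniform(lo, hi, size):
    """Sample log-uniformly between lo and hi."""
    return 10 ** rng.uniform(np.log10(lo), np.log10(hi), size)

# Crude stand-in ranges (hypothetical, for illustration only):
R_star = log_uniform(1, 100, n)     # star formation rate per year
f_p    = log_uniform(0.1, 1, n)     # fraction of stars with planets
n_e    = log_uniform(0.1, 1, n)     # Earth-like planets per system
f_l    = log_uniform(1e-30, 1, n)   # fraction developing life (hugely uncertain)
f_i    = log_uniform(1e-3, 1, n)    # fraction developing intelligence
f_c    = log_uniform(1e-2, 1, n)    # fraction that communicates
L      = log_uniform(1e2, 1e10, n)  # civilisation lifetime in years

N = R_star * f_p * n_e * f_l * f_i * f_c * L  # civilisations in the galaxy

print(f"mean N: {N.mean():.3g}")
print(f"median N: {np.median(N):.3g}")
print(f"P(alone in the galaxy, N < 1): {np.mean(N < 1):.0%}")
```

The heavy upper tail drags the mean up to enormous values while most of the probability mass sits near zero – exactly the situation where quoting a single “representative” N misleads.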

This is a statement about knowledge and priors, not a measurement: armchair astrobiology.

## The Great Filter: lack of obvious aliens is not strong evidence for our doom

After this result, we look at the Great Filter. We have reason to think at least one term in the Drake equation is small – either one of the early terms indicating how much life or intelligence emerges, or one of the later ones that indicates how long technological civilisations survive. The small term is “the Filter”. If the Filter is early, we are rare or unique but have a potentially unbounded future. If it is a late term, in our future, we are doomed – just like all the other civilisations whose remains would litter the universe. This is worrying. Nick Bostrom has argued that we should hope we do not find any alien life.

Our paper gets a somewhat surprising result: when updating our uncertainties in the light of no visible aliens, it reduces our estimate of the rate of life and intelligence emergence (the early filters) much more than the longevity factor (the future filter).

The reason is that if we exclude the cases where our galaxy is crammed with alien civilisations – something like the Star Wars galaxy where every planet has its own aliens – then that leads to an update of the parameters of the Drake equation. All of them become smaller, since we will have a more empty universe. But the early-filter ones – life and intelligence emergence – are revised downwards much more than the expected lifespan of civilisations, since they are far more uncertain (at least 100 orders of magnitude!) than the merely uncertain future lifespan (just 7 orders of magnitude!).
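The mechanism can be shown with a toy two-factor model: one factor with ~100 orders of magnitude of uncertainty (standing in for the early-filter terms) and one with ~7 (standing in for the lifespan term). Conditioning on a quiet sky moves the uncertain factor far more. All numbers below are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Toy stand-ins (made-up ranges): log10 of an "early" factor
# (life/intelligence emergence, ~100 orders of magnitude uncertain)
# and a "late" factor (civilisation lifespan, ~7 orders of magnitude).
log_early = rng.uniform(-100, 0, n)
log_late  = rng.uniform(0, 7, n)

log_N = log_early + log_late  # log10 of the number of civilisations

# Update on a quiet sky: drop the worlds where the galaxy is crammed
# (toy cutoff: log10 N >= 0).
quiet = log_N < 0

drop_early = np.median(log_early) - np.median(log_early[quiet])
drop_late  = np.median(log_late)  - np.median(log_late[quiet])

print(f"early factor: median log10 drops by {drop_early:.2f}")
print(f"late factor:  median log10 drops by {drop_late:.2f}")
```

The update is absorbed almost entirely by the factor we knew least about.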

So this is good news: the stars are not foretelling our doom!

Note that a past great filter does not imply our safety.

The conclusion can be changed if we reduce the uncertainty of the past terms to less than 7 orders of magnitude, or if the involved probability distributions have weird shapes. (The mathematical proof is in supplement IV, which applies to uniform and normal distributions. It is possible to add tails and other features that break this effect – yet believing such distributions of uncertainty requires believing rather strange things.)

# Isn’t this armchair astrobiology?

Yes. We are after all from the philosophy department.

The point of the paper is how to handle uncertainties, especially when you multiply them together or combine them in different ways. It is also about how to take lack of knowledge into account. Our point is that we need to make knowledge claims explicit – if you claim you know a parameter to have the value 0.1 you better show a confidence interval or an argument about why it must have exactly that value (and in the latter case, better take your own fallibility into account). Combining overconfident knowledge claims can produce biased results since they do not include the full uncertainty range: multiplying point estimates together produces a very different result than when looking at the full distribution.

All of this is epistemology and statistics rather than astrobiology or SETI proper. But SETI makes a great example since it is a field where people have been learning more and more about (some of) the factors.

The same approach we used in this paper can be applied in other fields. For example, when estimating risk chains in systems (like the risk of a pathogen escaping a biosafety lab), taking uncertainties in knowledge into account will sometimes produce important heavy tails that are irreducible even when you think the likely risk is acceptable. This is one reason risk estimates tend to be overconfident.
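A sketch of that risk-chain point, with made-up numbers: each step probability is known only to within an order of magnitude, and the mean of the full distribution ends up far above the product of the best-guess values:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# Hypothetical three-step risk chain (made-up numbers), e.g.
# containment breach -> escape of pathogen -> outbreak.
# Each step probability is uncertain: the best guess is the median,
# with one order of magnitude of uncertainty either way (1 sd in logs).
medians = [1e-3, 1e-2, 1e-1]
sigma = np.log(10)

# Sample each step, capping at 1 since these are probabilities.
steps = [np.minimum(m * rng.lognormal(0.0, sigma, n), 1.0) for m in medians]
chain = steps[0] * steps[1] * steps[2]

point_estimate = np.prod(medians)  # multiplying "representative values"
full_mean = chain.mean()           # mean of the full distribution

print(f"product of medians: {point_estimate:.2g}")
print(f"mean of full distribution: {full_mean:.2g}")
```

The point estimate understates the expected risk by orders of magnitude, because it ignores the heavy tail where several steps turn out to be much more likely than the best guess.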

# Probability?

What kind of distributions are we talking about here? Surely we cannot speak of the probability of alien intelligence given the lack of data?

There is a classic debate in probability between frequentists, claiming probability is the frequency of events that we converge to when an experiment is repeated indefinitely often, and Bayesians, claiming probability represents states of knowledge that get updated when we get evidence. We are pretty Bayesian.

The distributions we are talking about are distributions of “credences”: how much you believe certain things. We start out with a prior credence based on current uncertainty, and then discuss how this gets updated if new evidence arrives. While the original prior beliefs may come from shaky guesses they have to be updated rigorously according to evidence, and typically this washes out the guesswork pretty quickly when there is actual data. However, even before getting data we can analyse how conclusions must look if different kinds of information arrives and updates our uncertainty; see supplement II for a bunch of scenarios like “what if we find alien ruins?”, “what if we find a dark biosphere on Earth?” or “what if we actually see aliens at some distance?”

# Correlations?

Our use of the Drake equation assumes the terms are independent of each other. This is of course a result of how Drake sliced things into naturally independent factors. But there could be correlations between them. Häggström and Verendel showed that in worlds where the priors are strongly correlated, updates about the Great Filter can get non-intuitive.

We deal with this in supplement II, and see also this blog post. Basically, it doesn’t look like correlations are likely showstoppers.

# You can’t resample guesses from the literature!

Sure can. As long as we agree that this is not so much a statement about what is actually true out there, but rather the range of opinions among people who have studied the question a bit. If people give answers to a question in the range from ten to a hundred, that tells you something about their beliefs, at least.

What the resampling does is break up the possibly unconscious correlation between answers (“the $N\approx 1$ school” and “the $N\approx L$ school” come to mind). We use the ranges of answers as a crude approximation to what people of good will think are reasonable numbers.
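In code, the resampling is just independent draws from each parameter’s list of published estimates. The lists below are made-up placeholders of my own, not the dataset from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Hypothetical lists of published point estimates per Drake parameter
# (made-up placeholder numbers, purely for illustration):
estimates = {
    "R_star": [1, 3, 10, 30],
    "f_p":    [0.2, 0.5, 1.0],
    "n_e":    [0.1, 0.2, 1.0, 2.0],
    "f_l":    [1e-30, 1e-5, 0.1, 0.5, 1.0],
    "f_i":    [1e-9, 0.01, 0.1, 0.5, 1.0],
    "f_c":    [0.1, 0.2, 1.0],
    "L":      [1e2, 1e4, 1e6, 1e8],
}

# Resample: draw one published estimate per parameter, independently,
# which breaks any correlation between "schools" of estimation.
samples = [rng.choice(v, n) for v in estimates.values()]
N = np.prod(samples, axis=0)

print(f"P(alone in the galaxy, N < 1): {np.mean(N < 1):.0%}")
print(f"median N: {np.median(N):.3g}, mean N: {N.mean():.3g}")
```

Even though every individual published number is a confident point estimate, the spread between them is enough to give substantial probability to an empty galaxy alongside a large mean.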

You may say “yeah, but nobody is really an expert on these things anyway”. We think that is wrong. People have improved their estimates as new data arrives, there are reasons for the estimates, and sometimes vigorous debate about them. We warmly recommend Vakoch, D. A., Dowd, M. F., & Drake, F. (2015). The Drake Equation. Cambridge, UK: Cambridge University Press for a historical overview. But at the same time these estimates are wildly uncertain, and this is what we really care about. Good experts qualify the certainty of their predictions.

## But doesn’t resampling from admittedly overconfident literature constitute “garbage in, garbage out”?

Were we trying to get the true uncertainties (or even more hubristically, the true values) this would not work: we have after all good reasons to suspect these ranges are both biased and overconfidently narrow. But our point is not that the literature is right, but that even if one were to use the overly narrow and likely overly optimistic estimates as estimates of actual uncertainty the broad distribution will lead to our conclusions. Using the literature is the most conservative case.

Note that we do not base our later estimates on the literature estimate but our own estimates of scientific uncertainty. If they are GIGO it is at least our own garbage, not recycled garbage. (This reading mistake seems to have been made on Starts With a Bang).

# What did the literature resampling show?

An overview can be found in Supplement III. The most important point is that even estimates of super-uncertain things like the probability of life lie in a surprisingly narrow range of values, far narrower than is scientifically defensible. For example, $f_l$ has five estimates ranging from $10^{-30}$ to $10^{-5}$, while all the rest are in the range $10^{-3}$ to 1. $f_i$ is even worse, with one microscopic estimate and nearly all the rest between one in a thousand and one.

It also shows that estimates that are likely biased towards optimism (because of publication bias) can be used to get a credence distribution that dissolves the paradox once they are interpreted as ranges. See the above figure, where we get about a 30% chance of being alone in the Milky Way and an 8% chance of being alone in the visible universe… but a mean corresponding to 27 million civilisations in the galaxy and a median of about a hundred.

There are interesting patterns in the data. When plotting the expected number of civilisations in the Milky Way based on estimates from different eras the number goes down with time: the community has clearly gradually become more pessimistic. There are some very pessimistic estimates, but even removing them doesn’t change the overall structure.

# What are our assumed uncertainties?

A key point in the paper is trying to quantify our uncertainties somewhat rigorously. Here is a quick overview of where I think we are, with the values we used in our synthetic model:

• $R_*$: the star formation rate in the Milky Way per year is fairly well constrained. The actual current uncertainty is likely less than 1 order of magnitude (it can vary over 5 orders of magnitude in other galaxies). In our synthetic model we made this parameter log-uniform from 1 to 100.
• $f_p$: the fraction of systems with planets is increasingly clearly ≈1. We used log-uniform from 0.1 to 1.
• $n_e$: number of Earth-like planets per system with planets.
• This ranges from rare earth arguments ($<10^{-12}$) to >1. We used log-uniform from 0.1 to 1 since recent arguments have shifted away from rare Earths, but we checked that including the rare-Earth case did not change the conclusions much.
• $f_l$: Fraction of Earthlike planets with life.
• This is very uncertain; see below for our arguments that the uncertainty ranges over perhaps 100 orders of magnitude.
• There is an absolute lower limit due to ergodic repetition: $f_l >10^{-10^{115}}$ – in an infinite universe there will eventually be randomly generated copies of Earth and even the entire galaxy (at huge distances from each other). Observer selection effects make using the earliness of life on Earth problematic.
• We used a log-normal rate of abiogenesis that was transformed to a fraction distribution.
• $f_i$: Fraction of lifebearing planets with intelligence/complex life.
• This is very uncertain; see below for our arguments that the uncertainty ranges over perhaps 100 orders of magnitude.
• One could argue that there have been 5 billion species so far and only 1 intelligent one, so we know $f_i>2\cdot 10^{-10}$. But one could also argue that we should count assemblages of 10 million species, which gives a fraction of 1/500 per assemblage. Observer selection effects may be distorting this kind of argument.
• We could have used a log-normal rate of complex life emergence that was transformed to a fraction distribution or a broad log-linear distribution. Since this would have made many graphs hard to interpret we used log-uniform from 0.001 to 1, not because we think this likely but just as a simple illustration (the effect of the full uncertainty is shown in Supplement II).
• $f_c$: Fraction of time when it is communicating.
• Very uncertain; humanity’s fraction so far is 0.000615. We used log-uniform from 0.01 to 1.
• $L$: Average lifespan of a civilisation.
• Fairly uncertain; perhaps $50 < L < 10^{10}$ years (the upper limit comes from the Drake equation’s applicability: it assumes the galaxy is in a steady state, and if civilisations are long-lived enough they will still be accumulating since the universe is too young.)
• We used log-uniform from 100 to 10,000,000,000.

Note that this is to some degree a caricature of current knowledge, rather than an attempt to represent it perfectly. Fortunately our argument and conclusions are pretty insensitive to the details – it is the vast ranges of uncertainty that are doing the heavy lifting.

## Abiogenesis

Why do we think the fraction-of-planets-with-life parameter could have a huge range?

First, instead of thinking in terms of the fraction of planets having life, consider a rate of life formation in suitable environments: what is the induced probability distribution? The emergence is a physical/chemical transition of some kind in a primordial soup, and transition events occur in this medium at some rate per unit volume: $f_l \approx \lambda V t$, where $V$ is the available volume and $t$ is the available time. High rates would imply that almost all suitable planets originate life, while low rates would imply that almost none do.

The uncertainty regarding the length of time when it is possible is at least 3 orders of magnitude ($10^7-10^{10}$ years).

The uncertainty regarding volumes spans 20+ orders of magnitude – from entire oceans to brine pockets on ice floes.

Uncertainty regarding transition rates can span 100+ orders of magnitude! The reason is that this might involve combinatoric flukes (you need to get a fairly long sequence of parts into the right order to get the right kind of replicator), or it might be like the protein folding problem, where Levinthal’s paradox shows that it would take literally astronomical time for entire oceans of copies of a protein to randomly find the correctly folded position (actual biological proteins “cheat” by having evolved to fold neatly and fast). Even chemical reaction rates span 100 orders of magnitude. On the other hand, spontaneous generation could conceivably be common and fast! So we should conclude that $\lambda$ has an uncertainty range of at least 100 orders of magnitude.
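One can see what 100 orders of magnitude of rate uncertainty does to the fraction $f_l$ by pushing it through $f_l = 1 - e^{-\lambda V t}$ (the exact form behind the approximation $f_l \approx \lambda V t$, which caps at 1). A sketch with made-up bounds and a fixed volume–time budget:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000

# Hypothetical: log10 of the transition rate per liter-second,
# uniform over 100 orders of magnitude (made-up bounds).
log10_rate = rng.uniform(-80, 20, n)

# Fixed volume-time budget per planet, V*t in liter-seconds (also made up).
log10_Vt = 20.0

# Fraction of suitable planets that originate life: f_l = 1 - exp(-lambda*V*t)
f_l = -np.expm1(-(10.0 ** (log10_rate + log10_Vt)))

nearly_never  = np.mean(f_l < 0.01)   # life essentially never emerges
nearly_always = np.mean(f_l > 0.99)   # life essentially always emerges
print(f"P(f_l < 0.01): {nearly_never:.0%}")
print(f"P(f_l > 0.99): {nearly_always:.0%}")
```

The result is strongly bimodal: almost all the probability mass ends up at “nearly no suitable planet has life” or “nearly every suitable planet has life”, with little in between – matching the all-or-nothing picture above.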

Actual abiogenesis will involve several steps. Some are easy, like generating simple organic compounds (plentiful in asteroids, comets and Miller-Urey experiments). Some are likely tough. People often overlook that even getting proteins and nucleic acids to form in a watery environment is something of a mystery, since these chains tend to hydrolyze; the standard explanation is to look for environments with a wet-dry cycle that allows complexity to grow. But this means $V$ is much smaller than an ocean.

That we have tremendous uncertainty about abiogenesis does not mean we do not know anything. We know a lot. But at present we have no good scientific reasons to believe we know the rate of life formation per liter-second. That will hopefully change.

## Don’t creationists argue stuff like this?

There are a fair number of examples of creationists arguing that the origin of life must be super-unlikely, and hence that we must believe in their particular god.

The problem with this kind of argument is that it presupposes that there is only one planet, and that we somehow got a one-in-a-zillion chance on that one. That would indeed be pretty unlikely. But in reality there are a zillion planets, so even if there is a one-in-a-zillion chance for each of them we should expect to see life somewhere… especially since being a living observer is a precondition for “seeing life”! Observer selection effects really matter.

We are also not arguing that life has to be super-unlikely. In the paper our distribution of life emergence rate actually makes it nearly universal 50% of the time – it includes the possibility that life will spontaneously emerge in any primordial soup puddle left alone for a few minutes. This is a possibility I doubt anybody believes in, but it could be that would-be new life is emerging right under our noses all the time, only to be outcompeted by the advanced life that already exists.

Creationists make a strong claim that they know $f_l \ll 1$; this is not really supported by what we know. But $f_l \ll 1$ is totally within possibility.

## Complex life

Even if you have life, it might not be particularly good at evolving. The reasoning is that it needs to have a genetic encoding system that is both rigid enough to function efficiently and fluid enough to allow evolutionary exploration.

All life on Earth shares almost exactly the same genetic systems, showing that only rare and minor changes have occurred in $\approx 10^{40}$ cell divisions. That is tremendously stable as a system. Nonetheless, it is fairly commonly believed that other genetic systems preceded the modern form. The transition to the modern form required major changes (think of upgrading an old computer from DOS to Windows… or worse, from CP/M to DOS!). It would be unsurprising if the rate was < 1 per $10^{100}$ cell divisions given the stability of our current genetic system – but of course, the previous system might have been super-easy to upgrade.

Modern genetics required >1/5 of the age of the universe to evolve intelligence. A genetic system like the one that preceded ours might both be stable over a googol ($10^{100}$) cell divisions and evolve more slowly by a factor of 10, running out the clock. Hence some genetic systems may be incapable of ever evolving intelligence.

This is related to a point made much earlier by Brandon Carter, who pointed out that the timescales of getting life, evolving intelligence, and how long biospheres last are independent and could be tremendously different – that life emerged early on Earth may have been a fluke, required by the extreme difficulty of also getting intelligence within this narrow interval (on all the more likely worlds there are no observers to notice). If there are more difficult transitions, you get an even stronger observer selection effect.

Evolution goes down branches without looking ahead, and we can imagine that it could have an easier time finding inflexible coding systems (“B life”) than our own nice one (“A life”). If the rate of discovering B-life is $\lambda_B$ and the rate of discovering capable A-life is $\lambda_A$, then the fraction of A-life in the universe is just $\lambda_A/\lambda_B$ – and rates can differ by many orders of magnitude, producing a life-rich but evolution/intelligence-poor universe. Multiple-step models add integer exponents to the rates: these then multiply the order-of-magnitude differences.
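As a toy illustration of how the rate ratio plays out (the numbers are purely hypothetical):

```python
def fraction_capable(lam_a, lam_b, steps=1):
    """Fraction of biospheres that end up with evolution-capable 'A-life'
    when A-life is discovered at rate lam_a, B-life at rate lam_b, and
    the race is run independently at each of `steps` transitions."""
    per_step = lam_a / (lam_a + lam_b)   # ~ lam_a/lam_b when lam_b dominates
    return per_step ** steps

# If B-life is found 10^8 times faster than A-life (hypothetical ratio):
print(fraction_capable(1e-12, 1e-4))            # ~1e-8 of biospheres
print(fraction_capable(1e-12, 1e-4, steps=3))   # ~1e-24: ratios multiply
```

A modest-looking per-step disadvantage compounds quickly: three such transitions already put evolution-capable life at one biosphere in $10^{24}$.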

So we have good reasons to think there could be a hundred orders of magnitude uncertainty on the intelligence parameter, even without trying to say something about evolution of nervous systems.

# How much can we rule out aliens?

Humanity has not scanned that many stars, so obviously we have checked only a tiny part of the galaxy – and we could have missed them even if we looked at the right spot. Still, we can model how this weak data updates our beliefs (see Supplement II).

The strongest argument against aliens is the Tipler-Hart argument that settling the Milky Way, even at low expansion speed, would only take a fraction of its age. And once a civilisation is everywhere it is hard for it to go extinct everywhere – it will tend to persist even if local pieces crash. Since we do not seem to be in a galaxy paved over by an alien supercivilisation, we have a very strong argument for a low rate of intelligence emergence. Yes, even if 99% of civilisations stay home, or we could be in an alien zoo, you still get a massive update against a really settled galaxy. In our model the probability of less than one civilisation per galaxy went from 52% to 99.6% if one includes the basic settlement argument.

The G-hat survey of galaxies, looking for signs of K3 civilisations, did not find any. Again, maybe we missed something, or most civilisations don’t want to re-engineer galaxies – but if we assume about half of them want to and have a 1% chance of succeeding, we get an update from a 52% chance of less than one civilisation per galaxy to 66%.

Using models of us having looked at about 1,000 stars, or assuming there is no civilisation within 18 pc, gives milder updates: from 52% to 53% and 57% respectively. These just rule out super-densely inhabited scenarios.
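The flavour of these updates can be sketched with a minimal grid-based Bayesian calculation. Everything here is a stand-in for the paper's actual model: a log-uniform prior on the expected number $N$ of civilisations per galaxy, a Poisson count of civilisations, and a per-civilisation probability $q$ of producing something we would have seen.

```python
import math

# Grid over log10(N), where N = expected civilisations per galaxy
log_n_grid = [x / 10.0 for x in range(-150, 151)]   # 10^-15 .. 10^15
prior = [1.0] * len(log_n_grid)                     # log-uniform prior

def p_less_than_one_per_galaxy(q):
    """Posterior P(N < 1) after seeing no visible settlements, where each
    of Poisson(N) civilisations is visibly settled with probability q,
    so P(nothing seen | N) = exp(-q*N)."""
    likelihood = [math.exp(-q * 10.0 ** l) for l in log_n_grid]
    post = [p * like for p, like in zip(prior, likelihood)]
    below = sum(w for l, w in zip(log_n_grid, post) if l < 0.0)
    return below / sum(post)

print(p_less_than_one_per_galaxy(0.0))    # no constraint: ~0.5, just the prior
print(p_less_than_one_per_galaxy(1e-9))   # tiny visibility probability: modest update
print(p_less_than_one_per_galaxy(0.99))   # near-certain visibility: large update
```

The pattern matches the text qualitatively: a weak constraint (like having checked a tiny sample of stars) barely moves the posterior, while a settlement-style argument with high visibility pushes "less than one civilisation per galaxy" close to certainty.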

# So what? What is the use of this?

People like to invent explanations for the Fermi paradox that would all have huge implications for humanity if they were true – maybe we are in a cosmic zoo, maybe there are interstellar killing machines out there, maybe singularity is inevitable, maybe we are the first civilisation ever, maybe intelligence is a passing stage, maybe the aliens are sleeping… But if you are serious about thinking about the future of humanity you want to be rigorous about this. This paper shows that current uncertainties actually force us to be very humble about these possible explanations – we can’t draw strong conclusions from the empty sky yet.

But uncertainty can be reduced! We can learn more, and that will change our knowledge.

From a SETI perspective, this doesn’t say that SETI is unimportant or doomed to failure, but rather that if we ever see even the slightest hint of intelligence out there many parameters will move strongly. Including the all-important $L$.

From an astrobiology perspective, we hope we have pointed at some annoyingly uncertain factors and that this paper can get more people to work on reducing the uncertainty. Most astrobiologists we have talked with are aware of the uncertainty but do not see the weird knock-on-effects from it. Especially figuring out how we got our fairly good coding system and what the competing options are seems very promising.

Even if we are not sure we can also update our plans in the light of this. For example, in my tech report about settling the universe fast I pointed out that if one is uncertain about how much competition there might be for the universe one can use one’s probability estimates to decide on the range to aim for.

## Uncertainty matters

Perhaps the most useful insight is that uncertainty matters and we should learn to embrace it carefully rather than assume that apparently specific numbers are better.

> Perhaps never in the history of science has an equation been devised yielding values differing by eight orders of magnitude. . . . each scientist seems to bring his own prejudices and assumptions to the problem.

– History of Astronomy: An Encyclopedia, ed. by John Lankford, s.v. “SETI,” by Steven J. Dick, p. 458.

When Dick complained about the wide range of results from the Drake equation he likely felt it was too uncertain to give any useful result. But an eight-order-of-magnitude spread is in this case just a sign of downplaying our uncertainty and overestimating our knowledge! Things get much better when we look at what we know and don’t know, and figure out the implications of both.

Jill Tarter said the Drake equation was “a wonderful way to organize our ignorance”, which we think is closer to the truth than demanding a single number as an answer.

# Ah, but I already knew this!

We have encountered claims that “nobody” is really naive about using the Drake equation – or at least no “real” SETI and astrobiology people. Strangely enough, people never seem to make this common knowledge visible, and a fair number of papers make very confident statements about “minimum” values for life probabilities that we think are far, far above the actual scientific support.

Sometimes we need to point out the obvious explicitly.

[Edit 2018-06-30: added the GIGO section]

# The Aestivation hypothesis: popular outline and FAQ

Anders Sandberg & Milan Ćirković

Since putting up a preprint for our paper “That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi’s paradox” (Journal of the British Interplanetary Society, in press) we have heard some comments and confusion that suggest to us that it would be useful to try to outline and clarify what our idea is, what we think about it, and some of the implications.

# The super-short version of the paper

Maybe we are not seeing alien civilizations because they are all rationally “sleeping” in the current early cosmological era, waiting for a remote future when it is more favourable to exploit the resources of the universe. We show that given current observations we can rule out a big chunk of possibilities like this, but not all.

# A bit more unpacked explanation

Information processing requires physical resources: not just computers or brains, but energy to run them. There is a thermodynamic cost to information processing that is temperature dependent: in principle, processing becomes 10 times more efficient if your computer is 10 times colder (measured in kelvins). Right now the cosmic background radiation makes nearly everything in the universe hotter than 3 K, but as the universe expands this background temperature will decline exponentially. So if you want to do as much information processing as possible with the energy you have, it makes sense to wait: it becomes exponentially better. Eventually the background temperature bottoms out because of horizon radiation in a few trillion years; at that point it no longer makes sense to wait with the computation.

Hence, an advanced civilization may have explored a big chunk of the universe, done what is doable with existing nature, and now mostly has internal “cultural” things to do. These things can be regarded as information processing. If they want to maximize processing they should not do it today but wait until the cold future, when they will get tremendously more done ($10^{30}$ times more!). They should hence aestivate, leaving their domain protected by some automation until they wake up.
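The size of that factor can be sanity-checked against the Landauer bound, which sets a minimum energy cost of $kT\ln 2$ per erased bit. Assuming the background bottoms out near a horizon temperature of very roughly $10^{-30}$ K (a stand-in value for illustration), the gain from waiting is just the temperature ratio:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K

def max_bit_erasures(energy_joules, temp_kelvin):
    """Landauer bound: at most E / (k_B * T * ln 2) bit erasures."""
    return energy_joules / (K_B * temp_kelvin * math.log(2))

T_NOW = 3.0          # current cosmic background temperature, K
T_LATE = 3.0e-30     # rough late-time horizon temperature, K (assumption)

gain = max_bit_erasures(1.0, T_LATE) / max_bit_erasures(1.0, T_NOW)
print(f"waiting multiplies computation per joule by ~10^{round(math.log10(gain))}")
```

Since the bound scales as $1/T$, the same joule of stored energy buys about $10^{30}$ times more bit erasures once the universe is that cold.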

If this is correct, there might be old and powerful civilizations around that are hard to observe, not because they are deliberately hiding but because they are inactive for the time being.

However, were this hypothesis true, they would not want to lose their stuff. We should expect to see fewer processes that reduce resources that could be useful in the far future. In the paper we look at processes that look like they might waste resources: stars converting mass into energy that is lost, stars imploding into black holes, galactic winds losing gas into intergalactic space, galaxy collisions, and galaxy clusters getting separated by the expansion of the universe. Current observations do not seem to indicate anything preventing these processes (and most interventions would be very visible).

Hence, either:

1. the answer to the Fermi question “where are they?!” is something else (like there being no aliens),
2. advanced civilizations aestivate but do so with only modest hoards of resources rather than entire superclusters,
3. they are mostly interested in spreading far and wide since this gives a lot of stuff with a much smaller effort than retaining it.

## Necessary assumptions

The aestivation hypothesis makes the following assumptions:

1. There are civilizations that mature much earlier than humanity. (not too implausible, given that Earth is somewhat late compared to other planets)
2. These civilizations can expand over sizeable volumes, gaining power over their contents. (we have argued that this is doable)
3. These civilizations have solved their coordination problems. (otherwise it would be hard to jointly aestivate; assumption likelihood hard to judge)
4. A civilization can retain control over its volume against other civilizations. (otherwise it would need to actively defend its turf in the present era and cannot aestivate; likelihood hard to judge)
5. The fraction of mature civilizations that aestivate is non-zero. (if it is rational at least some will try)
6. Aestivation is largely invisible. (seems likely, since there would be nearly no energy release)

# Have you solved the Fermi question?

We are not claiming we now know the answer to the Fermi question. Rather, we have a way of ruling out some possibilities, plus a few new possibilities worth looking for (like galaxies with inhibited heavy star formation).

# Do you really believe in it?

I (Anders) personally think the likeliest reason we are not seeing aliens is not that they are aestivating, but just that they do not exist or are very far away.

We have an upcoming paper giving some reasons for this belief. The short of it is that we are very uncertain about the probability of life and intelligence given the current state of scientific knowledge. They could be exceedingly low, and this means we have to assign a fairly high credence to the empty universe hypothesis. If that hypothesis is not true, then aestivation is a pretty plausible answer in my personal opinion.

Why write about a hypothesis you do not think is the most likely one? Because we need to cover as much of possibility space as possible, and the aestivation hypothesis is neatly suggested by considerations of the thermodynamics of computation and physical eschatology. We have been looking at other unlikely Fermi hypotheses like the berserker hypothesis to see if we can put good constraints on them (in that case, our existence plus some ecological instability problems make berserkers unlikely).

# What is the point?

Understanding the potential and limits of intelligence in the universe tells us things about our own chances and potential future.

At the very least, this paper shows what a future advanced human-derived civilization may try to achieve, and some of the ultimate limits on far-future information processing. It gives some new numbers to feed into Nick Bostrom’s astronomical waste argument for working very hard on reducing existential risk in the present: the potential future is huge.

In regards to alien civilizations, the paper maps a part of possibility space, showing what is required for this Fermi paradox explanation to work as an explanation. It helps cut down on the possibilities a fair bit.

## What about the Great Filter?

We know there has to be at least one unlikely step between non-living matter and easily observable technological civilizations (“the Great Filter”), otherwise the sky would be full of them. If it is an early filter (life or intelligence is rare) we may be fairly alone but our future is open; were the filter a later step, we should expect to be doomed.

The aestivation hypothesis doesn’t tell us much about the filter. It offers a way of explaining the quiet sky that does not require an absence of aliens, so without knowing whether it is true we do not learn anything from the silence. The lack of megascale engineering is evidence against certain kinds of alien goals and activities, but rather weak evidence.

## Meaning of life

Depending on what you are trying to achieve, different long-term strategies make sense. This is another way SETI may tell us something interesting about the Big Questions by showing what advanced species are doing (or not):

If the ultimate value you aim for is local such as having as many happy minds as possible, then you want to spread very far and wide, even though the galaxy clusters you have settled will eventually drift apart and be forever separated. The total value doesn’t depend on all those happy minds talking to each other. Here the total amount of value is presumably proportional to the amount of stuff you have gathered times how long it can produce valuable thoughts. Aestivation makes sense, and you want to spread far and wide before doing it.

If the ultimate value you aim for is nonlocal, such as having your civilization produce the deepest possible philosophy, then all parts need to stay in touch with each other. This means that expanding outside a gravitationally bound supercluster is pointless: your expansion will halt at this point. We can be fairly certain there are no advanced civilizations trying to scrape together larger superclusters since it would be very visible.

If the ultimate value you aim for is finite, then at some point you may be done: you have made the perfect artwork or played all the possible chess games. Such a civilization only needs resources enough to achieve the goal, and then presumably will shut down. If the goal is small it might do this without aestivating, while if it is large it may aestivate with a finite hoard.

If the ultimate goal is modest, like enjoying your planetary utopia, then you will not affect the large-scale universe (although launching intergalactic colonization may still be good for security, leading to a nonlocal instrumental goal). Modest civilizations do not affect the overall fate of the universe.

# Can we test it?

Yes! The obvious way is to carefully look for odd processes keeping the universe from losing potentially useful raw materials. The suggestions in the paper give some ideas, but there are doubtless other things to look for.

Also, aestivators would protect themselves from late-evolving species that could steal their stuff. If we were to start building self-replicating von Neumann probes in the future, and there are aestivators around, they had better stop us. This hypothesis test may of course be rather dangerous…

# Isn’t there more to life than information processing?

Information is “a difference that makes a difference”: information processing is just going from one distinguishable state to another in a meaningful way. This covers not just computing with numbers and text, but having one brain state follow another, doing economic transactions, and creating art. Falling in love means that a mind goes from one state to another in a very complex way. Maybe the important subjective aspect is something very different from brain states, but unless you think it is possible to fall in love without the brain changing state, there will be an information processing element to it. And that information processing is bound by the laws of thermodynamics.

Some theories of value place importance on how or that something is done rather than on the consequences or intentions (which can be viewed as information states): maybe a perfect Zen action holds value on its own. If the start and end states are the same, then an unlimited number of such actions can be performed and an equal amount of value achieved – yet there is no way of telling whether they ever happened, since there will be no memory of them occurring.

In short, information processing is something we instrumentally need for the mental or practical activities that truly matter.

# “Aestivate”?

Like hibernation, but through the summer (Latin aestus = heat, aestivate = to spend the summer). Hibernation (Latin hibernus = wintry) is more common, but since this is about avoiding heat we chose the slightly rarer term.

# Can’t you put your computer in a fridge?

Yes, it is possible to cool below 3 K. But you need to do work to achieve it, spending precious energy on the cooling. If you want your computing done *now* and do not care about the total amount of computing, this is fine. But if you want as much computing as possible, then fridges are going to waste some of your energy.

There are some cool (sorry) possibilities in using very large black holes as heat sinks, since their temperature would be lower than the background radiation. But this will only last for a few hundred billion years; after that the background will be cooler.
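For a rough sense of the numbers (a back-of-envelope sketch, not from the paper): the Hawking temperature of a black hole is $T = \hbar c^3 / (8\pi G M k_B)$, and already at stellar masses it is far below the current background:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J s
C = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
K_B = 1.380649e-23       # Boltzmann constant, J/K
M_SUN = 1.989e30         # solar mass, kg

def hawking_temperature(mass_kg):
    """Hawking temperature T = hbar * c^3 / (8 * pi * G * M * k_B)."""
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

# A solar-mass black hole sits around 6e-8 K, vastly colder than the 3 K
# background, so it works as a heat sink until the background drops below it.
print(hawking_temperature(M_SUN))
```

Since the temperature scales as $1/M$, bigger black holes are colder still, but the exponentially declining background eventually undercuts any fixed-mass hole.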

# Do computation costs have to be temperature dependent?

The short answer is no, but we do not think this matters for our conclusion.

The irreducible energy cost of computation is due to the Landauer limit (this limit or principle has also been ascribed to Brillouin, Shannon, von Neumann and many others): to erase one bit of information you need to pay an energy cost equal to $kT\ln(2)$ or more. Otherwise you could cheat the second law of thermodynamics.

However, logically reversible computation can avoid paying this cost by never erasing information. The problem is of course that memory eventually runs out, but Bennett showed that one can then “un-compute” the computation by running it backwards, removing the garbage. The catch is that reversible computation needs to run very close to the average energy of the system (taking a long time), and that error correction is irreversible and temperature dependent. The same is true for quantum computation.

If one has a pool of negentropy, that is, something ordered that can be randomized, then one can “pay” for bit erasure using this pool until it runs out. This is potentially temperature independent! One can imagine having access to a huge memory full of zero bits. By swapping your garbage bit for a zero, you can potentially run computations without paying an energy cost (if the swapping is free): it has essentially zero temperature.

If there were natural negentropy pools, aestivation would be pointless: advanced civilizations would be dumping their entropy there in the present. But as far as we know, there are no such pools. We can make them by ordering matter or energy, but that has a work cost that depends on temperature (or uses up yet another pool of negentropy).

### Space-time as a resource?

Maybe the flatness of space-time is the ultimate negentropy pool, and by wrinkling it up we can get rid of entropy: this is in a sense how the universe has become so complex thanks to matter lumping together. The total entropy due to black holes dwarfs the entropy of normal matter by several orders of magnitude.

Were space-time lumpiness a useful resource we should expect advanced civilizations to dump matter into black holes on a vast scale; this does not seem to be going on.

# Lovecraft, wasn’t he, you know… a bit racist?

Yup. Very racist. And fearful of essentially everything in the modern world: globalisation, large societies, changing traditions, technology, and how insights from science make humans look like a small part of the universe rather than the centre of creation. Part of what makes his horror stories interesting is that they are horror stories about modernity and the modern world-view. From a modernist perspective these things are not evil in themselves.

His vision of a vast universe inhabited by incomprehensible alien entities far outside the range of current humanity does fit in with Dysonian SETI and transhumanism: we should not assume we are at the pinnacle of power and understanding, we can look for signs that there are far more advanced civilizations out there (and if there are, we had better figure out how to relate to this fact), and we can aspire to become something like them – which of course would have horrified Lovecraft to no end. Poor man.

# Settling Titan, Schneier’s Law, and scenario thinking

Charles Wohlforth and Amanda R. Hendrix want us to colonize Titan. Their essay in Scientific American irritated me in an interesting manner.

Full disclosure: they interviewed me while they were writing their book Beyond Earth: Our Path to a New Home in the Planets, which I have not yet read, so I will only be basing the following on the SciAm essay. This post is not really about settling Titan either, but about something that bothers me with a lot of scenario-making.

# A weak case for Titan and against Luna and Mars

Basically the essay outlines reasons why other locations in the solar system are not good: Mercury too hot, Venus way too hot, Mars and Luna have too much radiation. Only Titan remains, with a cold environment but not too much radiation.

A lot of course hinges on the assumptions:

> We expect human nature to stay the same. Human beings of the future will have the same drives and needs we have now. Practically speaking, their home must have abundant energy, livable temperatures and protection from the rigors of space, including cosmic radiation, which new research suggests is unavoidably dangerous for biological beings like us.

I am not that confident that we will remain biological or vulnerable to radiation. But even if we accept the assumptions, the case against the Moon and Mars is odd:

> Practically, a Moon or Mars settlement would have to be built underground to be safe from this radiation. Underground shelter is hard to build and not flexible or easy to expand. Settlers would need enormous excavations for room to supply all their needs for food, manufacturing and daily life.

So making underground shelters is supposedly much harder than settling Titan, where buildings need to be insulated against a −179 °C atmosphere and an icy ground full of complex and quite likely toxic hydrocarbons. They suggest that there is no point in going to the moon to live in an underground shelter when you can do it on Earth, which is not too unreasonable – but is there a point in going to live inside an insulated environment on Titan either? The actual motivations would likely be less a desire for outdoor activities and more scientific exploration, reducing existential risk, and maybe industrialization.

Also, while making underground shelters in space may be hard, it does not look like an insurmountable problem. The whole concern is a bit like saying submarines are not practical because the cold of the depths of the ocean will give the crew hypothermia – true, unless you add heating.

I think this is similar to Schneier’s law:

> Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can’t break.

It is not hard to find a major problem with a possible plan that you cannot see a reasonable way around. That doesn’t mean there isn’t one.

# Settling for scenarios

Maybe Wohlforth and Hendrix spent a lot of time thinking about lunar excavation issues and consistent motivations for settlements to reach a really solid conclusion, but I suspect that they came to the conclusion relatively lightly. It produces an interesting scenario: Titan is not the standard target when we discuss where humanity ought to go, and it is an awesome environment.

Similarly, the “humans will be humans” scenario assumptions were presumably chosen not after a careful analysis of the relative likelihood of biological and postbiological futures, but just because they are similar to the past and make for an interesting scenario. Plus, human readers like reading about humans rather than robots. Altogether it makes for a good book.

Clearly I have different priors from theirs on the ease and rationality of lunar/Martian excavation and postbiology. Or even on giving us D. radiodurans genes.

In The Age of Em Robin Hanson argues that in the brain emulation scenario space settlement will be delayed until things get really weird: postbiological astronauts are very adaptable, but much of the mainstream of civilization will be turning inward towards a few dense centers (for economic and communication reasons). Eventually resource demand, curiosity, or just whatever comes after the Age of Em may lead to settling the solar system. But that process will be pretty different even if it is done by mentally human-like beings that do need energy and protection. Their ideal environments would be rich in energy gradients, with short communication lags: Mercury, slowly getting disassembled into a hot Dyson shell, might be ideal. So here the story would be no settlement for a long time, and then wildly exotic settlement that doesn’t care much about the scenery.

But even with biological humans we can imagine radically different space settlement scenarios, such as the Gerard K. O’Neill scenario where planetary surfaces are largely sidestepped in favour of asteroids and space habitats. This is Jeff Bezos’s vision rather than Elon Musk’s and Wohlforth/Hendrix’s. It also doesn’t tell the same kind of story: here our new home is not in the planets but between them.

My gripe is not against settling Titan, or even against thinking it is the best target for some set of reasons. It is against settling too easily for nice scenarios.

# Beyond the good story

Sometimes we settle for scenarios because they tell a good story. Sometimes because they are amenable to study among other, much less analyzable possibilities. But ideally we should aim at scenarios that inform us in a useful way about options and pathways we have.

That includes making assumptions wide enough to cover relevant options, even the less glamorous or tractable ones.

That requires assuming future people will be just as capable (or more so) of solving problems: just because I can’t see a solution to X doesn’t mean it will not be trivially solved in the future.

(Maybe we could call it the “Manure Principle”, after the canonical example of horse manure being seen as an insoluble urban-planning problem at the previous turn of the century and then neatly getting resolved by the unpredicted arrival of trams and cars – and just like with Schneier’s law and Stigler’s law, the reality is of course more complex than the story.)

In the standard scenario literature there are often admonitions not to just select a “best case scenario”, “worst case scenario” and “business as usual scenario” – scenario planning comes into its own when you see nontrivial, mixed-value possibilities. In particular, we want decision-relevant scenarios that make us change what we will do when we hear about them (rather than good stories, which entertain but do not change our actions). But scenarios on their own do not tell us how to make these decisions: the decisions need to be built from our rationality and decision theory applied to the scenarios’ contents. Easy scenarios make it trivial to choose (cake or death?), but those choices would have been obvious even without the scenarios: no forethought was needed except to bring up the question. Complex scenarios force us to think in new ways about relevant trade-offs.

The likelihood of complex scenarios is of course lower than simple scenarios (the conjunction fallacy makes us believe much more in rich stories). But if they are seen as tools for developing decisions rather than information about the future, then their individual probability is less of an issue.

In the end, good stories are lovely and worth having, but for thinking and deciding carefully we should not settle for just good stories or the scenarios that feel neat.

# The case for Mars

On Practical Ethics I post about the goodness of being multi-planetary: is it rational to try to settle Mars as a hedge against existential risk?

The problem is not that it is absurd to care about existential risks or the far future (which was the Economist’s unfortunate claim), nor that it is morally wrong to have a separate colony, but that there might be better risk reduction strategies with more bang for the buck.

One interesting aspect is that making space more accessible makes space refuges a better option. Even if space refuges are not the best choice right now, at some point in the future they may well become the best choice. There are of course other reasons to do this too (science, business, even technological art).

So while existential risk mitigation right now might rationally aim at putting out the current brushfires and trying to set the long-term strategy right, doing the groundwork for eventual space colonization seems rational.

# The Drake equation and correlations

I have been working on the Fermi paradox for a while, and in particular the mathematical structure of the Drake equation. While it looks innocent, it has some surprising issues.

$N\approx N_* f_p n_e f_l f_i f_c L$

One area I have not seen much addressed is the independence of terms. To a first approximation they were made up to be independent: the fraction of life-bearing Earth-like planets is presumably determined by a very different process than the fraction of planets that are Earth-like, and these factors should have little to do with the longevity of civilizations. But as Häggström and Verendel showed, even a bit of correlation can cause trouble.

If different factors in the Drake equation vary spatially or temporally, we should expect potential clustering of civilizations: the average density may be low, but in areas where the parameters have larger values there would be a higher density of civilizations. A low $N$ may not be the whole story. Hence figuring out the typical size of patches (i.e. the autocorrelation distance) may tell us something relevant.

## Astrophysical correlations

There is a sometimes overlooked spatial correlation in the first terms. In the orthodox formulation we are talking about Earth-like planets orbiting stars with planets, which form at some rate in the Milky Way. This means that civilizations must be located in places where there are stars (galaxies), and not anywhere else. The rare earth crowd also argues that there is a spatial structure that makes Earth-like worlds exist within a ring-shaped region in the galaxy. This implies an autocorrelation on the order of (tens of) kiloparsecs.

Even if we want to get away from planetocentrism there will be inhomogeneity. The warm intergalactic plasma contains about 4% of the total mass of the universe, or 85% of the non-dark stuff. Planets account for just 0.00002%, and terrestrials obviously far less. Since condensed things like planets, stars or even galaxy cluster plasma are distributed in an inhomogeneous manner, unless the other factors in the Drake equation produce typical distances between civilizations beyond the End of Greatness scale of hundreds of megaparsecs, we should expect a spatially correlated structure of intelligent life following galaxies, clusters and filaments.

A tangent: different kinds of matter plausibly have different likelihoods of originating life. This has an interesting implication: if the probability of life emerging in something like the intergalactic plasma is non-zero, it has to be more than a hundred thousand times smaller than the probability per unit mass of planets, or the universe would be dominated by gas-creatures (and we would be unlikely observers, unless gas-life was unlikely to generate intelligence). Similarly, life must be more than 2,000 times more likely on planets than stars (per unit of mass), or we should expect ourselves to be star-dwellers. Our planetary existence does give us some reason to think life or intelligence in the more common substrates (plasma, degenerate matter, neutronium) is significantly less likely than in molecular matter.

## Biological correlations

One way of inducing correlations in the $f_l$ factor is panspermia. If life originates at some low rate per unit volume of space (we will now assume a spatially homogeneous universe in terms of places life can originate) and then diffuses from a nucleation site, then intelligence will show up in spatially correlated locations.

It is not clear how much panspermia could be going on, or if all kinds of life do it. A simple model is that panspermias emerge at a density $\rho$ and grow to radius $r$. The rate of intelligence emergence outside panspermias is set to 1 per unit volume (this sets a space scale), and inside a panspermia (since there is more life) it will be $A>1$ per unit volume. The probability that a given point will be outside a panspermia is

$P_{outside} = e^{-(4\pi/3) r^3 \rho}$.

The fraction of civilizations finding themselves outside panspermias will be

$F_{outside} = \frac{P_{outside}}{P_{outside}+A(1-P_{outside})} = \frac{1}{1+A(e^{(4 \pi/3)r^3 \rho}-1)}$.

As $A$ increases, vastly more observers will be in panspermias. If we think it is large, we should expect to be inside a panspermia unless we think the panspermia efficiency (and hence $r$) is very small. Loosely, the transition from 1% to 99% probability takes one order of magnitude of change in $r$, three orders of magnitude in $\rho$ and four in $A$: given that these parameters can a priori range over many, many orders of magnitude, we should not expect to be in the mixed region where comparable numbers of observers are inside and outside panspermias. It is more likely all or nothing.
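The sharpness of this transition is easy to check numerically. A minimal sketch (the parameter values are purely illustrative, not fitted to anything):

```python
import math

def fraction_outside(r, rho, A):
    """Fraction of civilizations outside panspermias:
    F_outside = 1 / (1 + A * (exp((4*pi/3) * r**3 * rho) - 1))."""
    return 1.0 / (1.0 + A * (math.exp((4 * math.pi / 3) * r ** 3 * rho) - 1.0))

# Illustrative parameters: panspermia density rho = 1, intelligence boost A = 100.
for r in [0.01, 0.1, 0.3, 1.0]:
    print(f"r = {r}: fraction outside = {fraction_outside(r, 1.0, 100.0):.4f}")
```

Varying $r$ over two orders of magnitude swings the fraction from nearly all observers outside panspermias to nearly all inside, illustrating the all-or-nothing character.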

There is another relevant distance besides $r$: the expected distance to the next civilization. This is $d \approx 0.55/\sqrt[3]{\lambda}$ where $\lambda$ is the density of civilizations. For the outside-panspermia case this is $d_{outside}=0.55$, while inside it is $d_{inside}=0.55/\sqrt[3]{A}$. Note that these distances do not depend on the panspermia sizes, since they come from an independent process (emergence of intelligence given a life-bearing planet, rather than how well life spreads from system to system).
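The prefactor $0.55$ is the nearest-neighbour constant $\Gamma(4/3)(4\pi/3)^{-1/3}$ for a three-dimensional Poisson process. A quick sketch (the value of $A$ is illustrative):

```python
import math

def nearest_neighbor_distance(lam):
    """Expected nearest-neighbour distance in a 3D Poisson process of
    density lam: Gamma(4/3) * (4*pi*lam/3)**(-1/3) ~= 0.554 / lam**(1/3)."""
    return math.gamma(4 / 3) * (4 * math.pi * lam / 3) ** (-1 / 3)

A = 100.0  # illustrative boost of the intelligence rate inside panspermias
d_outside = nearest_neighbor_distance(1.0)  # background rate sets the unit scale
d_inside = nearest_neighbor_distance(A)     # civilizations are denser inside
```

With $A=100$, $d_{inside}$ is a factor $100^{1/3} \approx 4.6$ smaller than $d_{outside}$, matching the scaling in the text.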

If $r < d_{inside}$ then there will be no panspermia-induced correlation between civilization locations, since there is less than one civilization per panspermia. For $d_{inside} < r < d_{outside}$ there will be clustering with a typical autocorrelation distance corresponding to the panspermia size. For even larger panspermias they tend to dominate space (if $\rho$ is not very small) and there is no spatial structure any more.

So if panspermias have sizes in a certain range, $0.55/\sqrt[3]{A} < r < 0.55$, the actual distance to the nearest neighbour will be smaller than what one would have predicted from the average values of the parameters of the Drake equation.

Running a Monte Carlo simulation shows this effect. Here I use 10,000 possible life sites in a cubical volume, and $\rho=1$ – the number of panspermias will be Poisson(1) distributed. The background rate of civilizations appearing is 1/10,000, but in panspermias it is 1/100. As I make panspermias larger, civilizations become more common and the median distance from a civilization to its nearest neighbour falls (blue stars). If I re-sample so that the number of civilizations is the same but their locations are uncorrelated, I get the red crosses: the distances still decline, but they can be more than a factor of 2 larger.
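A sketch of this kind of simulation (not the original code; the structure follows the description above, but the Poisson sampler and default parameters are my own choices):

```python
import math
import random

def poisson(lam, rng):
    # Knuth's method; fine for small lam
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def median_nn_distance(n_sites=10_000, rho=1.0, r=0.1,
                       p_out=1e-4, p_in=1e-2, seed=42):
    """Life sites in a unit cube; Poisson(rho) panspermias of radius r.
    Sites inside a panspermia spawn civilizations at the higher rate p_in.
    Returns the median distance from a civilization to its nearest neighbour."""
    rng = random.Random(seed)
    centers = [(rng.random(), rng.random(), rng.random())
               for _ in range(poisson(rho, rng))]
    civs = []
    for _ in range(n_sites):
        s = (rng.random(), rng.random(), rng.random())
        inside = any(math.dist(s, c) < r for c in centers)
        if rng.random() < (p_in if inside else p_out):
            civs.append(s)
    if len(civs) < 2:
        return None  # not enough civilizations to measure distances
    nn = sorted(min(math.dist(a, b) for b in civs if b is not a) for a in civs)
    return nn[len(nn) // 2]
```

Sweeping `r` upward and comparing against a run with the same number of uniformly re-sampled civilization locations reproduces the qualitative effect described: panspermia clustering shrinks the median nearest-neighbour distance beyond what density alone predicts.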

## Technological correlations

The technological terms $f_c$ and $L$ can also show spatial patterns, if civilizations spread out from their origin.

The basic colonization argument by Hart and Tipler assumes a civilization will quickly spread out to fill the galaxy; at this point $N\approx 10^{10}$ if we count inhabited systems. If we include intergalactic colonization, then in due time everything out to a reachability radius on the order of 4 gigaparsecs (for near-$c$ probes) or 1.24 gigaparsecs (for 50% $c$ probes) will be filled. Within this domain it is plausible that the civilization could maintain whatever spatio-temporal correlations it wishes, from perfect homogeneity over the zoo hypothesis to arbitrary complexity. However, the reachability limit is due to physics and does impose a pretty powerful constraint: any correlation in the Drake equation due to a cause at some point in space-time will be smaller than the reachability horizon (as measured in comoving coordinates) for that point.

Total colonization is still compatible with an empty galaxy if $L$ is short enough. Galaxies could be dominated by a sequence of “empires” that disappear after some time, and if the product between empire emergence rate $\lambda$ and $L$ is small enough most eras will be empty.

A related model is Brin’s resource exhaustion model, where civilizations spread at some velocity but also deplete their environment at some (random) rate. The result is a spreading shell with an empty interior. This has some similarities to Hanson’s “burning the cosmic commons” scenario, although Brin is mostly thinking in terms of planetary ecology and Hanson in terms of any available resources: the Hanson scenario may be a single-shot situation. In Brin’s model “nursery worlds” eventually recover and may produce another wave. The width of the wave is proportional to $Lv$ where $v$ is the expansion speed; if there is a recovery parameter $R$ corresponding to the time before new waves can emerge, we should hence expect a spatial correlation length of order $(L+R)v$. For light-speed expansion and a megayear recovery (typical ecology and fast evolutionary timescale) we would get a length of a million light-years.

Another approach is the percolation-theory-inspired models first proposed by Landis. Here civilizations spread short distances, and “barren” offshoots that do not colonize form a random “bark” around the network of colonization (or civilizations are limited to flights shorter than some distance). If the percolation parameter $p$ is low, civilizations will only spread to a small nearby region. As it increases, larger and larger networks are colonized (forming a fractal structure), until a critical value $p_c$ where the network explodes and reaches nearly anywhere. However, even above this transition there are voids of uncolonized worlds. The correlation length famously scales as $\xi \propto|p-p_c|^\nu$, where $\nu \approx -0.89$ for this case. The probability of a random site belonging to the infinite cluster for $p>p_c$ scales as $P(p) \propto |p-p_c|^\beta$ ($\beta \approx 0.472$) and the mean cluster size (excluding the infinite cluster) scales as $\propto |p-p_c|^\gamma$ ($\gamma \approx -1.725$).

So in this group of models, if the probability of a site producing a civilization is $\lambda$ the probability of encountering another civilization in one’s cluster is

$Q(p) = 1-(1-\lambda)^{N|p-p_c|^\gamma}$

for $p < p_c$. Above the threshold it is essentially 1; there is a small probability $1-P(p)$ of being inside a small cluster, but it tends to be minuscule. Given the silence in the sky, were a percolation model the situation we should conclude either an extremely low $\lambda$ or a low $p$.
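A sketch of this quantity (the $p_c$ used is the 3D cubic-lattice site percolation threshold; $\lambda$ and $N$ are illustrative):

```python
def prob_neighbor_in_cluster(p, p_c=0.3116, lam=1e-6, N=1e4, gamma=-1.725):
    """Below threshold: Q(p) = 1 - (1 - lam)**(N * |p - p_c|**gamma),
    using the mean-cluster-size scaling from the text."""
    if p >= p_c:
        return 1.0  # essentially certain above the percolation threshold
    cluster = N * abs(p - p_c) ** gamma
    return 1.0 - (1.0 - lam) ** cluster
```

Since $\gamma$ is negative, the effective cluster size blows up as $p \to p_c$ and $Q$ climbs towards 1, while far below threshold even a modest $\lambda$ leaves one almost certainly alone in one's cluster.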

## Temporal correlations

Another way the Drake equation can become misleading is if the parameters are time-varying. Most obviously, the star formation rate has changed over time. The metallicity of stars has changed, and we should expect any galactic life zones to shift due to this.

One interesting model due to James Annis and Milan Cirkovic is that the rate of gamma ray bursts and other energetic disasters made complex life unlikely in the past, but the rate has now declined enough that complex life can start the climb towards intelligence – and its emergence was synchronized by this shared background. Such disasters can also produce spatial coherency, although it is very noisy.

In my opinion the most important temporal issue is inherent in the Drake equation itself. It assumes a steady state! At the left we get new stars arriving at a rate $N_*$, and at the right the rate gets multiplied by the longevity term for civilizations $L$, producing a dimensionless number. Technically we can plug in a trillion years for the longevity term and get something that looks like a real estimate of a teeming galaxy, but this actually breaks the model assumptions. If civilizations survived for trillions of years, the number of civilizations would currently be increasing linearly (from zero at the time of the formation of the galaxy) – none would have gone extinct yet. Hence we can know that in order to use the unmodified Drake equation $L$ has to be $< 10^{10}$ years.

Making a temporal Drake equation is not impossible. A simple variant would be something like

$\frac{dN(t)}{dt}=N_*(t)f_p(t)n_e(t)f_l(t)f_i(t)f_c(t)-(1/L)N$

where the first term is just the factors of the vanilla equation regarded as time-varying functions and the second term is a decay corresponding to civilizations dropping out at a rate of $1/L$ (this assumes exponentially distributed survival, a potentially doubtful assumption). The steady state corresponds to the standard Drake level, and is approached with a time constant of $L$. One nice thing with this equation is that given a particular civilization birth rate $\beta(t)$ corresponding to the first term, we get an expression for the current state:

$N(t_{now}) = \int_{t_{bigbang}}^{t_{now}} \beta(t) e^{-(1/L) (t_{now}-t)} dt$.

Note how any spike in $\beta(t)$ gets smoothed by the exponential, which sets the temporal correlation length.
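A numerical sketch of this convolution, showing how a burst of civilization births gets smeared out (units and the burst shape are illustrative):

```python
import math

def n_now(beta, t_now, L, t0=0.0, steps=10_000):
    """Trapezoidal integration of N(t_now) = integral of
    beta(t) * exp(-(t_now - t)/L) dt from t0 to t_now."""
    dt = (t_now - t0) / steps
    total = 0.0
    for i in range(steps + 1):
        t = t0 + i * dt
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * beta(t) * math.exp(-(t_now - t) / L) * dt
    return total

# A burst of civilization births around t = 5 (say, gigayears), width 0.5:
burst = lambda t: math.exp(-(((t - 5.0) / 0.5) ** 2))
# Long-lived civilizations (large L) still remember the burst at t = 13.8;
# short-lived ones have almost all dropped out by then.
```

For constant $\beta$ the integral approaches $\beta L$, recovering the steady-state Drake level; for the burst, the surviving population at $t_{now}$ falls off as $e^{-(t_{now}-t_{burst})/L}$.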

If we want to do things even more carefully, we can have several coupled equations corresponding to star formation, planet formation, life formation, biosphere survival, and intelligence emergence. However, at this point we will likely want to make a proper “demographic” model that assumes stars, biospheres and civilizations have particular lifetimes rather than random disappearance. At this point it becomes possible to include civilizations with different $L$, like Sagan’s proposal that the majority of civilizations have short $L$ but some have very long futures.

The overall effect is still a set of correlation timescales set by astrophysics (star and planet formation rates), biology (life emergence and evolution timescales, possibly the appearance of panspermias), and civilization timescales (emergence, spread and decay). The result is dominated by the slowest timescale (presumably star formation or very long-lasting civilizations).

## Conclusions

Overall, the independence of the terms of the Drake equation is likely fairly strong. However, there are relevant size scales to consider.

• Over multiple gigaparsec scales there can not be any correlations, not even artificially induced ones, because of limitations due to the expansion of the universe (unless there are super-early or FTL civilizations).
• Over hundreds of megaparsec scales the universe is fairly uniform, so any natural influences will be randomized beyond this scale.
• Colonization waves in Brin’s model could reach the galactic cluster scale, but this is somewhat parameter dependent.
• The nearest civilization can be expected at around $d \approx 0.55 [N_* f_p n_e f_l f_i f_c L / V]^{-1/3}$, where $V$ is the galactic volume. If we are considering parameters such that the number of civilizations per galaxy is low, $V$ needs to be increased and the density will go down significantly (by a factor of about 100), leading to a modest jump in expected distance.
• Panspermias, if they exist, will have an upper extent limited by escape from galaxies – they will tend to have galactic scales or smaller. The same is true for galactic habitable zones if they exist. Percolation colonization models are limited to galaxies (or even dense parts of galaxies) and would hence have scales in the kiloparsec range.
• “Scars” due to gamma ray bursts and other energetic events are below kiloparsecs.
• The lower size limit of panspermias is set by $d$ being smaller than the panspermia, presumably at least in the parsec range. This is also the scale of close clusters of stars in percolation models.
• Time-wise, the temporal correlation length is likely on the gigayear timescale, dominated by stellar processes or advanced civilization survival. The exception may be colonization waves modifying conditions radically.

In the end, none of these factors appear to cause massive correlations in the Drake equation. Personally, I would guess the most likely cause of an observed strong correlation between different terms would be artificial: a space-faring civilization changing the universe in some way (seeding life, wiping out competitors, converting it to something better…).

# What is the natural timescale for making a Dyson shell?

KIC 8462852 (“Tabby’s Star”) continues to confuse. I blogged earlier about why I doubt it is a Dyson sphere. SETI observations in radio and optical have not produced any detections. Now there is evidence that it has dimmed over a century timespan, something hard to square with the comet explanation. Phil Plait over at Bad Astronomy has a nice overview of the headscratching.

However, he said something that I strongly disagree with:

Now, again, let me be clear. I am NOT saying aliens here. But, I’d be remiss if I didn’t note that this general fading is sort of what you’d expect if aliens were building a Dyson swarm. As they construct more of the panels orbiting the star, they block more of its light bit by bit, so a distant observer sees the star fade over time.

However, this doesn’t work well either. … Also, blocking that much of the star over a century would mean they’d have to be cranking out solar panels.

Basically, he is saying that a century timescale for constructing a Dyson shell is unlikely. Now, since I have argued that we could make a Dyson shell in about 40 years, I disagree. I got into a Twitter debate with Karim Jebari (@KarimJebari) about this, where he too questioned what the natural timescale for Dyson construction is. So here is a slightly longer-than-Twitter exposition of my model.

## Lower bound

There is a strict lower bound set by how long it takes for the star to produce enough energy to overcome the binding energy of the source bodies (assuming one already has more than enough collector area). This is on the order of days for terrestrial planets, as per Robert Bradbury’s original calculations.

## Basic model

Starting with a small system that builds more copies of itself, solar collectors and mining equipment, one can get exponential growth.

A simple way of reasoning: if you have an area $A(t)$ of solar collectors, you will have power $kA(t)$ to play with, where $k$ is the power collected per square meter. This will be used to lift and transform matter into more collectors. If we assume this takes $x$ Joules per square meter of new collector on average, we get $A'(t) = (k/x)A(t)$, which makes $A(t)$ an exponential function with time constant $x/k$. If a finished Dyson shell has area $A_D\approx 2.8\cdot 10^{23}$ square meters and we start with an initial plant of size $A(0)$ (say on the order of a few hundred square meters), then the total time to completion is $t = (x/k)\ln(A_D/A(0))$ seconds. The logarithmic factor is about 50.

If we assume $k \approx 3\cdot 10^2$ W per square meter and $x \approx 40.15$ MJ per square meter (see numerics below), then $t=78$ days.
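Plugging the assumed numbers into the model (all values from the text; the seed plant size is illustrative):

```python
import math

# Back-of-envelope check of the exponential construction model.
k = 3e2        # W collected per square meter of collector (assumed)
x = 40.15e6    # J per square meter of new collector, for a 1 kg/m^2 shell
A_D = 2.8e23   # m^2, area of a full Dyson shell at 1 AU
A_0 = 100.0    # m^2, initial seed plant (illustrative)

tau = x / k                      # e-folding time constant, seconds
t = tau * math.log(A_D / A_0)    # total time to completion
print(tau / 3600, "hours per e-folding")
print(t / 86400, "days to completion")
```

This reproduces both the ~37-hour time constant and the ~78-day completion time quoted in the text.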

This is very much in line with Robert’s original calculations. He pointed out that given the sun’s power output Earth could theoretically be disassembled in 22 days. In the above calculation the time constant (the time it takes to get $e \approx 2.7$ times as much area) is 37 hours. So for most of the 78 days there is just a small system expanding, not making a significant dent in the planet nor being very visible over interstellar distances; only in the later part of the period will it start to have radical impact.

The timescale is robust to the above assumptions: sun-like main sequence stars have luminosities within an order of magnitude of the sun (so $k$ can only change by a factor of 10); using asteroid material (no gravitational binding cost) brings down $x$ by a factor of 10; if the material needs to be vaporized, $x$ increases by less than a factor of 10; if a sizeable fraction of the matter is needed for mining/transport/building systems, $x$ goes up proportionally; much thinner shells (see below) may give a three orders of magnitude smaller $x$ (and hence bump into the hard bound above). So the conclusion is that in this model the natural timescale of terrestrial planetary disassembly into Dyson shells is on the order of months.

Digging into the practicalities of course shows that there are some other issues. Material needs to be transported into place (natural timescale about a year for moving something 1 AU), the heating effects on the planet being disassembled are going to be major (lots of energy flow there, but boiling it into space and capturing the condensing dust is a pretty good lifting method), the time it takes to convert 1 kg of undifferentiated matter into something useful places a limit on the mass flow per converting device, and so on. This is why our conservative estimate was 40 years for a Mercury-based shell: we assumed a pretty slow transport system.

### Numerical values

Estimate for $x$: assuming that each square meter of shell has mass 1 kg, the energy cost comes from the mean gravitational binding energy of Earth per kg of mass (37.5 MJ/kg), plus processing energy (on the order of 2.65 MJ/kg for heating and melting silicon). Note that using Earth material slows things significantly.

I had a conversation with Eric Drexler today, where he pointed out that assuming 1 kg per square meter for the shell is arbitrary. There is a particular area density that is special: given that solar gravity and light pressure both decline with the square of the distance, there exists a particular density $\rho=L_{sun}/(4 \pi c G M_{sun})\approx 0.78$ grams per square meter at which a shell element will just hang there neutrally. Heavier shells will need to orbit to remain where they are; lighter shells need cables or extra weight to not blow away. This might hence be a natural density for shells, making $x$ a factor of 1282 smaller.
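The balance condition gives this density directly, since both radiation pressure and gravity fall off as $1/r^2$ (constants are standard values):

```python
import math

# Area density at which radiation pressure on a shell element
# exactly balances solar gravity, independent of distance:
L_sun = 3.846e26   # W, solar luminosity
M_sun = 1.989e30   # kg, solar mass
G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s

rho = L_sun / (4 * math.pi * c * G * M_sun)   # kg per square meter
print(rho * 1000, "grams per square meter")
```

Dividing the 1 kg/m² assumption by this density gives the factor of roughly 1282 mentioned above.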

## Linear growth does not work

I think the key implicit assumption in Plait’s thought above is that he imagines some kind of alien factory churning out shell. If it produces shell at a constant rate $A'$, then the time until it has produced a finished Dyson shell with area $A_D\approx 2.8\cdot 10^{23}$ square meters is $A_D/A'$ seconds.

Current solar cell factories produce on the order of a few hundred MW of solar cells per year; assuming each makes about 2 million square meters per year, a single factory would need 140 million billion years. Making a million factories merely brings things down to 140 billion years. To get a century-scale dimming time we would need $A' \approx 8.9\cdot 10^{13}$ square meters per second, about the area of the Atlantic Ocean.
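The arithmetic behind these numbers (the per-factory output is the assumed figure from the text):

```python
A_D = 2.8e23           # m^2, Dyson shell area
per_factory = 2e6      # m^2 of solar cells per factory per year (assumed)
year = 365.25 * 86400  # seconds per year

years_one_factory = A_D / per_factory     # one factory: ~1.4e17 years
years_million = years_one_factory / 1e6   # a million factories: ~1.4e11 years
rate_for_century = A_D / (100 * year)     # m^2/s needed for a century build
```

The Atlantic has an area of roughly $8.5\cdot 10^{13}$ square meters, so the century-build rate amounts to paving about one Atlantic per second.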

This feels absurd. Which is no good reason for discounting the possibility.

## Automation makes the absurd normal

As we argued in our paper, the key assumptions are: (1) things we can do can be automated, so that if there are more machines doing it (or doing it faster) more will be done; (2) we have historically been good at doing things that already occur in nature; (3) self-replication and autonomous action occur in nature. Assumptions 2 and 3 suggest that exponentially growing technologies are possible, where a myriad of entities work in parallel, and assumption 1 suggests that this allows functions such as manufacturing to be scaled up as far as the growth goes. As Kardashev pointed out, there is no reason to think there is any particular size scale for the activities of a civilization except as set by resources and communication.

Incidentally, automation is also why cost overruns or lack of will may not matter so much for this kind of megascale project. The reason Intel and AMD can reliably make billions of processors containing billions of transistors each is that everything is automated. Making the blueprint and fab pipeline is highly complex and requires an impressive degree of skill (this is where most overruns and delays happen), but once it is done, production can just go on indefinitely. The same thing is true of Dyson-making replicators. The first one may be a tough problem that takes time to achieve, but once it is up and running it is autonomous and merely requires some degree of watching (make sure it only picks apart the planets you don’t want!). There is no requirement of continued interest in its operations to keep them going.

## Likely growth rates

But is energy-limited exponential growth really the natural growth rate? As Karim and others have suggested, maybe the aliens are lazy or taking their time? Or, conversely, maybe multi-century projects are unusual long-term commitments and hence rare.

Obviously projects could occur at any possible speed: if something can be constructed in time X, it can generally be done half as fast; and if you can construct something of size X, you can build half of it. But not every speed or size is natural. We do not explain why a forest or the Great Barrier Reef has the size it does by saying that cost overruns stopped further growth, or that it will eventually grow to arbitrary size at an imperceptible rate. The spread of a wildfire is largely set by physical factors, and a wildfire will soon approach its maximum allowed speed, since parts of the fire that do not spread will be overtaken by parts that do. The same is true for species colonizing new ecological niches or businesses finding new markets. They can run slow; it is just that typically they seem to move as fast as they can.

Human economic growth has been on the order of 2% per year over very long historical periods. That implies a time constant of $1/\ln(1.02)\approx 50$ years. This is a “stylized fact” that has remained roughly true despite very different technologies, cultures, attempts at boosting it, etc. It seems to be “natural” for human economies. So were a Dyson shell built as part of a human economy, we might expect it to be completed in 250 years.

What about biological reproduction rates? Merkle and Freitas list the replication times for various organisms and machines. They cover almost 25 orders of magnitude, but seem to roughly scale as $\tau \approx c M^{1/4}$, where $M$ is the mass in kilograms and $c\approx 10^7$. So if a total mass $M_T$ needs to be converted into replicators of mass $M$, it will take time $t=\tau\ln(M_T)/\ln(2)$. Plugging in the scaling law gives $t=c M^{1/4} \ln(M_T)/\ln(2)$. The smallest independent replicators have $M_s=10^{-15}$ kg (giving $\tau_s=10^{3.25}$ seconds, about 29 minutes) while a big factory-like replicator (or a tree!) would have $M_b=10^5$ kg ($\tau_b=10^{8.25}$ seconds, about 5.6 years). In turn, if we set $M_T=A_D\rho=2.18\cdot 10^{20}$ kg (a “light” Dyson shell) the construction time ranges from 32 hours for the tiny replicators to 378 years for the heavy ones. Setting $M_T$ to an Earth mass gives a range from 36 hours to 408 years.
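The scaling law as used in the text can be checked directly (note that the doubling count is taken as $\ln(M_T)/\ln 2$ with masses in kilograms, following the paragraph above):

```python
import math

def construction_time(M, M_T, c=1e7):
    """Replicator of mass M kg doubles every tau = c * M**0.25 seconds;
    total construction time for mass M_T as given in the text."""
    tau = c * M ** 0.25
    return tau * math.log(M_T) / math.log(2)

M_T = 2.18e20  # kg, a "light" Dyson shell at 0.78 g/m^2
hours = construction_time(1e-15, M_T) / 3600            # tiny replicators
years = construction_time(1e5, M_T) / (365.25 * 86400)  # factory-sized ones
```

This recovers the roughly 32-hour-to-378-year span quoted for the light shell.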

The lower end is infeasible, since this model assumes enough input material and energy – the explosive growth of bacteria-like replicators is not possible if there is not enough energy to lift matter out of gravity wells. But it is telling that the upper end of the range is merely multi-century. This makes a century dimming actually reasonable if we think we are seeing the last stages (remember, most of the construction time the star will be looking totally normal); however, as I argued in my previous post, the likelihood of seeing this period in a random star being englobed is rather low. So if you want to claim it takes millennia or more to build a Dyson shell, you need to assume replicators that are very large and heavy.

[Also note that some of the technological systems discussed in Merkle & Freitas are significantly faster than the main branch. Also, this discussion has talked about general replicators able to make all their parts: if subsystems specialize they can become significantly faster than more general constructors. Hence we have reason to think that the upper end is conservative.]

## Conclusion

There is a lower limit on how fast a Dyson shell can be built, likely on the order of hours for manufacturing plus a year for dispersion. Replicator sizes smaller than a hundred tons imply a construction time of at most a few centuries. This range includes the effects of existing biological and economic growth rates. We hence have good reason to think most Dyson construction is fast compared to astronomical timescales, and that catching a star being englobed is pretty unlikely.

I think that models involving slowly growing Dyson spheres require more motivation than models where they are closer to the limits of growth.

# Messages on plaques and disks

## Representing ourselves

If we wanted to represent humanity most honestly to aliens, we would just give them a constantly updated full documentation of our cultures and knowledge. But that is not possible.

So in METI we may consider sending “a copy of the internet” as a massive snapshot of what we currently are, or, as the Voyager record did, send a sample of what we are. In both cases it is a snapshot at a particular time: had we sent the message at some other time, the contents would have been different. The selection used is also a powerful shaper, with what is chosen as representative telling a particular story.

That we send a snapshot is not just a necessity, it may be a virtue. The full representation of what humanity is, is not so much a message as a gift with potentially tricky moral implications: imagine if we were given the record of an alien species, clearly sent with the intention that we ought to handle it according to some – to us unknowable – preferences. If we want to do some simple communication, essentially sending a postcard-like “here we are! This is what we think we are!” is the best we can do. A thick and complex message would obscure the actual meaning:

The spacecraft will be encountered and the record played only if there are advanced space-faring civilizations in interstellar space. But the launching of this ‘bottle’ into the cosmic ‘ocean’ says something very hopeful about life on this planet.
– Carl Sagan

It is a time capsule we send because we hope to survive and matter. If it becomes an epitaph of our species it is a decent epitaph. Anybody receiving it is a bonus.

## Temporal preferences

Clearly we want the message to persist, maybe be detected, and ideally understood. We do not want the message to be distorted by random chance (if it can be avoided) or by independent actors.

This is why I am not too keen on sending an addendum. One can change the meaning of a message with a small addition: “Haha, just kidding!” or “We were such tools in the 1970s!”

Note that we have a present desire for a message (possibly the original) to reach the stars, but the launchers in 1977 clearly wanted their message to reach the stars: their preferences were clearly linked to what they selected. I think we have a moral duty to respect past preferences for information. I have expressed it elsewhere as a temporal golden rule: “treat the past as you want the future to treat you”. We would not want our message or amendments changed, so we better be careful about past messages.

However, adding a careful footnote is not necessarily wrong. But it needs to be in the spirit of the past message, adding to it.

So what kind of update would be useful?

We might want to add something that we have learned since the launch that aliens ought to know. For example, an important discovery. But this needs to be something that advanced aliens are unlikely to already know, which is tricky: they likely already know about dark matter, that geopolitical orders can suddenly shift, and a proof of the Poincaré conjecture.

Any addition has to be contingent, unique to humanity, and ideally universally significant. Few things are. Maybe that leaves us with adding the notes for some new catchy melody (“Gangnam Style” or “Macarena”?) or a really neat mathematical insight (the PCP theorem? Oops, it looks like Andrew Wiles’s Fermat proof is too large for the probe).

In the end, perhaps just a “Still here, 38 years later” is the best addition. It is contingent, human, and gives some data on the survival of intelligence in the universe.

# Likely not even a microDyson

Right now KIC 8462852 is really hot, and not just because it is an F3 V/IV-type star: the light curve, as measured by Kepler, has irregular dips that look like something (or rather, several somethings) is obscuring the star. The shapes of the dips are odd. The system is too old and IR-clean to have a remaining protoplanetary disk, dust clumps would coalesce, and the aftermath of a giant planet impact is very unlikely (and hard to fit with the aperiodicity); maybe there is a storm of comets due to a recent stellar encounter, but comets are not very good at obscuring stars. So a lot of people on the net are quietly or not so quietly thinking that just maybe this is a Dyson sphere under construction.

I doubt it.

My basic argument is this: if a civilization builds a Dyson sphere it is unlikely to remain partially built for long. Just as planetary collisions are so rare that we should not expect to see any in the Kepler field, the time it takes to make a Dyson sphere is short compared to a star’s lifetime: catching it during construction is very unlikely.

# Fast enshrouding

In my and Stuart Armstrong’s paper “Eternity in Six Hours” we calculated that disassembling Mercury to make a partial Dyson shell could be done in 31 years. We did not try to push things here: our aim was to show that using a small fraction of the resources in the solar system it is possible to harness enough energy to launch a massive space colonization effort (literally reaching every reachable galaxy, eventually each solar system). Energy from the solar collectors already in place is used to mine and launch more material, producing an exponential feedback loop. This was originally discussed by Robert Bradbury. The time to disassemble the terrestrial planets is not much longer than for Mercury, while the gas giants would take a few centuries.
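The key property of such a feedback loop is that the disassembly time is only logarithmic in the planet’s mass. A toy model makes this concrete (the seed mass and growth rate here are illustrative assumptions, not the actual parameters from the paper):

```python
import math

# Toy exponential-disassembly model. The seed mass and growth rate are
# illustrative assumptions, not the parameters from "Eternity in Six Hours".
M_mercury = 3.3e23   # kg, approximate mass of Mercury
M_seed = 1.0e6       # kg, assumed initial mining/launch equipment
growth_rate = 1.0    # per year: e-folding of deployed mass roughly yearly

# With exponential growth M(t) = M_seed * exp(growth_rate * t), the time
# to process the whole planet is logarithmic in the mass ratio:
t_disassemble = math.log(M_mercury / M_seed) / growth_rate
print(f"{t_disassemble:.0f} years")  # ~40 years, the same order as the paper's 31
```

The point is not the exact number but the scaling: adding seventeen orders of magnitude of mass only multiplies the time by seventeen e-foldings.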

If we imagine the history of an F3 star, 1,000 years is not much. Given the estimated mass of KIC 8462852 of 1.46 solar masses, it will have a main-sequence lifespan of about 4.1 billion years. The chance of catching it during a roughly 1,000-year enshrouding is about one in four million. This is the same problem as for the giant impact theory.
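The arithmetic is trivial but worth writing out (the exact figure depends on the assumed construction window):

```python
# Chance of observing a Dyson shell under construction: the fraction of the
# star's main-sequence lifetime taken up by the (assumed) construction window.
lifespan_years = 4.1e9        # main-sequence lifespan of a ~1.46 solar-mass star
construction_years = 1.0e3    # assumed enshrouding time

p_catch = construction_years / lifespan_years
print(f"1 in {1 / p_catch:.2g}")  # 1 in 4.1e+06
```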

# A ruin?

An abandoned Dyson shell would likely start clumping together; this might at first sound like a promising – if depressing – explanation of the observation. But the timescale is likely faster than the planetary formation timescale of $10^5$–$10^6$ years, since the pieces are in nearly identical orbits, so the probability problem remains.

Still, seeing the decay of the shell is indeed more likely than seeing its construction, by several orders of magnitude: just as normal ruins hang around far longer than the time it took to build the original building.

# Laid-back aliens?

Maybe the aliens are not pushing things? Obviously one can build a Dyson shell very slowly – in a sense we are doing it (and disassembling Earth to a tiny extent!) by launching satellites one by one. So if an alien civilization wanted to grow at a leisurely rate or just needed a bit of Dyson shell they could of course do it.

However, if you need something like $2.87\cdot 10^{19}$ watts (a 100,000 km collector at 1 AU around the star) your demands are not modest. Freeman Dyson originally proposed the concept based on the observation that human energy needs were growing exponentially, making a shell the logical endpoint. Even at a 1% growth rate a civilization quickly – within a few millennia – needs most of the star’s energy.
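The “few millennia” claim follows directly from the exponential. A sketch with round-number assumptions (current human power use of roughly $2\cdot 10^{13}$ W, and the Sun’s luminosity as a stand-in for the star’s output):

```python
import math

# How long does 1% annual growth take to go from current human power use
# to a star's full output? (Round-number assumptions throughout.)
P_now = 2.0e13   # W, rough current human power use
L_star = 3.8e26  # W, solar luminosity as a stand-in for the star
growth = 0.01    # 1% per year

years = math.log(L_star / P_now) / math.log(1 + growth)
print(f"{years:.0f} years")  # about three millennia
```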

In order to get a reasonably high probability of seeing an incomplete shell we need to assume growth rates that are exceedingly small (on the order of less than a millionth per year). While not impossible, it seems rather unlikely, given that the trend in many systems is towards more intense energy use and that entities with higher growth rates tend to dominate a population. Of course, one can argue that we currently can more easily detect the rare laid-back civilizations than the ones that aggressively enshrouded their stars, but Dyson spheres do look pretty rare.
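Where does “less than a millionth per year” come from? For the construction window to cover even 1% of the star’s lifetime, growth over the roughly seventeen orders of magnitude from seed to shell (the same illustrative mass ratio as above) has to be stretched very thin:

```python
import math

# Required growth rate if we demand a 1% chance of catching construction.
lifespan = 4.1e9              # years, main-sequence lifespan from above
p_target = 0.01               # demanded probability of observing construction
window = p_target * lifespan  # ~4.1e7 years of ongoing construction

# Growing ~17 orders of magnitude in deployed mass (illustrative assumption)
# over that window pins down the exponential growth rate:
mass_ratio = 1e17
rate = math.log(mass_ratio) / window
print(f"{rate:.1e} per year")  # below one millionth per year
```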

# Other uses?

Dyson shells are not the only megastructures that could cause intriguing transits.

C. R. McInnes has a suite of fun papers looking at various kinds of light-related megastructures. One can sort asteroid material using light pressure, engineer climate, adjust planetary orbits, and of course travel using solar sails. Most of these are smallish compared to stars (and in many cases dust clouds), but they show some of the utility of obscuring objects.

Duncan Forgan has a paper on detecting stellar engines (Shkadov thrusters) using light curves; unfortunately the calculated curves do not fit KIC 8462852 as far as I can tell.

Luc Arnold analysed the light curves produced by various shapes of artificial objects. He suggested that one could make a weirdly shaped mask for signalling one’s presence using transits. In principle one could make nearly any shape, but for signalling, something unusual yet simple enough to be recognisably artificial would make the most sense: I doubt the KIC transits fit this.

# More research is needed (duh)

In the end, we need more data. I suspect we will find that it is yet another odd natural phenomenon or coincidence. But it makes sense to watch, just in case.

Were we to learn that there is (or was) a technological civilization acting on a grand scale it would be immensely reassuring: we would know intelligent life could survive for at least some sizeable time. This is the opposite side of the Great Filter argument for why we should hope not to see any extraterrestrial life: life without intelligence is evidence for intelligence being either rare or transient, while somewhat non-transient intelligence in our backyard (just 1,500 light-years away!) is evidence that it is neither rare nor transient. Which is good news, unless we fancy ourselves as unique and burdened by being stewards of the entire reachable universe.

But I think we will instead learn that the ordinary processes of astrophysics can produce weird transit curves, perhaps due to weird objects (remember when we thought hot Jupiters were exotic?). The universe is full of strange things, which makes me happy I live in it.

[An edited version of this post can be found at The Conversation: What are the odds of an alien megastructure blocking light from a distant star? ]