The Aestivation hypothesis: popular outline and FAQ

Anders Sandberg & Milan Ćirković

Since putting up a preprint for our paper “That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi’s paradox” (Journal of the British Interplanetary Society, in press) we have heard some comments and confusion that suggest to us that it would be useful to try to outline and clarify what our idea is, what we think about it, and some of the implications.

The super-short version of the paper

Maybe we are not seeing alien civilizations because they are all rationally “sleeping” in the current early cosmological era, waiting for a remote future when it is more favourable to exploit the resources of the universe. We show that given current observations we can rule out a big chunk of possibilities like this, but not all.

A bit more unpacked explanation

Information processing requires physical resources: not just computers or brains, but energy to run them. There is a thermodynamic cost to information processing that is temperature dependent: in principle, computation becomes 10 times more energy-efficient if your computer is 10 times colder (measured in kelvins). Right now the cosmic background radiation makes nearly everything in the universe hotter than 3 K, but as the universe expands this background temperature will decline exponentially. So if you want to do as much information processing as possible with the energy you have, it makes sense to wait: the deal gets exponentially better. Eventually, in a few trillion years, the background temperature bottoms out because of horizon radiation; at that point it no longer makes sense to keep waiting.

Hence, an advanced civilization may have explored a big chunk of the universe, done what is doable with existing nature, and now mostly has internal “cultural” things to do. These things can be regarded as information processing. If they want to maximize processing they should not do it today, but wait until the cold future when they will get tremendously more done (10^{30} times more!). They should hence aestivate, leaving their domain protected by some automation until they wake up.
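
As a rough back-of-the-envelope illustration (my sketch here, not the paper's detailed model): the Landauer cost per erased bit is kT\ln(2), so the number of bit erasures a fixed energy budget buys scales as 1/T, and the gain from waiting is just the ratio of today's background temperature to the final horizon temperature. The asymptotic Hubble rate below is an assumed round value set by the measured dark energy density.

```python
import math

# Physical constants (SI)
k_B = 1.380649e-23      # Boltzmann constant, J/K
hbar = 1.054571817e-34  # reduced Planck constant, J*s

# Asymptotic de Sitter horizon temperature T = hbar*H/(2*pi*k_B);
# H_inf ~ 1.8e-18 1/s is an assumed asymptotic Hubble rate from dark energy.
H_inf = 1.8e-18
T_horizon = hbar * H_inf / (2 * math.pi * k_B)

T_now = 2.725  # current CMB temperature, K

# Erasures per joule scale as 1/T, so waiting multiplies your computation by:
print(f"Horizon temperature: {T_horizon:.1e} K")             # ~2e-30 K
print(f"Gain factor from waiting: {T_now / T_horizon:.1e}")  # ~1e30, as in the text
```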

If this is correct, there might be old and powerful civilizations around that are hard to observe, not because they are deliberately hiding but because they are inactive for the time being.

However, were this hypothesis true, they would not want to lose their stuff. We should expect to see fewer processes that reduce resources that could be useful in the far future. In the paper we look at processes that might waste such resources: stars converting mass into energy that is lost, stars imploding into black holes, galactic winds losing gas into intergalactic space, galaxy collisions, and galaxy clusters getting separated by the expansion of the universe. Current observations do not seem to indicate anything preventing these processes (and most interventions would be very visible).

Hence, either:

  1. the answer to the Fermi question “where are they?!” is something else (like there being no aliens),
  2. advanced civilizations aestivate but do so with only modest hoards of resources rather than entire superclusters,
  3. they are mostly interested in spreading far and wide, since this gives access to a lot of stuff for much less effort than retaining it would require.

Necessary assumptions

The aestivation hypothesis makes the following assumptions:

  1. There are civilizations that mature much earlier than humanity. (not too implausible, given that Earth is somewhat late compared to other planets)
  2. These civilizations can expand over sizeable volumes, gaining power over their contents. (we have argued that this is doable)
  3. These civilizations have solved their coordination problems. (otherwise it would be hard to jointly aestivate; assumption likelihood hard to judge)
  4. A civilization can retain control over its volume against other civilizations. (otherwise it would need to actively defend its turf in the present era and could not aestivate; likelihood hard to judge)
  5. The fraction of mature civilizations that aestivate is non-zero. (if it is rational at least some will try)
  6. Aestivation is largely invisible. (seems likely, since there would be nearly no energy release)

Have you solved the Fermi question?

We are not claiming we now know the answer to the Fermi question. Rather, we have a way of ruling out some possibilities, and a few new possibilities worth looking for (like galaxies with inhibited heavy star formation).

Do you really believe in it?

I (Anders) personally think the likeliest reason we are not seeing aliens is not that they are aestivating, but just that they do not exist or are very far away.

We have an upcoming paper giving some reasons for this belief. The short of it is that we are very uncertain about the probability of life and intelligence given the current state of scientific knowledge. They could be exceedingly low, and this means we have to assign a fairly high credence to the empty universe hypothesis. If that hypothesis is not true, then aestivation is a pretty plausible answer in my personal opinion.

Why write about a hypothesis you do not think is the most likely one? Because we need to cover as much of possibility space as possible, and the aestivation hypothesis is neatly suggested by considerations of the thermodynamics of computation and physical eschatology. We have been looking at other unlikely Fermi hypotheses, like the berserker hypothesis, to see if we can give good constraints on them (in that case, our existence plus some ecological instability problems make berserkers unlikely).

What is the point?

Understanding the potential and limits of intelligence in the universe tells us things about our own chances and potential future.

At the very least, this paper shows what a future advanced human-derived civilization may try to achieve, and some of the ultimate limits on far-future information processing. It gives some new numbers to feed into Nick Bostrom’s astronomical waste argument for working very hard on reducing existential risk in the present: the potential future is huge.

In regards to alien civilizations, the paper maps a part of possibility space, showing what is required for this Fermi paradox explanation to work as an explanation. It helps cut down on the possibilities a fair bit.

What about the Great Filter?

We know there has to be at least one unlikely step between non-living matter and easily observable technological civilizations (“the Great Filter”); otherwise the sky would be full of them. If it is an early filter (life or intelligence is rare) we may be fairly alone, but our future is open; were the filter a later step, we should expect to be doomed.

The aestivation hypothesis doesn’t tell us much about the filter. It offers a way of explaining away the quiet sky without the absence of aliens, so without knowing whether it is true we do not learn anything from the silence. The lack of megascale engineering is evidence against certain kinds of alien goals and activities, but rather weak evidence.

Meaning of life

Depending on what you are trying to achieve, different long-term strategies make sense. This is another way SETI may tell us something interesting about the Big Questions by showing what advanced species are doing (or not):

If the ultimate value you aim for is local, such as having as many happy minds as possible, then you want to spread very far and wide, even though the galaxy clusters you have settled will eventually drift apart and be forever separated. The total value doesn’t depend on all those happy minds talking to each other. Here the total amount of value is presumably proportional to the amount of stuff you have gathered times how long it can produce valuable thoughts. Aestivation makes sense, and you want to spread far and wide before doing it.

If the ultimate value you aim for is nonlocal, such as having your civilization produce the deepest possible philosophy, then all parts need to stay in touch with each other. This means that expanding outside a gravitationally bound supercluster is pointless: your expansion will halt at this point. We can be fairly certain there are no advanced civilizations trying to scrape together larger superclusters since it would be very visible.

If the ultimate value you aim for is finite, then at some point you may be done: you have made the perfect artwork or played all the possible chess games. Such a civilization only needs resources enough to achieve the goal, and then presumably will shut down. If the goal is small it might do this without aestivating, while if it is large it may aestivate with a finite hoard.

If the ultimate goal is modest, like enjoying your planetary utopia, then you will not affect the large-scale universe (although launching intergalactic colonization may still be good for security, leading to a nonlocal instrumental goal). Modest civilizations do not affect the overall fate of the universe.

Can we test it?

Yes! The obvious way is to carefully look for odd processes keeping the universe from losing potentially useful raw materials. The suggestions in the paper give some ideas, but there are doubtless other things to look for.

Also, aestivators would protect themselves from late-evolving species that could steal their stuff. If we were to start building self-replicating von Neumann probes in the future, then if there are aestivators around they had better stop us. This hypothesis test may of course be rather dangerous…

Isn’t there more to life than information processing?

Information is “a difference that makes a difference”: information processing is just going from one distinguishable state to another in a meaningful way. This covers not just computing with numbers and text, but having one brain state follow another, doing economic transactions, and creating art. Falling in love means that a mind goes from one state to another in a very complex way. Maybe the important subjective aspect is something very different from brain states, but unless you think it is possible to fall in love without the brain changing state, there will be an information processing element to it. And that information processing is bound by the laws of thermodynamics.

Some theories of value place importance on how or that something is done, rather than on the consequences or intentions (which can be viewed as information states): maybe a perfect Zen action holds value on its own. If the start and end states are the same, then an infinite number of such actions can be done and an equal amount of value achieved – yet there is no way of telling whether they have ever happened, since there will be no memory of them occurring.

In short, information processing is something we instrumentally need for the mental or practical activities that truly matter.

“Aestivate”?

Like hibernating, but through the summer (Latin aestus = heat; to aestivate = to spend the summer in a dormant state). “Hibernate” (Latin hibernus = wintry) is the more common word, but since this is about avoiding heat we chose the slightly rarer term.

Can’t you put your computer in a fridge?

Yes, it is possible to cool below 3 K. But you need to do work to achieve it, spending precious energy on the cooling. If you want your computing done *now* and do not care about the total amount of computing, this is fine. But if you want as much computing as possible, then fridges are going to waste some of your energy.
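
Here is a minimal sketch of why refrigeration does not help the total computing budget, assuming an ideal Carnot refrigerator (real fridges are strictly worse): the work needed to pump the erasure heat from a cold computer at T_cold out to an environment at T_env exactly cancels the Landauer savings, so the effective cost per bit is set by the environment temperature.

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def cost_per_bit_with_fridge(T_env, T_cold):
    """Total energy per erased bit when computing at T_cold inside an
    ideal Carnot refrigerator that rejects heat to an environment at T_env."""
    q_erase = k_B * T_cold * math.log(2)         # Landauer heat released at T_cold
    w_pump = q_erase * (T_env / T_cold - 1.0)    # Carnot work to reject it to T_env
    return q_erase + w_pump                      # algebraically = k_B * T_env * ln(2)

T_env = 3.0  # background temperature, K
for T_cold in (3.0, 0.3, 0.03):
    print(f"T_cold = {T_cold} K: {cost_per_bit_with_fridge(T_env, T_cold):.3e} J/bit")
# All three print the same k_B*3*ln(2): an ideal fridge buys nothing overall,
# and any irreversibility makes it a net loss.
```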

There are some cool (sorry) possibilities in using very large black holes as heat sinks, since their Hawking temperature is lower than the background radiation. But this will only work for a few hundred billion years; after that the background will be the cooler of the two.
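
For intuition, here is the standard Hawking temperature formula applied to a few illustrative masses (the chosen masses are just examples):

```python
import math

# SI constants
hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # gravitational constant, m^3/(kg*s^2)
k_B = 1.380649e-23      # Boltzmann constant, J/K
M_sun = 1.989e30        # solar mass, kg

def hawking_temperature(M):
    """Hawking temperature (K) of a black hole of mass M (kg)."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

T_cmb = 2.725  # current background temperature, K
for solar_masses in (1.0, 1e6, 1e9):  # stellar-mass, Sgr A*-scale, quasar-scale
    T = hawking_temperature(solar_masses * M_sun)
    print(f"{solar_masses:.0e} M_sun: {T:.2e} K "
          f"({'colder' if T < T_cmb else 'hotter'} than today's background)")
```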

Does the cost of computation have to be temperature dependent?

The short answer is no, but we do not think this matters for our conclusion.

The irreducible energy cost of computation is due to the Landauer limit (this limit or principle has also been ascribed to Brillouin, Shannon, von Neumann and many others): to erase one bit of information you need to pay an energy cost equal to kT\ln(2) or more. Otherwise you could cheat the second law of thermodynamics.
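
To make the numbers concrete, here is a small worked example using only the Landauer formula above:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_joules_per_bit(T):
    """Minimum energy to erase one bit at temperature T (kelvin)."""
    return k_B * T * math.log(2)

# Room temperature, the background today, and a far-future horizon-scale value.
for T in (300.0, 3.0, 1e-30):
    e = landauer_joules_per_bit(T)
    print(f"T = {T:g} K: {e:.2e} J/bit, i.e. {1 / e:.2e} erasures per joule")
```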

However, logically reversible computation can avoid paying this cost by never erasing information. The problem is of course that eventually memory runs out, but Bennett showed that one can then “un-compute” the computation by running it backwards, removing the garbage. The catch is that reversible computation needs to run very close to the average energy of the system (taking a long time), and that error correction is irreversible and temperature dependent. The same is true for quantum computation.

If one has a pool of negentropy, that is, something ordered that can be randomized, then one can “pay” for bit erasure from this pool until it runs out. This is potentially temperature independent! Imagine having access to a huge memory full of zero bits: by swapping a garbage bit for a zero, you can run computations without paying an energy cost (if the swapping is free), since the pool behaves as if it had essentially zero temperature.

If there are natural negentropy pools aestivation is pointless: advanced civilizations would be dumping their entropy there in the present. But as far as we know, there are no such pools. We can make them by ordering matter or energy, but that has a work cost that depends on temperature (or using yet another pool of negentropy).

Space-time as a resource?

Maybe the flatness of space-time is the ultimate negentropy pool, and by wrinkling it up we can get rid of entropy: this is in a sense how the universe has become so complex thanks to matter lumping together. The total entropy due to black holes dwarfs the entropy of normal matter by several orders of magnitude.

Were space-time lumpiness a useful resource we should expect advanced civilizations to dump matter into black holes on a vast scale; this does not seem to be going on.

Lovecraft, wasn’t he, you know… a bit racist?

Yup. Very racist. And fearful of essentially everything in the modern world: globalisation, large societies, changing traditions, technology, and how insights from science make humans look like a small part of the universe rather than the centre of creation. Part of what makes his horror stories interesting is that they are horror stories about modernity and the modern world-view. From a modernist perspective these things are not evil in themselves.

His vision of a vast universe inhabited by incomprehensible alien entities far outside the range of current humanity does fit in with Dysonian SETI and transhumanism: we should not assume we are at the pinnacle of power and understanding, we can look for signs that there are far more advanced civilizations out there (and if there are, we had better figure out how to relate to that fact), and we can aspire to become something like them – which of course would have horrified Lovecraft to no end. Poor man.

Likely not even a microDyson

Right now KIC 8462852 is really hot, and not just because it is an F3 V/IV type star: its light curve, as measured by Kepler, has irregular dips that look like something (or rather, several somethings) is obscuring the star. The shapes of the dips are odd. The system is too old and too IR-clean to have a remaining protoplanetary disk, dust clumps would coalesce, and the aftermath of a giant planet impact is very unlikely (and hard to fit with the aperiodicity); maybe there is a storm of comets due to a recent stellar encounter, but comets are not very good at obscuring stars. So a lot of people on the net are quietly or not so quietly thinking that just maybe this is a Dyson sphere under construction.

I doubt it.

My basic argument is this: if a civilization builds a Dyson sphere, it is unlikely to remain partially built for a long period of time. Just as planetary collisions are so rare and brief that we should not expect to see any in the Kepler field, the time it takes to make a Dyson sphere is very short compared to a stellar lifetime: seeing one during construction is very unlikely.

Fast enshrouding

In my and Stuart Armstrong’s paper “Eternity in Six Hours” we calculated that disassembling Mercury to make a partial Dyson shell could be done in 31 years. We did not try to push things: our aim was to show that using a small fraction of the resources in the solar system it is possible to harness enough energy to launch a massive space colonization effort (literally reaching every reachable galaxy, eventually every solar system). Using energy from the solar collectors already built, more material is mined and launched, producing an exponential feedback loop. This was originally discussed by Robert Bradbury. The time to disassemble the terrestrial planets is not much longer than for Mercury, while the gas giants would take a few centuries.
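
A toy model of that feedback loop (my own illustrative sketch, not the paper's calculation; the seed mass and doubling times are made-up parameters): if installed collector capacity doubles at a fixed rate and throughput is proportional to capacity, the total disassembly time is only logarithmic in the mass to be processed.

```python
import math

M_mercury = 3.3e23  # kg, mass of Mercury
M_seed = 1e5        # kg, assumed initial seed factory mass (hypothetical)

def disassembly_time(doubling_time_years):
    """Years to process M_mercury if capacity doubles every doubling_time_years
    and the processing rate is proportional to installed capacity."""
    doublings = math.log2(M_mercury / M_seed)  # ~61 doublings
    return doublings * doubling_time_years

for t_double in (0.5, 1.0, 5.0):  # hypothetical doubling times, years
    print(f"doubling time {t_double} yr -> ~{disassembly_time(t_double):.0f} years")
# Even dozens of doublings from a tiny seed finish quickly; the paper's more
# careful treatment of launch energetics gives the ~31-year figure.
```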

If we imagine the history of an F-type star, 1,000 years is not much. Given the estimated mass of KIC 8462852, 1.46 solar masses, it will have a main sequence lifespan of 4.1 billion years. The chance of seeing it while it is being enshrouded is about one in 4.3 million. This is the same problem as for the giant impact theory.

A ruin?

An abandoned Dyson shell would likely start clumping together; this might at first sound like a promising – if depressing – explanation of the observation. But the timescale is likely faster than planetary formation timescales of 10^5–10^6 years – the pieces are in nearly identical orbits – so the probability problem remains.

That said, seeing the decay of a shell is indeed more likely than seeing its construction, by several orders of magnitude: normal ruins hang around far longer than the time it took to build the original building.

Laid-back aliens?

Maybe the aliens are not pushing things? Obviously one can build a Dyson shell very slowly – in a sense we are doing it (and disassembling Earth to a tiny extent!) by launching satellites one by one. So if an alien civilization wanted to grow at a leisurely rate or just needed a bit of Dyson shell they could of course do it.

However, if you need something like 2.87\cdot 10^{19} Watt (a 100,000 km collector at 1 AU around the star), your demands are not modest. Freeman Dyson originally proposed the concept based on the observation that human energy needs were growing exponentially, and a sphere was the logical endpoint. Even at a 1% growth rate a civilization quickly – in a few millennia – needs most of the star’s energy.

In order to get a reasonably high probability of seeing an incomplete shell, we need to assume growth rates that are exceedingly small (on the order of less than a millionth per year). While this is not impossible, given that the trend seems to be towards more intense energy use in many systems and that entities with higher growth rates will tend to dominate a population, it seems rather unlikely. Of course, one can argue that we can currently detect the rare laid-back civilizations more easily than the ones that aggressively enshrouded their stars, but Dyson spheres do look pretty rare.
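
A quick check of these growth-rate claims (a sketch with assumed round numbers for current human power use and the star's luminosity): with exponential growth at rate r, the time to go from power P0 to P1 is \ln(P1/P0)/r.

```python
import math

P_now = 2e13         # W, rough current human power use (assumed round number)
P_star = 3.8e26      # W, solar luminosity as a stand-in for the star's output
P_partial = 2.87e19  # W, the partial-collector figure from the text

def years_to_reach(P_target, growth_rate, P_start=P_now):
    """Years of exponential growth needed to go from P_start to P_target."""
    return math.log(P_target / P_start) / growth_rate

print(f"1%/yr to 2.87e19 W:   {years_to_reach(P_partial, 0.01):,.0f} years")
print(f"1%/yr to full star:   {years_to_reach(P_star, 0.01):,.0f} years")
# To keep a shell incomplete for a sizeable fraction of a ~4e9-year lifespan,
# the growth rate has to be tiny:
print(f"1e-6/yr to full star: {years_to_reach(P_star, 1e-6):,.0f} years")
```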

Other uses?

Dyson shells are not the only megastructures that could cause intriguing transits.

C. R. McInnes has a suite of fun papers looking at various kinds of light-related megastructures. One can sort asteroid material using light pressure, engineer climate, adjust planetary orbits, and of course travel using solar sails. Most of these are smallish compared to stars (and in many cases dust clouds), but they show some of the utility of obscuring objects.

Duncan Forgan has a paper on detecting stellar engines (Shkadov thrusters) using light curves; unfortunately the calculated curves do not fit KIC 8462852 as far as I can tell.

Luc Arnold analysed the light curves produced by artificial objects of various shapes. He suggested that one could make a weirdly shaped mask to signal one’s presence using transits. In principle one could make nearly any shape, but for signalling, something unusual yet simple enough to be clearly artificial would make most sense: I doubt the KIC transits fit this.

More research is needed (duh)

In the end, we need more data. I suspect we will find that it is yet another odd natural phenomenon or coincidence. But it makes sense to watch, just in case.

Were we to learn that there is (or was) a technological civilization acting on a grand scale, it would be immensely reassuring: we would know intelligent life can survive for at least some sizeable time. This is the flip side of the Great Filter argument for why we should hope not to see any extraterrestrial life: life without intelligence is evidence for intelligence being either rare or transient, but somewhat non-transient intelligence in our backyard (just 1,500 light-years away!) would be evidence that it is neither rare nor transient. Which is good news, unless we fancy ourselves unique and burdened with being stewards of the entire reachable universe.

But I think we will instead learn that the ordinary processes of astrophysics can produce weird transit curves, perhaps due to weird objects (remember when we thought hot Jupiters were exotic?). The universe is full of strange things, which makes me happy I live in it.

[An edited version of this post can be found at The Conversation: What are the odds of an alien megastructure blocking light from a distant star? ]

ET, phone for you!

I have been in the media recently, since I became the accidental spokesperson for UKSRN at the British Science Festival in Bradford:

BBC / The Telegraph / The Guardian / Iol SciTech / The Irish Times / Bt.com

(As well as BBC 5 Live, BBC Newcastle and BBC Berkshire… so my comments also get sent to space as a side effect).

My main message is that we are going to send in something to the Breakthrough Message initiative: a competition to write a good message to be sent to aliens. The total pot is a million dollars (it seems that was misunderstood in some reporting: it is likely not going to be one huge prize, but rather several smaller ones). The message will not actually be sent to the stars: this is an intellectual exercise rather than a practical one.

(I also had some comments about the link between Langsec and SETI messages – computer security is actually a bit of an issue for fun reasons. Watch this space.)

Should we?

One interesting issue is whether there are any good reasons not to signal. Stephen Hawking famously argued against it (although he is a strong advocate of SETI), as does David Brin. A recent declaration argues that we should not signal unless there is widespread agreement about it. Yet others have made the case that we should signal, perhaps a bit cautiously. In fact, an eminent astronomer just told me he could not take concerns about sending a message seriously.

Some of the arguments are (in no particular order):

Pro:

  - SETI will not work if nobody speaks.
  - ETI is likely to be far more advanced than us and could help us.
  - Knowing if there is intelligence out there is important.
  - Hard to prevent transmissions.
  - Radio transmissions are already out there.
  - Maybe they are waiting for us to make the first move.

Con:

  - Malign ETI.
  - Past meetings between different civilizations have often ended badly.
  - Giving away information about ourselves may expose us to accidental or deliberate hacking.
  - Waste of resources.
  - If the ETI is quiet, it is for a reason.
  - We should listen carefully first, then transmit.

It is actually an interesting problem: how do we judge the risks and benefits in a situation like this? Normal decision theory runs into trouble (not that it stops some of my colleagues). The problem here is that the probability and potential gain/loss are badly defined. We may have our own personal views on the likelihood of intelligence within radio reach and its nature, but we should be extremely uncertain given the paucity of evidence.

[ Even the silence in the sky is some evidence, but it is somewhat tricky to interpret, given that it is compatible with no intelligence (because of rarity or danger), intelligence not communicating or not visible in the spectra we observe, cultural convergence towards quietness (the zoo hypothesis, everybody hiding, everybody becoming Jupiter brains), or even the simulation hypothesis. The first category is at least somewhat concise, while the latter categories have endless room for speculation. One could argue that since the latter categories can fit any kind of evidence they are epistemically weak and we should not trust them much.]

Existential risks also tend to take precedence over almost anything else. If we can avoid doing something that could cause existential risk, the maxipok principle tells us not to do it: we can avoid sending, and sending might bring down the star wolves on us, so we should avoid it.

There is also a unilateralist curse issue. It is enough that one group somewhere thinks transmitting is a good idea and does it for the consequences, whatever they are, to follow. So the more groups that consider transmitting, even if they are all rational and well-meaning and consider the issue at length, the more likely it is that somebody will do it even if it is a stupid thing to do (see the sketch below). In situations like this we have argued that it behoves us to be more conservative individually than we would otherwise have been: we should simply think twice just because sending messages is in the unilateralist curse category. We also argue in that paper that it is even better to share information and make collectively coordinated decisions.
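
A minimal illustration with made-up numbers: suppose each of N well-meaning groups independently misjudges the situation with a small probability and transmits. The chance that somebody transmits grows quickly with N.

```python
def prob_someone_transmits(n_groups, p_misjudge):
    """Probability that at least one of n independent groups transmits in error."""
    return 1.0 - (1.0 - p_misjudge) ** n_groups

for n in (1, 10, 100):
    p = prob_someone_transmits(n, 0.05)  # 5% per-group error rate (assumed)
    print(f"{n:>3} groups: {p:.1%} chance somebody transmits")
# 1 -> 5.0%, 10 -> 40.1%, 100 -> 99.4%: individual caution has to scale with
# the number of potential unilateralists.
```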

Note that these arguments strengthen the con side, but largely independently of what the actual anti-message arguments are: they are general arguments for caution, not final arguments.

Conversely, Alan Penny argued that given the high existential risk to humanity we may actually have little to lose: if our risk of extinction per century is 12-40%, then adding a small ETI risk has little effect on the overall risk level, while even a small chance of friendly ETI advice (“By the way, you might want to know about this…”) that decreases existential risk may be an existential hope. Suppose we think there is a 1% chance ETI is out there, and that it is 50% likely to be friendly if so. If it is friendly it might give us advice that reduces our existential risk by 50%; otherwise it will eat us with 1% probability. If we do nothing our risk is (say) 12%. If we signal, the risk is 0.12*0.99 + 0.01*(0.5*0.12*0.5 + 0.5*(0.12*0.99 + 0.01)) = 11.9744%, a slight improvement. Like the Drake equation, one can of course plug in different numbers and get different effects.
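
The same calculation as a small script, just reproducing the arithmetic above (all the probabilities are of course guesses):

```python
P_ETI = 0.01       # probability ETI is out there and hears us (a guess)
P_FRIENDLY = 0.50  # probability such an ETI is friendly (a guess)
RISK_BASE = 0.12   # baseline existential risk per century
P_EATEN = 0.01     # probability a hostile ETI destroys us if we signal

risk_if_friendly = RISK_BASE * 0.5  # friendly advice halves our baseline risk
# Hostile case: baseline risk combined with the 1% destruction risk.
risk_if_hostile = RISK_BASE * (1 - P_EATEN) + P_EATEN

risk_signal = (1 - P_ETI) * RISK_BASE + P_ETI * (
    P_FRIENDLY * risk_if_friendly + (1 - P_FRIENDLY) * risk_if_hostile
)
print(f"Risk if silent:     {RISK_BASE:.4%}")    # 12.0000%
print(f"Risk if signalling: {risk_signal:.4%}")  # 11.9744%
```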

Truth to the stars

Considering the situation over time, sending a message now may also be moot, since we could wipe ourselves out before any response arrives. That brings to mind a discussion we had at the press conference yesterday about the point of sending messages far away: wouldn’t humanity be gone by the time a reply comes? We also discussed what to present to ETI: an honest or a whitewashed version of ourselves? (My co-panelist Dr Jill Stuart made some great points about the diversity issues in past attempts.)

My own view is that I’d rather have an honest epitaph for our species than a polished but untrue one. This is both relevant to us, since we may want to be truthful beings even if we cannot experience the consequences of the truth, and relevant to ETI, who may find the truth more useful than whatever our culture currently would like to present.