What kinds of grand futures are there?

I have been working for about a year on a book on “Grand Futures” – the future of humanity, starting to sketch a picture of what we could eventually achieve were we to survive, get our act together, and reach our full potential. Part of this is an attempt to outline what we know is and isn’t physically possible to achieve, part of it is an exploration of what makes a future good.

Here are some things that appear to be physically possible (not necessarily easy, but doable):

  • Societies of very high standards of sustainable material wealth, at least as rich as (and likely far above) the current rich-nation level in terms of what objects, services, entertainment and other lifestyle goods ordinary people can access.
  • Human enhancement allowing far greater health, longevity, well-being and mental capacity, again at least up to current optimal levels and likely far, far beyond evolved limits.
  • Sustainable existence on Earth with a relatively unchanged biosphere indefinitely.
  • Expansion into space:
    • Settling habitats in the solar system, enabling populations of at least 10 trillion (and likely many orders of magnitude more)
    • Settling other stars in the Milky Way, enabling populations of at least 10^29 people
    • Settling over intergalactic distances, enabling populations of at least 10^38 people.
  • Survival of human civilisation and the species for a long time.
    • As long as other mammalian species – on the order of a million years.
    • As long as Earth’s biosphere remains – on the order of a billion years.
    • Settling the solar system – on the order of 5 billion years
    • Settling the Milky Way or elsewhere – on the order of trillions of years if dependent on sunlight
    • Using artificial energy sources – up to proton decay, somewhere beyond 10^32 years.
  • Constructing Dyson spheres around stars, gaining energy resources corresponding to the entire stellar output, habitable space millions of times Earth’s surface, telescope, signalling and energy projection abilities that can reach over intergalactic distances.
  • Moving matter and objects up to galactic size, using their material resources for meaningful projects.
  • Performing more than a googol (10^100) computations, likely far more thanks to reversible and quantum computing.

While this might read as a fairly overwhelming list, it is worth noticing that it does not include gaining access to an infinite amount of matter, energy, or computation. Nor indefinite survival. I also think faster-than-light travel is unlikely to become possible. If we do not try to settle remote galaxies within 100 billion years, accelerating expansion will move them beyond our reach. This is a finite but very large possible future.

What kinds of really good futures may be possible? Here are some (not mutually exclusive):

  • Survival: humanity survives as long as it can, in some form.
  • “Modest futures”: humanity survives for as long as is appropriate without doing anything really weird. People have idyllic lives with meaningful social relations. This may include achieving close to perfect justice, sustainability, or other social goals.
  • Gardening: humanity maintains the biosphere of Earth (and possibly other planets), preventing them from crashing or going extinct. This might include artificially protecting them from a brightening sun and astrophysical disasters, as well as spreading life across the universe.
  • Happiness: humanity finds ways of achieving extreme states of bliss or other positive emotions. This might include local enjoyment, or actively spreading minds enjoying happiness far and wide.
  • Abolishing suffering: humanity finds ways of curing negative emotions and suffering without precluding good states. This might include merely saving humanity, or actively helping all suffering beings in the universe.
  • Posthumanity: humanity deliberately evolves or upgrades itself into forms that are better, more diverse or otherwise useful, gaining access to modes of existence currently not possible to humans but equally or more valuable.
  • Deep thought: humanity develops cognitive abilities or artificial intelligence able to pursue intellectual projects far beyond what we can conceive of in science, philosophy, culture, spirituality and similar but as yet uninvented domains.
  • Creativity: humanity plays creatively with the universe, making new things and changing the world for its own sake.

I have no doubt I have missed many plausible good futures.

Note that there might be moral trades, where stay-at-homes agree with expansionists to keep Earth an idyllic world for modest futures and gardening while the others go off to do other things, or long-term oriented groups agreeing to give short-term oriented groups the universe during the stelliferous era in exchange for getting it during the cold degenerate era trillions of years in the future. Real civilisations may also have mixtures of motivations and sub-groups.

Note that the goals and the physical possibilities play out very differently: modest futures do not reach very far, while gardener civilisations may seek to engage in megascale engineering to support the biosphere but not settle space. Meanwhile the happiness-maximizers may want to race to convert as much matter as possible to hedonium, while the deep thought-maximizers may want to move galaxies together to create permanent hyperclusters filled with computation to pursue their cultural goals.

I don’t know what goals are right, but we can examine what they entail. If we see a remote civilization doing certain things we can make some inferences about which goals are compatible with that behaviour. And we can examine what we need to do today to have the best chance of getting onto a trajectory towards some of these goals: avoiding extinction, improving our coordination ability, and figuring out whether there is some long-run global coordination we need to agree on before spreading to the stars.

What is the natural timescale for making a Dyson shell?

KIC 8462852 (“Tabby’s Star”) continues to confuse. I blogged earlier about why I doubt it is a Dyson sphere. SETI observations in radio and optical have not produced any detections. Now there is evidence that it has dimmed over a century timespan, something hard to square with the comet explanation. Phil Plait over at Bad Astronomy has a nice overview of the headscratching.

However, he said something that I strongly disagree with:

Now, again, let me be clear. I am NOT saying aliens here. But, I’d be remiss if I didn’t note that this general fading is sort of what you’d expect if aliens were building a Dyson swarm. As they construct more of the panels orbiting the star, they block more of its light bit by bit, so a distant observer sees the star fade over time.

However, this doesn’t work well either. … Also, blocking that much of the star over a century would mean they’d have to be cranking out solar panels.

Basically, he is saying that a century-timescale construction of a Dyson shell is unlikely. Now, since I have argued that we could make a Dyson shell in about 40 years, I disagree. I got into a Twitter debate with Karim Jebari (@KarimJebari) about this, where he also questioned what the natural timescale for Dyson construction is. So here is a slightly longer-than-Twitter exposition of my model.

Lower bound

There is a strict lower bound set by how long it takes for the star to produce enough energy to overcome the binding energy of the source bodies (assuming one already has more than enough collector area). This is on the order of days for terrestrial planets, as per Robert Bradbury’s original calculations.

Basic model

Starting with a small system that builds more copies of itself, solar collectors and mining equipment, one can get exponential growth.

A simple way of reasoning: if you have an area A(t) of solar collectors, you will have power kA(t) to play with, where k is the power collected per square meter. This will be used to lift and transform matter into more collectors. If we assume this takes x Joules per square meter on average, we get A'(t) = (k/x)A(t), which makes A(t) an exponential function with time constant x/k. If a finished Dyson shell has area A_D\approx 2.8\cdot 10^{23} square meters and we start with an initial plant of size A(0) (say on the order of a few hundred square meters), then the total time to completion is t = (x/k)\ln(A_D/A(0)) seconds. The logarithmic factor is about 50.

If we assume k \approx 3\cdot 10^2 W per square meter and x \approx 40.15 MJ per square meter (equivalently 40.15 MJ/kg at the assumed 1 kg per square meter; see numerics below), then t=78 days.

This is very much in line with Robert’s original calculations. He pointed out that given the sun’s power output Earth could be theoretically disassembled in 22 days. In the above calculations  the time constant (the time it takes to get 2.7 times as much area) is 37 hours. So for most of the 78 days there is just a small system expanding, not making a significant dent in the planet nor being very visible over interstellar distances; only in the later part of the period will it start to have radical impact.
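
To make the arithmetic explicit, here is a minimal sketch of the exponential model in Python; the flux k, conversion cost x and seed size A(0) are the rough assumed numbers from above, not measured quantities.

```python
import math

# Assumed round numbers from the model above -- estimates, not measurements.
k = 3e2        # W collected per square meter of finished collector
x = 40.15e6    # J needed to make one square meter of new collector
A_D = 2.8e23   # m^2, area of a complete Dyson shell at ~1 AU
A_0 = 300.0    # m^2, guessed size of the initial seed plant

efold = x / k                        # e-folding time, in seconds
log_factor = math.log(A_D / A_0)
t = efold * log_factor               # total construction time

print(f"e-folding time: {efold / 3600:.1f} hours")      # ~37 hours
print(f"logarithmic factor: {log_factor:.0f}")          # ~48
print(f"construction time: {t / 86400:.0f} days")       # ~75 days, close to the ~78 above
```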

The timescale is robust to the above assumptions: sun-like main sequence stars have luminosities within an order of magnitude of the sun (so k can only change by a factor of 10); using asteroid material (no gravitational binding cost) brings down x by a factor of 10; if the material needs to be vaporized x increases by less than a factor of 10; if a sizeable fraction of the matter is needed for mining/transport/building systems x goes down proportionally; much thinner shells (see below) may give three orders of magnitude smaller x (and hence bump into the hard bound above). So the conclusion is that for this model the natural timescale of terrestrial planetary disassembly into Dyson shells is on the order of months.

Digging into the practicalities of course shows that there are some other issues. Material needs to be transported into place (natural timescale about a year for moving something 1 AU), the heating effects on the planet being disassembled are going to be major (lots of energy flow there, but of course just boiling it into space and capturing the condensing dust is a pretty good lifting method), the time it takes to convert 1 kg of undifferentiated matter into something useful places a limit on the mass flow per converting device, and so on. This is why our conservative estimate was 40 years for a Mercury-based shell: we assumed a pretty slow transport system.

Numerical values

Estimate for x: assume that each square meter of shell has mass 1 kg, and that the energy cost is the mean gravitational binding energy of Earth per kg of mass (37.5 MJ/kg) plus processing energy (on the order of 2.65 MJ/kg for heating and melting silicon). Note that using Earth slows things down significantly.

I had a conversation with Eric Drexler today, where he pointed out that assuming 1 kg per square meter for the shell is arbitrary. There is a particular area density that is special: given that solar gravity and light pressure both decline with the square of the distance, there exists a particular density \rho=L_{sun}/(4 \pi c G M_{sun})\approx 0.78 gram per square meter which will just hang there neutrally. Heavier shells will need to orbit to remain where they are; lighter shells need cables or extra weight to not blow away. This might hence be a natural density for shells, making x a factor of 1282 smaller.
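
As a sanity check on Drexler's number, here is the same calculation in Python; the solar luminosity and mass are the standard textbook values.

```python
import math

L_sun = 3.846e26   # W, solar luminosity
M_sun = 1.989e30   # kg
G     = 6.674e-11  # m^3 kg^-1 s^-2
c     = 2.998e8    # m/s

# Surface density where light pressure balances solar gravity; both scale as 1/r^2,
# so the result is independent of distance from the star.
sigma = L_sun / (4 * math.pi * c * G * M_sun)
print(f"neutral surface density: {sigma * 1000:.2f} g/m^2")   # ~0.77 g/m^2
print(f"factor below 1 kg/m^2:   {1.0 / sigma:.0f}")          # ~1300
```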

Linear growth does not work

I think the key implicit assumption in Plait’s remark above is that he imagines some kind of alien factory churning out shell. If it produces shell at a constant rate A', then finishing a Dyson shell with area A_D\approx 2.8\cdot 10^{23} square meters will take A_D/A' seconds.

Current solar cell factories produce on the order of a few hundred MW of solar cells per year; assuming each makes about 2 million square meters per year, we would need 140 million billion years. Making a million factories merely brings things down to 140 billion years. To get a century-scale dimming time we need A' \approx 8.9\cdot 10^{13} square meters per second – roughly the area of the Atlantic ocean, every second.

This feels absurd. Which is no good reason for discounting the possibility.

Automation makes the absurd normal

As we argued in our paper, the key assumptions are (1) things we can do can be automated, so that if there are more machines doing them (or doing them faster) more will get done; (2) we have historically been good at doing things already occurring in nature; (3) self-replication and autonomous action occur in nature. Assumptions 2 and 3 suggest that exponentially growing technologies are possible, where a myriad entities work in parallel, and assumption 1 suggests that this allows functions such as manufacturing to be scaled up as far as the growth goes. As Kardashev pointed out, there is no reason to think there is any particular size scale for the activities of a civilization except as set by resources and communication.

Incidentally, automation is also why cost overruns or lack of will may not matter so much for this kind of megascale project. The reason Intel and AMD can reliably make billions of processors containing billions of transistors each is that everything is automated. Making the blueprint and fab pipeline is highly complex and requires an impressive degree of skill (this is where most overruns and delays happen), but once it is done production can just go on indefinitely. The same is true of Dyson-making replicators. The first one may be a tough problem that takes time to solve, but once it is up and running it is autonomous and merely requires some degree of watching (make sure it only picks apart the planets you don’t want to keep!). There is no requirement of continued interest in its operations to keep them going.

Likely growth rates

But is energy-limited exponential growth the natural growth rate? As Karim and others have suggested, maybe the aliens are lazy or taking their time? Or, conversely, maybe multi-century projects are unusually long-term commitments and hence rare.

Obviously projects could occur at any possible speed: if something can be constructed in time X, it can generally be done at half the speed. And if you can construct something of size X, you can build half of it. But not every speed or size is natural. We do not explain why a forest or the Great Barrier Reef has the size it does by saying cost overruns stopped it, or that it will eventually grow to arbitrary size but at an imperceptibly small rate. The spread of a wildfire is largely set by physical factors, and a starting wildfire will soon approach its maximum allowed speed, since parts of the fire that do not spread will be overtaken by parts that do. The same is true for species colonizing new ecological niches or businesses finding new markets. They can run slowly; it is just that they typically seem to move as fast as they can.

Human economic growth has been on the order of 2% per year for very long historical periods. That implies a time constant of 1/\ln(1.02)\approx 50 years. This is a “stylized fact” that has remained roughly true despite very different technologies, cultures, attempts at boosting it, etc. It seems to be “natural” for human economies. So were a Dyson shell built as a part of a human economy, we might expect it to be completed in 250 years.

What about biological reproduction rates? Merkle and Freitas list the replication times for various organisms and machines. They cover almost 25 orders of magnitude, but seem to roughly scale as \tau \approx c M^{1/4}, where M is the mass in kg and c\approx 10^7. So if a total mass M_T needs to be converted into replicators of mass M, it will take time t=\tau\ln(M_T)/\ln(2). Plugging in the first formula gives t=c M^{1/4} \ln(M_T)/\ln(2). The smallest independent replicators have M_s=10^{-15} kg (this gives \tau_s=10^{3.25} seconds, or 29 minutes) while a big factory-like replicator (or a tree!) would have M_b=10^5 kg (\tau_b=10^{8.25} seconds, or 5.6 years). In turn, if we set M_T=A_D\rho=2.18\cdot 10^{20} kg (a “light” Dyson shell) the time till construction ranges from 32 hours for the tiny replicator to 378 years for the heavy one. Setting M_T to an Earth mass gives a range from 36 hours to 408 years.
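
A small Python sketch of that scaling, using the same log2(M_T) doubling count as the formula above; the constant c ≈ 10^7 and the 1/4 exponent are the rough fit quoted from Merkle & Freitas.

```python
import math

C = 1e7   # s kg^(-1/4), rough fit constant for the Merkle-Freitas replication times

def replication_time(M):
    """Replication time of one replicator of mass M (kg), roughly C*M^(1/4) seconds."""
    return C * M ** 0.25

def construction_time(M_repl, M_total):
    """Total time, counting doublings as log2(M_total in kg) as in the text's formula."""
    return replication_time(M_repl) * math.log(M_total) / math.log(2)

M_shell = 2.8e23 * 7.8e-4   # kg, a "light" Dyson shell at ~0.78 g/m^2

for M_repl, label in [(1e-15, "bacterium-sized"), (1e5, "factory/tree-sized")]:
    t = construction_time(M_repl, M_shell)
    print(f"{label}: tau = {replication_time(M_repl):.0f} s, "
          f"total = {t / 3600:.0f} hours = {t / 3.156e7:.1f} years")
# bacterium-sized finishes in ~33 hours, factory-sized in ~380 years --
# the same ballpark as the 32 hours to 378 years quoted above.
```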

The lower end is infeasible, since this model assumes enough input material and energy – the explosive growth of bacteria-like replicators is not possible if there is not enough energy to lift matter out of gravity wells. But it is telling that the upper end of the range is merely multi-century. This makes a century dimming actually reasonable if we think we are seeing the last stages (remember, most of the construction time the star will be looking totally normal); however, as I argued in my previous post, the likelihood of seeing this period in a random star being englobed is rather low. So if you want to claim it takes millennia or more to build a Dyson shell, you need to assume replicators that are very large and heavy.

[Also note that some of the technological systems discussed in Merkle & Freitas are significantly faster than the main branch. In addition, this discussion has assumed general replicators able to make all their own parts: if subsystems specialize they can become significantly faster than more general constructors. Hence we have reason to think the upper end is conservative.]

Conclusion

There is a lower limit on how fast a Dyson shell can be built, which is likely on the order of hours for manufacturing plus about a year for dispersing the material. Replicator sizes smaller than a hundred tons imply a construction time of at most a few centuries. This range is consistent with existing biological and economic growth rates. We hence have good reason to think most Dyson construction is fast compared to astronomical time, and that catching a star being englobed is pretty unlikely.

I think that models involving slowly growing Dyson spheres require more motivation than models where they are closer to the limits of growth.

Starkiller base versus the ideal gas law

My friend Stuart explains why the Death Stars and the Starkiller Base in the Star Wars universe are inefficient ways of taking over the galaxy. I generally agree: even a super-inefficient robot army will win if you simply bury enemy planets in robots.

But thinking about the physics of absurd superweapons is fun and warms the heart.

The ideal gas law: how do you compress stars?

My biggest problem with the Starkiller Base is the ideal gas law. The weapon works by sucking up a star and then beaming its energy or plasma at remote targets. A sun-like star has a volume around 1.4*10^18 cubic kilometres, while an Earthlike planet has a volume around 10^12 cubic kilometres. So if you suck up a star it will get compressed by a factor of 1.4 million. The ideal gas law states that pressure times volume equals temperature times the number of particles and some constant: PV=nRT

1.4 million times less volume needs to be balanced somehow: either the pressure P has to go down, the temperature T has to go up, or the number of particles n needs to go down.

Pressure reduction seems to be a non-starter, unless the Starkiller base actually contains some kind of alternate dimension where there is no pressure (or an enormous volume).

The second case implies a temperature increase by a factor of 1.4 million. Remember how hot a bike pump gets when compressing air: this is the same effect. This would heat the photosphere gas to 8.4 billion degrees and the core to 2.2*10^13 K, or 22 terakelvin; the average would be somewhere in between, on the hotter side. We are talking about temperatures like those microseconds after the Big Bang, hotter than a supernova: protons and neutrons melt at 0.5–1.2 TK into a quark-gluon plasma. Excellent doomsday weapon material, but now containment seems problematic. Even if we have antigravity forcefields to hold the star, the black-body radiation is beyond the supernova range. Keeping it inside a planet would be tough: the amount of neutrino radiation would likely blow up the surface like a supernova bounce does.
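
A quick Python estimate of the compression heating, using rough solar values (the photosphere and core temperatures are the usual textbook figures):

```python
# Ideal-gas heating from squeezing a sun into an Earth-sized volume,
# holding pressure and particle number fixed -- a deliberately crude estimate.
V_star   = 1.4e18   # km^3, volume of a sun-like star
V_planet = 1.0e12   # km^3, volume of an Earth-like planet
compression = V_star / V_planet

T_photosphere = 6.0e3    # K, roughly the solar photosphere
T_core        = 1.57e7   # K, roughly the solar core

print(f"compression factor: {compression:.2g}")                  # ~1.4 million
print(f"photosphere: {T_photosphere * compression:.2g} K")       # ~8.4e9 K
print(f"core:        {T_core * compression:.2g} K")              # ~2.2e13 K
```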

Maybe the extra energy is bled off somehow? That might be a way to merely get super-hot plasma rather than something evaporating the system. Maybe those pesky neutrinos can be shunted into hyperspace, taking most of the heat with them (neutrino cooling can be surprisingly fast for very hot objects; at these absurd temperatures it is likely subsecond down to mere supernova temperatures).

Another bizarre and fun approach is to reduce the number of gas particles: simply fuse them all into a single nucleus. A neutron star is in a sense a single atomic nucleus. As a bonus, the star would now be a tiny multikilometre sphere held together by its own gravity. If n is reduced by a factor of 10^57 it could outweigh the compression temperature boost. There would be heating from all the fusion; my guesstimate is that it is about a percent of the mass energy, or 2.7*10^45 J. This would heat the initial gas to around 96 billion degrees, still manageable by the dramatic particle number reduction. This approach still would involve handling massive neutrino emissions, since the neutronium would still be pretty hot.

In this case the star would remain gravitationally bound into a small blob: convenient as a bullet. Maybe the red “beam” is actually just an accelerated neutron star, leaking mass along its trajectory. The actual colour would of course be more like blinding white with a peak in the gamma ray spectrum. Given the intense magnetic fields locked into neutron stars, moving them electromagnetically looks pretty feasible… assuming you have something on the other end of the electromagnetic field that is heavier or more robust. If a planet shoots a star-mass bullet at a high velocity, then we should expect the recoil to send the planet moving at about a million times faster in the opposite direction.

Other issues

We have also ignored gravity: putting a sun-mass inside an Earth-radius means we get 333,000 times higher gravity. We can try to hand-wave this by arguing that the antigravity used to control the star eating also compensates for the extra gravity. But even a minor glitch in the field would produce an instant, dramatic squishing. Messing up the system* containing the star would not produce conveniently dramatic earthquakes and rifts, but rather near-instant compression into degenerate matter.

(* System – singular. Wow. After two disasters due to single-point catastrophic failures one would imagine designers learning their lesson. Three times is enemy action: if I were the Supreme Leader I would seriously check if the lead designer happens to be named Skywalker.)

There is also the issue of the amount of energy needed to run the base. Sucking up a star from a distance requires supplying the material with the gravitational binding energy of the star, 6.87*10^41 J for the sun. Doing this over an hour or so is a pretty impressive power, about 1.9*10^38 W. This is about 486 billion times the solar luminosity. In fact, just beaming that power at a target using any part of the electromagnetic spectrum would fry just about anything.
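
The power estimate in Python (the binding energy and solar luminosity are standard values; the one-hour timescale is the assumption from above):

```python
E_bind = 6.87e41   # J, gravitational binding energy of the sun
t_suck = 3600.0    # s, the assumed "hour or so" to disassemble the star
L_sun  = 3.85e26   # W, solar luminosity

P = E_bind / t_suck
print(f"required power: {P:.2g} W")                   # ~1.9e38 W
print(f"in solar luminosities: {P / L_sun:.2g}")      # ~5e11, i.e. hundreds of billions of suns
```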

Of course, a device that can suck up a star ought to be able to suck up planets a million times faster. So there is no real need to go for stars: just suck up the Republic. Since the base can suck up space fleets too, local defences are not much of a problem. Yes, you may have to go there with your base, but if the Death Star can move, the Starkiller can too. If nothing else, it could use its beam to propel itself.

If the First Order want me to consult on their next (undoubtedly even more ambitious) project I am open for offers. However, one iron-clad condition given recent history is that I get to work from home, as far away as possible from the superweapon. Ideally in a galaxy far, far away.

Likely not even a microDyson

Right now KIC 8462852 is really hot, and not just because it is an F3 V/IV type star: the light curve, as measured by Kepler, has irregular dips that look like something (or rather, several somethings) obscuring the star. The shapes of the dips are odd. The system is too old and IR-clean to have a remaining protoplanetary disk, dust clumps would coalesce, the aftermath of a giant planet impact is very unlikely (and hard to fit with the aperiodicity); maybe there is a storm of comets due to a recent stellar encounter, but comets are not very good at obscuring stars. So a lot of people on the net are quietly or not so quietly thinking that just maybe this is a Dyson sphere under construction.

I doubt it.

My basic argument is this: if a civilization builds a Dyson sphere it is unlikely to remain small for a long period of time. Just as planetary collisions are so rare that we should not expect to see any in the Kepler field, the time it takes to make a Dyson sphere is also very short: seeing it during construction is very unlikely.

Fast enshrouding

In my and Stuart Armstrong’s paper “Eternity in Six Hours” we calculated that disassembling Mercury to make a partial Dyson shell could be done in 31 years. We did not try to push things here: our aim was to show that using a small fraction of the resources in the solar system it is possible to harness enough energy to launch a massive space colonization effort (literally reaching every reachable galaxy, eventually every solar system). Using energy from the already built solar collectors, more material is mined and launched, producing an exponential feedback loop. This was originally discussed by Robert Bradbury. The time to disassemble the terrestrial planets is not much longer than for Mercury, while the gas giants would take a few centuries.

If we imagine the history of an F5 star, 1,000 years is not much. Given the estimated mass of KIC 8462852 as 1.46 solar masses, it will have a main-sequence lifespan of 4.1 billion years. The chance of seeing it while being enshrouded is one in 4.3 million. This is the same problem as with the giant impact theory.

A ruin?

An abandoned Dyson shell would likely start clumping together; this might at first sound like a promising – if depressing – explanation of the observation. But the timescale is likely faster than planetary formation timescales of 10^5–10^6 years – the pieces are in nearly identical orbits – so the probability problem remains.

But it is indeed more likely to see the decay of the shell than the construction by several orders of magnitude. Just like normal ruins hang around far longer than the time it took to build the original building.

Laid-back aliens?

Maybe the aliens are not pushing things? Obviously one can build a Dyson shell very slowly – in a sense we are doing it (and disassembling Earth to a tiny extent!) by launching satellites one by one. So if an alien civilization wanted to grow at a leisurely rate or just needed a bit of Dyson shell they could of course do it.

However, if you need something like 2.87\cdot 10^{19} Watt (a 100,000 km collector at 1 AU around the star) your demands are not modest. Freeman Dyson originally proposed the concept based on the observation that human energy needs were growing exponentially, and enshrouding the star was the logical endpoint. Even at a 1% growth rate a civilization quickly – in a few millennia – needs most of the star’s energy.

In order to get a reasonably high probability of seeing an incomplete shell we need to assume growth rates that are exceedingly small (on the order of less than a millionth per year). While it is not impossible, given how the trend seems to be towards more intense energy use in many systems and that entities with higher growth rates will tend to dominate a population, it seems rather unlikely. Of course, one can argue that we currently can more easily detect the rare laid-back civilizations than the ones that aggressively enshrouded their stars, but Dyson spheres do look pretty rare.

Other uses?

Dyson shells are not the only megastructures that could cause intriguing transits.

C. R. McInnes has a suite of fun papers looking at various kinds of light-related megastructures. One can sort asteroid material using light pressure, engineer climate, adjust planetary orbits, and of course travel using solar sails. Most of these are smallish compared to stars (and in many cases dust clouds), but they show some of the utility of obscuring objects.

Duncan Forgan has a paper on detecting stellar engines (Shkadov thrusters) using light curves; unfortunately the calculated curves do not fit KIC 8462852 as far as I can tell.

Luc Arnold analysed the light curves produced by various shapes of artificial objects. He suggested that one could make a weirdly shaped mask for signalling one’s presence using transits. In principle one could make nearly any shape, but for signalling, something unusual yet simple enough to be recognizably artificial would make most sense: I doubt the KIC transits fit this.

More research is needed (duh)

In the end, we need more data. I suspect we will find that it is yet another odd natural phenomenon or coincidence. But it makes sense to watch, just in case.

Were we to learn that there is (or was) a technological civilization acting on a grand scale it would be immensely reassuring: we would know intelligent life could survive for at least some sizeable time. This is the opposite side of the Great Filter argument for why we should hope not to see any extraterrestrial life: life without intelligence is evidence for intelligence either being rare or transient, but somewhat non-transient intelligence in our backyard (just 1,500 light-years away!) is evidence that it is neither rare nor transient. Which is good news, unless we fancy ourselves as unique and burdened by being stewards of the entire reachable universe.

But I think we will instead learn that the ordinary processes of astrophysics can produce weird transit curves, perhaps due to weird objects (remember when we thought hot Jupiters were exotic?). The universe is full of strange things, which makes me happy I live in it.

[An edited version of this post can be found at The Conversation: What are the odds of an alien megastructure blocking light from a distant star? ]

What is the largest possible inhabitable world?

The question is of course ill-defined, since “largest”, “possible”, “inhabitable” and “world” are slippery terms. But let us aim at something with maximal surface area that can be inhabited by at least terrestrial-style organic life of human size and is allowed by the known laws of physics. This gives us plenty of leeway.

Piled higher and deeper


We could simply imagine adding more and more mass to a planet. At first we might get something like my double Earths: ocean worlds surrounding a rock core. The oceans are due to the water content of the asteroids and planetesimals we build them from: a huge dry planet is unlikely without some process stripping away the water. As we add more material the ocean gets deeper, until the extreme pressure makes the bottom solidify into exotic ice – which slows down the expansion somewhat.

Adding even more matter will produce a denser atmosphere too. A naturally accreting planet will acquire gas if it is heavy and cold enough, at first producing something like Neptune and then a gas giant. Keep it up, and you get a brown dwarf and eventually a star. These gassy worlds are also far more compressible than a rock- or water-world, so their radius does not increase much as they get heavier. In fact, most gas giants are expected to be about the size of Jupiter.

If this is true, why are the sun and some hot Jupiters much bigger? Jupiter’s radius is 69,911 km, the sun’s radius is 695,800 km, and the largest exoplanets known today have radii around 140,000 km. The answer is that another factor determining size is temperature. As the ideal gas law states, to a first approximation pressure times volume is proportional to temperature: the pressure at the core due to the weight of all the matter stays roughly the same, but at higher temperatures the same planet/star becomes larger. But I will assume inhabitable worlds are reasonably cold.

Planetary models also suggest that a heavy planet will tend to become denser: adding more mass compresses the interior, making the radius climb more slowly.

The central pressure of a uniform body is P = 2\pi G R^2 \rho^2/3. In reality planets do not tend to be uniform, but let us ignore this. Given an average density we see that the pressure grows with the square of the radius and quickly becomes very large (in Earth, the core pressure is somewhere in the vicinity of 350 GPa). If we wanted something huge and heavy we need to make it out of something incompressible, or in the language of physics, something with a stiff equation of state. There is a fair amount of research about super-earth compositions and mass-radius relationships in the astrophysics community, with models of various levels of complexity.
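For concreteness, a minimal Python check of that formula against Earth (mean density and radius are standard values):

```python
import math

# Central pressure of a uniform-density sphere: P = 2*pi*G*rho^2*R^2/3
G         = 6.674e-11   # m^3 kg^-1 s^-2
R_earth   = 6.371e6     # m
rho_earth = 5513.0      # kg/m^3, Earth's mean density

P = 2 * math.pi * G * rho_earth**2 * R_earth**2 / 3
print(f"uniform-Earth central pressure: {P / 1e9:.0f} GPa")
# ~172 GPa; the real core pressure (~350 GPa) is higher because Earth is
# centrally condensed, but the rho^2*R^2 scaling is what matters here.
```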

This paper by Seager, Kuchner, Hier-Majumder and Militzer provides a lovely approximate formula: \log_{10}(R/r_1) = k_1+(1/3)\log_{10}(M/m_1)-k_2(M/m_1)^{k_3}, valid up to about 20 earth masses. Taking the derivative and setting it to zero gives us the mass where the radius is maximal as

M=\left [\frac{m_1^{k_3}}{3k_2k_3\ln(10)}\right ]^{1/k_3}.

Taking the constants (table 4) corresponding to iron gives a maximum radius at the mass of 274 Earths, perovskite at 378 Earths, and for ice at 359 Earths. We should likely not trust the calculation very much around the turning point, since we are well above the domain of applicability. Still, looking at figure 4 shows that the authors at least plot the curves up to this range. The maximal iron world is about 2.7 times larger than Earth, the maximal perovskite worlds manage a bit more than 3 times Earth’s radius, and the waterworlds just about reach 5 times. My own plot of the approximation function gives somewhat smaller radii:

Approximate radius for different planet compositions, based on Seager et al. 2007.
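
A short script makes the turning-point calculation explicit. The constants are the Table 4 values of Seager et al. (2007) as I read them, so treat them as approximate transcriptions.

```python
import math

# (m1 [Earth masses], r1 [Earth radii], k1, k2, k3), as read from Seager et al. 2007 table 4.
materials = {
    "iron":       (5.80,  2.52, -0.20945, 0.0804, 0.394),
    "perovskite": (10.55, 3.90, -0.20950, 0.0799, 0.413),
    "ice":        (5.52,  4.43, -0.20940, 0.0807, 0.375),
}

def radius(M, m1, r1, k1, k2, k3):
    """log10(R/r1) = k1 + (1/3)*log10(M/m1) - k2*(M/m1)**k3, with M in Earth masses."""
    Ms = M / m1
    return r1 * 10 ** (k1 + math.log10(Ms) / 3 - k2 * Ms ** k3)

for name, (m1, r1, k1, k2, k3) in materials.items():
    # Setting dR/dM = 0 gives the mass where the approximate radius peaks.
    M_peak = (m1 ** k3 / (3 * k2 * k3 * math.log(10))) ** (1 / k3)
    R_peak = radius(M_peak, m1, r1, k1, k2, k3)
    print(f"{name}: peak at {M_peak:.0f} Earth masses, R ~ {R_peak:.1f} Earth radii")
# Peaks near 274 (iron), 379 (perovskite) and 359 (ice) Earth masses; the radii
# come out a bit below the figure-4 values, as noted above.
```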

Mordasini et al. have a paper producing similar results; for masses around 1000 Earth masses their maximum sizes are about 3.2 times Earth’s radius for an Earthlike 2:1 silicate-to-iron ratio, 4 times for a planet of 50% ice, 33% silicate and 17% iron, and 4.8 times for planets made completely of ice.

The upper size limit is set by the appearance of degenerate matter. Electrons are not allowed to be in the same energy state in the same place. If you squeeze atoms together, eventually the electrons will have to start piling into higher energy states due to lack of space. This is resisted, producing the degeneracy pressure. However, it grows rather slowly with density, so degenerate cores will readily compress. For fully degenerate bodies like white dwarfs and neutron stars the radius declines with increasing mass (making the largest neutron stars the lightest!). And of course, beyond a certain limit the degeneracy pressure is unable to stop gravitational collapse and they implode into black holes.

For maximum-size planets the really exotic physics is (unfortunately?) irrelevant. Normal gravity is however applicable: the surface gravity scales as g =GM/R^2 = 4 \pi G \rho R / 3. So for an iron-Earth 274 times heavier and 2.7 times larger than Earth, surface gravity is 38 times Earth’s. This is not habitable for humans (although immersion in a liquid tank and breathing oxygenated liquid might allow survival). However, bacteria have been cultured at 403,627 g in centrifuges! The 359 times heavier and 5 times larger ice world has just 14.3 times our surface gravity. Humans could probably survive if they were lying down, although this is way above any long-term limits found by NASA.

What about rotating the planet fast enough? As Mesklin in Hal Clement’s Mission of Gravity demonstrates, we can have a planet with hundreds of Gs of gravity at the poles, yet a habitable mere 3 G equator. Of course, this is cheating somewhat with the habitability condition: only a tiny part is human-habitable, yet there is a lot of unusable (to humans, not mesklinites) surface area. Estimating the maximum size becomes fairly involved since the acceleration and pressure fields inside are not spherically symmetric. A crude guesstimate would be to look at the polar radius and assume it is limited by the above degeneracy conditions, and then note that the limiting eccentricity is about 0.4: that would make the equatorial radius 2.5 times larger than the polar radius. So for the spun-up ice world we might get an equatorial radius 12 times Earth and a surface area about 92 times larger. If we want to go beyond this we might consider torus-worlds; they can potentially have an arbitrarily large area with a low gravity outer equator. Unfortunately they are likely not very stable: any tidal forces or big impacts (see below) might introduce a fatal wobble and breakup.

So in some sense the maximal size planets would be habitable. However, as mentioned above, they would also likely turn into waterworlds and warm Neptunes.

Getting a solid mega-Earth (and keeping it solid)

The most obvious change is to postulate that the planet indeed just has the right amount of water to make decent lakes and oceans, but does not turn into an ocean-world. Similarly we may hand-wave away the atmosphere accretion and end up with a huge planet with a terrestrial surface.

Although it is not going to stay that way for long. The total heat production inside the planet is proportional to the volume, which is proportional to the cube of the radius, but the surface area that radiates away heat is proportional to the square of the radius. Large planets will have more heat per square meter of surface, and hence more volcanism and plate tectonics. That big world will soon get a fair bit of atmosphere from volcanic eruptions, and not the good kind – lots of sulphur oxides, carbon dioxide and other nasties. (A pure ice-Earth would escape this, since hydrogen and oxygen have no long-lived radioactive isotopes to keep the interior hot – once it solidified it would stay solid and boring.)

And the big planet will get hit by comets too. The planet will sweep up stuff that comes inside its capture cross section \sigma_c = \sigma_{geom} (1 + v_e^2/v_0^2), where \sigma_{geom}=\pi R^2 is the geometric cross section, v_e = \sqrt{2GM/R} = R \sqrt{8 G \pi \rho / 3} the escape velocity and v_0 the original velocity of the stuff. Putting it all together gives a capture cross section proportional to R^4 (when the escape velocity dominates): double-Earth will get hit by 2^4=16 times as much space junk as Earth, iron-Earth by 53 times as much.

So over time the planet will accumulate a denser atmosphere than it started with. But the impact cataclysms might also be worse for habitability – the energy released when something hits is roughly proportional to the square of the escape velocity, which scales as R^2. On double-Earth the Chicxulub impact would have been 2^2=4 times more energetic. So the mean energy per unit of time due to impacts scales like R^4 \cdot R^2=R^6. Ouch. Crater sizes scale as \propto g^{1/6} W^{1/3.4} where W is the energy. So for our big worlds the scars will scale as \propto R^{1/6 + 2/3.4}=R^{0.75}. Double-Earth will have craters 70% larger than Earth, and iron-Earth 112% larger.
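
The scalings in the last two paragraphs, spelled out in a few lines of Python (exponents as given above):

```python
# Impact rate ~ R^4 (escape velocity dominated), energy per impact ~ R^2,
# crater size ~ g^(1/6) * W^(1/3.4) ~ R^(1/6 + 2/3.4), following the scalings above.
for name, R in [("double-Earth", 2.0), ("iron-Earth", 2.7)]:
    impacts = R ** 4
    energy  = R ** 2
    crater  = R ** (1 / 6 + 2 / 3.4)
    print(f"{name}: {impacts:.0f}x the impacts, {energy:.0f}x energy per impact, "
          f"craters {100 * (crater - 1):.0f}% larger")
# double-Earth: 16x impacts, 4x energy, craters ~69% larger
# iron-Earth:   53x impacts, 7x energy, craters ~112% larger
```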

Big and light worlds

Surface gravity scales as g =GM/R^2 = 4 \pi G \rho R / 3. So if we want R to be huge but g modest, the density has to go down. This is also a good strategy for reducing internal pressure, which is compressing our core. This approach is a classic in science fiction, perhaps most known from Jack Vance’s Big Planet.

Could we achieve this by making the planet out of something very light, like lithium hydride (LiH)? Lithium hydride has a nicely low density (0.78 g/cm³) but also appears to be rather soft (3.5 on the Mohs scale), plus of course it reacts with oxygen and water, which is bad for habitability. Getting something that doesn’t react badly rules out most stuff at the start of the periodic table: I think the first candidate (besides helium) that neither decomposes in water nor is acutely toxic is likely pure boron. Of course, density is not a simple function of atomic number: amorphous carbon and graphite have lower densities than boron.

Artist rendering of a carbon world surface. The local geology is dominated by graphite and tar deposits, with diamond crystals and heavy hydrocarbon lakes. The atmosphere is largely carbon monoxide and volatile hydrocarbons, with a fair amount of soot.

A carbon planet is actually not too weird. There are exoplanets that are believed to be carbon worlds, where a sizeable amount of the mass is carbon. They are unlikely to be very habitable for terrestrial organisms since oxygen would tend to react with all the carbon and turn into carbon dioxide, but they would have interesting surface environments with tars, graphite and diamonds. We could imagine a “pure” carbon planet composed largely of graphite, diamond and a core of metallic carbon. If we handwave that on top of the carbon core there is some intervening rock layer, or that the oxidation processes are slow enough, then we could have a habitable surface (until volcanism and meteors get to it). A diamond planet with 1 G gravity would have radius R = (\rho_{earth}/\rho_{diamond}) R_{earth}=(5.513/3.5)\cdot 6,378 km \approx 10,046 km. We get a 1.6 times larger radius than Earth this way, and 2.5 times more surface area. (Here I ignore all the detailed calculations in real planetary astrophysics and just assume uniformity; I suspect the right diamond structure will be larger.)

A graphite planet would have radius 16,805 km, 2.6 times ours and with about 7 times our surface area. Unfortunately it would likely turn (cataclysmically) into a diamond planet as the core compressed.

Another approach to low density is of course to use stiff materials with voids. Aerogels have densities close to 1 kg per cubic meter, but that is of course mostly the air: the real density of a silica aerogel is 0.003–0.35 g/cm³. Now that would allow a fluffy world up to 1837 times Earth’s radius! We can do even better with metallic microlattices, where the current record is about 0.0009 g/cm³ – this metal fluffworld would have a radius of 39,025,914 km, 6125 times Earth’s, with almost 38 million times our surface area!
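
A hedged sketch of the density scaling (this ignores compressibility entirely, which, as argued below, is exactly what kills the fluffworlds):

```python
# At fixed surface gravity, g = 4*pi*G*rho*R/3 implies R ~ 1/rho.
R_earth   = 6371.0   # km
rho_earth = 5.513    # g/cm^3, Earth's mean density

for name, rho in [("diamond", 3.5), ("graphite", 2.1),
                  ("silica aerogel", 0.003), ("metallic microlattice", 0.0009)]:
    R = (rho_earth / rho) * R_earth
    ratio = R / R_earth
    print(f"{name}: R = {R:,.0f} km ({ratio:,.1f}x Earth), area {ratio**2:,.1f}x Earth")
# diamond ~1.6x, graphite ~2.6x, aerogel ~1838x, microlattice ~6126x Earth's radius.
```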

The problem is that aerogels and microlattices do not have that great a bulk modulus, the ability to resist compression. Their modulus scales with the square or cube of density, so the lighter they are, the more compressible they get – wonderful for many applications, but very bad for keeping planets from imploding. Imagine trying to build a planet out of foam rubber. Diamond is far, far better. What we should look for is something with a high specific modulus, the ratio between modulus and density. Looking at this table suggests carbon fiber is best at 417 million m²/s², followed by diamond at 346 million m²/s². So pure carbon worlds are likely the largest we could get, a few times Earth’s size.

Artificial worlds

We can do better if we abandon the last pretence of the world being able to form naturally (natural metal microlattices, seriously?).

Shellworld

A sketch of a shellworld.

Consider roofing over the entire Earth’s surface: it would take a fair amount of material, but we could mine it by digging tunnels under the surface. At the end we would have more than doubled the available surface (roof, old ground, plus some tunnels). We can continue the process, digging up material to build a giant onion of concentric floors and giant pillars holding up the rest. The end result is akin to the megastructure in Iain M. Banks’ Matter.

If each floor has surface density \rho kg/m² (let’s ignore the pillars for the moment) and ceiling height h, then the total mass from all floors is M = \sum_{n=0}^N 4 \pi (hn)^2 \rho. Dividing both sides by 4\pi\rho h^2 we get M/(4 \pi \rho h^2) = \sum_{n=0}^N n^2 = N(N+1)(2N+1)/6= N^3/3 +N^2/2+N/6. If N is very large the N^3/3 term dominates (just consider the case of N=1000: the first term is a third of a billion, the second half a million and the final one 166.7) and we get

N \approx \left [\frac{3M}{4\pi \rho h^2}\right ]^{1/3}

with radius R=hN.

The total surface area is

A=\sum_{n=0}^N 4\pi (hn)^2 = 4 \pi h^2 \left (\frac{N^3}{3} +\frac{N^2}{2}+\frac{N}{6}\right ).

So the area grows proportional to the total mass (since N scales as M^{1/3}). It is nearly independent of h (N^3 scales as h^{-2}) – the closer together the floors are, the more floors you get, but the radius increases only slowly. Area also scales as 1/\rho: if we just sliced the planet into microthin films with maximal separation we could get a humongous area.

If we set h=3 meters, \rho=500 kg per square meter, and use the Earth’s mass, then N \approx 6.8\cdot 10^6, with a radius of 20,000 km. Not quite xkcd’s billion floor skyscraper, but respectable floorspace: 1.2\cdot 10^{22} square meters, about 23 million times Earth’s area.

If we raise the ceiling to h=100 meters the number of floors drops to 660,000 and the radius balloons to 65,000 km. If we raise them a fair bit more, h=20 kilometres, then we reach the orbit of the moon with the 19,000th floor. However, the area stubbornly remains about 23 million times Earth. We will get back to this ballooning shortly.
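The floor-count formula is easy to play with; here it is in Python for the three ceiling heights above (Earth's mass and area are standard values, ρ = 500 kg/m² is the assumed floor density):

```python
import math

M_earth = 5.97e24   # kg
A_earth = 5.1e14    # m^2
rho     = 500.0     # kg per square meter of floor (assumed)

def shellworld(h):
    """N ~ (3M/(4*pi*rho*h^2))^(1/3), R = h*N, A ~ 4*pi*h^2*N^3/3."""
    N = (3 * M_earth / (4 * math.pi * rho * h ** 2)) ** (1 / 3)
    return N, h * N, 4 * math.pi * h ** 2 * N ** 3 / 3

for h in (3.0, 100.0, 20e3):
    N, R, A = shellworld(h)
    print(f"h = {h:g} m: {N:.2g} floors, R = {R / 1e3:,.0f} km, A = {A / A_earth:.1e} Earths")
# h=3 m: ~6.8e6 floors, R ~ 20,500 km; h=100 m: ~6.6e5 floors, R ~ 66,000 km;
# h=20 km: ~1.9e4 floors, R ~ 385,000 km. The area stays ~2.3e7 Earths throughout.
```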

Keeping the roof up

The single floor shell has an interesting issue with gravity. If you stand on the surface of a big hollow sphere the surface gravity will be the same as for a planet with the same size and mass (it will be rather low, of course). However, on the inside you would be weightless. This follows from Newton’s shell theorem, which states that the force from a spherically symmetric distribution of mass is proportional to the amount of mass at radii closer to the centre: outside shells of mass do not matter.

This means that the inner shells do not have to worry about the gravity of the outer shells, which is actually a shame: the outer shells still weigh a lot, and that weight has to be transferred inwards by supporting pillars – some upward gravity would really have helped construction, if not habitability. If the shells were amazingly stiff they could just float there as domes with no edge (see the discussion of Dyson shells below), but for real materials we need pillars.

How many pillars do we need? Let’s switch the meaning of \rho to denote mass per cubic meter, making the mass inside a radius r equal to M(r)=4\pi \rho r^3/3. A shell at radius r needs to support the weight of all shells above it, a total force of F(r) = \int_r^R (4 \pi x^2 \rho) (G M(x)/x^2) dx (mass of each shell times the gravitational acceleration). Then F(r) = (16 \pi^2 G \rho^2/3) \int_r^R x^3 dx = (16 \pi^2 G \rho^2/3) [x^4/4]^{R}_r = (4 \pi^2 G \rho^2/3)(R^4 - r^4).

If our pillars have compressive strength P per square meter, we need F(r)/P square meters of pillars at radius r: a fraction F(r)/4 \pi r^2 P = (\pi G \rho^2/3P)(R^4/r^2 - r^2) of the area needs to be pillars. Note that at some radius 100% of the floor has to be pillars.

Plugging in our original h=3 m, \rho=500/3 kg per cubic meter, R=20\cdot 10^6 meter world, and assuming P=443 GPa (diamond), and assuming I have done my algebra right, we get r \approx 880 km – this is the core, where there are actually no floors left. The big moonscraper has a core with radius 46 km, far less.
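
Solving the 100%-pillars condition for r (dropping the small r² term, since r is much smaller than R) takes a couple of lines; diamond's compressive strength and the 500 kg/m² floors are the assumptions from above:

```python
import math

G = 6.674e-11
P = 443e9   # Pa, assumed compressive strength of diamond pillars

def core_radius(rho_vol, R):
    """Radius where (pi*G*rho^2/(3P)) * (R^4/r^2 - r^2) = 1; the r^2 term is negligible."""
    C = math.pi * G * rho_vol ** 2 / (3 * P)
    return math.sqrt(C) * R ** 2

# Volume density = floor surface density / ceiling height.
print(f"h = 3 m world:   core radius ~ {core_radius(500 / 3, 2.05e7) / 1e3:.0f} km")    # ~880 km
print(f"h = 20 km world: core radius ~ {core_radius(500 / 2e4, 3.844e8) / 1e3:.0f} km") # ~46 km
```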

We have so far ignored the weight of all these pillars. They are not going to be insignificant, and if they are long we need to think about buckling and all those annoying real world engineering considerations that actually keep our buildings standing up.

We may think of topological shape optimization: start with a completely filled shell and remove material to make voids, while keeping everything stiff enough to support a spherical surface. At first we might imagine pillars that branch to hold up the surface. But the gravity on those pillars depends on how much stuff is under them, so minimizing that stuff will make the whole thing lighter. I suspect that in the end we get just a shell with some internal bracing, and nothing beneath. Recall the ballooning we got for fewer but taller levels: if there are no levels above a shell, there is no need for pillars. And since there is almost nothing beneath it, there will be little gravity.

Single shell worlds

Making a single giant shell is actually more efficient than the concentric shell world – no wasted pillars, all material used to generate area. That shell has R = \sqrt{M/4 \pi \rho} and area A=4 \pi R^2 = M/\rho (which, when you think about units, is the natural answer). For Earth-mass shells with 500 kg per square meter, the radius becomes 31 million km, and the surface area is 1.2\cdot 10^{22} square meters, 23 million times the Earth’s surface.

The gravity will however be microscopic, since it scales as 1/R^2 – for all practical purposes it is zero. Bad for keeping an atmosphere in. We can of course cheat by simply putting a thin plastic roof on top of this sphere to maintain the atmosphere, but we would still be floating around.

Building shells around central masses seems to be a nice way of getting gravity at first. Just roof over Jupiter at the right radius (\sqrt{GM/g}= 113,000 km) and you have a lot of 1 G living area. Or why not do it with a suitably quiet star? For the sun, that would be a shell with radius 3.7 million km, with an area 334,000 times Earth.

Of course, we may get serious gravity by constructing shells around black holes. If we use the Sagittarius A* black hole we get a radius of 6.9 light-hours, with 1.4 trillion times Earth’s area. Of course, it also needs a lot of shell material, something on the order of 20% of a solar mass if we still assume 500 kg per square meter.
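
The roof radii and areas above all follow from R = \sqrt{GM/g}; a quick check in Python (the Sgr A* mass of about 4 million solar masses is an assumed round value):

```python
import math

G       = 6.674e-11
g       = 9.81       # m/s^2, wanted surface gravity
A_earth = 5.1e14     # m^2
M_sun   = 1.99e30    # kg

def roof(M):
    """Radius where a shell around mass M gives 1 G, plus its area in Earth areas."""
    R = math.sqrt(G * M / g)
    return R, 4 * math.pi * R ** 2 / A_earth

for name, M in [("Jupiter", 1.90e27), ("Sun", M_sun), ("Sgr A*", 4.1e6 * M_sun)]:
    R, A = roof(M)
    print(f"{name}: R = {R / 1e3:,.0f} km, area = {A:,.0f} Earths")
# Jupiter: ~114,000 km (~320 Earths); Sun: ~3.7 million km (~330,000 Earths);
# Sgr A*: ~7.5 billion km, about 6.9 light-hours (~1.4e12 Earths).
```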

As an aside, the shell theorem still remains true: its general relativity counterpart, Birkhoff’s theorem, shows that spherical arrangements of mass produce either flat spacetime (in central voids) or Schwarzschild spacetimes (outside the mass). The flat spacetimes still suffer gravitational time dilation, though.

A small problem is that the shell theorem means the shell will not remain aligned with the internal mass: there is no net force keeping it centred. Anything that hits the surface will give it a bit of momentum away from where it should be. However, this can likely be solved with dynamical corrections: just add engines here and there to realign it.

A far bigger problem is that the structure will be in compression. Each piece will be pulled towards the centre with a force G M \rho/R^2 per square meter, and to remain in place it needs to be held up by neighbouring pieces with an equal force. This must be summed across the entire surface. Frank Palmer pointed out one could calculate this by treating the shell as two hemispheres joined at a seam, finding a total pressure of g \rho R /2. If we have a maximum strength P_{max} the maximal radius for a given gravity becomes R = 2 P_{max}/(g \rho). Using diamond and 1 G we get R=180,000 km. That is not much, at least if we dream about enclosing stars (Jupiter is fine). Worse, buckling is a real problem.

Bubbleworlds

Dani Eder suggested another way of supporting the shell: add gas inside, and let its pressure keep it inflated. Such bubble worlds have an upper limit set by self-gravity; Eder calculated the maximal radius as 240,000 km for a hydrogen bubble. It has 1400  times the Earth’s area, but one could of course divide the top layers into internal floors too. See also the analysis at gravitationalballoon.blogspot.se for more details (that blog itself is a goldmine for inflated megastructures).

Eder also points out that one limit of the size of such worlds is the need to radiate heat from the inhabitants. Each human produces about 100 W of waste heat; this has to be radiated away from a surface area of 4 \pi R^2 at around 300K: this means that the maximum number of inhabitants is N = 4 \pi \sigma R^2 300^4 / 100. For a bubbleworld this is 3.3\cdot 10^{18} people. For Earth, it is 2.3\cdot 10^{15} people.
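
Eder's heat limit in Python (the 300 K radiating temperature and 100 W per person are the assumptions above):

```python
import math

sigma    = 5.67e-8   # W m^-2 K^-4, Stefan-Boltzmann constant
T        = 300.0     # K, assumed radiating temperature
P_person = 100.0     # W of waste heat per inhabitant

def max_population(R):
    """Budget all blackbody radiation from the outer surface to inhabitant waste heat."""
    return 4 * math.pi * R ** 2 * sigma * T ** 4 / P_person

print(f"bubbleworld (R = 240,000 km): {max_population(2.4e8):.1e} people")   # ~3.3e18
print(f"Earth (R = 6,371 km):         {max_population(6.371e6):.1e} people") # ~2.3e15
```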

Living space

If we accept volume instead of area, we may think of living inside such bubbles. Karl Schroeder’s Virga books come to mind, although he modestly went for something like a 5,000 mile diameter. Niven discusses building an air-filled volume around a Dyson shell surrounding the galactic core, with literally cubic lightyears of air.

The ultimate limit is avoiding Jeans instability: sufficiently large gas volumes are unstable against gravitational contraction and will implode into stars or planets. The Jeans length is

L=\sqrt{15 kT/(4\pi G m \rho)}

where m is the mass per particle. Plugging in 300 K, the mass of nitrogen molecules and air density I get a radius of 40,000 km (see also this post for some alternate numbers). This is a liveable volume of 2.5\cdot 10^{14} cubic kilometres, or 0.17 Jupiter volumes. The overall calculation is somewhat approximate, since such a gas mass will not have constant density throughout and there has to be loads of corrections, but it gives a rough sense of the volume. Schroeder does OK, but Niven’s megasphere is not possible.
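
The Jeans length calculation in Python, with room-temperature nitrogen at roughly sea-level density (it lands a little below the rounded 40,000 km quoted above):

```python
import math

k_B = 1.381e-23      # J/K
G   = 6.674e-11
T   = 300.0          # K
m   = 28 * 1.66e-27  # kg, mass of a nitrogen molecule
rho = 1.2            # kg/m^3, air at roughly sea-level density

L = math.sqrt(15 * k_B * T / (4 * math.pi * G * m * rho))   # Jeans length as above
V = 4 * math.pi * L ** 3 / 3

print(f"Jeans radius: {L / 1e3:,.0f} km")                          # ~36,000 km
print(f"volume: {V / 1e9:.1e} km^3 = {V / 1.43e24:.2f} Jupiters")  # ~2e14 km^3, ~0.14 Jupiters
```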

Living on surfaces might be a mistake. At least if one wants a lot of living space.

Bigger than worlds

The locus classicus on artificial megastructures is Larry Niven’s essay Bigger than Worlds. Besides the normal big things like O’Neill cylinders it leads up to the truly big ones like Dyson spheres. It mentions that Dan Alderson suggested a double Dyson sphere, where two concentric shells have atmosphere between them and gravity is provided by the internal star. (His Alderson Disk design is ruled out for consideration in my essay because we do not know of any physics that would allow materials that strong.) Of course, as discussed above, solid Dyson shells are problematic to build. A Dyson swarm of free-floating habitats and solar collectors is far more physically plausible, but fails at being *a* world: it is a collection of a lot of worlds.

One fun idea mentioned by Niven is the topopolis suggested by Pat Gunkel. Consider a very long cylinder rotating about its axis: it has internal pseudogravity, it is mechanically possible (there is some stress on the circumferential material, but unless the radius or rotation is very large or fast we know how to build this from existing materials like carbon fibers). There is no force between the hoops making up the cylinder: were we to cut them apart they would still rotate in line.

Section of a long cylindrical O’Neill style habitat.

Now make the cylinder 2 \pi R km long and bend it into a torus with major radius R. If the cylinder has radius r, the difference in circumference between the outer and inner edge is 2 \pi [(R+r)-(R-r)]=4\pi r. Spread out around the circumference, that means each hoop is subjected to a compression of size 4 \pi r /(2\pi R)=2 (r/R) if it continues to rotate like it did before. Since R is huge, this is a very small factor. This is also why the curvature of the initial bend can be ignored. For a topopolis orbiting Earth in geostationary orbit, if r is 1 km the compression factor is 4.7\cdot 10^{-5}; if it loops around the sun and is 1000 km across the effect is just 10^{-5}. Heat expansion is likely a bigger problem. At large enough scales O’Neill cylinders are like floppy hoses.

A long cylinder habitat has been closed into a torus. Rotation is still along the local axis, rather than around the torus axis.

The area would be 2 \pi R r. In the first case 0.0005 of Earth’s area, in the second case 1842 times.

A topopolis wrapped as a 3:2 torus knot around another body.

The funny thing about topopolis is that there is no reason for it to go just one turn around the orbited object. It could form a large torus knot winding around the object. So why not double, triple or quadruple the area? In principle we could just keep going and get nearly any area (up until the point where self-gravity starts to matter).

There is some trouble with Kepler’s second law: parts closer to the central body will tend to move faster, causing tension and compression along the topopolis, but if the change in radial distance is small these forces will also be small and spread out along an enormous length.

Unfortunately topopolis has the same problem as a ringworld: it is not stably in orbit if it is rigid (any displacement tends to be amplified), and the flexibility likely makes things far worse. Like the ringworld and Dyson shell it can plausibly be kept in shape by active control, perhaps solar sails or thrusters that fire to keep it where it should be. This also serves to ensure that it does not collide with itself: effectively there are carefully tuned transversal waves progressing around the circumference, keeping it shaped like a proper knot. But I do not want to be anywhere close if there is an error: this kind of system will not fail gracefully.

Discussion

| World | Radius (Earths) | Area (Earths) | Notes |
|---|---|---|---|
| Iron earth | 2.7 | 7.3 | |
| Perovskite earth | 3 | 9 | |
| Ice earth | 5 | 25 | |
| Rotating ice | 2.5x12x12 | 92 | |
| Diamond 1G planet | 1.6 | 2.56 | |
| Graphite 1G planet | 2.6 | 7 | Unstable |
| Aerogel 1G planet | 1837 | 337,000 | Unstable |
| Microlattice 1G planet | 6125 | 50 million | Unstable |
| Shellworld (h=3) | 3.1 | 23 million | |
| Shellworld (h=100) | 10.2 | 23 million | |
| Single shell | 4865 | 23 million | |
| Jupiter roof | 17.7 | 313 | Stability? |
| Sun roof | 581 | 334,000 | Strength issue |
| Sag A roof | 1.20\cdot 10^6 | 1.36\cdot 10^{12} | Strength issue |
| Bubbleworld | 37.7 | 1400 | |
| Jeans length | 6.27 | 39 | |
| 1 AU ring | | 1842 | Stability? |

Why aim for a large world in the first place? There are three apparent reasons. The first is simply survival, or perhaps Lebensraum: large worlds have more space for more beings, and this may be a good thing in itself. The second is to have more space for stuff of value, whether that is toys, gardens or wilderness. The third is a desire for diversity: a large world can have more places that are different from each other. There is more space for exploration, for divergent evolution. Even if the world is deliberately made, parts can become different and unique.

Planets are neat, self-assembling systems. They also use a lot of mass to provide gravity and are not very good at producing living space. Artificial constructs can become far larger and are far more efficient at living space per kilogram. But in the end they tend to be limited by gravity.

Our search for the largest possible world suggests that demanding a singular world may be a foolish constraint: a swarm of O’Neill cylinders, or a Dyson swarm surrounding a star, has enormously more area than any singular structure and few of the mechanical problems. Even a carefully arranged solar system could have far more habitable worlds within (relatively) easy reach.

One world is not enough, no matter how large.

Energy requirements of the singularity

Infinity of Forces: The Beanstalk

After a recent lecture about the singularity I got asked about its energy requirements. It is a good question. As my inquirer pointed out, humanity uses more and more energy, and it generally has an environmental cost. If it keeps on growing exponentially, something has to give. And if there is a real singularity, how do you handle infinite energy demands?

First I will look at current trends, then different models of the singularity.

I will not deal directly with environmental costs here. They are relative to some idea of the value of an environment, and there are many ways to approach that question.

Current trends

Current computers are energy hogs. General-purpose computing consumes about one petawatt-hour per year, out of a total world electricity production somewhere above 22 PWh. While large data centres may be the obvious culprits, the vast number of low-power devices may be an even more significant factor; up to 10% of our electricity use may be due to ICT.

Together they perform on the order of 10^{20} operations per second, approaching the zettaFLOPS range.

Koomey’s law states that the number of computations per joule of energy dissipated has been doubling approximately every 1.57 years. This might speed up as the pressure to make efficient computing for wearable devices and large data centres makes itself felt. Indeed, these days performance per watt is often more important than performance per dollar.

Meanwhile, general-purpose computing capacity has a growth rate of 58% per annum, doubling every 18 months. Since these trends cancel rather neatly, the overall energy need is not changing significantly.
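As a quick sanity check on how neatly these two trends cancel (a sketch using only the growth figures quoted above):

```python
capacity_growth = 1.58              # computing capacity multiplier per year (58%/yr)
efficiency_growth = 2 ** (1 / 1.57) # computations per joule multiplier per year (Koomey)

energy_growth = capacity_growth / efficiency_growth
print(f"Energy used by computing changes by ~{100 * (energy_growth - 1):.1f}% per year")
# ~1.6% per year: nearly flat compared with either underlying trend.
```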

The push for low-power computing may make computing greener, and it might also make other domains more efficient by moving tasks into the virtual world, where they can be streamlined and resources allocated better. On the other hand, as things become cheaper and more efficient, usage tends to go up, sometimes outweighing the gain (the rebound effect). Which trend wins out in the long run is hard to predict.

Semilog plot of global energy consumption (all types) over time.

Looking at the overall trends, energy use seems to increase exponentially (although per capita use has stayed at roughly the same level since the 1970s). In fact, plotting it on a semilog graph suggests that it is increasing faster than exponentially (otherwise it would be a straight line), presumably due to a combination of population growth and increasing per capita use. The best fit exponential has a doubling time of 44.8 years.

Electricity use is also roughly exponential, with a doubling time of 19.3 years. So we might be shifting more and more to electricity, and computing might be taking over more and more of that.

Extrapolating wildly, we would need the total solar input on Earth in about 300 years and the total solar luminosity in 911 years. In about 1,613 years we would have used up the solar system’s mass energy. So, clearly, long before then these trends will break one way or another.

Physics places a firm boundary due to the Landauer principle: in order to erase one bit of information, kT \ln(2) joules of energy have to be dissipated. Given current efficiency trends we will reach this limit around 2048.
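For reference, a back-of-the-envelope sketch of what the Landauer limit amounts to at an assumed room temperature of 300 K:

```python
from math import log

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # assumed operating temperature, K

energy_per_bit = k_B * T * log(2)        # joules dissipated per erased bit
erasures_per_joule = 1 / energy_per_bit  # upper bound on irreversible ops per joule

print(f"{energy_per_bit:.2e} J per erased bit, "
      f"at most {erasures_per_joule:.2e} erasures per joule")
# roughly 2.9e-21 J per bit, i.e. about 3.5e20 erasures per joule
```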

The principle can be circumvented using reversible computation, either classical or quantum. But as I often like to point out, it still bites in the form of the need for error correction (erasing accidentally flipped bits) and formatting new computational resources (besides the work in turning raw materials into bits). We should hence expect a radical change in computation within a few decades, even if the cost per computation continues to fall exponentially.

What kind of singularity?

But how many joules of energy does a technological singularity actually need? It depends on what kind of singularity. In my own list of singularity meanings we have the following kinds:

A. Accelerating change
B. Self improving technology
C. Intelligence explosion
D. Emergence of superintelligence
E. Prediction horizon
F. Phase transition
G. Complexity disaster
H. Inflexion point
I. Infinite progress

Case A, acceleration, at first seems to imply increasing energy demands, but if efficiency grows faster they could of course go down.

Eric Chaisson has argued that energy rate density, how fast and densely energy gets used (watts per kilogram), might be an indicator of complexity and grow according to a universal tendency. By this account, we should expect the singularity to have an extreme energy rate density – but it does not have to be using enormous amounts of energy if it is very small and light.

He suggests energy rate density may increase like Moore’s law, at least in our current technological setting. If we assume this to be true, then we would have \Phi(t) = \exp(kt) = P(t)/M(t), where P(t) is the power used by the system and M(t) is its mass at time t. One can maintain exponential growth by reducing the mass as well as increasing the power.

However, waste heat will need to be dissipated. If we use the simplest model, where a system of radius R and density \rho radiates it away into space, then the temperature will be T=[\rho \Phi R/(3 \sigma)]^{1/4}, or, if we have a maximal acceptable temperature, R < 3\sigma T^4 / (\rho \Phi). So the system needs to become smaller as \Phi increases. If we use active heat transport instead (as outlined in my previous post), covering the surface with heat pipes that can remove X watts per square meter, then R < 3 X / (\rho \Phi). Again, the radius will be inversely proportional to \Phi. This is similar to our current computers, where the CPU is a tiny part surrounded by cooling and energy supply.
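A small sketch of the two bounds; the density, energy rate density, temperature ceiling and heat-pipe capacity are illustrative values I have assumed, not figures from the text:

```python
sigma = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4
rho   = 1000.0    # assumed density, kg/m^3
Phi   = 100.0     # assumed energy rate density, W/kg
T_max = 350.0     # assumed maximum acceptable temperature, K
X     = 1e5       # assumed heat removal per unit surface, W/m^2

R_passive = 3 * sigma * T_max**4 / (rho * Phi)  # blackbody radiation bound
R_active  = 3 * X / (rho * Phi)                 # heat-pipe bound

print(f"passive limit ~ {R_passive:.3f} m, active limit ~ {R_active:.1f} m")
# Both bounds shrink as 1/Phi: higher energy rate density forces a smaller core.
```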

If we assume the waste heat is just due to erasing bits, the rate of computation will be I = P/(kT \ln 2) = \Phi M / (kT\ln 2) = [4 \pi \rho /(3 k \ln 2)] \Phi R^3 / T bits per second. Using the first cooling model gives us I \propto T^{11}/ \Phi^2 – a massive advantage for running extremely hot and dense computation. In the second cooling model I \propto \Phi^{-2}: in both cases higher energy rate densities make it harder to compute when close to the thermodynamic limit. Hence there might be an upper limit to how much we may want to push \Phi.

Also, a system with mass M will use up its own mass-energy in time Mc^2/P = c^2/\Phi: the higher the rate, the faster it will run out (and the time is independent of size!). If the system is expanding at speed v it will gain and use up mass at a rate M'= 4\pi\rho v^3 t^2 - M\Phi(t)/c^2; if \Phi grows faster than quadratically with time it will eventually run out of mass to use. Hence the exponential growth must eventually break down, simply because of the finite lightspeed.
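The burn time c^2/\Phi is easy to get a feel for; a sketch with a few illustrative (assumed) energy rate densities:

```python
c = 2.998e8     # speed of light, m/s
YEAR = 3.156e7  # seconds per year

for Phi in [2.0, 1e4, 1e9]:  # assumed energy rate densities, W/kg
    t = c**2 / Phi
    print(f"Phi = {Phi:.0e} W/kg -> burns through its mass-energy in {t / YEAR:.2e} years")
# About 1.4e9 years at 2 W/kg, but only a few years at 1e9 W/kg.
```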

The Chaisson scenario does not suggest a “sustainable” singularity. Rather, it suggests a local intense transformation involving small, dense nuclei using up local resources. However, such local “detonations” may then spread, depending on the long-term goals of involved entities.

Cases B, C and D (self-improving technology, intelligence explosion, superintelligence) have an unclear energy profile. We do not know how complex the code would become or what kind of computational search is needed to get to superintelligence. It could be that it is more a matter of smart insights, in which case the needs are modest, or a huge deep learning-like project involving massive amounts of data sloshing around, requiring a lot of energy.

Case E, a prediction horizon, is separate from energy use. As this essay shows, there are some things we can say about superintelligent computational systems based on known physics that likely remain valid no matter what.

Case F, phase transition, involves a change in organisation rather than computation, for example the formation of a global brain out of previously uncoordinated people. However, this might very well have energy implications. Physical phase transitions involve discontinuities of the derivatives of the free energy. If the phases have different entropies (first order transitions) there has to be some addition or release of energy. So it might actually be possible that a societal phase transition requires a fixed (and possibly large) amount of energy to reorganize everything into the new order.

There are also second order transitions. These are continuous and do not have a latent heat, but show divergent susceptibilities (how much the system responds to an external forcing). They might be more like how we normally imagine an ordering process, with local fluctuations near the critical point leading to large and eventually dominant changes in how things are ordered. It is not clear to me that this kind of singularity would have any particular energy requirement.

Case G, complexity disaster, is related to superexponential growth, such as the city growth model of Bettencourt, West et al. or the work on bubbles and finite-time singularities by Didier Sornette. Here the rapid growth rate leads to a crisis, or more accurately a series of crises succeeding each other increasingly rapidly until a final singularity. Beyond that the system must behave in some different manner. These models typically predict rapidly increasing resource use (indeed, this is the cause of the crisis sequence, as one kind of growth runs into resource scaling problems and is replaced with another one), although as Sornette points out the post-singularity state might well be a stable non-rivalrous knowledge economy.

Case H, an inflexion point, is very vanilla. It would represent the point where our civilization is halfway from where we started to where we are going. It might correspond to “peak energy” where we shift from increasing usage to decreasing usage (for whatever reason), but it does not have to. It could just be that we figure out most physics and AI in the next decades, become a spacefaring posthuman civilization, and expand for the next few billion years, using ever more energy but not having the same intense rate of knowledge growth as during the brief early era when we went from hunter gatherers to posthumans.

Case I, infinite growth, is not normally possible in the physical universe. Information can, as far as we know, not be stored beyond densities set by the Bekenstein bound (I \leq k_I MR, where k_I\approx 2.577\cdot 10^{43} bits per kg per meter), and we only have access to a volume 4 \pi c^3 t^3/3 with mass density \rho, so the total information must be bounded by I \leq 4 \pi k_I c^4 \rho t^4/3. It grows quickly, but still just polynomially.
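To illustrate the polynomial bound, a sketch evaluating I \leq 4\pi k_I c^4 \rho t^4/3; the density is my assumption (roughly the critical density of the universe), not a figure from the text:

```python
from math import pi

k_I = 2.577e43  # Bekenstein coefficient, bits per kg per metre
c   = 2.998e8   # speed of light, m/s
rho = 9.5e-27   # assumed mean density, kg/m^3 (roughly the critical density)
YEAR = 3.156e7  # seconds per year

def max_bits(t_seconds):
    """Upper bound on total stored information after time t."""
    return (4 * pi / 3) * k_I * c**4 * rho * t_seconds**4

for years in [1e2, 1e6, 1e10]:
    print(f"after {years:.0e} years: at most {max_bits(years * YEAR):.2e} bits")
# Fast growth (as t^4), but still only polynomial.
```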

The exception to the finitude of growth is if we approach the boundaries of spacetime. Frank J. Tipler’s omega point theory shows how information processing could go infinite in a finite (proper) time in the right kind of collapsing universe with the right kind of physics. It doesn’t look like we live in one, but the possibility is tantalizing: could we arrange the right kind of extreme spacetime collapse to get the right kind of boundary for a mini-omega? It would be way beyond black hole computing and never be able to send back information, but still allow infinite experience. Most likely we are stuck in finitude, but it won’t hurt poking at the limits.

Conclusions

Indefinite exponential growth is never possible for physical properties that have some resource limitation, whether energy, space or heat dissipation. Sooner or later they will have to shift to a slower rate of growth – polynomial for expanding organisational processes (forced to this by the dimensionality of space, finite lightspeed and heat dissipation), and declining growth rate for processes dependent on a non-renewable resource.

That does not tell us much about the energy demands of a technological singularity. We can conclude that it cannot be infinite. It might be high enough that we bump into the resource, thermal and computational limits, which may be what actually defines the singularity energy and time scale. Technological singularities may also be small, intense and localized detonations that merely use up local resources, possibly spreading and repeating. But it could also turn out that advanced thinking is very low-energy (reversible or quantum) or requires merely manipulation of high level symbols, leading to a quiet singularity.

My own guess is that life and intelligence will always expand to fill whatever niche is available, and use the available resources as intensively as possible. That leads to instabilities and depletion, but also expansion. I think we are – if we are lucky and wise – set for a global conversion of the non-living universe into life, intelligence and complexity, a vast phase transition of matter and energy where we are part of the nucleating agent. It might not be sustainable over cosmological timescales, but neither is our universe itself. I’d rather see the stars and planets filled with new and experiencing things than continue a slow dance into the twilight of entropy.

…contemplate the marvel that is existence and rejoice that you are able to do so. I feel I have the right to tell you this because, as I am inscribing these words, I am doing the same.
– Ted Chiang, Exhalation

 

Just how efficient can a Jupiter brain be?

Large information processing objects have some serious limitations due to signal delays and heat production.

Latency

XIX: The Dyson Sun

Consider a spherical “Jupiter-brain” of radius R. It will take maximally 2R/c seconds to signal across it, and the average time between two random points (selected uniformly) will be 36R/35c seconds.

Whether this is too much depends on the requirements of the system. Typically the relevant question is whether the transmission latency L is long compared to the processing time t of the local processing. In the case of the human brain delays range from a few milliseconds up to 100 milliseconds, and neurons have typical frequencies of at most about 100 Hz. The ratio L/t between transmission time and a “processing cycle” will hence be between 0.1 and 10, i.e. not far from unity. In a microprocessor the processing time is on the order of 10^{-9} s and delays across the chip (assuming signals at 10% of c) are \approx 3\cdot 10^{-10} s, giving L/t\approx 0.3.

If signals move at lightspeed and the system needs to maintain a ratio close to unity, then the maximal size will be R < tc/2 (or tc/4 if information must also be sent back after a request). For nanosecond cycles this is on the order of centimeters, for femtosecond cycles 0.1 microns; conversely, for a planet-sized system (R=6000 km) t=0.04 s, corresponding to 25 Hz.
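A minimal sketch of these latency-limited sizes, using the cycle times and planetary radius quoted above:

```python
c = 2.998e8  # speed of light, m/s

# Keep the crossing delay comparable to one processing cycle: R < t*c/2.
for label, t in [("nanosecond cycle", 1e-9), ("femtosecond cycle", 1e-15)]:
    print(f"{label}: R < {c * t / 2:.2e} m")

# Going the other way: a planet-sized system.
R = 6.0e6  # metres
t = 2 * R / c
print(f"R = 6000 km: cycle time >= {t:.3f} s, i.e. about {1 / t:.0f} Hz")
```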

The size of a computational element is itself bounded by lightspeed: an element such as a transistor needs to have a radius smaller than the distance a signal can cross during one cycle, otherwise it would not function as a unitary element. Hence it must be of size r < c t or, conversely, the cycle time must be slower than r/c seconds. If a unit volume performs C computations per second close to this limit, C=(c/r)(1/r)^3, or C=c/r^4. (More elaborate analysis can deal with quantum limitations to processing, but this post will be classical.)
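A quick sketch of the resulting computational density C = c/r^4, for two illustrative (assumed) element sizes:

```python
c = 2.998e8  # speed of light, m/s

for r in [1e-6, 1e-9]:  # assumed computational element radii, metres
    C = c / r**4
    print(f"r = {r:.0e} m: C ~ {C:.1e} computations per m^3 per second")
# Shrinking the elements by a factor 1000 buys a factor 10^12 in density.
```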

This does not mean larger systems are impossible, merely that the latency will be long compared to local processing (compare the Web). It is possible to split the larger system into a hierarchy of subsystems that are internally synchronized and communicate on slower timescales to form a unified larger system. It is sometimes claimed that very fast solid state civilizations will be uninterested in the outside world since it both moves immeasurably slowly and any interaction will take a long time as measured inside the fast civilization. However, such hierarchical arrangements may be both very large and arbitrarily slow: the civilization as a whole may find the universe moving at a convenient speed, despite individual members finding it frozen.

Waste heat dissipation

Information processing leads to waste heat production at some rate P Watts per cubic meter.

Passive cooling

If the system just cools by blackbody radiation, the maximal radius for a given maximal temperature T is

R = \frac{3 \sigma T^4}{P}

where \sigma \approx 5.670\cdot 10^{-8} W/m^2/K^4 is the Stefan–Boltzmann constant. This assumes heat is efficiently distributed in the interior.

If it does C computations per volume per second, the total number of computations per second is 4 \pi R^3 C / 3=36 \pi \sigma^3 T^{12} C /P^3 – it really pays off being able to run it hot!

Still, molecular matter will melt above 3600 K, giving a max radius of around 29,000/P km. Current CPUs have power densities somewhat below 100 Watts per cm^2; if we assume 100 W per cubic centimetre, P=10^8 and R < 29 cm! If we assume a power dissipation similar to the human brain, P=1.43\cdot 10^4 and the max size becomes 2 km. Clearly the average power density needs to be very low to motivate a large system.
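A small sketch reproducing these passive-cooling radii from R = 3\sigma T^4/P at T = 3600 K, using the power densities quoted above:

```python
sigma = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4
T = 3600.0        # K, near the melting limit of molecular matter

for label, P in [("CPU-like, 100 W/cm^3", 1e8),
                 ("brain-like", 1.43e4),
                 ("quantum dot logic", 61787.0)]:
    R = 3 * sigma * T**4 / P  # maximal radius in metres
    print(f"{label}: R ~ {R:.3g} m")
# Close to the radii quoted in the surrounding text.
```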

Using quantum dot logic gives a power dissipation of 61,787 W/m^3 and a radius of 470 meters. However, by slowing down operations by a factor \sqrt{f} the energy needs decrease by the factor f. A reduction of speed to 3% gives a reduction of dissipation by a factor 10^{-3}, enabling a 470 kilometre system. Since the total number of computations per second for the whole system scales with size as R^3 \sqrt{f} \propto \sqrt{f}/P^3 \propto f^{-2.5}, slow reversible computing produces more computations per second in total than hotter computing. The slower clock speed also makes it easier to maintain unitary subsystems. The maximal size of each such subsystem scales as r \propto 1/\sqrt{f}, and the total amount of computation inside it scales as r^3 \propto f^{-1.5}. In the total system the number of subsystems changes as (R/r)^3 = f^{-3/2}: although they get larger, the whole system grows even faster and becomes less unified.
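A sketch of this slowdown trade-off, anchored on the quantum dot radius above (the scaling relations are the ones just derived):

```python
R0 = 470.0  # thermally limited radius at full speed, m (quantum dot figure above)

for f in [1.0, 1e-3]:
    R = R0 / f                    # dissipation ~ f, so the allowed radius grows as 1/f
    total = (R / R0)**3 * f**0.5  # total computation relative to full speed: f^(-2.5)
    print(f"f = {f:g}: R ~ {R / 1000:.3g} km, relative total computation ~ {total:.1e}")
# f = 0.001 gives a ~470 km system doing ~3e7 times more total computation.
```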

The limit on heat emissions is set by the Landauer principle: we need to pay at least k_B T\ln(2) joules for each erased bit. So the number of bit erasures per second and cubic meter, I, will be less than P/(k_B T\ln 2). For a planet-sized system P will be around 1-10 W/m^3, implying I < 6.7\cdot 10^{19-20} for a hot 3600 K system, and I < 8.0\cdot 10^{22-23} for a cold 3 K system.

Active cooling

Passive cooling just uses the surface area of the system to radiate heat away to space. But we can pump coolants from the interior to the surface, and we can use heat radiators much larger than the surface area. This is especially effective for low temperatures, where radiative cooling is very weak and heat flows are normally gentle (remember, they are driven by temperature differences: there is not much room for big differences when everything is close to 0 K).

If we have a sphere of radius R with internal volume V(R) of heat-emitting computronium, the surface must have PV(R)/X area devoted to cooling pipes to get rid of the heat, where X is the amount of heat in watts that can be carried away by a square meter of piping. This can be formulated as the differential equation:

V'(R)= 4\pi R^2 - PV(R)/X.
The solution is
V(R)=4 \pi ( (P/X)^2R^2 - 2 (P/X) R - 2 \exp(-(P/X)R) + 2) (X^3/P^3).

This grows as R^2 for larger R. The average computronium density across the system falls as 1/R as the system becomes larger.
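A sketch checking numerically that the closed form above solves the differential equation (P and X are illustrative values I have assumed, not figures from the text):

```python
from math import pi, exp

P = 100.0  # assumed heat production, W/m^3
X = 1e5    # assumed heat removal per square metre of piping, W/m^2
a = P / X

def V_exact(R):
    """Closed-form solution quoted above, with a = P/X."""
    return 4 * pi * (a**2 * R**2 - 2 * a * R + 2 - 2 * exp(-a * R)) / a**3

# Simple Euler integration of V'(R) = 4*pi*R^2 - a*V(R), with V(0) = 0.
R, V, dR = 0.0, 0.0, 0.01
while R < 1000.0:
    V += (4 * pi * R**2 - a * V) * dR
    R += dR

print(f"numeric V(1000) = {V:.4e} m^3, exact = {V_exact(1000.0):.4e} m^3")
```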

If we go for a cooling substance with a great heat capacity per unit mass at 25 °C, hydrogen offers 14.30 J/g/K. In terms of volume, water is better at 4.2 J/cm^3/K. However, near absolute zero heat capacities drop towards zero and there are few fluids to choose from. One neat possibility is superfluid cooling. Superfluids carry no thermal energy, but they can transport heat by being converted into normal fluid at the warm end, with a frictionless countercurrent bringing back superfluid from the cold end. The rate is limited by the viscosity of the normal fluid, and apparently there are critical velocities of the order of mm/s. A CERN paper gives the formula Q=[\rho_s^3 S^4 T^3 \Delta T / (A \rho_n L)]^{1/3} for the heat transport rate per square meter, where A is 800 m s/kg at 1.8 K, \rho_n is the density of normal fluid, \rho_s that of the superfluid, S is the entropy per unit mass, and L the length of the conduit. Looking at it as a technical coolant gives a steady-state heat flux of around 1.2 W/cm^2 in a 1 meter pipe for a 1.9-1.8 K temperature difference. There are various nonlinearities and limitations due to the need to keep things below the lambda point. Overall, this corresponds to a heat transfer coefficient of about 1.2\cdot 10^{4} W/m^2/K, in line with the 10,000-100,000 W/m^2/K found in forced convection (liquid metals have the best transfer ability).

So if we assume about a 1 K temperature difference, then for quantum dots at full speed P/X=61787/10^5=0.61787, and a one kilometre system has a computational volume of 7.7 million cubic meters of computronium, or about 0.001 of the total volume. Slowing it down to 3% (reducing emissions by a factor 1000) boosts the density to 86%. At this intensity a 1000 km system would look the same as the previous low-density one.

Conclusion

If the figure of merit is just computational capacity, then obviously a larger computer is always better. But if it matters that parts stay synchronized, then there is a size limit set by lightspeed. Smaller components are better in this analysis, which leaves out issues of error correction – below a certain size thermal noise, quantum tunneling and cosmic rays will start to induce errors. Handling high temperatures well pays off enormously in computational power for a computer not limited by synchronization or latency; beyond that point, reducing volumetric heat production has a greater influence on total computation than raw computational density.

Active cooling is better than passive cooling, but the cost is wasted volume, which means longer signal delays. In the above model there is more computronium at the centre than at the periphery, somewhat ameliorating the effect (the mean distance is just 0.03R). However, this ignores the key issue of wiring, which is likely to be significant if everything needs to be connected to everything else.

In short, building a Jupiter-sized computer is tough. Asteroid-sized ones are far easier. If we ever find or build planet-sized systems they will either be reversible computing, or mostly passive storage rather than processing. Processors by their nature tend to be hot and small.

[Addendum: this article has been republished in H+ Magazine thanks to Peter Rothman.]