The end of the worlds

George Dvorsky has a piece on Io9 about ways we could wreck the solar system, where he cites me in a few places. This is mostly for fun, but I think it links to an important existential risk issue: what conceivable threats have a big enough spatial reach to threaten an interplanetary or even star-faring civilization?

This matters, since most existential risks we worry about today (like nuclear war, bioweapons, global ecological/societal crashes) only affect one planet. But if existential risk is the answer to the Fermi question, then the peril has to strike reliably. If it is one of the local ones it has to strike early: a multi-planet civilization is largely immune to the local risks. It will not just be distributed, but it will almost by necessity have fairly self-sufficient habitats that could act as seeds for a new civilization if they survive. Since it is entirely conceivable that we could have invented rockets and spaceflight long before discovering anything odd about uranium or how genetics work, it seems unlikely that any of these local risks are “it”. That means that the risks have to be spatially bigger (or, of course, that xrisk is not the answer to the Fermi question).

Of the risks mentioned by George, physics disasters are intriguing, since they might irradiate solar systems efficiently. But the reliability of them being triggered before interstellar spread seems problematic. Stellar engineering, stellification and orbit manipulation may be issues, but they hardly happen early – lots of time to escape. Warp drives and wormholes are also likely late activities, and do not seem to be reliable as extinctors. These are all still relatively localized: while able to irradiate a largish volume, they are not fine-tuned to cause damage and do not follow fleeing people. Dangers from self-replicating or self-improving machines seem to be a plausible, spatially unbound risk that could pursue (but also problematic for the Fermi question, since now the machines are the aliens). Attracting malevolent aliens may actually be a relevant risk: assuming von Neumann probes one can set up global warning systems or “police probes” that maintain whatever rules the original programmers desire, and it is not too hard to imagine ruthless or uncaring systems that could enforce the great silence. Since early civilizations have the chance to spread to enormous volumes given a certain level of technology, this might matter more than one might a priori believe.

So, in the end, it seems that anything releasing a dangerous energy burst will only affect a fixed volume. If it has energy E and one can survive below a deposited energy per unit area e, then if it radiates in all directions the safe range is r = \sqrt{E/4 \pi e} \propto \sqrt{E} – one needs to get into supernova ranges to sterilize interstellar volumes. If it is directional the range goes up: if only a fraction f of the sky is affected, the range increases as \propto \sqrt{1/f}, while the affected volume scales as \propto f r^3 \propto \sqrt{1/f} – a tighter beam actually endangers a larger total volume, just along fewer directions.
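To make the scaling concrete, here is a quick back-of-the-envelope in Python. The 10^{44} J supernova-scale energy is an illustrative assumption, and the 100 kJ/m^2 survivable-fluence threshold matches the extinction-level fluence used in the GRB discussion further down:

```python
import math

def safe_range_m(E, e, f=1.0):
    """Safe distance for a release of energy E (J), given a survivable
    fluence threshold e (J/m^2), beamed into a sky fraction f (f=1 is
    isotropic)."""
    return math.sqrt(E / (4 * math.pi * e * f))

# Supernova-scale release (~1e44 J) against an extinction-level
# fluence threshold of 100 kJ/m^2:
r = safe_range_m(1e44, 1e5)
print(r / 9.46e15)  # in light-years: roughly a thousand
```

A supernova-scale event thus sterilizes out to hundreds of light-years, consistent with needing supernova energies to reach interstellar volumes.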

Self-sustaining effects are worse, but they need to cross space: if their range is smaller than interplanetary distances they may destroy a planet but nothing more. For example, a black hole merely absorbs a planet or star (releasing a nasty energy blast) but does not continue sucking up stuff. Vacuum decay, on the other hand, has indefinite range and moves at lightspeed. Accidental self-replication is unlikely to be spaceworthy unless it starts among space-moving machinery; here deliberate design is a more serious problem.

The speed of threat spread also matters. If it is fast enough, no escape is possible. However, many of the replicating threats will have sublight speed and could hence be escaped by sufficiently paranoid aliens. The issue here is whether lightweight and hence faster replicators can always outrun larger aliens; given the accelerating expansion of the universe it might be possible to outrun them by being early enough, but our calculations do suggest that the margins look very slim.

The more information you have about a target, the better you can, in general, harm it. If you have no information, merely randomizing it with enough energy/entropy is the only option (and if you have no information about where it is, you need to radiate in all directions). As you learn more, you can focus resources to do more harm per unit expended, up to the extreme limit of solving the optimization problem of finding the informational/environmental inputs that cause the desired harm (=hacking). This suggests that mindless threats will nearly always have shorter range and smaller harms than threats designed by (or constituted by) intelligent minds.

In the end, the most likely type of actual civilization-ending threat for an interplanetary civilization looks like it needs to be self-replicating/self-sustaining, able to spread through space, and have at least a tropism towards escaping entities. The smarter, the more effective it can be. This includes both nasty AI and replicators, but also predecessor civilizations that have infrastructure in place. Civilizations cannot be expected to reliably do foolish things with planetary orbits or risky physics.

[Addendum: Charles Stross has written an interesting essay on the risk of griefers as a threat explanation. ]

[Addendum II: Robin Hanson has a response to the rest of us, where he outlines another nasty scenario. ]

 

A sustainable orbital death ray

I have for many years been a fan of the webcomic Schlock Mercenary. Hardish, humorous military sf with some nice, long-term plotting.

In the current plotline (some spoilers ahead) there is an enormous Chekhov’s gun: Earth is surrounded by an equatorial ring of microsatellites that can reflect sunlight. It was intended for climate control, but as the main character immediately points out, it also makes an awesome weapon. You can guess what happens. That leads to an interesting question: just how effective would such a weapon actually be?

From any point on Earth’s surface only part of the ring is visible above the horizon. In fact, at sufficiently high latitudes it is entirely invisible – there you would be safe no matter what. Also, Earth likely casts a shadow across the ring that lowers the efficiency on the nightside.

I guessed, based on the appearance in some strips, that the radius is about two Earth radii (12,000 km), and the thickness about 2,000 km. I did a Monte Carlo integration where I generated random ring microsatellites, checking whether they were visible above the horizon from different Earth locations (by looking at the dot product of the local surface normal and the observer-to-satellite vector; for anything above the horizon this product must be positive) and whether they were in sunlight (by checking that the distance to the Earth-Sun axis was more than 6,000 km). The result is the following diagram of how much of the ring can be seen from any given location:

Visibility fraction of an equatorial ring 12,000-14,000 km out from Earth for different latitudes and longitudes.

At most, 35% of the ring is visible. Even on the nightside where the shadow cuts through the ring about 25% is visible. In practice, there would be a notch cut along the equator where the ring cannot fire through itself; just how wide it would be depends on the microsatellite size and properties.
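The original integration was a numerical script; a stripped-down Python sketch of the visibility part (the ring radii are the guesses above; the sunlight/shadow check is omitted) reproduces the roughly one-third visibility at the equator and total invisibility at high latitudes:

```python
import numpy as np

R_EARTH = 6371e3                    # m
R_IN, R_OUT = 12_000e3, 14_000e3    # assumed ring radii, m
N = 100_000

rng = np.random.default_rng(0)
# Random microsatellites in the equatorial plane, area-uniform in radius.
r = np.sqrt(rng.uniform(R_IN**2, R_OUT**2, N))
ang = rng.uniform(0, 2 * np.pi, N)
sats = np.stack([r * np.cos(ang), r * np.sin(ang), np.zeros(N)], axis=1)

def visible_fraction(lat_deg, lon_deg=0.0):
    """Fraction of ring satellites above the local horizon."""
    lat, lon = np.radians([lat_deg, lon_deg])
    normal = np.array([np.cos(lat) * np.cos(lon),
                       np.cos(lat) * np.sin(lon),
                       np.sin(lat)])
    obs = R_EARTH * normal
    # Above the horizon iff the observer-to-satellite vector has a
    # positive component along the local surface normal.
    return float(((sats - obs) @ normal > 0).mean())

print(visible_fraction(0))   # equator: about a third of the ring
print(visible_fraction(70))  # high latitude: ring below the horizon
```

At the equator roughly 34% of the ring is above the horizon, while at 70 degrees latitude none of it is, matching the figures.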

Overlaying the data on a world map gives the following footprint:

Visibility fraction of a 12,000-14,000 km ring from different locations on Earth.

The ring is strongly visible up to 40 degrees of latitude, where it starts to disappear below the southern or northern horizon. Antarctica, northern Canada, Scandinavia and Siberia are totally safe.

This corresponds to the summer solstice, where the ring is maximally tilted relative to the Earth-Sun axis. This is when it has maximal power: at the equinoxes it is largely parallel to the sunlight and cannot reflect much at all.

The total amount of energy the ring receives is E_0 = \pi (r_o^2-r_i^2)|\sin(\theta)|S where r_o is the outer radius, r_i the inner radius, \theta the tilt (between 23 degrees at the summer/winter solstices and 0 at the equinoxes) and S is the solar constant, 1.361 kW per square meter. This ignores the Earth shadow. Putting in \theta=20^{\circ} for a New Year’s Eve firing, I get E_0 \approx 7.6\cdot 10^{16} Watt.

If we then multiply by 0.3 for visibility, we get 23 petawatts – nothing to sneeze at! Of course, there will be losses, both in reflection (likely a few percent at most) and more importantly through atmospheric scattering (about 25%, assuming the beam behaves like normal sunlight). Still, a 17 PW beam is pretty decent. And if you are on the nightside the shadowed ring can still deliver about 8 PW. That is about six times the energy flow in the Gulf Stream.
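Running the numbers (ring radii as guessed earlier; the 30% visibility and 25% scattering loss are the round figures from the text):

```python
import math

S = 1361.0                     # solar constant, W/m^2
r_o, r_i = 14_000e3, 12_000e3  # assumed outer/inner ring radii, m
theta = math.radians(20)       # tilt for the hypothetical New Year's Eve firing

# Sunlight intercepted by the tilted annulus (Earth shadow ignored).
E0 = math.pi * (r_o**2 - r_i**2) * abs(math.sin(theta)) * S
# ~30% of the ring visible from the target, ~25% lost to scattering.
beam = E0 * 0.3 * 0.75

print(E0)    # ~7.6e16 W intercepted
print(beam)  # ~1.7e16 W, i.e. about 17 PW on target
```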

Light pillar

How destructive would such a beam be? A megaton of TNT is 4.18 PJ, so a 17 PW beam deposits a comparable amount of heat in a fraction of a second. It would be far redder than a nuclear fireball (since it is essentially 6000 K blackbody radiation) and the IR energy would presumably bounce around and be re-radiated, spreading far in the transparent IR bands. I suspect the fireball would quickly affect the absorption in a complicated manner, and there would be defocusing effects due to thermal blooming: keeping it on target might be very hard, since energy would both scatter and reflect. Unlike a nuclear weapon there would not be much of a shockwave (I suspect there would still be one, but less of the energy would go into it).

The awesome thing about the ring is that it can just keep on firing. It is a sustainable weapon powered by renewable energy. The only drawback is that it would not have an ommminous hummmm….

Addendum 14 December: I just realized an important limitation. Sunlight comes from an extended source, so if you reflect it using plane mirrors you will get a divergent beam – which means that the spot it hits on the ground will be broad. The sun has diameter 1,391,684 km and is 149,597,871 km away, so the light spot 8000 km below the reflector will be 74 km across. This is independent of the reflector size (down to the diffraction limit and up to a mirror that is as large as the sun in the sky).
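The divergence arithmetic as a sketch; the beam spreads by the Sun's angular size regardless of mirror size:

```python
D_SUN = 1_391_684e3   # solar diameter, m
AU = 149_597_871e3    # Earth-Sun distance, m
h = 8_000e3           # distance from reflector to the ground, m

divergence = D_SUN / AU   # Sun's angular size, ~9.3 milliradians
spot = divergence * h     # spot diameter on the ground, m
print(spot / 1e3)         # ~74 km
```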

Intensity with three overlapping beams.

At first this sounds like it kills the ring beam. But one can achieve a better focus by clever alignment. Consider three circular footprints arranged like a standard Venn diagram. The center area gets three times the solar input of the individual circles. By using more mirrors one can make a peak intensity that is much higher than the side intensity. The vicinity will still be lit up very brightly, but you can focus your devastation better than with individual mirrors – and you can afford to waste sunlight anyway. Still, it looks like this is more of a wide-footprint weapon of devastation than a surgical knife.

Intensity with 200 beams overlapping slightly.

 

Galactic duck and cover

To what extent do gamma-ray bursts (GRBs) produce a “galactic habitable zone”? Recently the preprint “On the role of GRBs on life extinction in the Universe” by Piran and Jimenez has made the rounds, arguing that we are near (in fact, inside) the inner edge of the zone because plentiful GRBs cause mass extinctions too often for intelligence to arise.

This is somewhat similar to James Annis and Milan Cirkovic’s phase transition argument, where a declining rate of supernovae and GRBs causes global temporal synchronization of the emergence of intelligence. However, that argument has a problem: energetic explosions are random, and the difference in extinctions between lucky and unlucky parts of the galaxy can be large – intelligence might well erupt in a lucky corner long before the rest of the galaxy is ready.

I suspect the same problem is true for the Piran and Jimenez paper, but spatially. GRBs are believed to be highly directional, with beams typically a few degrees across. If we have random GRBs with narrow beams, how much of the center of the galaxy do they miss?

I made a simple model of the galaxy, with thin disk, thick disk and bar populations. The model used cubical cells 250 parsec across; somewhat crude, but likely good enough. Sampling random points based on star density, I generated GRBs. Based on Frail et al. 2001 I gave them lognormal energies and power-law distributed jet angles, directed randomly. Like Piran and Jimenez I assumed that a fluence above 100 kJ/m^2 would be extinction level. The rate of GRBs in the Milky Way is uncertain, but a high estimate seems to be one every 100,000 years. Running 1000 GRBs hence corresponds to 100 million years.
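The original was a Matlab script; a stripped-down Python analogue of the sampling logic looks like the following. The single-exponential-disk geometry and the energy and jet-angle parameters are illustrative stand-ins, not the Frail et al. fits or the full thin disk/thick disk/bar model:

```python
import numpy as np

rng = np.random.default_rng(1)
PC = 3.086e16     # parsec in metres
LETHAL = 1e5      # extinction-level fluence, J/m^2
N_GRB = 1000      # ~100 Myr at one burst per 10^5 years

# Toy disk positions: exponential in radius, Gaussian in height.
r = rng.exponential(3_000, N_GRB) * PC
phi = rng.uniform(0, 2 * np.pi, N_GRB)
z = rng.normal(0, 150, N_GRB) * PC
pos = np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

# Lognormal isotropic-equivalent energies, power-law jet half-angles
# of a few degrees, randomly oriented axes.
E_iso = rng.lognormal(mean=np.log(1e46), sigma=1.0, size=N_GRB)  # J
half_angle = np.radians(2.0 * rng.pareto(2.0, N_GRB) + 1.0)
axis = rng.normal(size=(N_GRB, 3))
axis /= np.linalg.norm(axis, axis=1, keepdims=True)

def lethal_hit(star_xyz):
    """Does any burst hit this star inside its (double) jet cone with
    fluence above the extinction threshold?"""
    d = star_xyz - pos
    dist = np.linalg.norm(d, axis=1)
    cos_off = np.abs(np.sum(d * axis, axis=1)) / dist  # both jet lobes
    in_cone = cos_off > np.cos(half_angle)
    fluence = E_iso / (4 * np.pi * dist**2)
    return bool(np.any(in_cone & (fluence > LETHAL)))

sun = np.array([8_000 * PC, 0.0, 0.0])
print(lethal_hit(sun))
```

Evaluating `lethal_hit` over a grid of star positions and repeating over many realizations gives the affected/unaffected maps below.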

Galactic model with gamma ray bursts (red) and density isocontours (blue).

If we look at the galactic plane we find that the variability close to the galactic centre is big: there are plenty of lucky regions with many stars.

Unaffected star density in the galactic plane.
Affected (red) and unaffected (blue) stars at different radii in the galactic plane.

Integrating around the entire galaxy to get a measure of risk at different radii and altitudes shows a rather messy structure:

Probability that a given volume would be affected by a GRB. Volumes are integrated around axisymmetric circles.

One interesting finding is that the most dangerous place may be above the galactic plane along the axis: while few GRBs happen there, those in the disk and bar can reach it (the chance of being inside a given double cone is independent of the distance to its source, but along the axis one is within reach of the maximum number of GRBs).

Density of stars not affected by the GRBs.

Integrating the density of stars that are not affected as a function of radius and altitude shows that there is a mild galactic habitable zone hole within 4 kpc. That we are close to the peak is neat, but there is a significant number of stars very close to the center.

This is of course not a professional model; it is a slapdash Matlab script done in an evening to respond to some online debate. But I think it shows that directionality may matter a lot by increasing the variance of star fates. Nearby systems may be irradiated very differently, and merely averaging them will miss this.

If I understood Piran and Jimenez right, they do not use directionality; instead they employ a scaled rate of observed GRBs, so they do not have to deal with the iffy issue of jet widths. This might be sound, but I suspect one should check the spatial statistics: correlations are tricky things (and were GRB axes even mildly aligned with the galactic axis, the risk reduction would be huge). Another way of getting closer to their result is of course to bump up the number of GRBs: with enough, the centre of the galaxy will naturally be inhospitable. I did not do their careful modelling of the link between metallicity and GRBs, nor of the different burst sizes.

In any case, I suspect that GRBs are weak constraints on where life can persist and too erratic to act as a good answer to the Fermi question – even a mass extinction is forgotten within 10 million years.

Thunderbolts and lightning, very very frightening

On The Conversation, I blog about the risks of electromagnetic disruption from solar storms and EMP: Electromagnetic disaster could cost trillions and affect millions. We need to be prepared.

The reports from Lloyd’s and the National Academies are worrying, but as a disaster it would not kill that many people directly. However, an overall weakening of our societal and global systems is nothing to joke about: when societies have fewer resources they are less resilient to other threats. In this case it would weaken information processing, resources and our ability to get things done. Just the thing to make other risks way worse.

As a public goods problem I think this risk is easier to handle than others; it is more like Y2K than climate since most people have aligned interests. Nobody wants a breakdown, and few actually win from the status quo. But there are going to be costs and inertia nevertheless. Plus, I don’t think we have a good answer yet to local EMP risks.