Existential hope is in the air. The term was coined by my colleagues Toby and Owen to denote the opposite of an existential catastrophe: the chance that things could turn out much better than expected.
Recently I had the chance to attend a visioning weekend with the Foresight Institute where we discussed ways of turning dystopias into utopias. It had a clear existential hope message, largely because it was organised by Allison Duettmann, who is writing a book on the topic. I must admit I got a bit nervous when I found out, since I am also writing my own grand futures book, but I am glad to say we are dealing with largely separate domains and reasons for hope.
I also participated in the Nexus Instituut event “The Battle between Good and Evil”. I assume the good guys won. I certainly had fun. I ended up arguing that good is weak compared to evil only in the way water is weak compared to a solid object – in small amounts it deforms and splashes. In larger amounts it is like the tide or a tsunami: you had better get out of the way. In retrospect that analogy might have been particularly powerful in the Netherlands. They know their water, and how many hands (and windmills) can reshape a country.
Do we really have grounds for existential hope?
A useful analysis of the concept of hope can be found in Jayne M. Waterworth’s A Philosophical Analysis of Hope. Waterworth defines hoping for something as requiring (1) a conception of an uncertain possibility, (2) a desire for an objective, (3) a desire that one’s desire be satisfied, and (4) that one takes an anticipatory stance towards the objective.
One can hope for things that have an estimable probability, but also for things that are merely possible. Waterworth calls the first category “hope because of reality” or probability hope, while the second category is “hope in spite of reality” or possibility hope. I might have probability hope in fixing climate change, but possibility hope in humanity one day resurrecting the dead – in the first case we have some idea of how it might happen and what might be involved, in the second case we have no idea even where to begin.
Outcomes can also be of different importance: hoping for a nice Christmas present is what Waterworth calls an ordinary hope, while hoping for a solution to climate change or death is an extraordinary hope.
We may speak of existential hope in the sense that “existential eucatastrophes” can occur, or that our actions can make them happen. This would represent the most extraordinary kind of hope possible.
But note that this kind of hope is potentially “hope because of reality” rather than “hope in spite of reality”. We can affect the future to some extent (there is an interesting issue of how much). There doesn’t seem to be any law of nature dooming us to early existential risk or a necessary collapse of civilization. We have in the past changed the rules for our species in very positive ways, and may do so again. We may discover facts about the world that greatly expand the size and value of our future – we have already done so in the past. These are good reasons to hope.
Hope is a mental state. The reason hope is a virtue in Christian theology is that it is the antidote to despair.
Hope is different from optimism, the view that good things are likely to happen. First, optimism is a general disposition rather than directed at particular hoped-for occurrences. Second, hope can be a very small and unspecific thing: rather than being optimistic about everything going the right way, a hopeful person can see the overwhelming problems and risks and yet hope that something will happen to get us through. Even a small grain of hope might be enough to fend off despair.
Still, there may be a psychological disposition towards being hopeful. As defined by Snyder, trait hope regarding motivation towards goals involves a sense of agency (chosen goals can be achieved) and pathways (successful plans and strategies for those goals can be generated). This trait predicts academic achievement in students beyond intelligence, personality, and past achievement. Indeed, in law students hope but not optimism was predictive of achievement (but both contributed to life satisfaction). This trait may be more about being motivated to seek out good future states than actually being hopeful about many things, but the more possibilities are seen, the more likely something worth hoping for will show up.
If there is something I wish for everybody in 2019 and beyond it is having this kind of disposition relative to existential hope. Yes, there are monumental problems ahead. But we can figure out ways around/through/over them. There are opportunities to be grabbed. There are new values to be forged.
The winter solstice has just passed and the days will become brighter and longer for the next months. Cheers!
Locally we should expect ballistic trajectories to look like they do on Earth: there is constant gravitational acceleration orthogonal to the ground, so they will just look like parabolas.
But if the trajectory is longer the rapid rotation ought to twist it, since there is a fair Coriolis effect. So the differential equation will be $latex \mathbf{x}'' = \mathbf{g} - 2\mathbf{\Omega}\times\mathbf{x}'$. If we just look at the velocity vector we get $latex \mathbf{v}' = \mathbf{g} - 2\mathbf{\Omega}\times\mathbf{v}$.
That is, the Coriolis term will twist the velocity around if the velocity is large and orthogonal to the angular velocity vector. If the velocity is parallel it will just be affected by gravity. For a trajectory near the pole it will become twisted and tilted:
For a starting point on the equator the twisting gets a bit more complex:
One can also recognise the analogy to an electron in an electromagnetic field: $latex \mathbf{v}' = (q/m)(\mathbf{E}+\mathbf{v}\times \mathbf{B})$, with $latex \mathbf{g}$ playing the role of $latex (q/m)\mathbf{E}$ and $latex -2\mathbf{\Omega}$ that of $latex (q/m)\mathbf{B}$. Without gravity we should hence expect thrown balls to just follow helices around the omega-vector direction just like charged particles follow magnetic field-lines. One can eliminate the electric field from the equation by using a different velocity coordinate $latex \mathbf{v_2}=\mathbf{v}-\mathbf{E}\times\mathbf{B}/B^2$. Hence we can treat ball trajectories like helices plus a drift velocity in the $latex \mathbf{\Omega}\times\mathbf{g}$ direction. The helix radius will be $latex r = v_\perp/2\Omega$.
How large is the Coriolis effect? On Earth $latex \Omega = 7.29\times 10^{-5}$ rad/s. On Donut it is 0.000614 rad/s and on Hoop 0.000494 rad/s, several times higher. Still, the correction is not going to be enormous: for a ball moving 10 meters per second the helix radius will be 69 km on Earth (at the pole), 8.1 km on Donut, and 10 km on Hoop. We hence need to throw the ball a suborbital distance before the twists become really visible. At these distances the curvature of the planet and the non-linearity of the gravitational field also begin to bite.
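These helix radii are easy to check, and one can also integrate the velocity equation above directly. A minimal sketch in Python (the rotation rates are the values quoted above; the uniform gravity and throw parameters are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

worlds = {"Earth": 7.29e-5, "Donut": 0.000614, "Hoop": 0.000494}  # Omega, rad/s
v = 10.0  # throw speed, m/s
for name, Om in worlds.items():
    print(f"{name}: helix radius {v / (2 * Om) / 1000:.1f} km")  # r = v / 2*Omega

# Integrating v' = g - 2 Omega x v shows the twisting directly.
g = np.array([0.0, 0.0, -9.8])           # local gravity (illustrative value)
Omega = np.array([0.0, 0.0, 0.000614])   # throw at the "pole": Omega along the vertical

def rhs(t, s):
    x, vel = s[:3], s[3:]
    return np.concatenate([vel, g - 2 * np.cross(Omega, vel)])

s0 = [0, 0, 0, v / np.sqrt(2), 0, v / np.sqrt(2)]   # 45 degree throw
sol = solve_ivp(rhs, (0, 1.4), s0, max_step=0.01)
print("sideways Coriolis deflection:", sol.y[1, -1], "m")  # tiny for a short throw
```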
I have not simulated such trajectories since I need a proper mass distribution model of the worlds, and it is messy. However, for an infinitely thin ring one can solve orbits numerically relatively easily (you “just” have to integrate elliptic integrals):
Besides the “normal” equatorial orbits and torus-like orbits winding themselves around the ring, there are internal halo-orbits and chaotic tangles.
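For readers who want to play with this themselves, here is a rough sketch of how one might integrate test-particle orbits around an infinitely thin ring, using the standard ring potential in terms of the complete elliptic integral (the units, masses and starting state are arbitrary illustrations):

```python
import numpy as np
from scipy.special import ellipk
from scipy.integrate import solve_ivp

G, M, a = 1.0, 1.0, 1.0   # units where the ring has unit mass and radius

def potential(x, y, z):
    """Potential of a thin ring: -2GM/pi * K(m) / sqrt((a+R)^2 + z^2)."""
    R = np.hypot(x, y)
    m = 4 * a * R / ((a + R) ** 2 + z ** 2)   # elliptic parameter m = k^2
    return -2 * G * M / np.pi * ellipk(m) / np.sqrt((a + R) ** 2 + z ** 2)

def accel(pos, h=1e-6):
    """Acceleration = minus the numerical gradient of the potential."""
    g = np.zeros(3)
    for i in range(3):
        dp = np.zeros(3)
        dp[i] = h
        g[i] = -(potential(*(pos + dp)) - potential(*(pos - dp))) / (2 * h)
    return g

def rhs(t, s):
    return np.concatenate([s[3:], accel(s[:3])])

# An orbit starting outside the ring with some vertical motion; different
# starting states give equatorial, winding, halo or chaotic orbits.
s0 = [1.3, 0.0, 0.0, 0.0, 0.5, 0.3]
sol = solve_ivp(rhs, (0, 50), s0, max_step=0.01, rtol=1e-8)
print(sol.y[:3, -1])   # final position of the test particle
```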
Supposing that the entire Earth was instantaneously replaced with an equal volume of closely packed, but uncompressed blueberries, what would happen from the perspective of a person on the surface?
Unfortunately the site tends to frown on fun questions like this, so it was in my opinion prematurely closed while I was working out the answer. So here it is, with some extra extensions:
The density of blueberries has been estimated at 625.56 kg/m3; WillO on Stackexchange estimated it at 13% of Earth’s density (5510*0.13=716.3 kg/m3), so assuming it to be around 700 kg/m3 appears reasonable. Blueberry pulp has a density similar to water, 980 to 1,050 kg/m3, although this is temperature dependent and depends on how much solids there are. The difference from the whole berries is due to the air between the berries. Note that these are likely the big, thick-skinned “American” blueberries rather than the small wild thin-skinned blueberries (bilberries) I grew up with; the latter would have higher density due to their smaller size and would break far more easily.
So instantaneously turning Earth into blueberries will reduce its mass to 0.1274 of what it was. Gravity will become correspondingly weaker, $latex g_{bb} = 0.1274 g \approx 1.25$ m/s2.
However, blueberries are not particularly sturdy. While there is a literature on blueberry mechanics (of course!), I did not manage to find a great source on their compressive strength. A rough estimate is possible: stacking a sugar cube (1 g) on a berry will not break it, while a milk carton (1 kg) will; 100 g has a decent but not certain chance. So if we assume the blueberry area to be one square centimetre the breaking pressure is on the order of $latex 10^4$ N/m2. This allows us to estimate at what depth the berries will start to break: $latex z = P/\rho g_{bb} \approx 11$ m. So while the surface will be free blueberries, they will start pulping within the first dozen meters below the surface.
This pulping has an important effect: the pulp separates from the air, coalescing into a smaller sphere. If we assume pulp to be an incompressible fluid, then a sphere of pulp with the same mass as the initial berries satisfies $latex \rho_{pulp}(4\pi/3)R_{pulp}^3 = \rho_{berries}(4\pi/3)R_{earth}^3$, or $latex R_{pulp} = (\rho_{berries}/\rho_{pulp})^{1/3}R_{earth}$. In this case we end up with a planet with 0.8879 times the original radius (5,657 km), surrounded by a vast atmosphere.
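The numbers so far are easy to reproduce; a quick sketch using the densities assumed above (mean Earth density taken as 5,510 kg/m3 as in the estimate quoted earlier, so expect small rounding differences):

```python
rho_berry, rho_pulp = 700.0, 1000.0      # kg/m^3, as assumed above
R_earth, g0 = 6.371e6, 9.82              # m, m/s^2

mass_fraction = rho_berry / 5510.0       # fraction of Earth's mass remaining
g_bb = mass_fraction * g0                # surface gravity right after the swap
z_break = 1e4 / (rho_berry * g_bb)       # depth where ~10^4 N/m^2 crushes berries
R_pulp = (rho_berry / rho_pulp) ** (1 / 3) * R_earth  # radius after the pulp settles

print(f"mass fraction {mass_fraction:.4f}, gravity {g_bb:.2f} m/s^2")
print(f"berries start crushing below ~{z_break:.0f} m")
print(f"pulp sphere radius {R_pulp / 1000:.0f} km")
```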
The freefall timescale for the planet is initially 41 minutes, but relatively soon the pulping interactions, the air convection etc. will slow things down in a complicated way. I expect that the actual coalescence will take hours, with some late bubbles from the deep interior erupting fairly late.
The gravity on the pulp surface is just 1.5833 m/s2, 16% of normal gravity – almost exactly lunar gravity. This weakens convection currents and the speed with which bubbles move up. The scale height of the atmosphere, assuming the same composition and temperature as on Earth, will be 6.2 times higher. This means that pressure will decline much less with altitude, allowing far thicker clouds and weather systems. As we will see, the atmosphere will puff up more.
The separation has big consequences. Enormous amounts of air will be pushing out from the pulp as bubbles and jets, producing spectacular geysers (especially since the gravity is low). Even more dramatic is the heating: a lot of gravitational energy is released as the mass is compacted. The total gravitational energy of a constant density sphere of radius R is
$latex E = \int_0^R (4\pi r^2 \rho)(4\pi r^3 \rho/3)(G/r) dr = (16\pi^2 G\rho^2/15)R^5 = (3/5)GM^2/R$
(the first factor in the integral is the mass of a spherical shell of radius r, the second the mass of the stuff inside, and the third the 1/r gravitational potential). If we ignore the mass of the air since it is small and we just want an order of magnitude estimate, the compression of the berry mass gives energy
$latex E = (3GM^2/5)(1/R_{pulp} - 1/R_{earth}) \approx 4.4\times 10^{29}$ J.
This is the energy output of the sun over half an hour, nothing to sneeze at: blueberry earth will become hot. There is about 573,000 J per kg, enough to heat the blueberries from freezing to boiling.
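A quick sanity check of these energy figures, using the numbers above (solar luminosity taken as 3.8×10^26 W; small rounding differences from the quoted values are expected):

```python
G = 6.674e-11
M = 0.1274 * 5.972e24                 # blueberry earth mass, kg
R_earth, R_pulp = 6.371e6, 5.657e6    # m

E = (3 * G * M**2 / 5) * (1 / R_pulp - 1 / R_earth)  # released binding energy
print(f"E = {E:.2e} J")                          # ~4.6e29 J
print(f"per kg: {E / M:.2e} J/kg")               # ~6e5 J/kg: freezing to boiling
print(f"solar-output equivalent: {E / 3.8e26 / 60:.0f} minutes")
```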
The result is that blueberry earth will turn into a roaring ocean of boiling jam, with the geysers of released air and steam likely ejecting at least a few berries into orbit (escape velocity is just 4.234 km/s, and berries at the initial surface will be even higher up in the potential). As the planet evolves a thick atmosphere of released steam will add to the already considerable air from the berries. It is not inconceivable that the planet may heat up further due to a water vapour greenhouse effect, turning into a very odd Venusian world.
Meanwhile the jam ocean is very deep, and the pressure at depth will be enough to cause the formation of high pressure ice even if it is warm. If the formation process is slow there will be some separation of water into ice and a concentration of other chemicals in the jam ocean, but I suspect the rapid collapse will instead make some kind of composite pulp ice. Ice VII forms above 9 GPa, so if we just use constant gravity ($latex z = P/\rho g$) this gives an ice core reaching out to about two-thirds of the radius. This would make up most of the interior. However, gravity is a bit weaker in the interior, so we need to take that into account. The pressure from all the matter above radius r is $latex P(r) = (2\pi G \rho^2/3)(R^2 - r^2)$, and setting this equal to 9 GPa gives an ice core radius of 3,258 km. This is smaller, about 57% of the radius, and just 20% of the total volume.
The coalescence will also speed up rotation. The original blueberry earth would of course make one rotation every 24 hours, but the smaller result would have a smaller moment of inertia. Angular momentum conservation gives $latex I_1\Omega_1 = I_2\Omega_2$, or, since $latex I \propto MR^2$, $latex T_2 = (R_2/R_1)^2 T_1$ – in this case 18.9210 hours. This in turn will increase the oblateness a bit, to approximately 0.038 – an 8.8 times increase over Earth.
Another effect is the orbit of the Moon. Now the two bodies have about equal mass. Is the Moon bound to blueberry earth? A kilogram of lunar material has potential energy $latex GM_{bb}/r_{moon} \approx 1.7\times 10^{5}$ J, while the kinetic energy is $latex v_{moon}^2/2 \approx 5.2\times 10^{5}$ J – more than enough to escape. Had it remained, the jam ocean would have made an excellent tidal dissipation mechanism that would have slowed down rotation and moved blueberry earth towards tidal lock with the moon much earlier than the 50 billion years it would otherwise have taken.
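The spin-up and the Moon's escape are also quick to check (the lunar orbital radius and speed are standard values; a sketch, not a careful calculation):

```python
G = 6.674e-11
M_bb = 0.1274 * 5.972e24             # blueberry earth mass, kg
d_moon, v_moon = 3.844e8, 1.022e3    # lunar orbit radius (m) and speed (m/s)

T2 = 0.8879**2 * 24.0                # new day length, from I1*w1 = I2*w2
print(f"day length: {T2:.2f} h")     # ~18.92 h

PE = G * M_bb / d_moon               # binding energy per kg of lunar material
KE = 0.5 * v_moon**2
print(f"PE {PE:.2e} vs KE {KE:.2e} J/kg -> Moon unbound: {KE > PE}")
```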
So, to sum up, to a person standing on the surface of the Earth when it turns into blueberries, the first effect would be a drastic reduction of gravity. Standing on the blueberries might be possible in theory, except that almost immediately they begin to compress rapidly and air starts erupting everywhere. The effect is basically the worst earthquake ever, and it keeps on going until everything has fallen 714 km. While this is going on everything heats up drastically until the entire environment is boiling jam and steam. The end result is a world that has a steam atmosphere covering an ocean of jam on top of warm blueberry granita.
I have been working for about a year on a book on “Grand Futures” – the future of humanity, starting to sketch a picture of what we could eventually achieve were we to survive, get our act together, and reach our full potential. Part of this is an attempt to outline what we know is and isn’t physically possible to achieve, part of it is an exploration of what makes a future good.
Here are some things that appear to be physically possible (not necessarily easy, but doable):
Societies of very high standards of sustainable material wealth. At least as rich as (and likely far above) the current rich-nation level in terms of the objects, services, entertainment and other lifestyle options ordinary people can access.
Human enhancement allowing far greater health, longevity, well-being and mental capacity, again at least up to current optimal levels and likely far, far beyond evolved limits.
Sustainable existence on Earth with a relatively unchanged biosphere indefinitely.
Expansion into space:
Settling habitats in the solar system, enabling populations of at least 10 trillion (and likely many orders of magnitude more)
Settling other stars in the Milky Way, enabling populations of at least 10^29 people
Settling over intergalactic distances, enabling populations of at least 10^38 people.
Survival of human civilisation and the species for a long time.
As long as other mammalian species – on the order of a million years.
As long as Earth’s biosphere remains – on the order of a billion years.
Settling the solar system – on the order of 5 billion years
Settling the Milky Way or elsewhere – on the order of trillions of years if dependent on sunlight
Using artificial energy sources – up to proton decay, somewhere beyond 10^32 years.
Constructing Dyson spheres around stars, gaining energy resources corresponding to the entire stellar output, habitable space millions of times Earth’s surface, telescope, signalling and energy projection abilities that can reach over intergalactic distances.
Moving matter and objects up to galactic size, using their material resources for meaningful projects.
Performing more than a googol (10^100) computations, likely far more thanks to reversible and quantum computing.
While this might read as a fairly overwhelming list, it is worth noticing that it does not include gaining access to an infinite amount of matter, energy, or computation. Nor indefinite survival. I also think faster-than-light travel is unlikely to become possible. If we do not try to settle remote galaxies within 100 billion years, accelerating expansion will move them beyond our reach. This is a finite but very large possible future.
What kinds of really good futures may be possible? Here are some (not mutually exclusive):
Survival: humanity survives as long as it can, in some form.
“Modest futures”: humanity survives for as long as is appropriate without doing anything really weird. People have idyllic lives with meaningful social relations. This may include achieving close to perfect justice, sustainability, or other social goals.
Gardening: humanity maintains the biosphere of Earth (and possibly other planets), preventing them from crashing or going extinct. This might include artificially protecting them from a brightening sun and astrophysical disasters, as well as spreading life across the universe.
Happiness: humanity finds ways of achieving extreme states of bliss or other positive emotions. This might include local enjoyment, or actively spreading minds enjoying happiness far and wide.
Abolishing suffering: humanity finds ways of curing negative emotions and suffering without precluding good states. This might include merely saving humanity, or actively helping all suffering beings in the universe.
Posthumanity: humanity deliberately evolves or upgrades itself into forms that are better, more diverse or otherwise useful, gaining access to modes of existence currently not possible to humans but equally or more valuable.
Deep thought: humanity develops cognitive abilities or artificial intelligence able to pursue intellectual projects far beyond what we can conceive of in science, philosophy, culture, spirituality and similar but as yet uninvented domains.
Creativity: humanity plays creatively with the universe, making new things and changing the world for its own sake.
I have no doubt I have missed many plausible good futures.
Note that there might be moral trades, where stay-at-homes agree with expansionists to keep Earth an idyllic world for modest futures and gardening while the others go off to do other things, or long-term oriented groups agreeing to give short-term oriented groups the universe during the stelliferous era in exchange for getting it during the cold degenerate era trillions of years in the future. Real civilisations may also have mixtures of motivations and sub-groups.
Note that the goals and the physical possibilities play out very differently: modest futures do not reach very far, while gardener civilisations may seek to engage in megascale engineering to support the biosphere but not settle space. Meanwhile the happiness-maximizers may want to race to convert as much matter as possible to hedonium, while the deep thought-maximizers may want to move galaxies together to create permanent hyperclusters filled with computation to pursue their cultural goals.
I don’t know what goals are right, but we can examine what they entail. If we see a remote civilization doing certain things we can make some inferences about what is compatible with the behaviour. And we can examine what we need to do today to have the best chances of getting onto a trajectory towards some of these goals: avoiding extinction, improving our coordination ability, and figuring out whether there is some global coordination we need to agree on in the long run before spreading to the stars.
The Universe Today wrote an article about a paper by me, Toby and Eric about the Fermi Paradox. The preprint can be found on Arxiv (see also our supplements: 1,2,3 and 4). Here is a quick popular overview/FAQ.
TL;DR
The Fermi question is not a paradox: it just looks like one if one is overconfident in how well we know the Drake equation parameters.
Our distribution model shows that there is a large probability of little-to-no alien life, even if we use the optimistic estimates of the existing literature (and even more if we use more defensible estimates).
The Fermi observation makes the most uncertain priors move strongly, reinforcing the rare life guess and an early great filter.
Getting even a little bit more information can update our belief state a lot!
Are you saying that aliens do not exist?
No. We claim we could be alone, and the probability is non-negligible given what we know… even if we are very optimistic about alien intelligence.
What is the paper about?
The Fermi Paradox – or rather the Fermi Question – is “where are the aliens?” The universe is immense and old, and intelligent life ought to be able to spread or signal over vast distances, so if it has some modest probability of emerging we ought to see some signs of intelligence. Yet we do not. What is going on? The reason it is called a paradox is that there is a tension between one plausible theory ([lots of sites]x[some probability]=[aliens]) and an observation ([no aliens]).
Dissolving the Fermi paradox: there is not much tension
We argue that people have been accidentally misled to feel there is a problem by being overconfident about the probability.
The problem lies in how we estimate probabilities from a product of uncertain parameters (as in the Drake equation). The typical way people informally do this is to admit that some guesses are very uncertain, give a “representative value” for each, and end up with some estimated number of alien civilisations in the galaxy – which is admitted to be uncertain, yet it is a single number.
Some authors have argued for very low probabilities, typically concluding that there is just one civilisation per galaxy (“the N=1 school”). This may actually still be too much, since that means we should expect signs of activity from nearly any galaxy. Others give slightly higher guesstimates and end up with many civilisations, typically as many as one expects civilisations to last (“the N=L school”). But the proper thing to do is to give a range of estimates, based on how uncertain we actually are, and get an output that shows the implied probability distribution of the number of alien civilisations.
If one combines either published estimates or ranges compatible with current scientific uncertainty we get a distribution that makes observing an empty sky unsurprising – yet is also compatible with us not being alone.
The reason is that even if one takes a pretty optimistic view (the published estimates are after all biased towards SETI optimism since the sceptics do not write as many papers on the topic) it is impossible to rule out a very sparsely inhabited universe, yet the mean value may be a pretty full galaxy. And current scientific uncertainties of the rates of life and intelligence emergence are more than enough to create a long tail of uncertainty that puts a fair credence on extremely low probability – probabilities much smaller than what one normally likes to state in papers. We get a model where there is 30% chance we are alone in the visible universe, 53% chance in the Milky Way… and yet the mean number is 27 million and the median about 1! (see figure below)
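To make the approach concrete, here is a minimal Monte Carlo sketch of this kind of distributional Drake model. The ranges are illustrative, loosely following the synthetic model described further down; the exact output numbers will therefore differ from the figures quoted from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

def loguniform(lo, hi, size=N):
    """Sample log-uniformly between lo and hi."""
    return 10 ** rng.uniform(np.log10(lo), np.log10(hi), size)

R_star = loguniform(1, 100)       # star formation rate per year
f_p    = loguniform(0.1, 1)       # fraction of stars with planets
n_e    = loguniform(0.1, 1)       # Earth-like planets per system
f_l    = loguniform(1e-30, 1)     # fraction developing life (hugely uncertain)
f_i    = loguniform(0.001, 1)     # fraction developing intelligence
f_c    = loguniform(0.01, 1)      # fraction communicating
L      = loguniform(100, 1e10)    # civilisation lifetime, years

N_civ = R_star * f_p * n_e * f_l * f_i * f_c * L   # civilisations in the galaxy
print("mean:", N_civ.mean())            # huge, driven by the optimistic tail
print("median:", np.median(N_civ))      # tiny
print("P(N < 1):", (N_civ < 1).mean())  # substantial chance of an empty galaxy
```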
This is a statement about knowledge and priors, not a measurement: armchair astrobiology.
The Great Filter: lack of obvious aliens is not strong evidence for our doom
After this result, we look at the Great Filter. We have reason to think at least one term in the Drake equation is small – either one of the early ones indicating how much life or intelligence emerges, or one of the late ones indicating how long technological civilisations survive. The small term is “the Filter”. If the Filter is early, that means we are rare or unique but have a potentially unbounded future. If it is a late term, in our future, we are doomed – just like all the other civilisations whose remains would litter the universe. This is worrying. Nick Bostrom argued that we should hope we do not find any alien life.
Our paper gets a somewhat surprising result: when updating our uncertainties in the light of no visible aliens, it reduces our estimate of the rate of life and intelligence emergence (the early filters) much more than the longevity factor (the future filter).
The reason is that if we exclude the cases where our galaxy is crammed with alien civilisations – something like the Star Wars galaxy where every planet has its own aliens – then that leads to an update of the parameters of the Drake equation. All of them become smaller, since we will have a more empty universe. But the early filter ones – life and intelligence emergence – shift downwards much more than the expected lifespan of civilisations, since they are much more uncertain (at least 100 orders of magnitude!) than the merely uncertain future lifespan (just 7 orders of magnitude!).
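Continuing the toy model above, this asymmetric update can be seen directly by conditioning on a quiet sky (using N < 1 as a crude stand-in for the actual likelihood model in the paper): the hugely uncertain early terms shift down by orders of magnitude, while the lifespan term barely moves.

```python
quiet = N_civ < 1   # crude stand-in for "we see no aliens"
for name, x in [("f_l", f_l), ("f_i", f_i), ("L", L)]:
    shift = np.log10(x[quiet]).mean() - np.log10(x).mean()
    print(f"{name}: mean log10 shifts by {shift:+.2f} after the update")
```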
So this is good news: the stars are not foretelling our doom!
Note that a past great filter does not imply our safety.
The conclusion can be changed if we reduce the uncertainty of the past terms to less than 7 orders of magnitude, or if the involved probability distributions have weird shapes. (The mathematical proof is in supplement IV, which applies to uniform and normal distributions. It is possible to add tails and other features that break this effect – yet believing such distributions of uncertainty requires believing rather strange things.)
Isn’t this armchair astrobiology?
Yes. We are after all from the philosophy department.
The point of the paper is how to handle uncertainties, especially when you multiply them together or combine them in different ways. It is also about how to take lack of knowledge into account. Our point is that we need to make knowledge claims explicit – if you claim you know a parameter to have the value 0.1 you better show a confidence interval or an argument about why it must have exactly that value (and in the latter case, better take your own fallibility into account). Combining overconfident knowledge claims can produce biased results since they do not include the full uncertainty range: multiplying point estimates together produces a very different result than when looking at the full distribution.
All of this is epistemology and statistics rather than astrobiology or SETI proper. But SETI makes a great example since it is a field where people have been learning more and more about (some of) the factors.
The same approach as we used in this paper can be used in other fields. For example, when estimating risk chains in systems (like the risk of a pathogen escaping a biosafety lab) taking uncertainties in knowledge into account will sometimes produce important heavy tails that are irreducible even when you think the likely risk is acceptable. This is one reason risk estimates tend to be overconfident.
Probability?
What kind of distributions are we talking about here? Surely we cannot speak of the probability of alien intelligence given the lack of data?
There is a classic debate in probability between frequentists, claiming probability is the frequency of events that we converge to when an experiment is repeated indefinitely often, and Bayesians, claiming probability represents states of knowledge that get updated when we get evidence. We are pretty Bayesian.
The distributions we are talking about are distributions of “credences”: how much you believe certain things. We start out with a prior credence based on current uncertainty, and then discuss how this gets updated if new evidence arrives. While the original prior beliefs may come from shaky guesses they have to be updated rigorously according to evidence, and typically this washes out the guesswork pretty quickly when there is actual data. However, even before getting data we can analyse how conclusions must look if different kinds of information arrives and updates our uncertainty; see supplement II for a bunch of scenarios like “what if we find alien ruins?”, “what if we find a dark biosphere on Earth?” or “what if we actually see aliens at some distance?”
Correlations?
Our use of the Drake equation assumes the terms are independent of each other. This of course is a result of how Drake sliced things into naturally independent factors. But there could be correlations between them. Häggström and Verendel showed that in worlds where the priors are strongly correlated updates about the Great Filter can get non-intuitive.
We deal with this in supplement II, and see also this blog post. Basically, it doesn’t look like correlations are likely showstoppers.
You can’t resample guesses from the literature!
Sure can. As long as we agree that this is not so much a statement about what is actually true out there, but rather the range of opinions among people who have studied the question a bit. If people give answers to a question in the range from ten to a hundred, that tells you something about their beliefs, at least.
What the resampling does is break up the possibly unconscious correlation between answers (“the N=1 school” and “the N=L school” come to mind). We use the ranges of answers as a crude approximation to what people of good will think are reasonable numbers.
You may say “yeah, but nobody is really an expert on these things anyway”. We think that is wrong. People have improved their estimates as new data arrives, there are reasons for the estimates, and sometimes vigorous debate about them. We warmly recommend Vakoch, D. A., & Dowd, M. F. (eds.) (2015). The Drake Equation: Estimating the Prevalence of Extraterrestrial Life through the Ages. Cambridge, UK: Cambridge University Press, for a historical overview. But at the same time these estimates are wildly uncertain, and this is what we really care about. Good experts qualify the certainty of their predictions.
But doesn’t resampling from admittedly overconfident literature constitute “garbage in, garbage out”?
Were we trying to get the true uncertainties (or even more hubristically, the true values) this would not work: we have after all good reasons to suspect these ranges are both biased and overconfidently narrow. But our point is not that the literature is right, but that even if one were to use the overly narrow and likely overly optimistic estimates as estimates of actual uncertainty the broad distribution will lead to our conclusions. Using the literature is the most conservative case.
Note that we do not base our later estimates on the literature estimate but our own estimates of scientific uncertainty. If they are GIGO it is at least our own garbage, not recycled garbage. (This reading mistake seems to have been made on Starts With a Bang).
What did the literature resampling show?
An overview can be found in Supplement III. The most important point is just that even estimates of super-uncertain things like the probability of life lie in a surprisingly narrow range of values, far more narrow than is scientifically defensible. For example, f_l has five extremely low estimates, while all the rest lie in a narrow range reaching up to 1. f_i is even worse, with one microscopic estimate and nearly all the rest between one in a thousand and one.
It also shows that estimates that are likely biased towards optimism (because of publication bias) can be used to get a credence distribution that dissolves the paradox once they are interpreted as ranges. See the above figure, where we get about 30% chance of being alone in the Milky Way and 8% chance of being alone in the visible universe… but a mean corresponding to 27 million civilisations in the galaxy and a median of about a hundred.
There are interesting patterns in the data. When plotting the expected number of civilisations in the Milky Way based on estimates from different eras the number goes down with time: the community has clearly gradually become more pessimistic. There are some very pessimistic estimates, but even removing them doesn’t change the overall structure.
What are our assumed uncertainties?
A key point in the paper is trying to quantify our uncertainties somewhat rigorously. Here is a quick overview of where I think we are, with the values we used in our synthetic model:
R_*: the star formation rate in the Milky Way per year is fairly well constrained. The actual current uncertainty is likely less than 1 order of magnitude (it can vary over 5 orders of magnitude in other galaxies). In our synthetic model we put this parameter as log-uniform from 1 to 100.
f_p: the fraction of systems with planets is increasingly clearly ≈1. We used log-uniform from 0.1 to 1.
n_e: number of Earth-like planets in systems with planets.
This ranges from rare earth arguments (n_e ≪ 1) to >1. We used log-uniform from 0.1 to 1 since recent arguments have shifted away from rare Earths, but we checked that adding a rare-Earth case did not change the conclusions much.
f_l: Fraction of Earth-like planets with life.
This is very uncertain; see below for our arguments that the uncertainty ranges over perhaps 100 orders of magnitude.
There is an absolute lower limit due to ergodic repetition – in an infinite universe there will eventually be randomly generated copies of Earth and even the entire galaxy (at huge distances from each other). Observer selection effects make using the earliness of life on Earth problematic.
We used a log-normal rate of abiogenesis that was transformed to a fraction distribution.
f_i: Fraction of life-bearing planets with intelligence/complex life.
This is very uncertain; see below for our arguments that the uncertainty ranges over perhaps 100 orders of magnitude.
One could argue there have been 5 billion species so far and only 1 intelligent, so we know f_i could be as low as 1 in 5 billion (2×10^-10). But one could argue that we should count assemblages of 10 million species, which gives a fraction 1/500 per assemblage. Observer selection effects may be distorting this kind of argument.
We could have used a log-normal rate of complex life emergence that was transformed to a fraction distribution or a broad log-linear distribution. Since this would have made many graphs hard to interpret we used log-uniform from 0.001 to 1, not because we think this likely but just as a simple illustration (the effect of the full uncertainty is shown in Supplement II).
f_c: Fraction of time when it is communicating.
Very uncertain; humanity’s fraction is 0.000615 so far. We used log-uniform from 0.01 to 1.
L: Average lifespan of a civilisation.
Fairly uncertain; the range may extend up to 10^10 years (an upper limit because of the Drake equation applicability: it assumes the galaxy is in a steady state, and if civilisations are long-lived enough they will still be accumulating, since the universe is too young).
We used log-uniform from 100 to 10,000,000,000.
Note that this is to some degree a caricature of current knowledge, rather than an attempt to represent it perfectly. Fortunately our argument and conclusions are pretty insensitive to the details – it is the vast ranges of uncertainty that are doing the heavy lifting.
Abiogenesis
Why do we think the fraction-of-planets-with-life parameter could have a huge range?
First, instead of thinking in terms of the fraction of planets having life, consider a rate of life formation in suitable environments: what is the induced probability distribution? The emergence is a physical/chemical transition of some kind of primordial soup, and transition events occur in this medium at some rate λ per unit volume, giving f_l = 1 − exp(−λVt), where V is the available volume and t is the available time. High rates would imply that almost all suitable planets originate life, while low rates would imply that almost no suitable planets originate life.
The uncertainty regarding the length of time when it is possible is at least 3 orders of magnitude (millions to billions of years).
The uncertainty regarding volumes spans 20+ orders of magnitude – from entire oceans to brine pockets on ice floes.
Uncertainty regarding transition rates can span 100+ orders of magnitude! The reason is that this might involve combinatoric flukes (you need to get a fairly longish sequence of parts into the right order to get the right kind of replicator), or that it is like the protein folding problem, where Levinthal’s paradox shows that it takes literally astronomical time for entire oceans of copies of a protein to randomly find the correctly folded position (actual biological proteins “cheat” by being evolved to fold neatly and fast). Even chemical reaction rates span 100 orders of magnitude. On the other hand, spontaneous generation could conceivably be common and fast! So we should conclude that the rate λ – and hence f_l – has an uncertainty range of at least 100 orders of magnitude.
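This rate-to-fraction transformation has a striking property that is easy to demonstrate: feed a rate with a huge log-range through f_l = 1 − exp(−λVt) and most of the probability mass ends up near 0 or near 1, with little in between. A self-contained sketch (the range and units are purely illustrative):

```python
import numpy as np
rng = np.random.default_rng(1)

log_lambda = rng.uniform(-80, 20, 1_000_000)   # 100 orders of magnitude in the rate
f_l = 1 - np.exp(-10.0 ** log_lambda)          # fraction with life, taking V*t = 1 unit
print("P(f_l < 1e-10):", (f_l < 1e-10).mean()) # most planets barren...
print("P(f_l > 0.99):", (f_l > 0.99).mean())   # ...or life nearly certain
```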
Actual abiogenesis will involve several steps. Some are easy, like generating simple organic compounds (plentiful in asteroids, comets and Miller-Urey experiments). Some are likely tough. People often overlook that even how to get proteins and nucleic acids in a watery environment is somewhat of a mystery, since these chains tend to hydrolyze; the standard explanation is to look for environments that have a wet-dry cycle allowing complexity to grow. But this means V is much smaller than an ocean.
That we have tremendous uncertainty about abiogenesis does not mean we do not know anything. We know a lot. But at present we have no good scientific reasons to believe we know the rate of life formation per liter-second. That will hopefully change.
Don’t creationists argue stuff like this?
There is a fair number of examples of creationists arguing that the origin of life must be super-unlikely and hence we must believe in their particular god.
The problem with this kind of argument is that it presupposes that there is only one planet, and that somehow we got a one-in-a-zillion chance on that one. That would indeed be pretty unlikely. But the reality is that there are a zillion planets, so even if there is a one-in-a-zillion chance for each of them we should expect to see life somewhere… especially since being a living observer is a precondition for “seeing life”! Observer selection effects really matter.
We are also not arguing that life has to be super-unlikely. In the paper our distribution of life emergence rate actually makes it nearly universal 50% of the time – it includes the possibility that life will spontaneously emerge in any primordial soup puddle left alone for a few minutes. This is a possibility I doubt anybody believes in, but it could be that would-be new life is emerging right under our noses all the time, only to be outcompeted by the advanced life that already exists.
Creationists make a strong claim that they know f_l is almost zero; this is not really supported by what we know. But a tiny f_l is totally within possibility.
Complex life
Even if you have life, it might not be particularly good at evolving. The reasoning is that it needs to have a genetic encoding system that is both rigid enough to function efficiently and fluid enough to allow evolutionary exploration.
All life on Earth shares almost exactly the same genetic system, showing that only rare and minor changes have occurred across the vast number of cell divisions since our common ancestry. That is tremendously stable as a system. Nonetheless, it is fairly commonly believed that other genetic systems preceded the modern form. The transition to the modern form required major changes (think of upgrading an old computer from DOS to Windows… or worse, from CP/M to DOS!). It would be unsurprising if the rate of such transitions was less than one per that many cell divisions, given the stability of our current genetic system – but of course, the previous system might have been super-easy to upgrade.
Modern genetics required more than a fifth of the age of the universe to evolve intelligence. A genetic system like the one that preceded ours might both be stable over a googol cell divisions and evolve more slowly by a factor of 10, and so run out the clock. Hence some genetic systems may be incapable of ever evolving intelligence.
This relates to a point made by Brandon Carter much earlier: the timescales of getting life, evolving intelligence and how long biospheres last are independent and could be tremendously different – that life emerged early on Earth may have been a fluke, due to the extreme difficulty of also getting intelligence within this narrow interval (on all the more likely worlds there are no observers to notice). If there are more difficult transitions, you get an even stronger observer selection effect.
Evolution goes down branches without looking ahead, and we can imagine that it could have an easier time finding inflexible coding systems (“B-life”) than our own nice one (“A-life”). If the rate of discovering B-life is r_B and the rate of discovering evolution-capable A-life is r_A, then the fraction of A-life in the universe is just r_A/(r_A+r_B) – and the rates can differ by many orders of magnitude, producing a life-rich but evolution/intelligence-poor universe. Multiple-step models add integer exponents to the rates, which multiplies the order-of-magnitude differences.
So we have good reasons to think there could be a hundred orders of magnitude uncertainty on the intelligence parameter, even without trying to say something about evolution of nervous systems.
How much can we rule out aliens?
Humanity has not scanned that many stars, so obviously we have checked only a tiny part of the galaxy – and we could have missed them even if we had looked at the right spot. Still, we can model how this weak data updates our beliefs (see Supplement II).
The strongest argument against aliens is the Tipler-Hart argument that settling the Milky Way, even when expanding at low speed, will only take a small fraction of its age. And once a civilisation is everywhere it is hard for it to go extinct everywhere – it will tend to persist even if local pieces crash. Since we do not seem to be in a galaxy paved over by an alien supercivilisation we have a very strong argument for assuming a low rate of intelligence emergence. Yes, even if 99% of civilisations stay home, or we could be in an alien zoo, you still get a massive update against a really settled galaxy. In our model the probability of less than one civilisation per galaxy went from 52% to 99.6% if one includes the basic settlement argument.
The G-hat survey of galaxies, looking for signs of K3 civilisations, did not find any. Again, maybe we missed something or most civilisations don’t want to re-engineer galaxies, but if we assume about half of them want to and have 1% chance of succeeding we get an update from 52% chance of less than one civilisation per galaxy to 66%.
Using models of us having looked at about 1,000 stars, or of us not thinking there is any civilisation within 18 pc, gives a milder update, from 52% to 53% and 57% respectively. These just rule out super-densely inhabited scenarios.
But uncertainty can be reduced! We can learn more, and that will change our knowledge.
From a SETI perspective, this doesn’t say that SETI is unimportant or doomed to failure, but rather that if we ever see even the slightest hint of intelligence out there many parameters will move strongly. Including the all-important L.
From an astrobiology perspective, we hope we have pointed at some annoyingly uncertain factors, and that this paper can get more people to work on reducing the uncertainty. Most astrobiologists we have talked with are aware of the uncertainty but do not see its weird knock-on effects. Especially figuring out how we got our fairly good coding system, and what the competing options are, seems very promising.
Even if we are not sure we can also update our plans in the light of this. For example, in my tech report about settling the universe fast I pointed out that if one is uncertain about how much competition there might be for the universe one can use one’s probability estimates to decide on the range to aim for.
Uncertainty matters
Perhaps the most useful insight is that uncertainty matters and we should learn to embrace it carefully rather than assume that apparently specific numbers are better.
Perhaps never in the history of science has an equation been devised yielding values differing by eight orders of magnitude. . . . each scientist seems to bring his own prejudices and assumptions to the problem.
– History of Astronomy: An Encyclopedia, ed. by John Lankford, s.v. “SETI,” by Steven J. Dick, p. 458.
When Dick complained about the wide range of results from the Drake equation he likely felt it was too uncertain to give any useful result. But an eight orders of magnitude difference is in this case just a sign of downplaying our uncertainty and overestimating our knowledge! Things get much better when we look at what we know and don’t know, figuring out the implications of both.
Jill Tarter said the Drake equation was “a wonderful way to organize our ignorance”, which we think is closer to the truth than demanding a single number as an answer.
Ah, but I already knew this!
We have encountered claims that “nobody” really is naive about using the Drake equation. Or at least not any “real” SETI and astrobiology people. Strangely enough people never seem to make this common knowledge visible, and a fair number of papers make very confident statements about “minimum” values for life probabilities that we think are far, far above the actual scientific support.
Sometimes we need to point out the obvious explicitly.
I have a piece in Dagens Samhälle with Olle Häggström, Carin Ism, Max Tegmark and Markus Anderljung urging the Swedish parliament to consider banning lethal autonomous weapons.
This is of course mostly symbolic; the real debate is happening right now over in Geneva at the CCW. I also participated in a round-table with the Red Cross that led to their report on the issue, which is one of the working papers presented there.
I am not particularly optimistic that we will get a ban – nor that a ban would actually achieve much. However, I am much more optimistic that this debate may force a general agreement about the importance of getting meaningful human control. This is actually an area where most military and peace groups would agree: nobody wants systems that are unaccountable and impossible to control. Making sure there are international agreements that using such systems is irresponsible and maybe even a war crime would be a big win. But there are lots of devils in the details.
When it comes to arguments for why LAWs are morally bad, I am personally not so convinced that the bad comes from a machine making the decision to kill a person. Clearly some possible machine decision-making does improve proportionality and reduce arbitrariness. Similarly, arguments about whether they would increase or reduce the risk of military action, and how this would play out in terms of human suffering and death, are interesting empirical arguments, but we should not be overconfident that we know the answers. Given that once LAWs are in use it will be hard to roll them back if the answers are bad, we might find it prudent to try to avoid them (but consider the opposing scenario where since time immemorial robots have fought our wars and somebody now suggests using humans too – there is a status quo bias here).
My main reason for being opposed to LAWs is not that they would be inherently immoral, nor that they would necessarily or even likely make war worse or more likely. My view is that the problem is that they give states too much power. Basically they make the state’s monopoly on violence independent of the wishes of the citizens. Once a sufficiently potent LAW military (or police force) exists it will be able to exert coercive and lethal power as ordered, without any mediation through citizens. While having humans in the army certainly doesn’t guarantee moral behaviour, if ordered to turn against the citizenry or act in a grossly immoral way they can exert moral agency and resist (with varying levels of overtness). The LAW army will instead implement the orders as long as they are formally lawful (assuming there is at least a constraint against unlawful commands). States know that if they mistreat their population too much their army might side with the population, a reason why some of the nastier governments make use of mercenaries or a special separate class of soldier to reduce the risk. If LAWs become powerful enough they might make dictatorships far more stable by removing a potentially risky key component of state power from internal politics.
Bans and moral arguments are unlikely to work against despots. But building broad moral consensus on what is acceptable in war does have effects. If R&D emphasis is directed towards finding solutions for how to manage responsibility for autonomous device decisions, that will develop a lot of useful technologies for making such systems at least safer – and one can well imagine similar legal and political R&D into finding better solutions to citizen-independent state power.
In fact, far more important than LAWs is what to do about Lethal Autonomous States. Bad governance kills, many institutions/corporations/states behave just as badly as the worst AI risk visions and have a serious value alignment problem, and we do not have great mechanisms for handling responsibility in inter-state conflicts. The UN system is a first stab at the problem but obviously much, much more can be done. In the meantime, we can try avoiding going too quickly down a risky path while we try to find safe-making technologies and agreements.
I have neglected Andart II for some time, partly for the good reason of work (The Book is growing!), partly because I got a new addiction: answering (and asking) questions on Physics and Astronomy StackExchange. Very addictive, but also very educational. Here are links to some of the stuff I have been adding, which might be of interest to some readers.
That got me thinking about transhumanist attitudes to death and how they are perceived.
While the brief Kotaku description makes it sound like death positivity is perhaps about celebrating death, the Order of the Good Death is mainly about acknowledging death and dying. That we hide it behind closed doors and avoid public discussion (or even thinking about it) is doing harm to society and arguably our own emotions. Fear and denial are not good approaches. Perhaps the best slogan-description is “Accepting that death itself is natural, but the death anxiety of modern culture is not.”
The Order aims at promoting more honest public discussion, curiosity, innovation and gatherings to discuss death-related topics. Much of this relates to the practices of the “death industry”, some of which definitely should be discussed in terms of economic costs, environmental impact, ethics and legal rights.
Denying death as a bad thing?
There is an odd paradox here. Transhumanism is often described as death denying, and this description is not meant as a compliment in the public debate. Wanting to live forever is presented as immature, selfish or immoral. Yet we have an overall death denying society, so how can this be held to be bad?
Part of it is that the typical frame of the critique is from a “purveyor of wisdom” (a philosopher, a public intellectual, the local preacher) who no doubt might scold society too had not the transhumanist been a more convenient target.
This critique is rarely applied to established religions that are even more radically death denying – Christianity after all teaches the immortality of the soul, and in Hinduism and Buddhism ending the self is a nearly impossible struggle through countless reincarnations: talk about denying death! You rarely hear people asking how life could have meaning if there is an ever-lasting hereafter. (In fact, some, like Tolstoy, have argued that it is only because of such ever-lasting states that anything could have meaning.) Some of the lack of critique is due to social capital: major religions hold much of it, transhumanism less, so criticism tends to focus on the groups that have less impact. Not just because the “purveyor of wisdom” fears a response, but because they are themselves, consciously or not, embedded inside the norms and myths of these influential groups.
Another reason for criticising the immortalist position is death denial. Immortalism, and its more plausible sibling longevism, directly breaks the taboo against discussing death honestly. It questions core ideas about what human existence is like, and it by necessity delves into the processes of ageing and death. It tries to bring up uncomfortable subjects and does not accept the standard homilies about why life should be like it is, and why we need to accept it. This second reason actually makes transhumanism and death positivity unlikely allies.
Naïve transhumanists sometimes try to recruit people by offering the hope of immortality. Often they are surprised and shocked by the negative reactions. Leaving the appearance of a Faustian bargain aside, people typically respond by shoring up their conventional beliefs and defending their existential views. Few transhumanist ideas cause stronger reactions than life extension – I have lectured about starting new human species, uploading minds, remaking the universe, enhancing love, and many extreme topics, but I rarely get as negative comments as when discussing the feasibility and ethics of longevity.
The reason for this is in my opinion very much fear of death (with a hefty dose of status quo bias mixed in). As we grow up we have to handle our mortality and we build a defensive framework telling us how to handle it – typically by downplaying the problem of death by ignoring it, explaining or hoping via a religious framework, or finding some form of existential acceptance. But since most people rarely are exposed to dissenting views or alternatives they react very badly when this framework is challenged. This is where death positivity would be very useful.
Why strict immortalism is a non-starter
Given our current scientific understanding, death is unavoidable. The issue is not whether life extension is possible or not, but the basic properties of our universe. Given the accelerating expansion of the universe we can only gain access to a finite amount of material resources. Using these resources is subject to thermodynamic inefficiencies that cannot be avoided. Basically the third law of thermodynamics and Landauer’s principle imply that there is a finite number of information processing steps that can be undertaken in our future. Eventually the second law of thermodynamics wins (helped by proton decay and black hole evaporation) and nothing that can store information or perform the operations needed for any kind of life will remain. This means that no matter what strange means any being undertakes, as far as we understand physics it will eventually dissolve.
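To get a feel for why the number of information processing steps is finite, one can plug numbers into Landauer's bound: erasing one bit at temperature T costs at least kT ln 2 of free energy. A sketch with purely illustrative numbers:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_bit_erasures(energy_joules, temperature_kelvin):
    """Upper bound on irreversible bit operations from a finite energy budget."""
    return energy_joules / (k_B * temperature_kelvin * math.log(2))

# E.g. the Sun's rough lifetime output (~1e44 J) dissipated near the CMB temperature (~3 K):
print(f"{landauer_bit_erasures(1e44, 3):.2e} bit erasures")   # ~3.5e66: large, but finite
```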
One should also not discount plain bad luck: finite beings in a universe where quantum randomness happens will sooner or later be subjected to a life-ending coincidence.
The Heat Death of the Universe and Quantum Murphy’s Law are very distant upper bounds. They are important because they force any transhumanist – at least any who does not want to dump rationality overboard and insist that the laws of physics must allow true immortality simply because it is desired – to acknowledge that they will eventually die. Perhaps aeons hence and in a vastly changed state, but at some point it will have happened (perhaps so subtly that nobody even noticed: shifts in identity also count).
To this the reasonable transhumanist responds with a shrug: we have more pressing mortality concerns today, when ageing, disease, accidents and existential risk are so likely that we can hardly expect to survive a century. We endlessly try to explain to interviewers that transhumanism is not really seeking capital “I” Immortality but merely indefinitely long lifespans, and actually we are interested in years of health and activity rather than just watching the clock tick as desiccated mummies. The point is, a reasonable transhumanistic view will be focused on getting more and better life.
Running from death or running towards life?
One can strive to extend life because one is scared of dying – death as something deeply negative – or because life is worth living – remaining alive has a high value.
But if one can never avoid having death at some point in one’s lifespan, then the disvalue of death will always be present as a constant: it will not affect whether living one life is better than another.
An exception may be if one believes that the disvalue can be discounted by being delayed, but this merely affects the local situation in time: at any point one prefers the longest possible life, but the overall utility as seen from the outside when evaluating a life will always suffer the total disvalue.
I believe the death-apologist thinkers have made some good points about why death is not intensely negative (e.g. the Lucretian arguments). I do not think they are convincing in showing that death is a positive property of the world. If “death gives life meaning” then presumably divorce is what makes love meaningful. If it is a good thing that old people retire from positions of power, why not have mandatory retirement rather than the equivalent of random death-squads? In fact, defences of death as a positive tend to use remarkably weak reasons as motivations, reasons that would never be taken seriously if they were used to motivate complacency about a chronic or epidemic disease.
Life-affirming transhumanism, on the other hand, is not too worried about the inevitability of death. The question is rather how much and what kind of good life is possible. One can view it as a game of seeking to maximise a “score” of meaningfulness and value under risk. Some try to minimise the risk, others to score high points, still others want to figure out the rules, or to structure their life projects into a meaningful whole across time.
Ending the game properly
This also includes ending life when it is no longer meaningful. Were one to regard death as extremely negative, one would have to hang on even if there were nothing but pain and misery in the future. If death merely has zero value, then there can be states bad enough that it is better to be dead than alive.
As we have argued in a recent paper, many of the anti-euthanasia arguments turn on their head when applied to cryonics. If one regards life as too precious a gift to throw away, and holds that the honourable thing is to continue to struggle on, then undergoing cryothanasia (being cryonically suspended well before one would otherwise have died) when suffering a terminal disease – in the rational hope that this improves one’s chances – clearly seems better than not taking the chance, or than allowing the disease to reduce one’s chances.
This also shows an important point where one kind of death positivity and transhumanism may part ways. One can frame accepting death as accepting that death exists and dealing with it. Another frame, equally compatible with the statement, is not struggling too much against it. The second frame is often what philosophers suggest as a means to equanimity. While possibly psychologically beneficial, it clearly has limits: the person who does not go to the doctor with a treatable disease when they know it will develop into something untreatable (or who does not step out of the way of an approaching truck) is not just “not struggling” but being actively unreasonable. One can and should set some limit where struggle and intervention become unreasonable, but that limit is always going to be both individual and technology-dependent. With modern medicine many previously lethal conditions (e.g. bacterial meningitis, many cancers) have become treatable to such an extent that it is no longer reasonable to refuse treatment.
Transhumanism places a greater value on longevity than is usual, partly because of its optimistic outlook (the future is likely to be good, technology is likely to advance), and this leads to a greater willingness to struggle on even when conventional wisdom says it is a good time to give up and become fatalistic. This is one reason transhumanists are far more comfortable than most people with radical attempts to stave off death, including cryonics.
Cryonics
Cryonics is another surprisingly death-positive aspect of transhumanism. It forces you to confront your mortality head on, and it does not offer very strong reassurance. Quite the opposite: it requires planning for one’s (hopefully temporary) demise, considering the various treatment/burial options, likely causes of death, and the risks and uncertainties involved in medicine. I have friends who seriously struggled with their dread of death when trying to sign up.
Talking about the cryonics choice with family is one of the hardest parts of the practice and has caused significant heartbreak, yet keeping silent and springing it as a surprise guarantees even more grief (and lawsuits). This is one area where better openness about death would be extremely helpful.
It is telling that members of the cryonics community seek each other out, since theirs is one of the few environments where these things can be discussed openly and without stigma. It seems likely that the death-positive and cryonics communities have more in common than they might think.
Cryonics also has to deal with the bureaucracy and logistics of death, with the added complication that it aims at something slightly different from conventional burial. To a cryonicist the patients are still patients even when they have undergone cardiac arrest, have been legally declared dead, and lie solid and immersed in liquid nitrogen: they need care and protection since they may only be temporarily dead. Or deanimated, if we want to reserve the word “death” for the irreversibly non-living. (As a philosopher, I must say I find the cryosuspended state delightfully like a thought experiment in a philosophy paper.)
Final words
I have argued that transhumanism should be death-positive, at least in the sense that discussing death and accepting its long-term inevitability is both healthy and realistic. Transhumanists will generally not grant death a positive value and will tend to react badly to such statements. But assigning it a vastly negative value produces a timid outlook that is unlikely to sit well with the other parts of the transhumanist idea complex. Rather, death is bad because life is good – but that does not mean we should not think about it.
Indeed, transhumanists may want to become better at talking about death. Respected and liked people who have been part of the movement for a long time have died and we are often awkward about how to handle it. Transhumanists need to handle grief too. Even if the subject may be only temporarily dead in a cryonic tank.
Conversely, transhumanism and cryonics may represent an interesting challenge for the death positive movement in that they certainly represent an unusual take on attitudes and customs towards death. Seeing death as an engineering problem is rather different from how most people see it. Questioning the human condition is risky when dealing with fragile situations. And were transhumanism to be successful in some of its aims there may be new and confusing forms of death.
This fall I have been chairing a programme at the Gothenburg Centre for Advanced Studies on existential risk, thanks to Olle Häggström. Visiting researchers come and participate in seminars and discussions on existential risk, ranging from the very theoretical (how do future people count?) to the very applied (should we put existential risk on the school curriculum? How?). I gave a Petrov Day talk about how to calculate risks of nuclear war and how observer selection might mess this up, besides seminars on everything from the Fermi paradox to differential technology development. In short, I have been very busy.
Due to a technical mishap, we have no video for David Denkenberger's talk on Cost of non-sunlight dependent food for agricultural catastrophes. Try instead watching his talk Feeding everyone no matter what, given at CSER in Cambridge last year, which covers much of the same ground.
So far, a few of the key realisations and themes have been:
(1) The pronatalist/maximiser assumptions underlying some of the motivations for existential risk reduction were challenged; there is an interesting question of what role “modest futures” rather than “grand futures” play, and whether non-maximising goals also imply existential risk reduction.
(2) The importance of figuring out how “suffering risks”, potential states of astronomical amounts of suffering, relate to existential risks. Allocating effort between them rationally touches on some profound problems.
(3) The under-determination problem of inferring human values from observed behaviour (a talk by Stuart) resonated with the under-determination of AI goals in Olle’s critique of the convergent instrumental goal thesis and other discussions. Basically, complex agent-like systems might be harder to succinctly describe than we often think.
(4) Stability of complex adaptive systems – brains, economies, trajectories of human history, AI. Why are some systems so resilient in a reliable way, and can we copy it?
(5) The importance of estimating force projection abilities in space and as the limits of physics are approached. I am starting to suspect there is a deep physics answer to the question of attacker advantage, and a trade-off between information and energy in attacks.
We will produce an edited journal issue with papers inspired by our programme, stay tuned. Avancez!
First, how do you make a Steiner chain? It is easy using inversion geometry. Just decide on the number of circles tangent to the inner circle (n). Then the ratio of the radii of the inner and outer circle will be r/R = (1 − sin(π/n))/(1 + sin(π/n)). The radii of the circles in the ring will be (R − r)/2 and their centres are located at distance (R + r)/2 from the origin. This produces a staid concentric arrangement. Now invert with relation to an arbitrary circle: all the circles are mapped to other circles, their tangencies preserved. Voila! A suitably eccentric Steiner chain to play with.
Since the original concentric chain obviously can be rotated continuously without losing touch with the inner and outer circle, this also generates a continuous family of circles after the inversion. This is why Steiner’s porism is true: if you can make the initial chain, you get an infinite number of other chains with the same number of circles.
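To make this concrete, here is a minimal Python sketch of the construction under those formulas. The helper names (concentric_steiner_chain, invert_circle) and the particular inversion circle are mine, chosen for illustration:

```python
import cmath
import math

def concentric_steiner_chain(n: int):
    """Concentric Steiner chain with outer radius 1: returns (centre, radius)
    pairs for the outer circle, the inner circle, and the n ring circles."""
    s = math.sin(math.pi / n)
    r_inner = (1 - s) / (1 + s)   # inner radius for outer radius R = 1
    rho = (1 - r_inner) / 2       # radius of each ring circle, (R - r)/2
    d = (1 + r_inner) / 2         # distance of ring centres from origin, (R + r)/2
    circles = [(0j, 1.0), (0j, r_inner)]
    circles += [(d * cmath.exp(2j * math.pi * k / n), rho) for k in range(n)]
    return circles

def invert_circle(centre, radius, a, k):
    """Invert the circle (centre, radius) in the circle with centre a, radius k.
    Assumes the inverted circle does not pass through a (so it maps to a circle)."""
    d = centre - a
    s = k**2 / (abs(d)**2 - radius**2)
    return a + s * d, abs(s) * radius

# An eccentric Steiner chain: invert every circle in an arbitrarily chosen circle.
chain = [invert_circle(c, r, a=0.9 + 0j, k=1.0) for c, r in concentric_steiner_chain(7)]
```

Rotating the concentric ring before inverting (multiplying each ring centre by exp(iθ)) yields the continuous family of chains that the porism promises.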
Iterated function systems with circle maps
The fractal works by putting copies of the whole set of circles in the chain into each circle, recursively. I remap the circles so that the outer circle becomes the unit circle, and then it is easy to see that for a given small circle with (complex) centre z0 and radius r the map f(z) = z0 + rz maps the interior of the unit circle to it. Use the ease of rotating the original concentric ring to produce an animation, and we can reconstruct the fractal.
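In code, the recursion is just repeated application of randomly chosen disk maps. Here is a chaos-game sketch in Python, reusing the concentric_steiner_chain helper from the sketch above; which chain circles to include is my choice, not necessarily the one behind the images:

```python
import random

def chaos_game(circles, n_points=100_000):
    """Approximate the IFS attractor: repeatedly apply a randomly chosen map
    f_i(z) = z_i + r_i * z, which sends the unit disk into circle i."""
    z, pts = 0j, []
    for _ in range(n_points):
        z0, r = random.choice(circles)
        z = z0 + r * z
        pts.append(z)
    return pts

# One map per chain circle (outer circle already remapped to the unit circle);
# here I use the inner and ring circles from concentric_steiner_chain above.
points = chaos_game(concentric_steiner_chain(7)[1:])
```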
Done.
Except… it feels a bit dry.
Ever since I first encountered iterated function systems in the 1980s I have felt they tend towards a geometric aesthetics that is not me, ferns notwithstanding. A lot has to do with the linearity of the transformations. One can of course add rotations, which cheers up the fractal a bit.
But still, I love the nonlinearity and harmony of conformal mappings.
Inversion makes things better!
Enter the circle inversion fractals. They are the sets of the plane that map to themselves when inverted in any and all of a set of generating circles (or, equivalently, the limit sets of points under these inversions). As a rule of thumb, when the circles do not touch, the fractal will be Cantor/Fatou-style fractal dust. When the circles are tangent, the fractal will pass through the points of tangency. If three circles are mutually tangent, the fractal will contain a circle passing through their tangent points. Since Steiner chains have lots of tangencies, we should get a lot of delicious fractals by using them as generators.
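A quick-and-dirty way to approximate such a limit set is to apply random inversions to a starting point. This Python sketch (again reusing concentric_steiner_chain from above) is only an illustration of the iteration, not the rendering code behind the images here:

```python
import random

def invert_point(z, a, k):
    """Invert the point z in the circle with centre a and radius k."""
    return a + k**2 / (z - a).conjugate()

def limit_set(generators, n_points=200_000, settle=20):
    """Approximate the inversion-group limit set by random inversions."""
    z, pts = 0.5 + 0.5j, []
    for i in range(n_points + settle):
        a, k = random.choice(generators)
        z = invert_point(z, a, k)
        if i >= settle:   # discard the first few transient iterates
            pts.append(z)
    return pts

# Using the Steiner chain circles as the generating circles:
points = limit_set(concentric_steiner_chain(7))
```

As the text notes, convergence is slow near tangent points, so the sampled set looks fuzzy there no matter how many points one draws.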
I use nearly the same code I used for the elliptic inversion fractals, mostly because I like the colours. The “real” fractal is hidden inside the nested circles, composed of an infinite Apollonian gasket of circles.
Note how the fractal extends outside the generators, forming a web of circles. Convergence is slow near tangent points, making the set “fuzzy” there. While the empty circles belonging to the invariant set are easy to see, there are also circles passing through the foci inside the coloured disks, touching the more obvious circles near those fuzzy tangent points. There is a lot going on here.
But we can complicate things by allowing the chain to slide and see how the fractal changes.