Can you be a death positive transhumanist?

I recently came across the concept of “death positivity”, expressed as the idea that we should accept the inevitability of death and embrace the diversity of attitudes and customs surrounding it. Looking a bit deeper, I found the Order of the Good Death and their statement.

That got me thinking about transhumanist attitudes to death and how they are perceived.

While the brief Kotaku description makes it sound as if death positivity is perhaps about celebrating death, the Order of the Good Death is mainly about acknowledging death and dying. That we hide it behind closed doors and avoid public discussion (or even thinking about it) does harm to society and arguably to our own emotions. Fear and denial are not good approaches. Perhaps the best slogan-description is “Accepting that death itself is natural, but the death anxiety of modern culture is not.”

The Order aims at promoting more honest public discussion, curiosity, innovation and gatherings to discuss death-related topics. Much of this relates to the practices of the “death industry”, some of which definitely should be discussed in terms of economic costs, environmental impact, ethics and legal rights.

Denying death as a bad thing?

There is an odd paradox here. Transhumanism is often described as death denying, and in the public debate this description is not meant as a compliment. Wanting to live forever is presented as immature, selfish or immoral. Yet ours is an overall death-denying society, so how can this be held to be bad?

Part of it is that the critique typically comes from a “purveyor of wisdom” (a philosopher, a public intellectual, the local preacher) who no doubt would scold society too, were the transhumanist not a more convenient target.

This critique is rarely applied to established religions that are even more radically death denying – Christianity after all teaches the immortality of the soul, and in Hinduism and Buddhism ending the self is a nearly impossible struggle through countless reincarnations: talk about denying death! You rarely hear people asking how life could have meaning if there is an ever-lasting hereafter. (In fact, some, like Tolstoy, have argued that it is only because of such ever-lasting states that anything could have meaning.) Some of the lack of critique is due to social capital: major religions hold much of it, transhumanism less, so criticism tends to focus on the groups that have less impact. Not just because the “purveyor of wisdom” fears a response, but because they are themselves, consciously or not, embedded inside the norms and myths of these influential groups.

Another reason for criticising the immortalist position is death denial itself. Immortalism, and its more plausible sibling longevism, directly breaks the taboo against discussing death honestly. It questions core ideas about what human existence is like, and it by necessity delves into the processes of ageing and death. It brings up uncomfortable subjects and does not accept the standard homilies about why life should be like it is, and why we need to accept it. This second reason actually makes transhumanism and death positivity unlikely allies.

Naïve transhumanists sometimes try to recruit people by offering the hope of immortality. Often they are surprised and shocked by the negative reactions. Leaving the appearance of a Faustian bargain aside, people typically respond by shoring up their conventional beliefs and defending their existential views. Few transhumanist ideas cause stronger reactions than life extension – I have lectured about starting new human species, uploading minds, remaking the universe, enhancing love, and many extreme topics, but I rarely get as negative comments as when discussing the feasibility and ethics of longevity.

The reason for this is in my opinion very much fear of death (with a hefty dose of status quo bias mixed in). As we grow up we have to handle our mortality, and we build a defensive framework telling us how to do so – typically by downplaying the problem of death by ignoring it, explaining or hoping via a religious framework, or finding some form of existential acceptance. But since most people are rarely exposed to dissenting views or alternatives, they react very badly when this framework is challenged. This is where death positivity would be very useful.

Why strict immortalism is a non-starter

Given our current scientific understanding, death is unavoidable. The issue is not whether life extension is possible or not; it is the basic properties of our universe. Given the accelerating expansion of the universe we can only gain access to a finite amount of material resources. Using these resources is subject to thermodynamic inefficiencies that cannot be avoided. Basically, the third law of thermodynamics and Landauer’s principle imply that there is a finite number of information processing steps that can be undertaken in our future. Eventually the second law of thermodynamics wins (helped by proton decay and black hole evaporation) and nothing that can store information or perform the operations needed for any kind of life will remain. This means that, no matter what strange means any being undertakes, as far as we understand physics it will eventually dissolve.
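A back-of-the-envelope way to see this (my sketch, not an argument spelled out in this post): Landauer’s principle puts the cost of erasing one bit at k_B T \ln 2 or more, so with a finite total energy budget E and a minimum achievable temperature T_{\min} > 0 the number of irreversible computational steps is bounded by

N \lesssim \frac{E}{k_B T_{\min} \ln 2},

which is finite.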

One should also not discount plain bad luck: finite beings in a universe where quantum randomness happens will sooner or later be subjected to a life-ending coincidence.

The Heat Death of the Universe and Quantum Murphy’s Law are very distant upper bounds. They are important because they force any transhumanist who does not want to dump rationality overboard – insisting that the laws of physics must allow true immortality simply because it is desired – to acknowledge that they will eventually die; perhaps aeons hence and in a vastly changed state, but at some point it will have happened (perhaps so subtly that nobody even noticed: shifts in identity also count).

To this the reasonable transhumanist responds with a shrug: we have more pressing mortality concerns today, when ageing, disease, accidents and existential risk are so likely that we can hardly expect to survive a century. We endlessly try to explain to interviewers that transhumanism is not really seeking capital-I Immortality but merely indefinitely long lifespans, and that we are actually interested in years of health and activity rather than just watching the clock tick as desiccated mummies. The point is, a reasonable transhumanist view will be focused on getting more and better life.

Running from death or running towards life?

One can strive to extend life because one is scared of dying – death as something deeply negative – or because life is worth living – remaining alive has a high value.

But if death can never be avoided at some point in one’s lifespan, then its disvalue will always be present; it will not affect whether living one life is better than another.

An exception may be if one believes that the disvalue can be discounted by being delayed, but this merely affects the local situation in time: at any point one prefers the longest possible life, but the overall utility as seen from the outside when evaluating a life will always suffer the total disvalue.

I believe the death-apologist thinkers have made some good points about why death is not intensely negative (e.g. the Lucretian arguments). I do not think they are convincing when arguing that it is a positive property of the world. If “death gives life meaning” then presumably divorce is what makes love meaningful. If it is a good thing that old people retire from positions of power, why not have mandatory retirement rather than the equivalent of random death squads? In fact, defences of death as a positive tend to rely on remarkably weak reasons, reasons that would never be taken seriously if used to motivate complacency about a chronic or epidemic disease.

Life-affirming transhumanism, on the other hand, is not too worried about the inevitability of death. The question is rather how much and what kind of good life is possible. One can view it as a game of seeking to maximise a “score” of meaningfulness and value under risk. Some try to minimise the risk, others to get high points, still others want to figure out the rules or arrange their life projects into a meaningful structure across time.

Ending the game properly

This also includes ending life when it is no longer meaningful. Were one to regard death as extremely negative, then one should hang on even if there was nothing but pain and misery in the future. If death merely has zero value, then one can be in bad states where it is better to be dead than alive.

As we have argued in a recent paper, many of the anti-euthanasia arguments turn on their head when applied to cryonics: if one regards life as too precious a gift to be thrown away, and holds that the honourable thing is to continue to struggle on, then undergoing cryothanasia (being cryonically suspended well before one would otherwise have died) when suffering a terminal disease, in the rational hope that this improves one’s chances, clearly seems better than not taking the chance or allowing the disease to reduce one’s chances.

This also shows an important point where one kind of death positivity and transhumanism may part ways. One can frame accepting death as accepting that death exists and dealing with it. Another frame, equally compatible with the statement, is not struggling too much against it. The second frame is often what philosophers suggest as a means to equanimity. While possibly psychologically beneficial it clearly has limits: the person not going to the doctor with a treatable disease when they know it will develop into something untreatable (or not stepping out of the way of an approaching truck) is not just “not struggling” but being actively unreasonable. One can and should set some limit where struggle and interventions become unreasonable, but this limit is always going to be both individual and technology dependent. With modern medicine many previously lethal conditions (e.g. bacterial meningitis, many cancers) have become treatable to such an extent that it is not reasonable to refuse treatment.

Transhumanism places a greater value on longevity than is usual, partially because of its optimistic outlook (the future is likely to be good, technology is likely to advance), and this leads to a greater willingness to struggle on even when conventional wisdom says it is a good time to give up and become fatalistic. This is one reason transhumanists are far more comfortable than most people with radical attempts to stave off death, including cryonics.

Cryonics

Cryonics is another surprisingly death-positive aspect of transhumanism. It forces you to confront your mortality head on, and it does not offer very strong reassurance. Quite the opposite: it requires planning for one’s (hopefully temporary) demise, considering the various treatment/burial options, likely causes of death, and the risks and uncertainties involved in medicine. I have friends who seriously struggled with their dread of death when trying to sign up.

Talking about the cryonics choice with family is one of the hardest parts of the practice and has caused significant heartbreak, yet keeping silent and springing it as a surprise guarantees even more grief (and lawsuits). This is one area where better openness about death would be extremely helpful.

It is telling that members of the cryonics community seek each other out, since it is one of the few environments where these things can be discussed openly and without stigma. It seems likely that the death-positive and the cryonics communities have more in common than they might think.

Cryonics also has to deal with the bureaucracy and logistics of death, with the added complication that it aims at something slightly different from conventional burial. To a cryonicist the patients are still patients even when they have undergone cardiac arrest, been legally declared dead, and are frozen solid and immersed in liquid nitrogen: they need care and protection since they may only be temporarily dead. Or deanimated, if we want to reserve “death” as a word for the irreversibly non-living. (As a philosopher, I must say I find the cryosuspended state delightfully like a thought experiment in a philosophy paper.)

Final words

I have argued that transhumanism should be death-positive, at least in the sense that discussing death and accepting its long-term inevitability is both healthy and realistic. Transhumanists will generally not ascribe a positive value to death and will tend to react badly to that kind of statement. But assigning it a vastly negative value produces a timid outlook that is unlikely to work well with the other parts of the transhumanist idea complex. Rather, death is bad because life is good – but that doesn’t mean we should not think about it.

Indeed, transhumanists may want to become better at talking about death. Respected and liked people who have been part of the movement for a long time have died and we are often awkward about how to handle it. Transhumanists need to handle grief too. Even if the subject may be only temporarily dead in a cryonic tank.

Conversely, transhumanism and cryonics may represent an interesting challenge for the death positive movement in that they certainly represent an unusual take on attitudes and customs towards death. Seeing death as an engineering problem is rather different from how most people see it. Questioning the human condition is risky when dealing with fragile situations. And were transhumanism to be successful in some of its aims there may be new and confusing forms of death.

Existential risk in Gothenburg

This fall I have been chairing a programme at the Gothenburg Centre for Advanced Studies on existential risk, thanks to Olle Häggström. Visiting researchers come and participate in seminars and discussions on existential risk, ranging from the very theoretical (how do future people count?) to the very applied (should we put existential risk on the school curriculum? How?). I gave a Petrov Day talk about how to calculate risks of nuclear war and how observer selection might mess this up, besides seminars on everything from the Fermi paradox to differential technology development. In short, I have been very busy.

To open the programme we had a workshop on existential risk September 7-8 2017. Now we have the videos up of our talks:

So far, a few key realisations and themes have in my opinion been:

(1) The pronatalist/maximiser assumptions underlying some of the motivations for existential risk reduction were challenged; there is an interesting issue of how “modest futures” rather than “grand futures” play a role, and of whether non-maximising goals imply existential risk reduction.

(2) the importance of figuring out how “suffering risks”, potential states of astronomical amounts of suffering, relate to existential risks. Allocating effort between them rationally touches on some profound problems.

(3) The under-determination problem of inferring human values from observed behaviour (a talk by Stuart) resonated with the under-determination of AI goals in Olle’s critique of the convergent instrumental goal thesis and other discussions. Basically, complex agent-like systems might be harder to succinctly describe than we often think.

(4) Stability of complex adaptive systems – brains, economies, trajectories of human history, AI. Why are some systems so resilient in a reliable way, and can we copy it?

(5) The importance of estimating force projection abilities in space and as the limits of physics are approached. I am starting to suspect there is a deep physics answer to the question of attacker advantage, and a trade-off between information and energy in attacks.

We will produce an edited journal issue with papers inspired by our programme, stay tuned. Avancez!

 

Fractals and Steiner chains

I recently came across this nice animated gif of a fractal based on a Steiner chain, due to Eric Martin Willén. I immediately wanted to replicate it.

Make Steiner chains easily

First, how do you make a Steiner chain? It is easy using inversion geometry. Just decide on the number of circles tangent to the inner circle (n). Then the ratio of the radii of the inner and outer circle will be r/R = (1-\sin(\pi/n))/(1+\sin(\pi/n)). The radii of the circles in the ring will be (R-r)/2 and their centres are located at distance (R+r)/2 from the origin. This produces a staid concentric arrangement. Now invert with respect to an arbitrary circle: all the circles are mapped to other circles, their tangencies preserved. Voila! A suitably eccentric Steiner chain to play with.
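To make this concrete, here is a minimal Python sketch of the recipe. The helper names (steiner_chain, invert_circle) and the choice of inversion circle are mine, not from the original code:

```python
import numpy as np
import matplotlib.pyplot as plt

def steiner_chain(n, R=1.0, phase=0.0):
    """Concentric Steiner chain: returns [(centre, radius)] with the inner circle
    first, then the outer circle, then the n ring circles."""
    s = np.sin(np.pi / n)
    r = R * (1 - s) / (1 + s)                      # r/R = (1-sin(pi/n))/(1+sin(pi/n))
    ring = [((R + r) / 2 * np.exp(1j * (phase + 2 * np.pi * k / n)), (R - r) / 2)
            for k in range(n)]
    return [(0j, r), (0j, R)] + ring

def invert_circle(a, rho, c, k):
    """Invert the circle with complex centre a and radius rho in the circle with
    centre c and radius k (assumes the inverted circle does not pass through c)."""
    d = abs(a - c) ** 2 - rho ** 2
    return c + k ** 2 * (a - c) / d, abs(k ** 2 * rho / d)

# Invert the concentric chain in an arbitrary off-centre circle.
chain = [invert_circle(a, rho, 0.6 + 0.3j, 1.0) for a, rho in steiner_chain(7, phase=0.3)]

fig, ax = plt.subplots(figsize=(6, 6))
for a, rho in chain:
    ax.add_patch(plt.Circle((a.real, a.imag), rho, fill=False))
ax.set_aspect('equal')
ax.autoscale_view()
plt.show()
```

Changing the phase parameter before inverting slides the chain, which is all that is needed for the animations below.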

Since the original concentric chain obviously can be rotated continuously without losing touch with the inner and outer circle, this also generates a continuous family of circles after the inversion. This is why Steiner’s porism is true: if you can make the initial chain, you get an infinite number of other chains with the same number of circles.

Iterated function systems with circle maps

The fractal works by putting copies of the whole set of circles in the chain into each circle, recursively. I remap the circles so that the outer circle becomes the unit circle, and then it is easy to see that for a given small circle with (complex) centre z and radius r the map f(w)=rw+z maps the interior of the unit circle to it. Use the ease of rotating the original concentric ring to produce an animation, and we can reconstruct the fractal.
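A minimal sketch of that recursion, reusing the steiner_chain helper from the sketch above; the recursion depth, chain size, and the choice to give the inner circle its own copy are all illustrative choices:

```python
import matplotlib.pyplot as plt

chain = steiner_chain(7)                 # outer circle is already the unit circle
maps = [chain[0]] + chain[2:]            # one contraction f(w) = r*w + z per small circle

drawn, level = [], chain
for _ in range(3):                       # three levels of recursion
    drawn += level
    level = [(r * zc + z, r * rc) for (z, r) in maps for (zc, rc) in level]

fig, ax = plt.subplots(figsize=(6, 6))
for z, r in drawn + level:
    ax.add_patch(plt.Circle((z.real, z.imag), r, fill=False, lw=0.3))
ax.set_aspect('equal')
ax.set_xlim(-1.05, 1.05)
ax.set_ylim(-1.05, 1.05)
plt.show()
```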

Done.

Except… it feels a bit dry.

Ever since I first encountered iterated function systems in the 1980s I have felt they tend towards a geometric aesthetics that is not me, ferns notwithstanding. A lot has to do with the linearity of the transformations. One can of course add rotations, which cheers up the fractal a bit.

But still, I love the nonlinearity and harmony of conformal mappings.

Inversion makes things better!

Enter the circle inversion fractals. They are the sets of the plane that map to themselves when being inverted in any and all of a set of generating circles (or, equivalently, the limit set of points under these inversions). As a rule of thumb, when the circles do not touch the fractal will be Cantor/Fatou-style fractal dust. When the circles are tangent the fractal will pass through the point of tangency. If three circles are tangent the fractal will contain a circle passing through these points. Since Steiner chains have lots of tangencies, we should get a lot of delicious fractals by using them as generators.
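Here is a rough sketch of the standard rendering approach (again reusing steiner_chain from above; the choice of generators, grid resolution and iteration count are mine): repeatedly invert every point that lies inside a generating circle, and colour each pixel by how long it keeps being moved. Points converging to the limit set never settle down, while everything else quickly escapes to the outside of all generators.

```python
import numpy as np
import matplotlib.pyplot as plt

chain = steiner_chain(6)
gens = [chain[0]] + chain[2:]            # inner + ring circles: disjoint interiors

x = np.linspace(-1.2, 1.2, 700)
z = x[None, :] + 1j * x[:, None]
count = np.zeros(z.shape, dtype=int)

for _ in range(40):
    moved = np.zeros(z.shape, dtype=bool)
    for c, r in gens:
        inside = np.abs(z - c) < r
        with np.errstate(divide='ignore', invalid='ignore'):
            # inversion in the circle (c, r): z -> c + r^2/conj(z - c)
            z = np.where(inside, c + r ** 2 / np.conj(z - c), z)
        moved |= inside
    count += moved

plt.imshow(count, extent=(-1.2, 1.2, -1.2, 1.2), cmap='magma', origin='lower')
plt.axis('off')
plt.show()
```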

I use nearly the same code I used for the elliptic inversion fractals, mostly because I like the colours. The “real” fractal is hidden inside the nested circles, composed of an infinite Apollonian gasket of circles.

Note how the fractal extends outside the generators, forming a web of circles. Convergence is slow near tangent points, making it “fuzzy”. While it is easy to see the circles that belong to the invariant set that are empty, there are also circles going through the foci inside the coloured disks, touching the more obvious circles near those fuzzy tangent points. There is a lot going on here.

But we can complicate things by allowing the chain to slide and see how the fractal changes.

This is pretty neat.

 

Overcoming inertia


The tremendous accelerations involved in the kind of spaceflight seen on Star Trek would instantly turn the crew to chunky salsa unless there was some kind of heavy-duty protection. Hence, the inertial damping field.
— Star Trek: The Next Generation Technical Manual, page 24.

For a space opera RPG setting I am considering adding inertia manipulation technology. But can one make a self-consistent inertia dampener without breaking conservation laws? What are the physical consequences? How many cool explosions, superweapons, and other tropes can we squeeze out of it? How to avoid the worst problems brought up by the SF community?

What inertia is

As Newton put it, inertia is the resistance of an object to a change in its state of motion. Newton’s force law F=ma is a consequence of the definition of momentum, p=mv (which in a way is more fundamental, since it ties in directly with conservation laws). The mass in the formula is the inertial mass. Mass is a measure of how much matter there is, and we normally multiply it by a hidden constant of 1 to get the inertial mass – this constant is what we will want to mess with.

There are relativistic versions of the laws of motion that handle momentum and inertia for high velocities, where the kinetic energy becomes so large that it starts to add mass to the whole system. This makes the total inertia go up, as seen by an outside observer, and looks like a nice case for inertia-manipulating tech being vaguely possible.

However, Einstein threw a spanner into this: gravity also acts on mass and conveniently does so exactly as much as inertia: gravitational mass (the masses in F=Gm_1m_2/r^2) and inertial mass appear to be equal. At least in my old school physics textbook (early 1980s!) this was presented as a cool unsolved mystery, but it is a consequence of the equivalence principle in general relativity (1907): all test particles accelerate the same way in a gravitational field, and this is only possible if their gravitational mass and inertial mass are proportional to one another.

So, an inertia manipulation technology will have to imply some form of gravity manipulation technology. Which may be fine from my standpoint, since what space opera is complete without antigravity? (In fact, I already had decided to have Alcubierre warp bubble FTL anyway, so gravity manipulation is in.)

Playing with inertia

OK, let’s leave relativity to the side for the time being and just consider the classical mechanics of inertia manipulation. Let us posit that there is a magical field that allows us to dial up or down the proportionality constant for inertial mass: the momentum of a particle will be p=\mu m v, the force law F=\mu m a and the formula for kinetic energy K=(1/2) \mu m v^2. \mu is the effect of the magic field, running from 0<\mu<\infty, with 1 corresponding to it being absent.

I throw a 1 g ping-pong ball at 1 m/s into my inertics device and turn on the field. What happens? Let us assume the field is \mu=1000. Now the momentum and kinetic energy jump by a factor of 1000 if the velocity remains unchanged. Were I to catch the ball I would have gained 999 times its original kinetic energy: this looks like an excellent perpetual motion machine. Since we do not want that to be possible (a space empire powered by throwing ping-pong balls sounds silly) we must demand that energy is conserved.

Velocity shifting to preserve kinetic energy

One way of doing energy conservation is for the velocity of my now-heavy ping-pong ball to go down. This means that the new velocity will be v/\sqrt{\mu}. Inertia-increasing fields slow down objects, while inertia-decreasing fields speed them up.
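For the record, this scaling follows directly from requiring the kinetic energy to be unchanged when the field is applied:

(1/2) m v^2 = (1/2) \mu m v'^2 \quad\Rightarrow\quad v' = v/\sqrt{\mu}.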

Forcefields/armour

One could have a force-field made of super-high inertia that would slow down incoming projectiles. At first this seems pointless, since once they get through to the other side they speed up and will do the same damage. But we could of course put a bunch of armour in this field, and have it resist the projectile. The kinetic energy will be the same, but it will be a lower-velocity collision, which means that the strength of the armour has a better chance of stopping it (in fact, as we will see below, we can use superdense armour here too). Consider the difference between being shot with a rifle bullet or being slowly but strongly stabbed by it: in the latter case the force can be distributed by a good armour over a vast surface. Definitely a good thing for a space opera.

Spacecraft

A spacecraft that wants to get somewhere fast could just project a low-\mu field around itself and boost its speed by a huge 1/\sqrt{\mu} factor. Sounds very useful. But now an impacting meteorite will both have a high relative speed, and when it enters the field get that boosted by the same factor again: impacts will happen at velocities increased by a factor of 1/\mu as measured by the ship. So boosting your speed by a factor of 1000 will give you dust hitting you at speeds a million times higher. Since typical interplanetary dust already moves at a few km/s, we are talking about hyperrelativistic impactors. The armour above sounds like a good thing to have…

Note that any inertia-reducing technology is going to improve rockets even if there is no reactionless drive or other shenanigans: you just reduce the inertia of the reaction mass. The rocket equation no longer bites: sure, your ship is mostly massive reaction mass in storage, but to accelerate the ship you just take a measure of that mass, restore its inertia, expel it, and enjoy the huge acceleration as the big engine pushes the overall very low-inertia ship. There is just a snag in this particular case: when restoring the inertia you somehow need to give the mass enough kinetic energy to be at rest in relation to the ship…

Cannons

This kind of inertics does not make for a great cannon. I can certainly make my projectile speed up a lot in the bore by lowering its inertia, but as soon as it leaves it will slow down. If we assume a given force F accelerating it along a bore of length L, it will pick up FL Joules of kinetic energy from the work the cannon does – independent of mass or inertia! The difference may be power: if you can only supply a certain energy per second, like in a coilgun, having a slower projectile in the bore is better.

Physics

Note that entering and leaving an inertics field will induce stresses. A metal rod entering an inertia-increasing field will have the part in the field moving more slowly, pushing back against the not slowed part (yet another plus for the armour!). When leaving the field the lighter part outside will pull away strongly.

Another effect of shifting velocities is that gases behave differently. At first it looks like changing speeds would change temperature (since we tend to think of the temperature of a gas as how fast the molecules are bouncing around), but actually the kinetic temperature of a gas depends on (you guessed it) the average kinetic energy. So that doesn’t change at all. However, the speed of sound should scale as \propto 1/\sqrt{\mu}: it becomes far higher in the inertia-dampening field, producing helium-voice like effects. Air molecules inside an inertia-decreasing field would tend to leave more quickly than outside air would enter, producing a pressure difference.

Momentum conservation is a headache

Changing the velocity so that energy is conserved unfortunately has a drawback: momentum is not conserved! I throw a heavy object at my inertics machine at velocity v, momentum mv and energy (1/2)mv^2; it reduces its inertia and increases the speed to v/\sqrt{\mu}, keeps the kinetic energy at (1/2)mv^2, and the momentum is now mv/\sqrt{\mu}.

What if we assume the momentum change comes from the field or machine? When I hit the mass M machine with an object it experiences a force enough to change its velocity by w=mv(1-1/\sqrt{\mu})/M. When set to increase inertia it is pushed back a bit, potentially moving up to speed (m/M)v. When set to decrease inertia it is pushed forward, starting to move towards the direction the object impacted from. In fact, it can get arbitrarily large velocities by reducing \mu close to 0.

This sounds odd. Demanding momentum and energy conservation requires mv = mv/\sqrt{\mu} + Mw (giving the above formula) and mv^2 = \mu m(v/\sqrt{\mu})^2 + Mw^2, which insists that w=0. Clearly we cannot have both.

I don’t know about you, but I’d rather keep energy conserved. It is more obvious when you cheat about energy conservation than when you cheat about momentum.

Still, as Einstein pointed out using 4-vectors, momentum and energy conservation are deeply entangled – one reason inertics isn’t terribly likely in the real world is that they cannot be separated. We could of course try to conserve 4-momentum ((E/c,\gamma \mu m v_x, \gamma \mu m v_y, \gamma \mu m v_z)), which would look like changing both energy and normal momentum at the same time.

Energy gain/loss to preserve momentum

What about just retaining the normal momentum rather than the kinetic energy? The new velocity would be v/\mu, the new kinetic energy would be K_1=(1/2) \mu m (v/\mu)^2 = (1/2) mv^2 / \mu = K_0/\mu. Just like in the kinetic energy preserving case the object slows down (or speeds up), but more strongly. And there is an energy debt of K_0 (1-1/\mu) that needs to be fixed.

One way of resolving energy conservation is to demand that the change in energy is supplied by the inertia-manipulation device. My ping-pong ball does not change momentum, but in an inertia-reducing field (\mu=0.001, say) it requires roughly 0.5 J to gain its new kinetic energy. The device has to provide that. When the ball leaves the field there will be a surge of energy the device needs to absorb back. Some nice potential here for things blowing up in dramatic ways, a requirement for any self-respecting space opera.
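A minimal numerical sketch of the two bookkeeping schemes applied to the ping-pong ball, following the scalings posited above (the function name and the \mu=0.001 value are just illustrative choices):

```python
import math

def apply_field(m, v, mu, scheme):
    """Velocity and kinetic energy of a mass-m object entering a field of strength mu,
    using K = (1/2) mu m v^2. scheme='energy' keeps K fixed, scheme='momentum'
    keeps mu*m*v fixed. Returns (new velocity, new K, energy the device must supply)."""
    K0 = 0.5 * m * v ** 2
    v1 = v / math.sqrt(mu) if scheme == 'energy' else v / mu
    K1 = 0.5 * mu * m * v1 ** 2
    return v1, K1, K1 - K0

# the 1 g ping-pong ball at 1 m/s, in an inertia-reducing field mu = 0.001
for scheme in ('energy', 'momentum'):
    v1, K1, dE = apply_field(0.001, 1.0, 0.001, scheme)
    print(f"{scheme:8s}: v' = {v1:7.1f} m/s, K' = {K1:.4f} J, device supplies {dE:.4f} J")
```

The energy-preserving scheme leaves nothing for the device to pay, while the momentum-preserving scheme demands about 0.5 J from it for this tiny ball.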

Spacecraft

If I want to accelerate my spaceship in this setting, I would point my momentum vector towards the target, reduce my inertia a lot, and then have to provide a lot of kinetic energy from my inertics devices and power supply (actually, store a lot – the energy is a surplus). At first this sounds like it is just as bad as normal rocketry, but in fact it is awesome: I can convert my electricity directly into velocity without having to lug around a lot of reaction mass! I will even get it back when slowing down, a bit like electric brake regeneration systems.  The rocket equation does not apply beyond getting some initial momentum. In fact, the less velocity I have from the start, the better.

At least in this scheme inertia-reduced reaction mass can be restored to full inertia within the conceptual framework of energy addition/subtraction.

One drawback is that now when I run into interplanetary dust it will drain my batteries as the inertics system needs to give it a lot of kinetic energy (which will then go on harming me!)

Another big problem (pointed out by Erik Max Francis) is that turning energy into kinetic energy gives an energy requirement dK/dt = mva, which depends on an absolute speed. This requires a privileged reference frame, throwing out relativity theory. Oops (but not unexpected).

Forcefields/armour

Energy addition/depletion makes traditional force-fields somewhat plausible: a projectile hits the field, and we use the inertics to reduce its kinetic energy to something manageable. A rifle bullet has a few thousand Joules of energy, and if you can drain that it will now harmlessly bounce off your normal armour. Presumably shields will be depleted when the ship cannot dissipate or store the incoming kinetic energy fast enough, causing the inertics to overload and then leaving the ship unshielded.

Cannons

This kind of inertics allows us to accelerate projectiles using the inertics technology, essentially feeding them as much kinetic energy as we want. If you first make your projectile super-heavy, accelerate it strongly, and then normalise the inertia it will now speed away with a huge velocity.

Physics

A metal rod entering this kind of field will experience the same type of force as in the kinetic energy respecting model, but here the field generator will also be working on providing energy balance: in a sense it will be acting as a generator/motor. Unfortunately it does not look like it could give a net energy gain by having matter flow through.

Note that this kind of device cannot be simply turned off like the previous one: there has to be an energy accounting as everything returns to \mu=1. The really tricky case is if you are in energy-debt: you have an object of lowered inertia in the field, and cut the power. Now the object needs to get a bunch of kinetic energy from somewhere. Sudden absorption of nearby kinetic energy, freezing stuff nearby? That would break thermodynamics (I could set up a perpetual motion heat engine this way). Leaving the inertia-changed object with the changed inertia? That would mean there could be objects and particles with any effective mass – space might eventually be littered with atoms with altered inertia, becoming part of normal chemistry and physics. No such atoms have ever been found, but maybe that is because alien predecessor civilisations were careful with inertial pollution.

Other approaches

Gravity manipulation

Another approach is to say that we are manipulating spacetime so that inertial forces are cancelled by a suitable gravity force (or, for purists, that the acceleration due to something gets cancelled by a counter-acceleration due to spacetime curvature that makes the object retain the same relative momentum).

The classic is the “gravitic drive” idea, where the spacecraft generates a gravity field somehow and then free-falls towards the destination. The acceleration can be arbitrarily large but the crew will just experience freefall. Same thing for accelerating projectiles or making force-fields: they just accelerate/decelerate projectiles a lot. Since momentum is conserved there will be recoil.

The force-fields will however be wimpy: essentially it needs to be equivalent to an acceleration bringing the projectile to a stop over a short distance. Given that normal interplanetary velocities are in tens of kilometres per second (escape velocity of Earth, more or less) the gravity field needs to be many, many Gs to work. Consider slowing down a 20 km/s railgun bullet to a stop over a distance of 10 meters: it needs to happen over a millisecond and requires a 20 million m/s^2 deceleration (2.03 megaG).

If we go with energy and momentum conservation we may still need to posit that the inertics/antigravity draws power corresponding to the work it does. Make a wheel turn because of an attracting and repulsing field, and the generator has to pay for the work (plus experience a torque). Make a spacecraft go from point A to B, and it needs to pay the potential energy difference, momentum change, and at least temporarily the gain in kinetic energy. And if you demand momentum conservation for a gravitic drive, then you have the drive pulling back with the same “force” as the spacecraft experiences. Note that energy and momentum in general relativity are only locally conserved; at least this kind of drive can handwave some excuse for breaking local momentum conservation by positing that the momentum now resides in an extended gravity field (and maybe gravitational waves).

Unlike the previous kinds of inertics this doesn’t change the properties of matter, so the effects on objects discussed below do not apply.

One problem is edge tidal effects. Somewhere there is going to be a transition zone where there is a field gradient: an object passing through is going to experience some extreme shear forces and likely spaghettify. Conversely, this makes for a nifty weapon ripping apart targets.

One problem with gravity manipulation is that it normally has to occur through gravity, which is both very weak and only has positive charges. Electromagnetic technology works so well because we can play positive and negative charges against each other, getting strong effects without using (very) enormous numbers of electrons. Gravity (and gravitomagnetic effects) normally only occurs due to large mass-energy densities and momenta. So for this to work there better be antigravitons, negative mass, or some other way of making gravity behave differently from vanilla relativity. Inertics can typically handwave something about the Higgs field at least.

Forcefield manipulation

This leaves out the gravity part and just posits that you can place force vectors wherever you want. A bit like Iain M. Banks’ effector beams. No real constraints because it is entirely made-up physics; it is not clear it respects any particular conservation laws.

Other physical effects

Here are some of the nontrivial effects of changing inertia of matter (I will leave out gravity manipulation, which has more obvious effects).

Electromagnetism: beware the blue carrot

It is worth noting that this thought experiment does not affect light and other electromagnetic fields: photons are massless. The overall effect is that they will push around charged objects in the field more or less strongly. A low-inertia electron subjected to a given electric field will accelerate more, a high-inertia electron less. This in turn changes the natural frequencies of many systems: a radio antenna will change tuning depending on the inertia change. A receiver inside the inertics field will experience outside signals as being stronger (if the field decreases inertia) or weaker (if it increases it).

Reducing inertia also increases the Bohr magneton, e\hbar/(2 \mu m_e). This means that paramagnetic materials become more strongly affected by magnetic fields, and that ferromagnets are boosted. Conversely, higher inertia reduces magnetic effects.

Changing inertia would likely change atomic spectra (see below) and hence optical properties of many compounds. Many pigments gain their colour from absorption due to conjugated systems (think of carotene or heme) that act as antennas: inertia manipulation will change the absorbed frequencies. Carotene with increased inertia will presumably shift its absorption spectra towards lower frequencies, becoming redder, while lowered inertia causes a green or blue shift. An interesting effect is that the rhodopsin in the eye will also be affected and colour vision will experience the same shift (objects will appear to change colour in regions with a different \mu from the place where the observer is, but not inside their field). Strong enough fields will cause shifts so that absorption and transmission outside the visual range will matter, e.g. infrared or UV becomes visible.

However, the above claim that photons should not be affected by inertia manipulation may not have to hold true. Photons carry momentum, p=\hbar k where k is the wave vector. So we could assume a factor of 1/\sqrt{\mu} or 1/\mu gets in there and the field red/blueshifts photons. This would complicate things a lot, so I will leave analysis to the interested reader. But it would likely make inertics fields visible due to refractive effects.

Chemistry: toxic energy levels, plus a shrink-ray

One area inertics would mess up is chemistry. Chemistry is basically all about the behaviour of the valence electrons of atoms. Their behaviour depends on their distribution between the atomic orbitals, which in turn depends on the Schrödinger equation for the atomic potential. And this equation has a dependency on the mass of the electron and nucleus.

If we look at hydrogen-like atoms, the main effect is that the energy levels become

E_n = - \mu (M Z^2 e^4/8 \epsilon_0^2 h^2 n^2),

where M=m_e m_p/(m_e+m_p) is the reduced mass. In short, the inertial manipulation field scales the energy levels up and down proportionally. One effect is that it becomes much easier to ionise low-inertia materials, and that materials that are normally held together by ionic bonds (say NaCl salt) may spontaneously decay when in high-inertia fields.

The Bohr radius scales as a_0 \propto 1/\mu: low-inertia atoms become larger. This really messes with materials. Placed in a low-inertia field atoms expand, making objects such as metals inflate. In a high inertia-field, electrons keep closer to the nuclei and objects shrink.
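For a hydrogen-like atom (a sketch ignoring the reduced-mass correction), folding the inertia factor into the electron mass gives

a_0 = \frac{4 \pi \epsilon_0 \hbar^2}{\mu m_e e^2} \propto 1/\mu.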

As distances change, the effects of electromagnetic forces also change: internal molecular electric forces, van der Waals forces and the like change in strength, which will no doubt have effects on biology. Not to mention melting points: reducing the inertia will make many materials melt at far lower temperatures due to larger inter-atomic and inter-molecular distances, while increasing it can make room-temperature liquids freeze because they are now more closely packed.

This size change also affects the electron-electron interactions, which among other things shield the nucleus and reduce the effective nuclear charge. The changed energy levels do not strongly affect the structure of the lightest atoms, so they will likely form the same kind of chemical bonds and have the same chemistry. However, heavier atoms such as copper, chromium and palladium already have orbital filling orders that deviate slightly from the standard rules because of the quirks of the energy levels. As \mu deviates from 1 we should expect lighter and lighter atoms to get alternative filling patterns, and this means they will get different chemistry. Given that copper and chromium are essential for some enzymes, this does not bode well – if copper no longer works in cytochrome oxidase, the respiratory chain will lethally crash.

If we allow permanently inertia-altered particles chemistry can get extremely weird. An inertia-changed electron would orbit in a different way than a normal one, giving the atom it resided in entirely different chemical properties. Each changed electron could have its own individual inertia. Presumably such particles would randomise chemistry where they resided, causing all sorts of odd reactions and compounds not normally seen. The overall effect would likely be pretty toxic, since it would on average tend to catalyze metastable high-energy, low-entropy structures in biochemistry to fall down to lower energy, higher entropy states.

Lowering inertia in many ways looks like heating up things: particles move faster, chemicals diffuse more, and things melt. Given that much of biochemistry is tremendously temperature dependent, this suggests that even slight changes of \mu to 0.99 or 1.01 would be enough to create many of the bad effects of high fever or hypothermia, and a bit more would be directly lethal as proteins denaturate.

Fluids: I need a lie down

Inside a lowered-inertia field matter responds more strongly to forces, and this means that fluids flow faster for the same pressure difference. Buoyancy causes stronger convection. For a given velocity, the inertial forces are reduced compared to the viscosity, lowering the Reynolds number and making flows more laminar. Conversely, enhanced-inertia fluids are hard to get moving, but at a given speed they will be more turbulent.

This will really mess up the sense of balance and likely blood flow.

Gravity: equivalent exchange

I have ignored the equivalence of inertial and gravitational mass. One way for me to get away with it is to claim that they are still equivalent, since everything occurs within some local region where my inertics field is acting: all objects get their inertial mass multiplied by \mu and this also changes their gravitational mass. The equivalence principle still holds.

What if there is no equivalence principle? I could make a 1 kg object and a 1 gram object fall at different accelerations. If I had a massless spring between them it would be extended, and I would gain energy. Besides the work done by gravity to bring down the objects (which I could collect and use to put them back where they started) I would now have extra energy – aha, another perpetual motion machine! So we had better stick to the equivalence principle.

Given that boosting inertia makes matter both tend to shrink to denser states and have more gravitational force, an important worldbuilding issue is how far I will let this process go. Using it to help fission or fusion seems fine. Allowing it to squeeze matter into degenerate states or neutronium might be more world-changing. And easy making of black holes is likely incompatible with the survival of civilisation.

[ Still, destroying planets with small black holes is harder than it looks. The traditional “everything gets sucked down into the singularity” scenario is surprisingly slow. If you model it using spherical Bondi accretion you need an Earth-mass black hole to make the sun implode within a year or so, and a 3\cdot 10^{19} kg asteroid mass black hole to implode the Earth. And the extreme luminosity slows things a lot more. A better way may be to use an evaporating black hole to irradiate the solar system instead, or blow up something sending big fragments. ]

Another fun use of inertics is of course to mess up stars directly. This does not work with the energy addition/depletion model, but the velocity change model would allow creating a region of increased inertia where density ramps up: plasma enters the volume and may start descending below the spot. Conversely, reducing inertia may open a channel where it is easier for plasma from the interior to ascend (especially since it would be lighter). Even if one cannot turn this into a black hole or trigger surface fusion, it might enable directed flares as the plasma drags electromagnetic field lines with it.

The probe was invisible on the monitor, but its effects were obvious: titanic volumes of solar plasma were sucked together into a strangely geometric sunspot. Suddenly there was a tiny glint in the middle and a shock-wave: the telemetry screens went blank.

“Seems your doomsday weapon has failed, professor. Mad science clearly has no good concept of proper workmanship.”

“Stay your tongue. This is mad engineering: the energy ran out exactly when I had planned. Just watch.”

Without the probe sucking it together the dense plasma was now wildly expanding. As it expanded it cooled. Beyond a certain point it became too cold to remain plasma: there was a bright flash as the protons and electrons recombined and the vortex became transparent. Suddenly neutral, the matter no longer constrained the tortured magnetic field lines and they snapped together at the speed of light. The monitor crashed.

“I really hope there is no civilization in this solar system sensitive to massive electromagnetic pulses” the professor gloated in the dark.

Conclusions

Model: Preserve kinetic energy
Pros: Nice armour. Fast spacecraft with no energy needs (but weird momentum changes).
Cons: Interplanetary dust is a problem. Inertics cannons inefficient. Toxic effects on biochemistry.

Model: Preserve momentum
Pros: Nice classical forcefield. Fast spacecraft with energy demands. Inertics cannons work. Potential for cool explosions due to overloads.
Cons: Interplanetary dust drains batteries. Extremely weird issues of energy-debts: either breaking thermodynamics or getting altered-inertia materials. Toxic effects on biochemistry. Breaks relativity.

Model: Gravity manipulation
Pros: No toxic chemistry effects. Fast spacecraft with energy demands. Inertics cannons work.
Cons: Forcefields wimpy. Gravitic drives are iffy due to momentum conservation (and are WMDs). Gravity is more obviously hard to manipulate than inertia. Tidal edge forces.

In both schemes where actual inertia is changed, inertics fields appear pretty lethal. A brief brush with a weak field will likely just be incapacitating, but prolonged exposure is definitely going to kill. And extreme fields are going to do very nasty stuff to most normal materials – making them expand or contract, melt, change chemical structure and whatnot. Hence spacecraft, cannons and other devices using inertics need to be designed to handle these effects. One might imagine placing the crew compartment in a counter-inertics field keeping \mu=1 while the bulk of the spacecraft is surrounded by other fields. A failure of this counter-inertics field does not just instantly turn the crew into tuna paste, but into blue toxic tuna paste.

Gravity manipulation is cleaner, but this is not necessarily a plus from the cool fiction perspective: sometimes bad side effects are exactly what world-building needs. I love the idea of inertics with potential as an anti-personnel or assassination weapon through its biochemical effects, or “forcefields” being super-dense metal with amplified inertia protecting against high-velocity or beam impact.

The Atomic Rockets page makes a big deal out of how reactionless propulsion gives you space-opera-destroying weapons of mass destruction (if every tramp freighter can be turned into a relativistic missile, how long is the Imperial Capital going to last?). This is a smaller problem here: being hit by an inertia-reduced freighter hurts less, even when it is very fast (think of being hit by a fast ping-pong ball). Gravity propulsion still enables some nasty relativistic weaponry, and if you spend time adding kinetic energy to your inertia-reduced missile it can become pretty nasty. But even if the reactionless aspect does not trivially produce WMDs, inertia manipulation will produce a fair number of other risky possibilities. However, given that even a normal space freighter is a hypervelocity missile, the problem lies more in how to conceptualise a civilisation that regularly handles high-energy objects in the vicinity of centres of civilisation.

Not discussed here are issues of how big the fields can be made. Could we reduce the inertia of an asteroid or planet, sending it careening around? That has some big effects on the setting. Similarly, how small can we make the inertics: do they require a starship to power them, or could we have them in epaulettes? Can they be counteracted by another field?

Inertia-changing devices are really tricky to get to work consistently; most space opera SF using them just conveniently ignores the mess – just as it ignores how FTL gives rise to time travel or how talking droids ought to transform the global economy totally.

But it is fun to think through the awkward aspects, since some of them make the world-building more exciting. Plus, I would rather discover them before my players, so I can make official handwaves of why they don’t matter if they are brought up.

How much for that neutron in the window?

Zach Weinersmith asked:

That is a great question. I once came up with the answer “50 tons of neutrons are needed” to a serious problem (you don’t want to know). How cheaply could you get that?

Figuring out roughly how many neutrons there are per kilogram of pure elements is pretty easy. Get their standard atomic weights, A, and subtract the atomic number Z since that is the number of protons: N=A-Z. Now we know how many neutrons there are per atom on average (standard atomic weights include the different isotope weights, weighted by their abundance).

[ Since nucleons (protons and neutrons) are about 1830 times heavier than electrons, we can ignore the electrons for an error on order of 0.05%. There is also a binding energy error, since some of the total atomic mass is because of binding energy between nucleons, which is 0.94% or less. These errors are nothing compared to price uncertainties.]

We know that one nucleon weighs about u=1.660539040\cdot 10^{-27} kg, so the number of nucleons per kilogram is N_{\mathrm{nucl}} \approx 1/(Au) and the number of neutrons per kilo is N_n \approx N_{\mathrm{nucl}}(N/A). This ranges from 7.5\cdot 10^{25} for helium down to 1.2\cdot 10^{24} for Oganesson. Hydrogen just has 4.7\cdot 10^{24} neutrons per kilogram, despite having 5.97\cdot 10^{26} nucleons per kilogram – there isn’t that much deuterium and tritium around to contribute neutrons.

Now, the price of elements is badly defined. I can get a kilogram of coal much more cheaply than a kilogram of diamond, and ultra-pure elements are very expensive even if the everyday element is cheap. Plus, prices vary. And it is hard to buy plutonium on the open market. Ignoring all that and taking the numbers from Wikipedia (and ignoring that some values look odd, that some are for compounds, that the prices are unadjusted for inflation, and that they are lacking for many elements…) we can actually calculate the number of neutrons per dollar:
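A minimal sketch of the bookkeeping (element data hard-coded for illustration; the aluminium price used here is the $1,650-per-ton figure quoted further down, not the Wikipedia value behind the chart, so the per-dollar number differs somewhat):

```python
U = 1.660539040e-27          # kg per atomic mass unit

# (name, standard atomic weight A, atomic number Z); Og uses its most stable isotope
elements = [("hydrogen", 1.008, 1), ("helium", 4.003, 2),
            ("aluminium", 26.98, 13), ("oganesson", 294.0, 118)]

for name, A, Z in elements:
    nucleons_per_kg = 1.0 / (A * U)                   # N_nucl ~ 1/(A u)
    neutrons_per_kg = nucleons_per_kg * (A - Z) / A   # fraction N/A of nucleons are neutrons
    print(f"{name:10s} {neutrons_per_kg:.2e} neutrons/kg")

aluminium_price = 1.65                                # USD per kg, i.e. $1,650 per ton
print(f"aluminium: {1.0 / (26.98 * U) * (26.98 - 13) / 26.98 / aluminium_price:.2e} neutrons per dollar")
```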

Neutrons per dollar if one buys one kilogram of the element.

And the winner is… aluminium! You can get 8.8\cdot 10^{24} neutrons per dollar from aluminium.

In second place, nitrogen (7.1\cdot 10^{24}) and in third, hydrogen (6.8\cdot 10^{24})! Hydrogen may be very neutron-poor, but since it is rather cheap and you get lots of nucleons per kilo, this balances the lack.

Given that these prices are dodgy, I would expect an uncertainty on the order of a magnitude (at least). So the true winner, given the cheapest actual source of the element, might be hard to find without excruciating price comparisons. But we can be fairly certain it is going to be something with an atomic number less than 25. Uranium is unlikely to be a cheap neutron source in this sense (and just look at poor plutonium!)

So, given that aluminium is 51.8% neutrons by weight I need 96.5 tons. The current aluminium price is $1,650.00 per ton, so I would have to pay $159,225 for the neutrons in my doomsday weapon – I mean, totally innocuous thought experiment!

The Aestivation hypothesis: popular outline and FAQ

Anders Sandberg & Milan Ćirković

Since putting up a preprint for our paper “That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi’s paradox” (Journal of the British Interplanetary Society, in press) we have heard some comments and confusion that suggest to us that it would be useful to try to outline and clarify what our idea is, what we think about it, and some of the implications.


The super-short version of the paper

Maybe we are not seeing alien civilizations because they are all rationally “sleeping” in the current early cosmological era, waiting for a remote future when it is more favourable to exploit the resources of the universe. We show that given current observations we can rule out a big chunk of possibilities like this, but not all.

A bit more unpacked explanation

Information processing requires physical resources, not just computers or brains, but energy to run them. There is a thermodynamic cost to performing information processing that is temperature dependent: in principle, processing becomes 10 times more efficient if your computer is 10 times colder (measured in kelvin). Right now the cosmic background radiation makes nearly everything in the universe hotter than 3 kelvin, but as the universe expands this background temperature will decline exponentially. So if you want to do as much information processing as possible with the energy you have, it makes sense to wait. It becomes exponentially better. Eventually the background temperature bottoms out because of horizon radiation in a few trillion years: at this point it no longer makes sense to wait with the computation.
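The relation behind this (Landauer’s principle, stated here as background rather than as a result of the paper) is that erasing one bit of information costs at least

E_{\mathrm{bit}} \geq k_B T \ln 2,

so the minimum energy cost per irreversible operation scales linearly with temperature: ten times colder means ten times more operations per joule.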

Hence, an advanced civilization may have explored a big chunk of the universe, done what is doable with existing nature, and now mostly have internal “cultural” things to do. These things can be regarded as information processing. If they want to maximize processing they should not do it today but wait until the cold future when they will get tremendously more done (10^{30} times more!). They should hence aestivate, leaving their domain protected by some automation until they wake up.

If this is correct, there might be old and powerful civilizations around that are hard to observe, not because they are deliberately hiding but because they are inactive for the time being.

However, were this hypothesis true, they would not want to lose their stuff. We should expect to see fewer processes that reduce resources  that could be useful in the far future. In the paper we look at processes that look like they might waste resources: stars converting mass into energy that is lost, stars imploding into black holes, galactic winds losing gas into intergalactic space, galaxy collisions, and galaxy clusters getting separated by the expansion of the universe. Current observations do not seem to indicate anything preventing these processes (and most interventions would be very visible).

Hence, either:

  1. the answer to the Fermi question “where are they?!” is something else (like there being no aliens),
  2. advanced civilizations aestivate but do so with only modest hoards of resources rather than entire superclusters,
  3. they are mostly interested in spreading far and wide since this gives a lot of stuff with a much smaller effort than retaining it.

Necessary assumptions

The aestivation hypothesis makes the following assumptions:

  1. There are civilizations that mature much earlier than humanity. (not too implausible, given that Earth is somewhat late compared to other planets)
  2. These civilizations can expand over sizeable volumes, gaining power over their contents. (we have argued that this is doable)
  3. These civilizations have solved their coordination problems. (otherwise it would be hard to jointly aestivate; assumption likelihood hard to judge)
  4. A civilization can retain control over its volume against other civilizations. (otherwise it would need to actively defend its turf in the present era and cannot aestivate; likelihood hard to judge)
  5. The fraction of mature civilizations that aestivate is non-zero. (if it is rational at least some will try)
  6. Aestivation is largely invisible. (seems likely, since there would be nearly no energy release)

Have you solved the Fermi question?

We are not claiming we now know the answer to the Fermi question. Rather, we have a way of ruling out some possibilities, and we point out a few new things worth looking for (like galaxies with inhibited heavy star formation).

Do you really believe in it?

I (Anders) personally think the likeliest reason we are not seeing aliens is not that they are aestivating, but just that they do not exist or are very far away.

We have an upcoming paper giving some reasons for this belief. The short of it is that, given the current state of scientific knowledge, we are very uncertain about the probabilities of life and of intelligence. They could be exceedingly low, which means we have to assign a fairly high credence to the empty-universe hypothesis. If that hypothesis is not true, then aestivation is a pretty plausible answer in my personal opinion.

Why write about a hypothesis you do not think is the most likely one? Because we need to cover as much of possibility space as possible, and the aestivation hypothesis is neatly suggested by considerations of the thermodynamics of computation and physical eschatology. We have been looking at other unlikely Fermi hypotheses, like the berserker hypothesis, to see if we can give good constraints on them (in that case, our existence plus some ecological instability problems make berserkers unlikely).

What is the point?

Understanding the potential and limits of intelligence in the universe tells us things about our own chances and potential future.

At the very least, this paper shows what a future advanced human-derived civilization may try to achieve, and some of the ultimate limits on far-future information processing. It gives some new numbers to feed into Nick Bostrom’s astronomical waste argument for working very hard on reducing existential risk in the present: the potential future is huge.

Regarding alien civilizations, the paper maps a part of possibility space, showing what is required for this explanation of the Fermi paradox to actually work. It helps cut down the possibilities a fair bit.

What about the Great Filter?

We know there has to be at least one unlikely step between non-living matter and easily observable technological civilizations (“the Great Filter”), otherwise the sky would be full of them. If it is an early filter (life or intelligence is rare) we may be fairly alone but our future is open; were the filter a later step, we should expect to be doomed.

The aestivation hypothesis doesn’t tell us much about the filter. It provides a way of explaining the quiet sky that does not require aliens to be absent, so as long as we do not know whether it is true we learn little from the silence. The lack of megascale engineering is evidence against certain kinds of alien goals and activities, but rather weak evidence.

Meaning of life

Depending on what you are trying to achieve, different long-term strategies make sense. This is another way SETI may tell us something interesting about the Big Questions by showing what advanced species are doing (or not):

If the ultimate value you aim for is local such as having as many happy minds as possible, then you want to spread very far and wide, even though the galaxy clusters you have settled will eventually drift apart and be forever separated. The total value doesn’t depend on all those happy minds talking to each other. Here the total amount of value is presumably proportional to the amount of stuff you have gathered times how long it can produce valuable thoughts. Aestivation makes sense, and you want to spread far and wide before doing it.

If the ultimate value you aim for is nonlocal, such as having your civilization produce the deepest possible philosophy, then all parts need to stay in touch with each other. This means that expanding outside a gravitationally bound supercluster is pointless: your expansion will halt at this point. We can be fairly certain there are no advanced civilizations trying to scrape together larger superclusters since it would be very visible.

If the ultimate value you aim for is finite, then at some point you may be done: you have made the perfect artwork or played all the possible chess games. Such a civilization only needs resources enough to achieve the goal, and then presumably will shut down. If the goal is small it might do this without aestivating, while if it is large it may aestivate with a finite hoard.

If the ultimate goal is modest, like enjoying your planetary utopia, then you will not affect the large-scale universe (although launching intergalactic colonization may still be good for security, leading to a nonlocal instrumental goal). Modest civilizations do not affect the overall fate of the universe.

Can we test it?

Yes! The obvious way is to carefully look for odd processes keeping the universe from losing potentially useful raw materials. The suggestions in the paper give some ideas, but there are doubtless other things to look for.

Also, aestivators would want to protect themselves from late-evolving species that could steal their stuff. If we were to start building self-replicating von Neumann probes in the future, any aestivators around had better stop us. This hypothesis test may of course be rather dangerous…

Isn’t there more to life than information processing?

Information is “a difference that makes a difference”: information processing is just going from one distinguishable state to another in a meaningful way. This covers not just computing with numbers and text, but having one brain state follow another, doing economic transactions, and creating art. Falling in love means that a mind goes from one state to another in a very complex way. Maybe the important subjective aspect is something very different from brain states, but unless you think it is possible to fall in love without having the brain change state, there will be an information processing element to it. And that information processing is bound by the laws of thermodynamics.

Some theories of value place importance on how or that something is done rather than on the consequences or intentions (which can be viewed as information states): maybe a perfect Zen action holds value on its own. But if the start and end states are the same, then an infinite number of such actions could be done and a corresponding amount of value achieved – yet there is no way of telling whether they ever happened, since there will be no memory of them occurring.

In short, information processing is something we instrumentally need for the mental or practical activities that truly matter.

“Aestivate”?

Like hibernate, but through the summer (Latin aestus = heat; aestivate = to spend the summer). Hibernate (Latin hibernus = wintry) is more common, but since this is about avoiding heat we chose the slightly rarer term.

Can’t you put your computer in a fridge?

Yes, it is possible to cool below 3 K. But you need to do work to achieve it, spending precious energy on the cooling. If you want your computing done *now* and do not care about the total amount of computing, this is fine. But if you want as much computing as possible, then fridges are going to waste some of your energy.
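
To put a rough number on it, consider an idealized Carnot refrigerator (the best case): pumping heat Q out of a computer at temperature T_c into the background at temperature T_h costs work W \geq Q(T_h-T_c)/T_c. Erasing a bit at T_c releases Q = kT_c\ln(2) of heat, so the total energy per erased bit is still at least kT_c\ln(2) + k(T_h-T_c)\ln(2) = kT_h\ln(2): the warm environment, not the cold computer, sets the price, and refrigeration does not increase the total amount of computation your energy budget can buy.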

There are some cool (sorry) possibilities in using very large black holes as heat sinks, since their temperature would be lower than the background radiation. But this will only last for a few hundred billion years; after that the background will be cooler than the holes.

Does the cost of computation have to be temperature dependent?

The short answer is no, but we do not think this matters for our conclusion.

The irreducible energy cost of computation is due to the Landauer limit (this limit or principle has also been ascribed to Brillouin, Shannon, von Neumann and many others): to erase one bit of information you need to pay an energy cost equal to kT\ln(2) or more. Otherwise you could cheat the second law of thermodynamics.

However, logically reversible computation can avoid paying this cost by never erasing information. The catch is of course that eventually memory runs out, but Bennett showed that one can then “un-compute” the computation by running it backwards, removing the garbage. The problem is that reversible computation needs to run very close to the average energy of the system (taking a long time), and that error correction is irreversible and temperature dependent. The same is true for quantum computation.

If one has a pool of negentropy, that is, something ordered that can be randomized, then one can “pay” for bit erasure using this pool until it runs out. This is potentially temperature independent! One can imagine having access to a huge memory full of zero bits. By swapping your garbage bit for a zero bit, you can potentially run computations without paying an energy cost (if the swapping is free): the memory acts as if it had essentially zero temperature.

If there are natural negentropy pools aestivation is pointless: advanced civilizations would be dumping their entropy there in the present. But as far as we know, there are no such pools. We can make them by ordering matter or energy, but that has a work cost that depends on temperature (or using yet another pool of negentropy).
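
As a rough formula: a pool of N pre-set zero bits can absorb at most N bits of garbage before it is used up, and preparing such a pool by resetting a random memory at temperature T_{prep} costs work W \geq NkT_{prep}\ln(2). A manufactured pool therefore just moves the Landauer cost to wherever and whenever the pool was made.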

Space-time as a resource?

Maybe the flatness of space-time is the ultimate negentropy pool, and by wrinkling it up we can get rid of entropy: this is in a sense how the universe has become so complex thanks to matter lumping together. The total entropy due to black holes dwarfs the entropy of normal matter by several orders of magnitude.

Were space-time lumpiness a useful resource we should expect advanced civilizations to dump matter into black holes on a vast scale; this does not seem to be going on.

Lovecraft, wasn’t he, you know… a bit racist?

Yup. Very racist. And fearful of essentially everything in the modern world: globalisation, large societies, changing traditions, technology, and the way insights from science make humans look like a small part of the universe rather than the centre of creation. Part of what makes his horror stories interesting is that they are horror stories about modernity and the modern world-view. From a modernist perspective these things are not evil in themselves.

His vision of a vast universe inhabited by incomprehensible alien entities far outside the range of current humanity does fit in with Dysonian SETI and transhumanism: we should not assume we are at the pinnacle of power and understanding, we can look for signs that there are far more advanced civilizations out there (and if there are, we had better figure out how to relate to that fact), and we can aspire to become something like them – which of course would have horrified Lovecraft to no end. Poor man.

Håkan’s surface

Here is a minimal surface based on the Weierstrass-Enneper representation f(z)=1, g(z)=\tanh^2(z). Written explicitly as a function from the complex number z to 3-space it is \Re([-\tanh(z)(\mathrm{sech}^2(z)-4)/6,i(6z+\tanh(z)(\mathrm{sech}^2(z)-4))/6,z-\tanh(z)]).

Håkan’s surface, a minimal surface with Weierstrass-Enneper representation f=1,g=tanh(z)^2.

It is based on my old tanh surface, but has a wilder style, helped by the fact that my triangulation in the picture is pretty jagged. It has two flat ends, as well as an infinite number of catenoid openings (only two shown here).
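
For anyone who wants to reproduce the surface, here is a minimal Matlab sketch using the explicit parametrization above; the parameter ranges are an arbitrary choice that stays clear of the poles of \tanh at v=\pm\pi/2, where the catenoid openings sit:

% Minimal sketch: plot Hakan's surface from the explicit formula above.
[u,v] = meshgrid(linspace(-2,2,101), linspace(-1.5,1.5,151)); % parameter domain
z = u + 1i*v;
t = tanh(z); s2 = 1./cosh(z).^2;  % tanh(z) and sech(z)^2
X = real(-t.*(s2-4)/6);
Y = real(1i*(6*z + t.*(s2-4))/6);
Z = real(z - t);
surf(X,Y,Z,'EdgeColor','none'); camlight; lighting gouraud; axis equal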

I call it Håkan’s surface, since I came up with it on my dear husband’s birthday. Happy birthday, Håkan!

Why fears of supersizing are misplaced

I am a co-author of the paper “On the Impossibility of Supersized Machines” (together with Ben Garfinkel, Miles Brundage, Daniel Filan, Carrick Flynn, Jelena Luketina, Michael Page, Andrew Snyder-Beattie, and Max Tegmark):

In recent years, a number of prominent computer scientists, along with academics in fields such as philosophy and physics, have lent credence to the notion that machines may one day become as large as humans. Many have further argued that machines could even come to exceed human size by a significant margin. However, there are at least seven distinct arguments that preclude this outcome. We show that it is not only implausible that machines will ever exceed human size, but in fact impossible.

In the spirit of using multiple arguments to bound a risk (so that the failure of a single argument does not strongly reduce the power of the joint argument) we show that there are philosophical reasons (the meaninglessness of “human-level largeness”, the universality of human largeness, the hard problem of largeness), psychological reasons (acting as an error theory based on motivated cognition), conceptual reasons (humans plus machines will be larger) and scientific/mathematical reasons (irreducible complexity, the quantum-Gödel issue) not to believe in the possibility of machines larger than humans.

While it is cool to do exploratory engineering to demonstrate what can in principle be built, it is also very reassuring to show there are boundaries of what is possible. That allows us to focus on the (large) space within.

 

Catastrophizing for not-so-fun and non-profit

Oren Cass has an article in Foreign Affairs about the problem of climate catastrophizing: how it is driven by motivated reasoning and in turn drives more motivated reasoning, in a vicious circle. Regardless of whether he himself engages in motivated reasoning too, I think the text is relevant beyond the climate domain.

Some of FHI’s research and reports are mentioned in passing. Their role is mainly to show that there could be very bright futures or other existential risks, which undercuts the climate catastrophists he is really criticising:

Several factors may help to explain why catastrophists sometimes view extreme climate change as more likely than other worst cases. Catastrophists confuse expected and extreme forecasts and thus view climate catastrophe as something we know will happen. But while the expected scenarios of manageable climate change derive from an accumulation of scientific evidence, the extreme ones do not. Catastrophists likewise interpret the present-day effects of climate change as the onset of their worst fears, but those effects are no more proof of existential catastrophes to come than is the 2015 Ebola epidemic a sign of a future civilization-destroying pandemic, or Siri of a coming Singularity

I think this is an important point for the existential risk community to be aware of. We are mostly interested in existential risks and global catastrophes that look possible but could be impossible (or avoided), rather than trying to predict risks that are going to happen. We deal in extreme cases that are intrinsically uncertain, and leave the more certain things to others (unless maybe they happen to be very under-researched). Siri gives us some singularity-evidence, but we think it is weak evidence, not proof (a hypothetical AI catastrophist would instead say “so, it begins”).

Confirmation bias is easy to fall for. If you are looking for signs of your favourite disaster emerging you will see them, and presumably loudly point at them in order to forestall the disaster. That suggests there is extra value in checking which candidate risks might not be xrisks and should not be emphasised too much.

Catastrophizing is not very effective

The nuclear disarmament movement also used a lot of catastrophizing, with plenty of archetypal cartoons showing Earth blowing up as a result of nuclear war, or claims that it would end humanity. The fact that the likely outcome would merely be mega- or gigadeath and untold suffering was apparently not regarded as rhetorically punchy enough. Ironically, Threads, The Day After or the Charlottesville scenario in Effects of Nuclear War may have been far more effective in driving home the horror and undesirability of nuclear war, largely by giving smaller-scale, more relatable scenarios. Scope insensitivity, psychic numbing, compassion fade and related effects make catastrophizing a weak, perhaps even counterproductive, tool.

Defending bad ideas

Another take-home message: when arguing for the importance of xrisk we should make sure we do not end up in the stupid loop he describes. If something is the most important thing ever, we had better argue for it well, backed up with as much evidence and reason as can possibly be mustered. Turning it all into a game of overcoming cognitive bias through marketing, or attributing psychological explanations to opposing views, is risky.

The catastrophizing problem for very important risks is related to Janet Radcliffe-Richards’ analysis of what is wrong with political correctness (in an extended sense). A community argues for some high-minded ideal X using some arguments or facts Y. Someone points out a problem with Y. The rational response would be to drop Y and replace it with better arguments or facts Z (or, if it is really bad, drop X). The typical human response is to (implicitly or explicitly) assume that since Y is used to argue for X, then criticising Y is intended to reduce support for X. Since X is good (or at least of central tribal importance) the critic must be evil or at least a tribal enemy – get him! This way bad arguments or unlikely scenarios get embedded in a discourse.

Standard groupthink, where people with doubts figure out that they had better keep their heads down if they want to remain in the group, strengthens the effect and makes criticism even less common (and hence more salient and out-groupish when it happens).

Reasons to be cheerful?

An interesting detail about the opening: the GCR/Xrisk community seems to be far more optimistic than the climate community as he describes it. I mentioned Warren Ellis’ little novel Normal earlier on this blog; it is about a mental asylum for futurists affected by looking into the abyss. I suspect he was modelling them on the moody climate people, adding an overlay of other futurist ideas and tropes for the story.

Assuming climate people really are that moody.

An elliptic remark

I recently returned to toying around with circle and sphere inversion fractals, that is, fractal sets that are invariant under inversion in a given set of circles or spheres.

That got me thinking: can you invert points in other things than circles? Of course you can! José L. Ramírez has written a nice overview of inversion in ellipses. Basically, a point P is projected to another point P' so that ||P-O||\cdot ||P'-O||=||Q-O||^2, where O is the centre of the ellipse and Q is the point where the ray from O through P and P' intersects the ellipse.

In Cartesian coordinates, for an ellipse centred on the origin with semi-major and semi-minor axes a, b, the inverse point of P=(u,v) is P'=(x,y) where x=\frac{a^2b^2u}{b^2u^2+a^2v^2} and y=\frac{a^2b^2v}{b^2u^2+a^2v^2}. Basically this is a squashed version of the circle inversion formula.
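
In Matlab the map is a one-liner per coordinate; here is a minimal sketch (the function name ellipseInvert is my own label; the appendix code below inlines the same formula):

function [x,y] = ellipseInvert(u, v, x0, y0, a, b)
% Invert the points (u,v) in the ellipse centred at (x0,y0) with semi-axes a,b.
% Works elementwise on vectors of points.
u = u - x0; v = v - y0;                  % move to ellipse-centred coordinates
d = b^2*u.^2 + a^2*v.^2;                 % common denominator
x = x0 + a^2*b^2*u./d;
y = y0 + a^2*b^2*v./d;
end

Saved as ellipseInvert.m it can be applied to whole curves of points at once.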

Many of the properties remain the same. Lines passing through the centre of the ellipse are unchanged. Other lines get mapped to ellipses; if they intersect the inversion ellipse, the new ellipse also intersects it at those points. Hence tangent lines are mapped to tangent ellipses. Ellipses with parallel axes and equal eccentricities map onto other ellipses (or lines if they pass through the centre of the inversion ellipse). Other conics get turned into higher-degree curves; for example a hyperbola gets mapped to a lemniscate. (See also this paper for more examples.)

Now, from a fractal standpoint this means that if you have a set of ellipses that are tangent you should expect a fractal passing through their points of tangency. Basically all of the standard circle inversion fractals hence have elliptic counterparts. Here is the result for a ring of 4 or 6 mutually tangent ellipses:

Invariant set fractal (blue) for inversion in the red ellipses. Generated using an IFS algorithm.

Invariant set fractal (blue) for inversion in the red ellipses. Generated using an IFS algorithm.

These pictures were generated by taking points in the plane and inverting them in randomly selected ellipses; as the process continues they are attracted to the invariant set (this is basically a standard iterated function system). It also has the usual problem of reaching the points at the tangencies: to get there the iteration would have to alternate between inverting in the two tangent ellipses forever, but the random selection will almost surely pick a third ellipse at some point.
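
Here is a minimal sketch of that random iteration, reusing the four-ellipse ring that appears (commented out) in the appendix code; the starting point and number of iterations are arbitrary choices:

% Chaos-game sketch: repeatedly invert a point in randomly chosen ellipses.
center = [-1 -1 2 1; -1 1 1 2; 1 -1 1 2; 1 1 2 1]; % rows: [x0 y0 a b]
center(:,3:4) = center(:,3:4)*(2/3);
P = [0.1 0.2];                             % arbitrary starting point
pts = zeros(20000,2);
for n = 1:size(pts,1)
    i = randi(size(center,1));             % pick a random ellipse...
    u = P(1)-center(i,1); v = P(2)-center(i,2);
    d = center(i,4)^2*u^2 + center(i,3)^2*v^2;
    P = center(i,1:2) + center(i,3)^2*center(i,4)^2*[u v]/d; % ...and invert in it
    pts(n,:) = P;
end
plot(pts(101:end,1), pts(101:end,2), '.', 'MarkerSize', 1)   % drop the transient
axis equal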

One approach is to deliberately recurse downwards to find the points using a depth-first search. We can look at where each ellipse is mapped by each of the inversions; since the fractal is inside each of the mapped ellipses, we can then continue mapping the chain of mapped ellipses, getting nice bounds on where it is (as long as everything is shrinking: this is guaranteed when we map from the outside to the inside of the generating ellipses, but if they overlapped things could explode). Doing this for just one step reveals one reason for the quirky shapes above: some of the ellipses get mapped into crescents or pears, adding a lot of bends:

Mappings of the ellipses by their inversions: each of the four main ellipses map the other three to their interior but distort the shape of two of them.

Now, continuing this process makes a nested structure where the invariant set is hidden inside all the other mapped ellipses.

Nested mappings of the ellipses in the chain, bounding the invariant set. Colors are mixtures of the colors of the generating ellipses, with an increase in saturation.

It is still hard to reach the tangent points, but at least now they are easier to detect. They are also numerically tough: most points on the ellipse circumferences get mapped away from the tangencies, towards the interior of the generating ellipse. Still, if we view the mapped ellipses as uncertainties and shade them in, we get a very pleasing map of the invariant set:

Invariant set of chain of four outer ellipses and a circle tangent to them on the inside.

Here are a few other nice fractals based on these ideas:

Using a mix of circles and ellipses produces a nice mix of the regularity of the circle-based Apollonian gaskets and the swooshy, Hénon fractal shape the ellipses induce.

Appendix: Matlab code

 

% center=[-1 -1 2 1; -1 1 1 2; 1 -1 1 2; 1 1 2 1];
% center(:,3:4)=center(:,3:4)*(2/3);
%
%center=[-1 -1 2 1; -1 1 1 2; 1 -1 1 2; 1 1 2 1; 3 1 1 2; 3 -1 2 1];
%center(:,3:4)=center(:,3:4)*(2/3);
%center(:,1)=center(:,1)-1;
%
% center=[-1 -1 2 1; -1 1 1 2; 1 -1 1 2; 1 1 2 1];
% center(:,3:4)=center(:,3:4)*(2/3);
% center=[center; 0 0 .51 .51];
%
% egg
% center=[0 0 0.6666 1; 2 2 2 2; -2 2 2 2; -2 -2 2 2; 2 -2 2 2];
%
% double
%r=0.5;
%center=[-r 0 r r; r 0 r r; 2 2 2 2; -2 2 2 2; -2 -2 2 2; 2 -2 2 2];
%
% Double egg
center=[0.3 0 0.3 0.845; -0.3 0 0.3 0.845; 2 2 2 2; -2 2 2 2; -2 -2 2 2; 2 -2 2 2];
%
M=size(center,1); % number of ellipses (each row of center: [x0 y0 a b])
N=100; % points on each ellipse outline
X=zeros(N+1,2); % preallocate curve points
clf
hold on
tt=2*pi*(0:N)/N; % angles around the ellipse
alpha 0.2
for i=1:M
    % outline of generating ellipse i
    X(:,1)=center(i,1)+center(i,3)*cos(tt);
    X(:,2)=center(i,2)+center(i,4)*sin(tt);
    plot(X(:,1),X(:,2),'k');
    for j=1:M
        if (i~=j)
            % map ellipse i through chains of inversions starting with ellipse j
            recurseDown(X,[i j],10,center)
            drawnow
        end
    end
end

recurseDown.m

function recurseDown(X,ellword,maxlevel,center)
% Map the curve X by inversion in the last ellipse of the chain ellword,
% plot the result, and recurse to longer chains.
i=ellword(end); % invert in latest ellipse
%
% Perform inversion
C=center(i,1:2);   % ellipse centre
A2=center(i,3).^2; % semi-axis a squared
B2=center(i,4).^2; % semi-axis b squared
Y(:,1)=X(:,1)-C(:,1);
Y(:,2)=X(:,2)-C(:,2);
X(:,1)=C(:,1)+A2.*B2.*Y(:,1)./(B2.*Y(:,1).^2+A2.*Y(:,2).^2);
X(:,2)=C(:,2)+A2.*B2.*Y(:,2)./(B2.*Y(:,1).^2+A2.*Y(:,2).^2);
%
if (norm(max(X)-min(X))<0.005), return; end % stop when the mapped curve is tiny
%
% Colour: mix white with the colours of the ellipses in the chain
co=hsv(size(center,1));
coco=mean([1 1 1; 1 1 1; co(ellword,:)]);
%
%    plot(X(:,1),X(:,2),'Color',coco)
fill(X(:,1),X(:,2),coco,'FaceAlpha',.2,'EdgeAlpha',0)
%
% Recurse: extend the chain with every ellipse except the current one
if (length(ellword)<maxlevel)
    for j=1:size(center,1)
        if (j~=i)
            recurseDown(X,[ellword j],maxlevel,center)
        end
    end
end