Energetics of the brain and AI

Lawrence Krauss is not worried about AI risk (ht to Luke Muehlhauser); while much of his complacency is based on a particular view of the trustworthiness and level of common sense exhibited by possible future AI that is pretty much impossible to criticise, he makes one particular claim:

First, let’s make one thing clear. Even with the exponential growth in computer storage and processing power over the past 40 years, thinking computers will require a digital architecture that bears little resemblance to current computers, nor are they likely to become competitive with consciousness in the near term. A simple physics thought experiment supports this claim:

Given current power consumption by electronic computers, a computer with the storage and processing capability of the human mind would require in excess of 10 Terawatts of power, within a factor of two of the current power consumption of all of humanity. However, the human brain uses about 10 watts of power. This means a mismatch of a factor of 10^{12}, or a million million. Over the past decade the doubling time for Megaflops/watt has been about 3 years. Even assuming Moore’s Law continues unabated, this means it will take about 40 doubling times, or about 120 years, to reach a comparable power dissipation. Moreover, each doubling in efficiency requires a relatively radical change in technology, and it is extremely unlikely that 40 such doublings could be achieved without essentially changing the way computers compute.

This claim has several problems. First, there are few, if any, AI developers who think that we must stay with current architectures. Second, more importantly, the community concerned with superintelligence risk is generally agnostic about how soon smart AI could be developed: it doesn’t have to happen soon for us to have a tough problem in need of a solution, given how hard AI value alignment seems to be. And third, consciousness is likely irrelevant for instrumental intelligence; maybe the word is just used as a stand-in for some equally messy term like “mind”, “common sense” or “human intelligence”.

The interesting issue, however, is what energy requirements and computational power tell us about human and machine intelligence, and vice versa.

Computer and brain emulation energy use

I have earlier on this blog looked at the energy requirements of the Singularity. To sum up, current computers are energy hogs requiring 2.5 TW of power globally, at an average cost of around 25 nJ per operation. More efficient processors are certainly possible (many of the current ones are old and suboptimal). For example, current GPUs consume about a hundred watts and have 10^{10} transistors, reaching performance in the 100 Gflops range: about one nJ per flop. Koomey’s law states that the energy cost per operation halves every 1.57 years (not 3 years as Krauss says). So far computing capacity has grown at about the same pace as energy efficiency, making the two trends cancel each other out. In the end, Landauer’s principle gives a lower bound of kT\ln(2) J per irreversible operation; one can circumvent this by using reversible or quantum computation, but there are costs to error correction – unless we use extremely slow and cold systems, computation in the current era will be energy-intensive.
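
To make the gap concrete, here is a small back-of-envelope sketch (assuming room-temperature operation and the figures quoted above) of how many Koomey halvings separate current hardware from the Landauer floor:

```python
import math

k_B = 1.380649e-23            # Boltzmann constant, J/K
T = 300.0                     # assume room-temperature operation, K

landauer = k_B * T * math.log(2)   # minimum energy per irreversible bit erasure, ~2.9e-21 J
for name, cost in [("average computer, 25 nJ/op", 25e-9), ("current GPU, ~1 nJ/flop", 1e-9)]:
    doublings = math.log2(cost / landauer)       # halvings of energy/op down to the floor
    print(f"{name}: {doublings:.0f} doublings, ~{doublings * 1.57:.0f} years at Koomey pace")
```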

I am not sure what brain model Krauss bases his estimate on, but 10 TW / 25 nJ = 4\cdot 10^{20} operations per second (using the slightly more efficient GPU figure ups it to 10^{22} flops). Looking at the estimates of brain computational capacity in appendix A of my old roadmap, this is higher than most. The only estimate that seems to be in the same ballpark is (Thagard 2002), which argues that the number of computational elements in the brain is far greater than the number of neurons (possibly even individual protein molecules). This is a fairly strong claim, to say the least. Especially since current GPUs can do a somewhat credible job of end-to-end speech recognition and transcription: while that corresponds to a small part of a brain, it is hardly 10^{-11} of a brain.

Generally, assuming a certain number of operations per second in a brain and then calculating an energy cost will give you any answer you want. There are people who argue that what really matters is the tiny conscious bandwidth (maybe 40 bits/s or less) and that over a lifetime we may only learn a gigabit. I used 10^{22} to 10^{25} flops in one post just to be on the safe side. AIimpacts.org has collected several estimates, with a median estimate around 10^{18} flops. They have also argued in favor of using TEPS (traversed edges per second) rather than flops, suggesting around 10^{14} TEPS for a human brain – a level that is soon within reach of some systems.

(Lots of apples-to-oranges comparisons here, of course. A single processor operation may or may not correspond to a floating point operation, let alone to what a GPU does or to a TEPS. But we are in the land of order-of-magnitude estimates.)

Brain energy use

We can turn things around: what does the energy use of human brains tell us about their computational capacity?

Ralph Merkle calculated back in 1989 that, given 10 watts of usable energy per human brain and a cost of 5\cdot 10^{-15} J for each jump past a node of Ranvier, the brain can perform about 2\cdot 10^{15} such operations per second. He estimated this to be roughly equal to the number of synaptic operations, ending up with 10^{13} to 10^{16} operations per second.

A calculation I overheard at a seminar by Karlheinz Meier argued that the brain uses 20 W of power, has 100 billion neurons firing at about 1 Hz at 10^{-10} J per action potential, plus 10^{15} synapses receiving signals at about 1 Hz at 10^{-14} J per synaptic transmission. One can also do it from the bottom up: there are about 10^9 ATP molecules used per action potential and 10^5 per synaptic transmission; at 10^{-19} J per ATP this gives 10^{-10} J per action potential and 10^{-14} J per synaptic transmission. Both approaches converge on the same rough numbers, which he used to argue that we need much better hardware scaling if we ever want to simulate brains at this level of detail.
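
A minimal sketch of that arithmetic, using the round numbers above:

```python
# Back-of-envelope check of the seminar estimate quoted above.
neurons   = 1e11     # neurons, each firing at roughly 1 Hz
synapses  = 1e15     # synapses, receiving signals at roughly 1 Hz
E_spike   = 1e-10    # J per action potential
E_synapse = 1e-14    # J per synaptic transmission
E_ATP     = 1e-19    # J of usable energy per ATP molecule (order of magnitude)

top_down = neurons * 1.0 * E_spike + synapses * 1.0 * E_synapse
bottom_up_spike   = 1e9 * E_ATP    # 10^9 ATP per action potential
bottom_up_synapse = 1e5 * E_ATP    # 10^5 ATP per synaptic transmission

print(f"total power: {top_down:.0f} W")                  # ~20 W, the brain's budget
print(f"per spike:   {bottom_up_spike:.1e} J")           # ~1e-10 J
print(f"per synaptic event: {bottom_up_synapse:.1e} J")  # ~1e-14 J
```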

Digging deeper into neural energetics, maintaining resting potentials accounts for about 28% (neurons) and 10% (glia) of the total brain metabolic cost, the actual spiking activity for about 13%, and transmitter release/recycling plus calcium movement for about 1%. Note how this is not too far from the equipartition in Meier’s estimate. Total brain metabolism also constrains the neural firing rate: more than 3.1 spikes per second per neuron would consume more energy than the brain normally uses (and this is likely an optimistic estimate). The brain simply cannot afford to fire more than about 1% of its neurons at the same time, so it likely relies on rather sparse representations.

Unmyelinated axons require about 5 nJ/cm to transmit action potentials. The brain gets around this cost through optimized currents, myelination (which also speeds up transmission, at the price of an increased error rate), and likely many clever coding strategies. Biology is clearly strongly energy constrained. In addition, cooling 20 W through a blood flow of 750-1000 ml/min is relatively tight, given that the arterial blood is already at body temperature.

20 W divided by 3.0\cdot 10^{-21} J (the Landauer limit kT\ln(2) at body temperature) suggests a limit of no more than about 7\cdot 10^{21} irreversible operations per second. While a huge number, it is just a few orders of magnitude higher than many of the estimates we have been juggling so far. If we distribute these operations across 100 billion neurons (which is at least within an order of magnitude of the real number) we get about 70 billion operations per second per neuron; if we instead treat synapses (about 8000 per neuron) as the loci we get about 8 million operations per second per synapse.
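
A quick sketch of that arithmetic (assuming kT\ln(2) at 310 K):

```python
import math

k_B = 1.380649e-23
T_body = 310.0                                # body temperature, K
landauer = k_B * T_body * math.log(2)         # ~3e-21 J per irreversible bit

brain_power = 20.0                            # W
neurons = 1e11
synapses = neurons * 8000                     # ~8000 synapses per neuron

ops = brain_power / landauer
print(f"Landauer-limited budget: {ops:.1e} irreversible ops/s")   # ~7e21
print(f"per neuron:  {ops / neurons:.1e} ops/s")                  # ~7e10
print(f"per synapse: {ops / synapses:.1e} ops/s")                 # ~8e6
```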

Running the full Hodgkin-Huxley neuron model at 1 ms resolution requires about 1200 flops per step, or 1.2 million flops per second of simulation. If we treat a synapse as a compartment (very reasonable IMHO) that is just about 7 times the Landauer limit: if the neural simulation used multi-digit precision and erased a few digits per operation we would bump into the Landauer limit straight away. Synapses are actually fairly computationally efficient! At least at body temperature: cryogenically cooled computers could of course do way better. And as Izhikevich, the originator of the 1200 flops estimate, loves to point out, his own model requires just 13 flops: maybe we do not need to model the ion currents like Hodgkin-Huxley to get the right behavior, and can suddenly shave off two orders of magnitude.
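
For concreteness, here is a minimal sketch of the Izhikevich simple model with regular-spiking parameters; the per-millisecond update is just a handful of arithmetic operations, which is where the 13 flops figure comes from:

```python
# Minimal Izhikevich simple-model neuron (regular-spiking parameters),
# integrated at 1 ms resolution as in the flop-count comparison above.
a, b, c, d = 0.02, 0.2, -65.0, 8.0
v, u = -65.0, b * -65.0
I = 10.0                           # constant input current (illustrative value)
spike_times = []

for t in range(1000):              # one simulated second at 1 ms resolution
    if v >= 30.0:                  # spike: record it, reset v and the recovery variable u
        spike_times.append(t)
        v, u = c, u + d
    v += 0.5 * (0.04 * v * v + 5.0 * v + 140.0 - u + I)   # two half-steps for stability,
    v += 0.5 * (0.04 * v * v + 5.0 * v + 140.0 - u + I)   # as in Izhikevich's own code
    u += a * (b * v - u)

print(f"{len(spike_times)} spikes in one simulated second")
```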

Information dissipation in neural networks

Just how much information is lost in neural processing?

A brain is a dynamical system changing internal state in a complicated way (let us ignore sensory inputs for the time being). If we start in a state somewhere within some predefined volume of state-space, over time the state will move to other states – and the initial uncertainty will grow. Eventually the possible volume we can find the state in will have doubled, and we will have lost one bit of information.

Things are a bit more complicated, since the dynamics can contract along some dimensions and diverge along others: this is described by the Lyapunov exponents. If the trajectory has exponent \lambda in some direction, nearby trajectories diverge like |x_a(t)-x_b(t)| \propto |x_a(0)-x_b(0)| e^{\lambda t} in that direction. In a dissipative dynamical system the sum of the exponents is negative: in total, trajectories move towards some attractor set. However, if at least one of the exponents is positive, this can be a strange attractor that the trajectories endlessly approach, yet locally they diverge from each other and gradually mix. So if you can only measure the state with a fixed precision at some point in time, you cannot tell with certainty where the trajectory was before (because the contraction due to the negative exponents has thrown away information about the starting location), nor exactly where it will be on the attractor in the future (because the positive exponents amplify your current uncertainty).

A measure of the information loss is the Kolmogorov-Sinai entropy, which is bounded by K \leq \sum_{\lambda_i>0} \lambda_i, the positive Lyapunov exponents (equality holds for Axiom A attractors). So if we calculate the KS-entropy of a neural system, we can estimate how much information is being thrown away per unit of time.
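
As a toy illustration (not a neural model, just the standard chaotic logistic map), here is a sketch of estimating a positive Lyapunov exponent numerically; for this map the KS-entropy equals the exponent, about \ln(2), i.e. roughly one bit of initial-condition information lost per iteration:

```python
import math, random

def lyapunov_logistic(r=4.0, n=100_000):
    """Estimate the Lyapunov exponent of x -> r*x*(1-x) by averaging log|f'(x)| along a trajectory."""
    x = random.random()
    total, count = 0.0, 0
    for _ in range(n):
        deriv = abs(r * (1.0 - 2.0 * x))
        if deriv > 0.0:                   # skip the measure-zero point x = 1/2
            total += math.log(deriv)
            count += 1
        x = r * x * (1.0 - x)
        if x <= 0.0 or x >= 1.0:          # reseed if roundoff lands exactly on the fixed point
            x = random.random()
    return total / count

lam = lyapunov_logistic()
print(f"Lyapunov exponent: {lam:.3f} nats/iteration (ln 2 = {math.log(2):.3f})")
print(f"KS-entropy:        {lam / math.log(2):.2f} bits lost per iteration")
```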

Monteforte and Wolf looked at one simple neural model, the theta-neuron (presentation). They found a KS-entropy of roughly 1 bit per neuron and spike over a fairly large range of parameters. Given the above estimate of about one spike per second per neuron, this gives an overall information loss of about 10^{11} bits/s in the brain, which is about 3\cdot 10^{-10} W at the Landauer limit – by this account, we are some 11 orders of magnitude away from thermodynamic perfection. In this picture we should regard each action potential as corresponding to roughly one irreversible yes/no decision: not too unreasonable a claim.
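
A quick sketch of the arithmetic behind that figure:

```python
import math

k_B = 1.380649e-23
landauer = k_B * 310.0 * math.log(2)      # ~3e-21 J per bit at body temperature

bits_per_s = 1e11                         # ~1 bit per spike, ~1 Hz, ~1e11 neurons
floor = bits_per_s * landauer
print(f"thermodynamic floor: {floor:.1e} W")                               # ~3e-10 W
print(f"orders of magnitude below 20 W: {math.log10(20.0 / floor):.1f}")   # ~10.8
```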

I began trying to estimate the entropy and Lyapunov exponents of the Izhikevich network to check for myself, but decided to leave that for another post. The reason is that calculating Lyapunov exponents from time series is a pretty delicate business, especially when there is noise, and the KS-entropy is even more noise-sensitive. In research on EEG data (where people have looked at the dimension of chaotic attractors and their entropies to distinguish different mental states and epilepsy) an approximate entropy measure is used instead.

It is worth noticing that one can look at cognition as a system with large-scale dynamics that has one entropy (corresponding to shifts between different high-level mental states) and microscale dynamics with a different entropy (corresponding to the neural information processing). It is a safe bet that the biggest entropy costs are on the microscale (fast, numerous, simple states) rather than the macroscale (slow, few but complex states).

Energy of AI

Where does this leave us in regards to the energy requirements of artificial intelligence?

Assuming the same amount of energy is needed for a human and machine to do a cognitive task is a mistake.

First, as the Izhikevich neuron demonstrates, it might be that judicious abstraction easily saves two orders of magnitude of computation and energy.

Special purpose hardware can also save one or two orders of magnitude; using general purpose processors for fixed computations is very inefficient. This is of course why GPUs are so useful for many things: in many cases you just want to perform the same action on many pieces of data rather than different actions on the same piece.

But more importantly, the level at which a task is implemented matters. Sorting or summing a list of a thousand elements is a fast in-memory operation for a computer, but an hour-long task for a human: because of our mental architecture we have to represent the information in a far more redundant and slow way, not to mention perform individual actions on a timescale of seconds. A computer sort uses a tight representation more like our low-level neural circuitry. I have no doubt one could string together biological neurons to perform a sort or sum operation quickly, but cognition happens on a higher, more general level of the system (intriguing speculations about idiot savants aside).

While we have reason to admire brains, they are also unable to perform certain very useful computations. In artificial neural networks we often employ non-local matrix operations like inversion to calculate optimal weights: these computations cannot be performed locally in a distributed manner. Gradient descent algorithms such as backpropagation are biologically unrealistic, but clearly very successful in deep learning. There is no shortage of papers describing various clever approximations that would allow a more biologically realistic system to perform similar operations – in fact, the brain may well be doing something like this – but artificial systems can perform them directly, and, using low-level hardware intended for them, very efficiently.
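
As a concrete (toy) illustration of such a non-local operation, here is a minimal backpropagation sketch for a two-layer network learning XOR; the error is sent backwards through the transposed weight matrix, something hardware does trivially but a synapse has no obvious access to:

```python
import numpy as np

# A minimal two-layer network trained by explicit backpropagation on a toy task (XOR).
# The biologically awkward step is the backward pass: the error is propagated through
# the transposed weight matrix W2.T, a non-local operation that hardware does trivially.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    h = sigmoid(X @ W1 + b1)              # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)   # backward pass: chain rule on the squared error
    d_h = (d_out @ W2.T) * h * (1 - h)    # <-- non-local use of the transposed weights
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))           # should approach [0, 1, 1, 0]
```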

When a deep learning system learns object recognition in an afternoon it beats a human baby by many months. When it learns to do analogies from 1.6 billion text snippets it beats human children by years. Yes, these are small domains, yet they are domains that are very important for humans and would presumably develop as quickly as possible in us.

Biology has many advantages in robustness and versatility, not to mention energy efficiency. But it is also fundamentally limited by what can be built out of cells with a particular kind of metabolism, by the fact that organisms need to build themselves from the inside, and by the need to solve problems posed by a particular biospheric environment.

Conclusion

Unless one thinks the human way of thinking is the most optimal or most easily implemented one, we should expect de novo AI to make use of different, potentially very compressed and fast, processes. (Brain emulation makes sense if one either cannot figure out how else to do AI, or wants to copy extant brains for their properties.) Hence the cost of brain computation is merely an existence proof that such systems are possible – the same mental tasks could well be done by systems that are far less or far more efficient.

In the end, we may try to estimate fundamental energy costs of cognition to bound AI energy use. If human-like cognition takes a certain number of bit erasures per second, we get a bound from Landauer’s principle (ignoring reversible computing, of course). But as the above discussion has shown, the computation that actually matters may be happening at the level of a few high-level representations rather than billions of neural firings: until we actually understand intelligence we cannot say. And by that point the question is moot anyway.

Many people have the intuition that the cautious approach is always to state “things won’t work”. But this mixes up caution with conservatism (or even reaction). A better cautious approach is to recognize that “things may work”, and then start checking the possible consequences. If we want a reassuring constraint on why certain things cannot happen, it needs to be tighter than these energy estimates.

Strategies for not losing things

A dear family member has an annoying tendency to lose things – sometimes causing a delaying “But where did I put the keys?” situation when leaving home, sometimes brief panics when wallets go missing, and sometimes causing losses of valuable gadgets. I rarely lose things. This got me thinking about the difference in our approaches. Here are some strategies I seem to follow to avoid losing things.

This is intended more as an exploration of the practical philosophy and logistics of everyday life than an ultimate manual for never losing anything ever.

Since we spend so much of our time in everyday life, the returns of some time spent considering and improving it are large, even if the improvement is about small things.

Concentric layers

I think one of my core principles is to keep important stuff on me. I always keep my phone in my breast pocket, my glasses on my nose, my wallet and keys in my pocket. On travel, my passport is there too. My laptop, travel/backup drive, business cards, umbrella, USB connectors etc. are in the backpack I carry around or have in the same room. If I had a car, I would have tools, outdoor equipment and some non-perishable snacks in the trunk. Books I care about are in my own bookshelf, other books distributed across my office or social environment.

The principle is to ensure that the most important, irreplaceable things are under your direct personal control. The probability of losing stuff goes up as it moves away from our body.

Someone once said: “You do not own the stuff you cannot carry at a dead run.” I think there is a great deal of truth to that. If things turn pear-shaped I should in principle be able to bail out with what I have on me.

A corollary is that one should reduce the number of essential things one has to carry around. Fewer things to keep track of. I was delighted when my clock and camera merged with my phone. The more I travel, the less I pack. Fewer but more essential things also increase the cost of losing them: there is a balance to be struck between resilience and efficiency.

Layering also applies to our software possessions. Having files in the cloud is nice as long as the cloud is up, the owner of the service behaves nicely to you, and you can access it. Having local copies on a hard drive means that you have access regardless. This is extra important for those core software possessions like passwords, one time pads, legal documents or proofs of identity – ideally they should be on a USB drive or other offline medium we carry at all times, making access hard for outsiders.

For information, redundant remote backup copies also work great (a friend lost 20 years of files to a burglar – her backup hard drives were next to the computer, so they were stolen too). But backups are very rarely accessed: they form a very remote layer. Make sure the backup system actually works before trusting it: as a general rule you want ways to notice when you have lost something, but remote possessions can often quietly slip away.

Minimax

Another useful principle, foreshadowed above, is minimax: minimize the max loss. Important stuff should be less likely to be lost than less important stuff. The amount of effort I put into thinking up what could go wrong and what to do about it should be proportional to the importance of the thing.

Hence, think about what the worst possible consequence of a loss would be. A lost pen: annoying if there isn’t another nearby. A lost book: even more annoying. A lost key: lost time, frustration and quite possibly locksmith costs. A lost credit card: the hassle of getting it blocked and replaced, and the lost chance to buy things. Identity theft: major hassle, long-term problems. Lost master passwords: loss of online identity and perhaps reputation. Loss of my picture archive: loss of part of my memory.

The rational level of concern should be below the probability of loss times the consequences. We can convert consequences into time: consider how long it would take to get a new copy of a book, get a new credit card, or handle somebody hijacking your Facebook account (plus the time lost to worry and annoyance). The prior probability of losing a book may be about 1% per year, while identity theft has an incidence of about 0.2% per year. So if identity theft would cost you a month of work, it is probably worth spending a dedicated hour each year minimizing the risk.
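
A toy version of that calculation, treating a month of hassle as roughly 500 waking hours (an assumed conversion):

```python
def worthwhile_effort(p_per_year, cost_hours):
    """Upper bound on yearly prevention effort: the expected loss in hours."""
    return p_per_year * cost_hours

# ~0.2%/year identity theft incidence; a month of hassle taken as ~500 waking hours.
print(f"{worthwhile_effort(0.002, 500):.1f} hours/year")   # ~1 hour
```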

Remember XKCD’s nice analysis of how long it is rational to spend optimizing daily tasks.

Things you have experience of losing a few times obviously require more thought. Are there better ways of carrying them, could you purchase suitable fasteners – or is their loss actually acceptable? Conversely, can the damage from the loss be mitigated? Spare keys or email accounts are useful to have.

There is of course a fuzzy border between conscientiousness, rationality and worry.

Scenarios

I have the habit of running through scenarios about possible futures whenever I do things. “If I leave this thing here, will I find it again?” “When I come to the airport security check, how do I minimize the number of actions I will need to take to put my stuff in the trays?” The trick is to use these scenarios to detect possible mistakes or risks before they happen, especially in the light of the minimax principle.

Sometimes they lead to interesting realizations: a bank ID device was stored right next to a card with a bank ID code in my wallet: while not enough to give a thief access to my bank account, together they would get past two of the three security steps (the remaining one being a not-too-strong password). I decided to move the device to another location near my person, making the loss of both the code and the device in a robbery or a lost wallet significantly less probable.

The point is not to plan for everything, but to patch holes in your everyday habits over time as you notice them. Again, there is a fine line between forethought and worrying. I think the defining feature is emotional valence: if the thought makes you upset rather than “OK, let’s not do that”, then you are worrying and should stop. The same goes for scenarios you cannot actually do anything about.

When something does go wrong, we should think through how not to end up like that again. But it also helps to notice when something nearly went wrong, and treat that as seriously as if it had gone wrong – there are many more teachable instances of that kind than of actual mistakes, although they are often less visible.

Poka-yoke

I love the idea of mistake-proofing my life. The trick is to set things up so my behaviour will be shaped to avoid the mistake: the standard example is putting your keys in your shoes or on the door handle, so that it is nearly impossible to leave home without them.

Often a bit of forethought can help construct poka-yokes. When washing clothes, the sound of the machine reminds me that it is ongoing, but when it ends there is no longer a reminder that I should hang the clothes – so I place coat hangers on the front door handle (for a morning wash) or in my bed (for an evening wash) to make it impossible to leave/go to bed without noticing the extra task.

Another mini-strategy is gestalt: put things together on a tray, so that they all get picked up together or a missing key item becomes easy to notice. Here the tray acts as a frame forcing a grouping of the objects. Seeing it can also act as a trigger (see below). For travel, I have ziploc bags with the currency, travel plugs and bus cards relevant for different destinations.

Habits

One of the main causes of loss is attention/working memory lapses: you put the thing there for a moment, intending to put it back where it belongs, but something interferes and you forget where you placed it.

The solution is not really to try to pay more attention since it is very hard to do all the time (although training mindfulness and actually noticing what you do is perhaps healthy for other reasons). The trick is to ensure that other unconscious processes – habits – help fix the situation. If you always put stuff where it should be by habit, it does not matter that your attention lapses.

The basic approach is to have a proper spot where one habitually puts the particular thing. First decide on the spot, and start putting it there. Then continue doing this. Occasional misses are OK, the point is to make this an automatic habit.

Many things have two natural homes: their active home when you bring them with you, and  a passive home when they are not on you. Glasses on your nose or on your nightstand, cellphone in your pocket or in the charger. As long as you have a habit of putting them in the right home when you arrive at it there is no problem. Even if you miss doing that, you have a smaller search space to go through when trying to find them.

One can also use triggers, concrete cues that start the action. When going to bed, put the wedding ring on the bed stand. When leaving the car, once you are one pace beyond it, turn and lock the door. The trick is that the cue can be visualized beforehand as leading to the action: imagine it vividly, ensuring that they are linked. Every time you follow the trigger with the action, the link gets stronger.

Another cause of lost items is variability: habits are all about doing the same thing again and again, typically at the same time and place. But I have a fairly variable life where I travel, change my sleep times and do new things at a fairly high rate. Trigger habits can still handle this, if the trigger is tied to some reliable action like waking up in the morning, shaving or going to bed – look out for habits that only make sense when you are at home or doing your normal routine.

One interesting option is negative habits: things you never do. The superstition that it is bad luck to put the keys on the table serves as a useful reminder not to leave them in a spot where they are more likely to be forgotten. It might be worth cultivating a few similar personal superstitions to inhibit actions like leaving wallets on restaurant counters (visualize how the money will flee to the proprietor).

Checklists might be overkill, but they can be very powerful. They can be habits, or literal rituals with prescribed steps. The habit could just be a check that the everyday objects on your list are with you, triggered whenever you leave a location. I am reminded of the old joke about the man who always made the sign of the cross when leaving a brothel. A curious neighbour eventually asks him why he, such an obviously religious man, regularly visits such a place. The man responds: “Just checking: glasses, testicles, wallet and watch.”

Personality

I suspect a lot just hinges on personality. I typically do run scenarios of every big and small possibility through my head, I like minimizing the number of things I need to carry, and as I age I become more conscientious (a common change in personality, perhaps due to learning, perhaps due to biological changes). Others have other priorities with their brainpower.

But we should be aware of who we are and what our quirks are, and take steps based on this knowledge.

The goal is to maximize utility and minimize hassle, not to be perfect. If losing things actually doesn’t bother you or prevent you from living a good life this essay is fairly irrelevant. If you spend too much time and effort preventing possible disasters, then a better time investment is to recognize this and start living a bit more.

Dampening theoretical noise by arguing backwards

Science has the adorable headline “Tiny black holes could trigger collapse of universe—except that they don’t”, dealing with the paper Gravity and the stability of the Higgs vacuum by Burda, Gregory & Moss. The paper argues that quantum black holes would act as seeds for vacuum decay, making metastable Higgs vacua unstable. The point of the paper is that some new and interesting mechanism prevents this from happening. The more obvious explanation, that we are already in the stable true vacuum, seems to be problematic since apparently we should expect a far stronger Higgs field there. There is of course plenty of ongoing theoretical debate about the correctness and consistency of the assumptions in the paper.

Don’t mention the war

What I found interesting is the treatment of existential risk in the Science story and how the involved physicists respond to it:

Moss acknowledges that the paper could be taken the wrong way: “I’m sort of afraid that I’m going to have [prominent theorist] John Ellis calling me up and accusing me of scaremongering.”

Ellis is indeed grumbling a bit:

As for the presentation of the argument in the new paper, Ellis says he has some misgivings that it will whip up unfounded fears about the safety of the LHC once again. For example, the preprint of the paper doesn’t mention that cosmic-ray data essentially prove that the LHC cannot trigger the collapse of the vacuum—”because we [physicists] all knew that,” Moss says. The final version mentions it on the fourth of five pages. Still, Ellis, who served on a panel to examine the LHC’s safety, says he doesn’t think it’s possible to stop theorists from presenting such argument in tendentious ways. “I’m not going to lose sleep over it,” Ellis says. “If someone asks me, I’m going to say it’s so much theoretical noise.” Which may not be the most reassuring answer, either.

There is a problem here in that physicists are so fed up with popular worries about accelerator-caused disasters – worries that are often second-hand scaremongering that takes time and effort to counter (with marginal effects) – that they downplay or want to avoid talking about things that could feed the worries. Yet avoiding topics is rarely the best idea for finding the truth or looking trustworthy. And given the huge importance of existential risk even when it is unlikely, it is probably better to try to tackle it head-on than skirt around it.

Theoretical noise

“Theoretical noise” is an interesting concept. Theoretical physics is full of papers considering all sorts of bizarre possibilities, some of which imply existential risks from accelerators. In our paper Probing the Improbable we argue that attempts to bound accelerator risks have a problem: the non-zero probability of errors overshadows the probability they are trying to bound – an argument that there is zero risk really only establishes that there is about a 99% chance of zero risk and a 1% chance of some risk. But those risk arguments were assumed to be based on fairly solid physics; their errors would be slips in logic, modelling or calculation rather than an entirely wrong theory. Theoretical papers often make up new theories, and their empirical support can be very weak.

An argument that there is some existential risk with probability P actually means that, if the probability that the argument is right is Q, the risk is PQ plus (1-Q) times whatever the risk is if the argument is wrong (which we can usually assume to be close to what we would have thought had there been no argument in the first place). Since the vast majority of theoretical physics papers never go anywhere, we can safely assume Q to be rather small, perhaps around 1%. So a paper arguing for P=100% is not evidence that the sky is falling, merely that we ought to look more closely at a potentially nasty possibility that is likely to turn out to be a dud. Most alarms are false alarms.
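
A minimal sketch of that bookkeeping (the numbers are purely illustrative):

```python
def posterior_risk(P, Q, prior):
    """Risk estimate after hearing an argument claiming risk P, where the argument
    itself is correct with probability Q and `prior` is the risk if it is wrong."""
    return Q * P + (1 - Q) * prior

# A paper claiming certain doom (P = 1) but with only ~1% chance of being right,
# set against a tiny prior, mostly just nudges the estimate up to about Q.
print(posterior_risk(P=1.0, Q=0.01, prior=1e-6))   # ~0.01
```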

However, it is easier to generate theoretical noise than to resolve it. I have spent some time working on a new accelerator risk scenario, “dark fire”, trying to bound the likelihood that it is real and threatening. Doing that well turned out to be surprisingly hard: the scenario was far more slippery than expected, so ruling it out completely proved very difficult (don’t worry, I think we amassed enough arguments to show the risk to be pretty small). This is of course the main reason for the annoyance of physicists: it is easy for anyone to claim there is a risk, but then it is up to the physics community to do the laborious work of showing that the risk is small.

The vacuum decay issue has likely been dealt with by the Tegmark and Bostrom paper: were the decay probability high we should expect to be early observers, but we are fairly late ones. Hence the risk per year in our light-cone is small (less than one in a billion). Whatever is going on with the Higgs vacuum, we can likely trust it… if we trust that paper. Again we have to deal with the problem of an argument based on applying anthropic probability (a contentious subject where intelligent experts disagree on fundamentals) to models of planet formation (based on elaborate astrophysical models and observations): it is reassuring, but it does not reassure as strongly as we might like. It would be good to have a few backup papers giving different arguments bounding the risk.

Backward theoretical noise dampening?

The lovely property of the Tegmark and Bostrom paper is that it covers a lot of different risks with the same method. In a way it handles a sizeable subset of the theoretical noise at the same time. We need more arguments like this. The cosmic ray argument is another good example: it is agnostic on what kind of planet-destroying risk is perhaps unleashed from energetic particle interactions, but given the past number of interactions we can be fairly secure (assuming we patch its holes).

One shared property of these broad arguments is that they tend to start with the risky outcome and argue backwards: if something were to destroy the world, what properties does it have to have? Are those properties possible or likely given our observations? Forward arguments (if X happens, then Y will happen, leading to disaster Z) tend to be narrow, and depend on our model of the detailed physics involved.

While the probability that a forward argument is correct might be higher than that of a more general backward argument, it only reduces our concern about one risk rather than an entire group. An argument about why quantum black holes cannot be formed in an accelerator is limited to that possibility, and will not tell us anything about risks from Q-balls. So a backward argument covering 10 possible risks, even if only half as likely to be true as a forward argument covering one risk, is going to be more effective at reducing our posterior risk estimate and dampening theoretical noise.
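
A toy comparison along those lines, with invented numbers: ten independent risks with small priors, a sharp forward argument addressing one of them versus a weaker backward argument covering all ten:

```python
# Toy comparison (all numbers invented for illustration): ten independent risks,
# each with a prior probability of 1e-6 of being real.
priors = [1e-6] * 10

def residual(prior, Q):
    """Posterior risk after an argument, correct with probability Q, claiming the risk is absent."""
    return Q * 0.0 + (1 - Q) * prior

forward = residual(priors[0], 0.9) + sum(priors[1:])   # sharp argument, covers risk #1 only
backward = sum(residual(p, 0.45) for p in priors)      # weaker argument, covers all ten

print(f"forward:  total residual risk {forward:.2e}")   # ~9.1e-06
print(f"backward: total residual risk {backward:.2e}")  # ~5.5e-06
```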

In a world where we had endless intellectual resources we would of course find the best possible arguments to estimate risks (and then for completeness and robustness the second best argument, the third, … and so on). We would likely use very sharp forward arguments. But in a world where expert time is at a premium and theoretical noise high we can do better by looking at weaker backwards arguments covering many risks at once. Their individual epistemic weakness can be handled by making independent but overlapping arguments, still saving effort if they cover many risk cases.

Backwards arguments also have another nice property: they help deal with the “ultraviolet cut-off problem”. There is an infinite number of possible risks, most of which are exceedingly bizarre and a priori unlikely. But since there are so many of them, it seems we ought to spend an inordinate effort on the crazy ones, unless we find a principled way of drawing the line. Starting from a form of disaster and working backwards to probability bounds neatly circumvents this: the production of planet-eating dragons is among the things covered by the cosmic ray argument.

Risk engineers will of course recognize this approach: it is basically a form of fault tree analysis, where we reason about bounds on the probability of a fault. The forward approach is more akin to failure mode and effects analysis, where we try to see what can go wrong and how likely it is. While fault trees cannot cover every possible initiating problem (all those bizarre risks) they are good for understanding the overall reliability of the system, or at least the part being modelled.

Deductive backwards arguments may be the best theoretical noise reduction method.