All models are wrong, some are useful – but how can you tell?

Our whitepaper about the systemic risk of risk modelling is now out. The topic is how the risk modelling process can make things worse – and ways of improving things. Cognitive bias meets model risk and social epistemology.

The basic story is that in insurance (and many other domains) people use statistical models to estimate risk, and then use these estimates plus human insight to come up with prices and decisions. It is well known (at least in insurance) that there is a measure of model risk due to the models not being perfect images of reality; ideally the users will take this into account. However, in reality (1) people tend to be swayed by models, (2) they suffer from various individual and collective cognitive biases that make their model usage imperfect and correlate their errors, and (3) the markets for models, industrial competition and regulation lead to fewer models being used than there could be. Together this creates a systemic risk: everybody makes correlated mistakes and decisions, which means that when a bad surprise happens – a big exogenous shock like a natural disaster or a burst of hyperinflation, or some endogenous trouble like a reinsurance spiral or financial bubble – the joint risk of a large chunk of the industry failing is much higher than it would have been if everybody had had independent, uncorrelated models. Cue bailouts or skyscrapers for sale.
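To make the mechanism concrete, here is a minimal Monte Carlo sketch in Python. All the numbers (error sizes, capital margin, failure threshold) are made up for illustration and are not from the whitepaper; the point is simply that a shared model error makes mass failure far more likely than independent errors of the same size.

```python
# A minimal Monte Carlo sketch (all numbers made up, not from the whitepaper):
# N firms reserve capital based on their own loss estimate; their estimation
# errors share a common component whose weight we vary.
import numpy as np

rng = np.random.default_rng(0)
n_firms, n_trials = 20, 100_000
true_loss = 1.0   # actual loss when the rare event hits (arbitrary units)
margin = 1.3      # each firm reserves its estimated loss times this margin

def mass_failure_probability(rho):
    """P(more than half the firms fail), with correlation rho between model errors."""
    common = rng.normal(0.0, 0.3, size=(n_trials, 1))        # industry-wide model error
    idio = rng.normal(0.0, 0.3, size=(n_trials, n_firms))    # firm-specific model error
    error = np.sqrt(rho) * common + np.sqrt(1 - rho) * idio  # same marginal variance, varying correlation
    reserves = margin * true_loss * np.exp(error)            # reserves based on (mis)estimated loss
    failures = reserves < true_loss                          # the event hits; under-reserved firms fail
    return np.mean(failures.sum(axis=1) > n_firms / 2)

for rho in (0.0, 0.5, 1.0):
    print(f"error correlation {rho}: P(>half of firms fail) ≈ {mass_failure_probability(rho):.3f}")
```

With independent errors the chance of more than half the firms failing at once is negligible; with fully shared errors it equals the chance any one firm is badly wrong.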

Note that this is a generic problem. Insurance is just unusually self-aware about its limitations (a side effect of convincing everybody else that Bad Things Happen, not to mention seeing the rest of the financial industry running into major trouble). When we use models the model itself (the statistics and software) is just one part: the data fed into the model, the processes of building and tuning the model, how people use it in their everyday work, how the output leads to decisions, and how the eventual outcomes become feedback to the people involved – all of these factors are important parts in making model use useful. If there is no or too slow feedback people will not learn what behaviours are correct or not. If there are weak incentives to check errors of one type, but strong incentives for other errors, expect the system to become biased towards one side. It applies to climate models and military war-games too.

The key thing is to recognize that model usefulness is not something that is directly apparent: it requires a fair bit of expertise to evaluate, and that expertise is also not trivial to recognize or gain. We often compare models to other models rather than reality, and a successful career in predicting risk may actually be nothing more than good luck in avoiding rare but disastrous events.

What can we do about it? We suggest a scorecard as a first step: comparing oneself to some ideal modelling process is a good way of noticing where one could find room for improvement. The score does not matter as much as digging into one’s processes and seeing whether they have cruft that needs to be fixed – whether it is following standards mindlessly, employees not speaking up, basing decisions on single models rather than more broad views of risk, or having regulators push one into the same direction as everybody else. Fixing it may of course be tricky: just telling people to be less biased or to do extra error checking will not work, it has to be integrated into the organisation. But recognizing that there may be a problem and getting people on board is a great start.

In the end, systemic risk is everybody’s problem.

Dampening theoretical noise by arguing backwards

Science has the adorable headline Tiny black holes could trigger collapse of universe—except that they don’t, dealing with the paper Gravity and the stability of the Higgs vacuum by Burda, Gregory & Moss. The paper argues that quantum black holes would act as seeds for vacuum decay, making metastable Higgs vacua unstable; its point is that some new and interesting mechanism prevents this from happening. The more obvious explanation that we are already in the stable true vacuum seems to be problematic, since apparently we should expect a far stronger Higgs field there. There are of course plenty of open theoretical issues about the correctness and consistency of the paper’s assumptions.

Don’t mention the war

What I found interesting is the treatment of existential risk in the Science story and how the involved physicists respond to it:

Moss acknowledges that the paper could be taken the wrong way: “I’m sort of afraid that I’m going to have [prominent theorist] John Ellis calling me up and accusing me of scaremongering.”

Ellis is indeed grumbling a bit:

As for the presentation of the argument in the new paper, Ellis says he has some misgivings that it will whip up unfounded fears about the safety of the LHC once again. For example, the preprint of the paper doesn’t mention that cosmic-ray data essentially prove that the LHC cannot trigger the collapse of the vacuum—”because we [physicists] all knew that,” Moss says. The final version mentions it on the fourth of five pages. Still, Ellis, who served on a panel to examine the LHC’s safety, says he doesn’t think it’s possible to stop theorists from presenting such argument in tendentious ways. “I’m not going to lose sleep over it,” Ellis says. “If someone asks me, I’m going to say it’s so much theoretical noise.” Which may not be the most reassuring answer, either.

There is a problem here in that physicists are so fed up with popular worries about accelerator-caused disasters – worries that are often second-hand scaremongering that takes time and effort to counter (with marginal effects) – that they downplay or want to avoid talking about things that could feed the worries. Yet avoiding topics is rarely the best idea for finding the truth or looking trustworthy. And given the huge importance of existential risk even when it is unlikely, it is probably better to try to tackle it head-on than skirt around it.

Theoretical noise

“Theoretical noise” is an interesting concept. Theoretical physics is full of papers considering all sorts of bizarre possibilities, some of which imply existential risks from accelerators. In our paper Probing the Improbable we argue that attempts to bound accelerator risks have a problem: the non-zero probability of errors in the argument overshadows the probability it is trying to bound. An argument that there is zero risk really only establishes something like a 99% chance of zero risk and a 1% chance of some risk. But those risk arguments were assumed to be based on fairly solid physics; their errors would be slips in logic, modelling or calculation rather than reliance on an entirely wrong theory. Theoretical papers often propose new theories, and their empirical support can be very weak.

An argument that there is some existential risk with probability P actually means that, if Q is the probability that the argument is right, the total risk is PQ plus (1-Q) times whatever the risk is if the argument is wrong (which we can usually assume to be close to what we would have thought had there been no argument in the first place). Since the vast majority of theoretical physics papers never go anywhere, we can safely assume Q to be rather small, perhaps around 1%. So a paper arguing for P=100% isn’t evidence the sky is falling, merely that we ought to look more closely at a potentially nasty possibility that is likely to turn out to be a dud. Most alarms are false alarms.
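The bookkeeping is trivial, but worth writing down explicitly (illustrative numbers only):

```python
# Sketch of the "theoretical noise" arithmetic above (illustrative numbers only).
def effective_risk(P_claimed, Q_argument_right, P_background):
    """Risk once we account for the chance that the theoretical argument is wrong."""
    return P_claimed * Q_argument_right + P_background * (1 - Q_argument_right)

# A paper claiming certain doom (P = 1), given ~1% credence in its theory,
# against a tiny background risk estimate:
print(effective_risk(1.0, 0.01, 1e-6))  # ~0.01: worth a closer look, not a falling sky
```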

However, it is easier to generate theoretical noise than to resolve it. I have spent some time working on a new accelerator risk scenario, “dark fire”, trying to bound the likelihood that it is real and threatening. Doing that well turned out to be surprisingly hard: the scenario was far more slippery than expected, and ruling it out completely proved very difficult (don’t worry, I think we amassed enough arguments to show the risk to be pretty small). This is of course the main reason for the annoyance of physicists: it is easy for anyone to claim there is risk, but then it is up to the physics community to do the laborious work of showing that the risk is small.

The vacuum decay issue has likely been dealt with by the Tegmark and Bostrom paper: were the decay probability high we should expect to be early observers, but we are fairly late ones. Hence the risk per year in our light-cone is small (less than one in a billion). Whatever is going on with the Higgs vacuum, we can likely trust it… if we trust that paper. Again we have to deal with the problem of an argument based on applying anthropic probability (a contentious subject where intelligent experts disagree on fundamentals) to models of planet formation (based on elaborate astrophysical models and observations): it is reassuring, but it does not reassure as strongly as we might like. It would be good to have a few backup papers giving different arguments bounding the risk.

Backward theoretical noise dampening?

The lovely property of the Tegmark and Bostrom paper is that it covers a lot of different risks with the same method. In a way it handles a sizeable subset of the theoretical noise at the same time. We need more arguments like this. The cosmic ray argument is another good example: it is agnostic on what kind of planet-destroying risk is perhaps unleashed from energetic particle interactions, but given the past number of interactions we can be fairly secure (assuming we patch its holes).

One shared property of these broad arguments is that they tend to start with the risky outcome and argue backwards: if something were to destroy the world, what properties does it have to have? Are those properties possible or likely given our observations? Forward arguments (if X happens, then Y will happen, leading to disaster Z) tend to be narrow, and depend on our model of the detailed physics involved.

While the probability that a forward argument is correct might be higher than that of the more general backward arguments, it only reduces our concern for one risk rather than an entire group. An argument about why quantum black holes cannot be formed in an accelerator is limited to that possibility, and will not tell us anything about risks from Q-balls. So a backwards argument covering 10 possible risks but only half as likely to be true as a forward argument covering one risk is going to be more effective in reducing our posterior risk estimate and dampening theoretical noise.
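A toy calculation, with numbers that are mine rather than anything from the paper, shows the trade-off:

```python
# Toy numbers (mine, not the post's): ten candidate risks with equal priors.
# A forward argument rules out one risk and is right with probability q_f;
# a backward argument rules out all ten but is only half as likely to be right.
def remaining_risk(n_risks, prior, n_covered, q_argument):
    covered = n_covered * prior * (1 - q_argument)  # residual risk on the covered cases
    uncovered = (n_risks - n_covered) * prior       # uncovered risks keep their prior
    return covered + uncovered

prior, q_forward = 1e-6, 0.9
print(remaining_risk(10, prior, 1, q_forward))       # forward argument:  ~9.1e-6
print(remaining_risk(10, prior, 10, q_forward / 2))  # backward argument: ~5.5e-6
```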

In a world where we had endless intellectual resources we would of course find the best possible arguments to estimate risks (and then for completeness and robustness the second best argument, the third, … and so on). We would likely use very sharp forward arguments. But in a world where expert time is at a premium and theoretical noise high we can do better by looking at weaker backwards arguments covering many risks at once. Their individual epistemic weakness can be handled by making independent but overlapping arguments, still saving effort if they cover many risk cases.

Backwards arguments also have another nice property: they help with the “ultraviolet cut-off problem”. There is an infinite number of possible risks, most of which are exceedingly bizarre and a priori unlikely. But since there are so many of them, it seems we ought to spend an inordinate effort on the crazy ones, unless we find a principled way of drawing the line. Starting from a form of disaster and working backwards on probability bounds neatly circumvents this: production of planet-eating dragons is among the things covered by the cosmic ray argument.

Risk engineers will of course recognize this approach: it is basically a form of fault tree analysis, where we reason about bounds on the probability of a fault. The forward approach is more akin to failure mode and effects analysis, where we try to see what can go wrong and how likely it is. While fault trees cannot cover every possible initiating problem (all those bizarre risks) they are good for understanding the overall reliability of the system, or at least the part being modelled.
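For concreteness, here is a minimal fault-tree-style bound; the event structure and probabilities below are entirely invented, but they show how reasoning backwards from the top event bounds it without enumerating every bizarre initiating mechanism:

```python
# Entirely invented structure and numbers, just to show the shape of the bound:
# the top event needs (initiator A OR initiator B) AND a failed safeguard.
p_initiator_a = 1e-4      # assumed upper bound on basic event A
p_initiator_b = 1e-5      # assumed upper bound on basic event B
p_safeguard_fails = 1e-3  # assumed upper bound on the safeguard failing

p_any_initiator = p_initiator_a + p_initiator_b  # OR gate, union bound
p_top = p_any_initiator * p_safeguard_fails      # AND gate, assuming independence
print(f"bound on top event probability: {p_top:.2e}")  # ~1.1e-07
```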

Deductive backwards arguments may be the best theoretical noise reduction method.

The end of the worlds

George Dvorsky has a piece on Io9 about ways we could wreck the solar system, where he cites me in a few places. This is mostly for fun, but I think it links to an important existential risk issue: what conceivable threats have big enough spatial reach to threaten an interplanetary or even star-faring civilization?

This matters, since most existential risks we worry about today (like nuclear war, bioweapons, global ecological/societal crashes) only affect one planet. But if existential risk is the answer to the Fermi question, then the peril has to strike reliably. If it is one of the local ones it has to strike early: a multi-planet civilization is largely immune to the local risks. It will not just be distributed, but it will almost by necessity have fairly self-sufficient habitats that could act as seeds for a new civilization if they survive. Since it is entirely conceivable that we could have invented rockets and spaceflight long before discovering anything odd about uranium or how genetics work it seems unlikely that any of these local risks are “it”. That means that the risks have to be spatially bigger (or, of course, that xrisk is not the answer to the Fermi question).

Of the risks mentioned by George, physics disasters are intriguing, since they might irradiate solar systems efficiently. But the reliability of them being triggered before interstellar spread seems problematic. Stellar engineering, stellification and orbit manipulation may be issues, but they hardly happen early – lots of time to escape. Warp drives and wormholes are also likely late activities, and do not seem to be reliable as extinctors. These are all still relatively localized: while able to irradiate a largish volume, they are not fine-tuned to cause damage and do not follow fleeing people. Dangers from self-replicating or self-improving machines seem to be a plausible, spatially unbounded risk that could pursue the fleeing (but also problematic for the Fermi question, since now the machines are the aliens). Attracting malevolent aliens may actually be a relevant risk: assuming von Neumann probes, one can set up global warning systems or “police probes” that maintain whatever rules the original programmers desire, and it is not too hard to imagine ruthless or uncaring systems that could enforce the great silence. Since early civilizations have the chance to spread to enormous volumes given a certain level of technology, this might matter more than one might a priori believe.

So, in the end, it seems that anything releasing a dangerous energy effect will only affect a fixed volume. If it has energy E and one can survive it below a deposited energy fluence e, then if it radiates in all directions the safe range is r = \sqrt{E/(4 \pi e)} \propto \sqrt{E} – one needs to get into supernova ranges to sterilize interstellar volumes. If the release is instead beamed into a fraction f of the sky, the range increases as \propto \sqrt{1/f} but only that fraction of directions is threatened; the threatened volume scales as \propto f (1/f)^{3/2} = 1/\sqrt{f}, so beaming buys reach at the price of covering fewer directions.
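Plugging illustrative numbers into the isotropic formula (the survivable fluence e = 10^7 J/m² is just a guess, roughly two hours of direct sunlight delivered at once, and the energies are order-of-magnitude figures):

```python
# Illustrative only: e is an assumed survivable fluence of 1e7 J/m^2, and the
# energies are rough order-of-magnitude figures.
import math

LIGHT_YEAR = 9.46e15  # metres

def dangerous_range(E, e=1e7):
    """Radius within which the deposited fluence exceeds e, for an isotropic release."""
    return math.sqrt(E / (4 * math.pi * e))

for name, E in [("large H-bomb (~1e17 J)", 1e17),
                ("dinosaur-killer impact (~1e23 J)", 1e23),
                ("supernova (~1e44 J)", 1e44)]:
    r = dangerous_range(E)
    print(f"{name}: r ≈ {r:.2e} m ≈ {r / LIGHT_YEAR:.2e} light-years")
```

Only the supernova-scale release reaches interstellar distances; the other two stay well within planetary scales.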

Self-sustaining effects are worse, but they need to cross space: if their range is smaller than interplanetary distances they may destroy a planet but not anything more. For example, a black hole merely absorbs a planet or star (releasing a nasty energy blast) but does not continue sucking up stuff. Vacuum decay on the other hand has indefinite range in space and moves at lightspeed. Accidental self-replication is unlikely to be spaceworthy unless it starts among space-going machinery; here deliberate design is a more serious problem.

The speed of threat spread also matters. If it is fast enough no escape is possible. However, many of the replicating threats will have sublight speed and could hence be escaped by sufficiently paranoid aliens. The issue here is whether lightweight and hence faster replicators can always outrun larger aliens; given the accelerating expansion of the universe it might be possible to outrun them by being early enough, but our calculations do suggest that the margins look very slim.

The more information you have about a target, the better you can in general harm it. If you have no information, merely randomizing it with enough energy/entropy is the only option (and if you have no information of where it is, you need to radiate in all directions). As you learn more, you can focus resources to make more harm per unit expended, up to the extreme limits of solving the optimization problem of finding the informational/environmental inputs that cause desired harm (=hacking). This suggests that mindless threats will nearly always have shorter range and smaller harms than threats designed by (or constituted by) intelligent minds.

In the end, the most likely type of actual civilization-ending threat for an interplanetary civilization looks like it needs to be self-replicating/self-sustaining, able to spread through space, and have at least a tropism towards escaping entities. The smarter, the more effective it can be. This includes both nasty AI and replicators, but also predecessor civilizations that have infrastructure in place. Civilizations cannot be expected to reliably do foolish things with planetary orbits or risky physics.

[Addendum: Charles Stross has written an interesting essay on the risk of griefers as a threat explanation. ]

[Addendum II: Robin Hanson has a response to the rest of us, where he outlines another nasty scenario. ]

 

“A lump of cadmium”

Cadmium crystal and metal. Image from Wikimedia Commons, creator Alchemist-hp 2010.

Stuart Armstrong sent me this email:

I have a new expression: “a lump of cadmium”.

Background: in WW2, Heisenberg was working on the German atomic reactor project (was he bad? see the fascinating play “Copenhagen” to find out!). His team almost finished a nuclear reactor. He thought that a reaction with natural uranium would be self-limiting (spoiler: it wouldn’t), so had no cadmium control rods or other means of stopping a chain reaction.

But, no worries: his team had “a lump of cadmium” that they could toss into the reactor if things got out of hand. So, now, if someone has a level of precaution woefully inadequate to the risk at hand, I will call it a lump of cadmium.

(Based on German Nuclear Program Before and During World War II by Andrew Wendorff)

It reminds me of the story that SCRAM (the emergency nuclear reactor shutdown) stands for “Safety Control Rod Axe Man”, a guy standing next to the rope suspending the control rods with an axe, ready to cut it. It has been argued that the emergency measure was actually a liquid cadmium solution instead. Still, in the US project they did not assume the reaction was self-stabilizing.

Going back to the primary citation, we read:

To understand it we must say something about Heisenberg’s concept of reactor design. He persuaded himself that a reactor designed with natural uranium and, say, a heavy water moderator would be self-stabilizing and could not run away. He noted that U(238) has absorption resonances in the 1-eV region, which means that a neutron with this kind of energy has a good chance of being absorbed and thus removed from the chain reaction. This is one of the challenges in reactor design—slowing the neutrons with the moderator without losing them all to absorption. Conversely, if the reactor begins to run away (become supercritical), these resonances would broaden and neutrons would be more readily absorbed. Moreover, the expanding material would lengthen the mean free paths by decreasing the density and this expansion would also stop the chain reaction. In short, we might experience a nasty chemical explosion but not a nuclear holocaust. Whether Heisenberg realized the consequences of such a chemical explosion is not clear. In any event, no safety elements like cadmium rods were built into Heisenberg’s reactors. At best, a lump of cadmium was kept on hand in case things threatened to get out of control. He also never considered delayed neutrons, which, as we know, play an essential role in reactor safety. Because none of Heisenberg’s reactors went critical, this dubious strategy was never put to the test.
(Jeremy Bernstein, Heisenberg and the critical mass. Am. J. Phys. 70, 911 (2002); http://dx.doi.org/10.1119/1.1495409)

This reminds me a lot of the modelling errors we discuss in the “Probing the improbable” paper, especially of course the (ahem) energetic error that gave Castle Bravo a yield of 15 megatons instead of the predicted 4-8 megatons. Leaving Li(7) out of the calculations meant leaving out the major contributor of energy.

Note that Heisenberg did have an argument for his safety, in fact two independent ones! The problem might have been that he was thinking in terms of mostly U(238) and then getting any kind of chain reaction going would be hard, so he was biased against the model of explosive chain reactions (but as the Bernstein paper notes, somebody in the project had correct calculations for explosive critical masses). Both arguments were flawed when dealing with reactors enriched in U(235). Coming at nuclear power from the perspective of nuclear explosions on the other hand makes it natural to consider how to keep things from blowing up.

We may hence end up with lumps of cadmium because we approach a risk from the wrong perspective. The antidote should always be to consider the risks from multiple angles, ideally a few adversarial ones. The more energy, speed or transformative power we expect something to produce, the more we should scrutinize the existing safeguards for being mere lumps of cadmium. If we think our project does not have that kind of power, we should both question why we are even doing it, and whether it might actually have some hidden critical mass.

The 12 threats of xrisk

The Global Challenges Foundation has (together with FHI) produced a report on the 12 risks that threaten civilization.


And, yes, the use of “infinite impact” grates on me – it must be interpreted as “so bad that it is never acceptable”, a ruin probability, or something similar, not that the disvalue diverges. But the overall report is a great start on comparing and analysing the big risks. It is worth comparing it with the WEF global risk report, which focuses on people’s perceptions of risk; this one aims at assessing which risks are most likely and most impactful. Both try to give reasons and ideas for how to reduce the risks. Hopefully they will also motivate others to make even sharper analyses – this is a first sketch of the domain, rather than a perfect roadmap. Given the importance of the issues, it is a bit worrying that it has taken us this long.

Canine mechanics and banking

There are some texts that are worth reading, even if you are outside the group they are intended for. Here is one that I think everybody should read at least the first half of:

Andrew G Haldane and Vasileios Madouros: The dog and the frisbee

Haldane, the Executive Director for Financial Stability at the Bank of England, brings up the topic of how to act in situations of uncertainty, and the role of our models of reality in making the right decision. How complex should the models be in the face of a complex reality? The answer, based on the literature on heuristics, biases and modelling, and on the practical world of financial disasters, is simple: they should be simple.

Using overly complex models means that they tend to overfit scarce data, weight data randomly, require significant effort to set up – and tend to promote overconfidence. Haldane then moves on to his own main topic, banking regulation. Complex regulations – which are in a sense models of how banks ought to act – have the same problem, and also act as incentives for gaming the rules to gain advantage. The end result is an enormous waste of everybody’s time and effort that does not give the desired reduction of banking risk.
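A minimal overfitting sketch, my example rather than anything from Haldane’s paper: fit ten noisy points with a two-parameter line and with a ten-parameter polynomial, then score both on fresh data from the same process.

```python
# My example, not Haldane's: ten noisy points, fit with a 2-parameter line
# and with a 10-parameter polynomial, then scored on fresh data.
import numpy as np

rng = np.random.default_rng(1)
true_f = lambda x: 0.5 * x + 1.0                    # the real (simple) relationship
x_train = np.linspace(-1, 1, 10)
y_train = true_f(x_train) + rng.normal(0, 0.2, 10)
x_test = np.linspace(-1, 1, 200)
y_test = true_f(x_test) + rng.normal(0, 0.2, 200)

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```

The big polynomial nails the training points and does worse on data it has not seen; the simple line does the opposite.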

It is striking how many people have been seduced by the siren call of complex regulation or models, thinking their ability to include every conceivable special case is a sign of strength. Finance and financial regulation are full of smart people who make the same mistake, as is science. If there is one thing I learned in computational biology, it is that your model had better produce more nontrivial results than it has parameters.

But coming up with simple rules or models is not easy: knowing what to include and what not to include requires expertise and effort. In many ways this may be why people like complex models, since there are no tricky judgement calls.

 

Threat reduction Thursday

Today seems to have been “doing something about risk”-day. Or at least, “let’s investigate risk so we know what we ought to do”-day.
First, the World Economic Forum launched their 2015 risk perception report. (Full disclosure: I am on the advisory committee)
Second, Elon Musk donated $10M to AI safety research. Yes, this is quite related to the FLI open letter.
Today has been a good day. Of course, it will be an even better day if and when we get actual results in risk mitigation.

Existential risk and hope

Toby and Owen started 2015 by defining existential hope, the opposite of existential risk.

In their report “Existential Risk and Existential Hope: Definitions” they look at definitions of existential risk. The initial definition was just the extinction of humanity, but that leaves out horrible scenarios where humanity suffers indefinitely, or situations where there is a tiny chance of humanity escaping. Chisholming their way through successive definitions they end up with:

An existential catastrophe is an event which causes the loss of most expected value.

They also get the opposite:

An existential eucatastrophe is an event which causes there to be much more expected value after the event than before.

So besides existential risk, where the value of our future can be lost, there is existential hope: the chance that our future is much greater than we expect. Just as we should work hard to avoid existential threats, we should explore to find potential eucatastrophes that vastly enlarge our future.

Infinite hope or fear

One problem with the definitions I can see is that expectations can be undefined or infinite, making “loss of most expected value” undefined. That would require potentially unbounded value, and that the probability of reaching a certain level has a sufficiently heavy tail. I guess most people would suspect the unbounded potential to be problematic, but at least some do think there could be infinite value somewhere in existence (I think this is what David Deutsch believes). The definition ought to work regardless of what kind of value structure exists in the universe.

There are a few approaches in Nick’s “Infinite ethics” paper. However, there might be simpler approaches based on stochastic dominance. Cutting off the upper half of a Cauchy distribution does change the situation despite the expectation remaining undefined (and in this case, changes the balance between catastrophe and eucatastrophe completely). It is clear that there is now more probability on the negative side: one can do a (first order) stochastic ordering of the distributions, even though the expectations diverge.
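A quick numerical sketch of the point (illustrative only):

```python
# Illustrative only: Cauchy sample means wander (the expectation is undefined),
# yet dropping the upper half visibly shifts probability mass downward, which
# first-order stochastic dominance captures.
import numpy as np

rng = np.random.default_rng(2)
samples = rng.standard_cauchy(1_000_000)
lower_half = samples[samples < 0]   # cut off the upper half (median is 0)

for n in (10**3, 10**5, 10**6):
    print(f"mean of first {n:>7} samples: {samples[:n].mean():8.2f}")  # no convergence to speak of

# Empirical CDFs are ordered: at every threshold the truncated distribution
# has at least as much mass below it as the original.
for t in (-10, -1, 0, 1, 10):
    print(f"P(X <= {t:>3}): original {np.mean(samples <= t):.3f}, "
          f"lower half {np.mean(lower_half <= t):.3f}")
```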

There are many kinds of stochastic orderings; which ones make sense likely depends on the kind of value one uses to evaluate the world. Toby and Owen point out that this is what actually does the work in the definitions: without a somewhat precise value theory, existential risk and hope will not be well defined. Just as there may be unknown threats and opportunities, there might be surprise twists in what is valuable – we might in the fullness of time discover that some things that looked innocuous or worthless were actually far more weighty than we thought, perhaps so much that they were worth the world.

 

 

Cool risks outside the envelope of nature

How do we apply the precautionary principle to exotic, low-probability risks?

The CUORE collaboration at the INFN Gran Sasso National Laboratory recently set a world record by cooling a cubic-meter, 400 kg copper vessel down to 6 milliKelvins: it was the coldest cubic meter in the universe for over 15 days. Yay! Applause! (And the rest of this post should in no way be construed as a criticism of the experiment.)

Cold and weird risks

I have not been able to dig up the project documentation, but I would be astonished if there was any discussion of risk due to the experiment. After all, cooling things is rarely dangerous. We do not have any physical theories saying there could be anything risky here. No doubt there are risk assessments of the practical hazards of liquid nitrogen or helium somewhere, but no analysis of any basic physics risks.

Compare this to the debates around the LHC, where critics at least could point to papers suggesting that strangelets, small black holes and vacuum decay were theoretically possible. Yet the LHC could argue back that particle processes like those occurring in the accelerator were already naturally occurring almost everywhere: if the LHC was risky, we ought to see plenty of explosions in the sky. Leaving aside the complications of correcting for anthropic bias, this kind of argument seems reasonably solid: if you do something that is within the envelope of what happens in the universe normally and there are no observed super-dangerous processes linked to it, then this activity is likely fine. We might wish for careful risk assessment, but given that the activity is already happening it can be viewed as just as benign as the normal activity of the universe.

However, the CUORE experiment is actually going outside of the envelope of what we think is going on in the universe. In the past, the universe has been hotter, so there would not have been any large masses at 6 milliKelvins. And with a 3 Kelvin background temperature, there would not be any natural objects this cold. (Since 1995 there have been small Bose-Einstein condensates in the hundred nanoKelvin range on Earth, but the argument is the same.)

How risky is it to generate such an outside of the envelope phenomenon? There is no evidence from the past. There is no cause for alarm given the known laws of physics. Yet this lack of evidence does not argue against risk either. Maybe there is an ice-9 like phase transition of matter below a certain temperature. Maybe it implodes into a black hole because of some macroscale quantum(gravity) effect. Maybe the alien spacegods get angry. There is an endless number of possible hypotheses that cannot be ruled out.

We might think that such “small theories” can safely be ignored. But we have some potential evidence that the universe may be riskier than it looks: the Fermi paradox, the apparent absence of alien intelligence. If we are alone, it is either because there are one or more steps in the evolution of life and intelligence that are very unlikely (the “great filter” is behind us), or there is a high likelihood that intelligence disappears without a trace (a future great filter). Now, we might freely assign our probabilities to (1) that there are aliens around, (2) that the filter is behind us, and (3) that it is ahead. However, given our ignorance we cannot rationally give zero probability to any of these possibilities, and probably not even give any of them less than 1% (since that is about the natural lowest error rate of humans on anything). Anybody saying one of them is less likely than one in a million is likely very overconfident. Yet a 1% risk of a future great filter implies a huge threat. It is a threat that not only reliably wipes out intelligent life, but also does it to civilizations aware of its potential existence!

We then have a slightly odd reason to be slightly concerned with experiments like CUORE. We know there is some probability that intelligence gets reliably wiped out. We know intelligence is likely to explore conditions not found in the natural universe. So a potential explanation could be that there is some threat in this exploration. The probability is not enormous – we might think the filter is behind us or the universe is teeming with aliens, and even if there is a future filter there are many possibilities for what it could be besides low-temperature physics – but nearly any non-infinitesimal probability multiplied by the value of our species (at least 7 billion lives) leads to an uncomfortably large risk.
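A back-of-envelope version, with numbers that are mine and purely illustrative:

```python
# Back-of-envelope, with numbers that are mine and purely illustrative.
p_future_filter = 0.01      # assumed probability that a future great filter exists
p_this_mechanism = 1e-6     # assumed share attributable to this kind of experiment
lives_at_stake = 7e9        # at least the present population

expected_lives_lost = p_future_filter * p_this_mechanism * lives_at_stake
print(expected_lives_lost)  # 70.0: a seemingly negligible probability, a non-negligible expected loss
```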

Precaution?

At this point the precautionary principle rears its stupid head (the ugly head is asleep). The stupid head argues that we should hence never do anything that is outside the natural envelope.

The ugly head would argue we should investigate before doing anything risky, but since in this case the empirical study is itself what causes the risk, the head would hence advise just trying out theoretical risk scenarios – not very useful given that we are dealing with something where all potential risk comes from scenarios unconstrained by evidence!

We cannot obey the stupid head much, since most human activity is about pushing the envelope. We are trying to have more and happier people than have ever existed in the universe before. Maybe that is risky (compare Stapledon’s Last and First Men, where it turned out to be dangerous to have too much intelligence in one spot), but it is practically hard to prevent, and this kind of open-ended “let’s not do anything that has not happened in the past” rule seems unreasonable given that most events are new ones and generally do not lead to disasters. But pushing the envelope into radically new directions does carry undefinable risk. We cannot avoid that. What we can do is to discuss whether we are willing to take on such hard-to-pin-down risk.

However, this example also shows a way precaution can break down. Nobody has, to my knowledge, worried about cooling down matter besides me. There is no concerned group urging precaution, since there is no empirical or normative reason to think there is anything wrong specifically with CUORE: we only have a general, Fermi paradox-induced, inchoate worry. Yet proper precaution requires considering weak possibilities. I suspect that most future big new disasters will turn out to have avoided precautionary considerations just because there was no obvious reason to invoke the principle.

Conclusion?

Many people are scared more by uncertainty than actual risk. But we cannot escape it. Especially if we want to reduce existential risk, which tends to be more uncertain than most. This little essay is about some of the really tricky limits to what we can know about new risks. We should expect them to be unexpected. And we should expect that the standard decision methods will not behave sensibly.

As for the CUORE team, I wish them the best of luck to find neutrinoless double beta decay. But they should keep an eye open for weird anomalies too – they have a chance to peek outside the envelope of the natural in a well controlled setting, and that is valuable.