Thanks for the razor, Bill!

I like the idea of a thanksgiving day, leaving out all the Americana turkeys, problematic immigrant-native relations and family logistics: just the moment to consider what really matters to you and why life is good. And giving thanks for intellectual achievements and tools makes eminent sense: this thanksgiving Sean Carroll gave thanks for the Fourier transform.

Inspired by this, I want to give thanks for Occam’s razor.

These days a razor in philosophy denotes a rule of thumb that allows one to eliminate something unnecessary or unlikely. Occam’s was the first: William of Ockham (ca. 1285-1349) stated “Pluralitas non est ponenda sine necessitate” (“plurality should not be posited without necessity”). Today we usually phrase it as “the simplest theory that fits is best”.

Principles of parsimony have been suggested for a long time; Aristotle had one, so did Maimonides and various other medieval thinkers. But let’s give Bill from Ockham the name in the spirit of Stigler’s law of eponymy.

Of course, it is not always easy to use. Can the many-worlds interpretation of quantum mechanics be shaved away? It posits an infinite number of worlds that we cannot interact with… except that it does so by taking the quantum mechanical formalism seriously (each possible world is assigned a probability) and not adding extra things like wavefunction collapse or pilot waves. In many ways it is conceptually simpler: just because there are a lot of worlds doesn’t mean they are wildly different. Somebody claiming there is a spirit world is doubling the amount of stuff in the universe, but claiming that there are a lot of ordinary worlds is not too different from claiming that there are a lot of planets.

Simplicity is actually quite complicated. One can argue about which theory has the fewest and most concise basic principles, but also about the number of kinds of entities it postulates. Not to mention why one should go for parsimony at all.

In my circles, we like to think of the principle in terms of Bayesian statistics and computational complexity. The more complex a theory is, the better it can typically fit known data – but it will also generalize worse to new data, since it overfits the first set of data points. Parsimonious theories have fewer degrees of freedom, so they cannot fit as well as complex theories, but they are less sensitive to noise and generalize better. One can operationalize the optimal balance using various statistical information criteria (AIC, which estimates the information lost when fitting, and BIC, which approximates the model’s marginal likelihood). And Solomonoff gave a version of the razor in theoretical computer science: for computable sequences of bits there exists a unique (up to choice of Turing machine) prior that promotes sequences generated by simple programs and has awesome powers of inference.
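To make the overfitting trade-off concrete, here is a minimal sketch in Python (the toy data and all parameter choices are mine, not from the post): it fits polynomials of increasing degree to noisy samples of a straight line and scores them with AIC and BIC under a Gaussian error model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a simple underlying line plus noise (purely illustrative).
n = 50
x = np.linspace(-1, 1, n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.3, size=n)

for degree in range(1, 8):
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    rss = np.sum(resid ** 2)
    # Number of fitted polynomial coefficients (counting the noise variance
    # as well would shift every score by the same amount).
    k = degree + 1
    # Gaussian log-likelihood at the maximum, up to an additive constant.
    log_lik = -0.5 * n * np.log(rss / n)
    aic = 2 * k - 2 * log_lik           # linear penalty per parameter
    bic = k * np.log(n) - 2 * log_lik   # harsher penalty as n grows
    print(f"degree {degree}: RSS={rss:.3f} AIC={aic:.1f} BIC={bic:.1f}")

# Higher-degree fits always reduce the residual sum of squares, but AIC and
# BIC start rising once the extra parameters stop paying for themselves.
```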

But in day-to-day life Occam works well, especially combined with a maximum probability principle (you are more likely to see likely things than unlikely ones; if you see hoofprints in the UK, think horses not zebras). A surprising number of people fall for the salient stories inherent in unlikely scenarios and then choose to ignore Occam (just think of conspiracy theories). If the losses from low-probability risks are great enough one should rationally focus on them, but then one must check one’s priors for such risks. Starting out with the possibilistic view that anything is possible (and hence that everything has a roughly equal chance) means that one becomes paranoid or frozen with indecision. Occam tells you to look for the simple, robust ways of reasoning about the world. When they turn out to be wrong, shift gears and come up with the next simplest thing.

Simplicity might sometimes be elegant, but that is not why we should choose it. To me it is the robustness that matters: given our biased, flawed thought processes and our limited and noisy data, we should not build too elaborate castles on those foundations.

 

TransVision 2014

My talk from TransVision 2014 is now up:

Indeed, all of the talks are there – thanks!

Some talks of note: Gabriel Dorthe’s talk introduced a nice taxonomy/map of transhumanism along the axes argumentation/fantasy and experimentation/speculation. It goes well together with James Hughes’ talk (not up when I write this) where he mapped out the various transhumanisms. David Wood gave a talk where he clearly mapped out concerns about inequality; not sure I agree with all parts, but it was a good overall structure for thinking.

Laurent Alexandre gave a great talk where among other things he pointed out how medical ethics may be dead (in the Nietzschean ‘God is dead’ sense) and is being replaced by code. Francesco Paolo Adorno argued that immortality and politics are opposed; I disagreed rather profoundly, but it is a good talk to start a conversation from. Marina Maestrutti gave a talk about the shift in transhumanism from the happy cyborg to pessimistic virtue-culturing: she has a good point, and I share some of the misgivings about the moral enhancement project, yet I do think the “xrisk is paramount” argument holds water and might force us to be a bit less happy-go-lucky about emerging tech. Vincent Billard gave a talk about why to become posthuman; I think he is short-selling the arguments in the transhumanist literature and overstating how good anti-enhancement arguments are, but his use of David Benatar’s argument that it may have been better never to have been born to argue (through an act of philosophical jiu-jitsu) in favor of posthumanity made me cheer!

Maël le Mée’s demonstration of the comfort organs from the Benway Institute was hilarious.

We have gone a long way with the conference from 20 guys in a hotel cellar in Weesp.

Contraire de l’esprit de l’escalier: enhancement and therapy

Yesterday I participated in a round-table discussion with professor Miguel Benasayag about the therapy vs. enhancement distinction at the TransVision 2014 conference. Unfortunately I could not get a word in edgewise, so it was not much of a discussion. So here are the responses I wanted to make but didn’t get the chance to: in a way this post is the opposite of l’esprit de l’escalier.

Enhancement: top-down, bottom-up, or sideways?

Do enhancements – whether implanted or not – represent a top-down imposition of order on the biosystem? If one accepts that view, one ends up with a dichotomy between that and bottom-up approaches where biosystems are trained or placed in a smart context that produces the desired outcome: unless one thinks imposing order is a good thing, one becomes committed to some form of naturalistic conservatism.

But this ignores something Benasayag brought up himself: the body and brain are flexible and adaptable. The cerebral cortex can reorganize to become a primary cortex for any sense, depending on which input nerve is wired up to it. My friend Todd’s implanted magnet has likely reorganized a small part of his somatosensory cortex to represent his new sense. This enhancement is not a top-down imposition of a desired cortical structure, nor a pure bottom-up training of the biosystem.

Real enhancements integrate, they do not impose a given structure. This also addresses concerns of authenticity: if enhancements are entirely externally imposed – whether through implantation or external stimuli – they are less due to the person using them. But if their function is emergent from the person’s biosystem, the device itself, and how it is being used, then it will function in a unique, personal way. It may change the person, but that change is based on the person.

Complex enhancements

Enhancements are often described as simple, individualistic, atomic things. But actual enhancements will be systems. A dramatic example was in my ears: since I am both French- and signing-impaired, I could listen to (and respond to) comments thanks to an enhancing system involving three skilled translators, a set of wireless headphones and microphones. This system was not just complex, but adaptive (translators know how to improvise, and we the users learned how to use it) and social (micro-norms for how to use it emerged organically).

Enhancements need a social infrastructure to function – both a shared, distributed knowledge of how and when to use them (praxis) and possibly a distributed functioning itself. A brain-computer interface is of little use without anybody to talk to. In fact, it is the enhancements that affect communication abilities that are the most powerful, both in the sense of enhancing cognition (by bringing brains together) and in changing how people are socially situated.

Cochlear implants and social enhancement

This aspect of course links to the issues in the adjacent debate about disability. Are we helping children by giving them cochlear implants, or are we undermining a vital deaf cultural community? The unique thing about cochlear implants is that they have this social effect and have to be used early in life for best results. In this case there is a tension between the need to integrate the enhancement with the hearing and language systems in an authentic way, a shift in which social community will be readily available, and concerns that this is just used to normalize away the problem of deafness from the top down. How do we resolve this?

The value of deaf culture is largely its value to members: there might be some intrinsic value to the culture, but this is true for every culture and subculture. I think it is safe to say there is a fairly broad consensus in western culture today that individuals should not sacrifice their happiness – and especially not be forced to do it – for the sake of the culture. It might be supererogatory: a good thing to do, but not something that can be demanded. Culture is for the members, not the other way around: people are ends, not means.

So the real issue is the social linkages and the normalisation. How do we judge the merits of being able to participate in social networks? One might be small but warm, another vast and mainstream. It seems that the one thing to avoid is not being able to participate in either. But this is not a technical problem as much as a problem of adaptation and culture. Once implants are good enough that learning to use them does not compete with learning signing, the real issue becomes the right social upbringing and the question of personal choices. This goes way beyond implant technology and becomes a question of how we set up social adaptation processes – a thick, rich and messy domain where we need to do much more work.

It is also worth considering the next step. What if somebody offered a communications device that would enable an entirely new form of communication, and hence social connection? In a sense we are gaining that through new media, but one could also consider something direct, like Egan’s TAP. As that story suggests, there might be rather subtle effects if people integrate new connections – in his case merely epistemic ones, but one could imagine entirely new forms of social links. How do we evaluate them? Especially since having a few pioneers test them tells us less than it does for non-social enhancements. That remains a big question.

Justifying off-label enhancement

A somewhat fierce question I got (and didn’t get to respond to) was how I could justify that I occasionally take modafinil, a drug intended for the treatment of narcolepsy.

There seems to be a deontological or intention-oriented view behind the question: the intentions behind making the drug should be obeyed. But many drugs have been approved for one condition and then had their use expanded to other conditions. Presumably aspirin use for cardiovascular conditions is not unethical. And pharma companies largely intend to make money by making medicines, so the deep intention might be trivial to meet. More generally, claiming the point of drugs is to help sick people (whom we have an obligation to help) doesn’t work, since there obviously exists drug use for non-sick people (sports medicine, for example). So unless many current practices are deeply unethical this line of argument doesn’t work.

What I think was the real source was the concern that my use somehow deprived a sick person of the drug. This is false, since I paid for it myself: the market is flexible enough to produce enough, and it was not a case of splitting a finite healthcare cake. The finiteness case might be applicable if we were talking about how much care my neighbours and I would get for our respective illnesses, and whether they had a claim on my behaviour through our shared healthcare cake. So unless my interlocutor thought my use was likely to cause health problems she would have to pay for, this line of reasoning fails.

The deep issue is of course whether there is a normatively significant difference between therapy and enhancement. I deny it. I think the goal of healthcare should not be health but wellbeing. Health is just an enabling instrumental thing. And it is becoming increasingly individual: I do not need more muscles, but I do benefit from a better brain for my life project. Yours might be different. Hence there is no inherent reason to separate treatment and enhancement: both aim at the same thing.

That said, in practice people make this distinction and use it to judge what care they want to pay for for their fellow citizens. But this will shift as technology and society change, and as I said, I do not think this is a normative issue. Political issue, yes; messy, yes; but not foundational.

What do transhumanists think?

One of the greatest flaws of the term “transhumanism” is that it suggests that there is something in particular all transhumanists believe. Benasayag made some rather sweeping claims about what transhumanists wanted to do (enhancement as embodying body-hate and a desire for control) that were most definitely not shared by the actual transhumanists in the audience or on stage. It is as problematic as claiming that all French intellectuals believe something: at best a loose generalisation, but most likely utterly misleading. But when you label a group – especially if they themselves are trying to maintain an official label – it becomes easier to claim that all transhumanists believe in something. Outsiders also do not see the sheer diversity inside, assuming everybody agrees with the few samples of writing they have read.

The fault here lies both in the laziness of outside interlocutors and in transhumanists not making their diversity clearer, perhaps by avoiding slapping the term “transhumanism” on every relevant issue: human enhancement is of interest to transhumanists, but we should be able to discuss it even if there were no transhumanists.

Manufacturing love cheaply, slowly and in an evidence-based way

Since I am getting married tomorrow, it is fitting that the Institute of Art and Ideas TV has just put my lecture from Hay-on-Wye this year online: Manufacturing love.

It was a lovely opportunity to sit in a very comfy armchair and feel like a real philosopher. Of course, armchair philosophy is what it is: tomorrow I will do an empirical experiment with N=2 (with ethics approval from mother-in-law). We’ll see how it works out.

While I suspect my theoretical understanding is limited and the biomedical tools I have written about are not available, there is actually some nice empirical research on what makes good wedding vows. My husband and I also went for a cheap, simple wedding for just the closest friends and family, which seems to be a good sign (the rest of my friends come to an informal party the day after). And it is probably a good sign that we got together in a slow and gradual way: we have very compatible personalities.

A fun project.

Back to the future: neurohacking and the microcomputer revolution

My talk at LIFT 2014 Basel, The Man-Machine: Are we living in the 1970s of brain hacking? (http://videos.liftconference.com/video/10528321/anders-sandberg-the-man-machine-are), can now be viewed online (slides (pdf)).

My basic thesis is that the late 70s and early 80s microcomputer revolution might be a useful analogy for what might be happening with neurohacking today: technology changes are enabling amateur and startup experimentation that will in the long run lead to useful, society-changing products that people grow up with and accept. Twenty years down the line we may be living in an enhanced society whose big neuro-players started up in the near future and became the Apple and Microsoft of enhancement.

A bit more detail:

In the 70s the emergence of microprocessors enabled microcomputers, far cheaper than the mainframes and desk-sized minicomputers that had come before. They were simple enough to be sold as kits to amateurs. The low threshold to entry led to the emergence of a hobbyist community, small start-ups and cluster formation (helped by the existence of pre-existing tech clusters like Silicon Valley). The result was intense growth, diversity and innovation, and a generation exposed to the technology, accepting it as cool or even ordinary, laying the foundation for later useful social and economic effects – it took 20+ years to actually have an effect on productivity! Some things take time, and integrating a new technology into society may take a generation.

Right now we have various official drivers for neural enhancement like concerns about an ageing society, chronic diseases, stress, lifelong learning and health costs. But the actual drivers of much of the bottom-up neurohacking seem to be exploration, innovation and the hacker ethos of taking something beyond its limits. Neurotechnologies may be leaving the confines of hospitals and becoming part of home or lifestyle technologies. Some, like enhancer drugs, are tightly surrounded by regulations and are likely to be heavily contested. Meanwhile many brain stimulation, neurofeedback and life monitoring devices exist in a regulatory vacuum and can evolve unimpeded. The fact that they are currently not very good is no matter: the 70s home computers were awful too, but they could do enough cool things to motivate users and innovation.

Why is there a neurotech revolution brewing? Part of it is technology advances: we already have platforms like computers, smartphones, wifi and Arduinos available, enabling easy construction of new applications. Powerful signal processing can be done onboard, data mining in the cloud. Meanwhile costs for these technologies have been falling fast, and crowdfunding is enabling new avenues of funding for niche applications. The emergence of social consumers allows fast feedback and customisation. Information sharing has become orders of magnitude more efficient since the 70s, and technical results in neuroscience can now easily be re-used by amateurs as soon as they are published (the hobbyists are unlikely to get fMRI anytime soon, but results gained through expensive methods may then be re-used using cheap rigs).

It seems likely that at least some of these technologies are going to become more acceptable through exposure. Especially since they fit with the move towards individualized health concepts, preventative medicine and self-monitoring. The difference between a FitBit, a putative sleep-enhancing phone app, Google Glass and a brain stimulator might be smaller in practice than one expects.

The technology is also going to cause a fair bit of business disruption, challenging traditional monopolies by enabling new platforms and new routes of distribution and access. The personal genomics battle between diagnostic monopolies and companies selling gene tests (not to mention individuals putting genetic information online) is a hint of what may come. Enhancement demands a certain amount of personalisation and fine-tuning to work, and might hint at a future of ‘drugs as services’ – drugs may be part of a system of measurement, genetic testing and training rather than standalone chemicals.

Still, hype kills. Neuroenhancement is still slowly making its way up the slope of the hype curve, and will sooner or later reach a hype peak (moral panics, cover of Time Magazine) followed by inevitable disappointment when it doesn’t change everything (or make its entrepreneurs rich overnight). Things tend to take time, and there will be consolidation of the neuromarket after the first bubble.

The long-run effects of a society with neuroenhancement are of course hard to predict. It seems likely that we are going to continue towards a broader health concept – health as an individual choice, linked to individual life projects but also perhaps more of an individual responsibility. Mindstates will be personal choices, with an infrastructure for training, safety and control. Issues of morphological freedom – the right to change oneself, and the right not to have change imposed – will be important social negotiations. Regulation can be expected to lag as always, and will usually regulate devices based on older and different systems.

Generally, we can expect that our predictions will be bad. But past examples show that early design choices may stay with us for a long time: the computer mouse has not evolved that far from its original uses at SRI and Xerox PARC. Designs also contain ideology: the personal microcomputer was very much a democratisation of computing. Neurohacking may be a democratisation of control over bodies and minds. We may hence want to work hard at shaping the future by designing well today.

My pet problem: Kim

Sometimes a pet selects you – or perhaps your home – and moves in. In my case, I have been adopted by a small tortoiseshell butterfly (Aglais urticae).

When it arrived last week I did the normal thing and opened the window, trying to shoo the little thing out. It refused. I tried harder. I caught it on my hand and tried to wave it out: I have never experienced a butterfly holding on for dear life like that. It very clearly did not want to fly off into the rainy cold of British autumn. So I relented and let it stay.

I call it Kim, since I cannot tell whether it is a male or female. It seems to only have four legs. Yes, I know this is probably the gayest possible pet.

Over the past days I have occasionally opened the window when it has been fluttering against it, but it has always quickly settled down on the windowsill when it felt the open air. It is likely planning to hibernate in my flat.

This poses an interesting ethical problem: I know that if it hibernates at my home it will likely not survive, since the environment is far too warm and dry for it. Yet it looks like it is making a deliberate decision to stay. In the case of a human I would have tried to inform them of the problems with their choice, but then would generally have accepted their decision under informed consent (well, maybe not letting them live in my home, but you get the idea, dear reader). But butterflies have just a few hundred thousand neurons: they do not ‘know’ many things. Their behaviour is largely preprogrammed instinct with little flexibility. So there is not any choice to be respected, just behaviour. I am a superintelligence relative to Kim, and I know what would be best for it. I ought to overcome my anthropomorphising of its behaviour and release it in the wild.

Yet if I buy this argument, what value does Kim have? Kim’s “life projects” are simple programs that do not have much freedom (beyond some chaotic behaviour) or complexity. So what does it matter whether they fail? It might matter in regard to me: I might show the virtue of compassion by making the gesture of saving it – except that it is not clear that it matters whether I do it by letting it out or feeding it orange juice. I might be benefiting in an abstract way from the aesthetic or intellectual pleasure of this tricky encounter – indeed, by blogging about it I am turning a simple butterfly life into something far beyond itself.

Another approach is of course to consider pain or other forms of suffering. Maybe insect welfare does matter (I sincerely hope it does not, since it would turn Earth into a hell-world). But again either choice is problematic: outside, Kim would likely become bird- or spider-food, or die from exposure. Inside, it will likely die from failed hibernation. In terms of suffering both seem about equally bad. If I was more pessimistic I might consider that killing Kim painlessly might be the right course of action. But while I do think we should minimize unnecessary suffering, I suspect – given the structure of the insect nervous system – that there is not much integrated experience going on there. Pain, quite likely, but not much phenomenology.

So where does this leave me? I cannot defend any particular line of action. So I just fall back on a behavioural program myself, the pet program – adopting individuals of other species, no doubt based on overly generalized child-rearing routines (which historically turned out to be a great boon to our species through domestication). I will give it fruit juice until it hibernates, and hope for the best.

Cool risks outside the envelope of nature

How do we apply the precautionary principle to exotic, low-probability risks?

The CUORE collaboration at the INFN Gran Sasso National Laboratory recently set a world record by cooling a cubic-meter, 400 kg copper vessel down to 6 millikelvin: it was the coldest cubic meter in the universe for over 15 days. Yay! Applause! (And the rest of this post should in no way be construed as a criticism of the experiment.)

Cold and weird risks

I have not been able to dig up the project documentation, but I would be astonished if there was any discussion of risk due to the experiment. After all, cooling things is rarely dangerous. We do not have any physical theories saying there could be anything risky here. No doubt there are risk assessments of the practical hazards of liquid nitrogen or helium somewhere, but no analysis of any basic physics risks.

Compare this to the debates around the LHC, where critics at least could point to papers suggesting that strangelets, small black holes and vacuum decay were theoretically possible. Yet the LHC side could argue back that particle processes like those occurring in the accelerator were already naturally occurring almost everywhere: if the LHC was risky, we ought to see plenty of explosions in the sky. Leaving aside the complications of correcting for anthropic bias, this kind of argument seems reasonably solid: if you do something that is within the envelope of what happens in the universe normally and there are no observed super-dangerous processes linked to it, then this activity is likely fine. We might wish for careful risk assessment, but given that the activity is already happening it can be viewed as just as benign as the normal activity of the universe.

However, the CUORE experiment is actually going outside of the envelope of what we think is going on in the universe. In the past the universe has been hotter, so there would not have been any large masses at 6 millikelvin. And with a 3 kelvin background temperature, there would not be any natural objects this cold. (Since 1995 there have been small Bose-Einstein condensates in the hundred nanokelvin range on Earth, but the argument is the same.)

How risky is it to generate such an outside of the envelope phenomenon? There is no evidence from the past. There is no cause for alarm given the known laws of physics. Yet this lack of evidence does not argue against risk either. Maybe there is an ice-9 like phase transition of matter below a certain temperature. Maybe it implodes into a black hole because of some macroscale quantum(gravity) effect. Maybe the alien spacegods get angry. There is an endless number of possible hypotheses that cannot be ruled out.

We might think that such “small theories” can safely be ignored. But we have some potential evidence that the universe may be riskier than it looks: the Fermi paradox, the apparent absence of alien intelligence. If we are alone, it is either because there are one or more steps in the evolution of life and intelligence that are very unlikely (the “great filter” is behind us), or there is a high likelihood that intelligence disappears without a trace (a future great filter). Now, we might freely assign our probabilities to (1) that there are aliens around, (2) that the filter is behind us, and (3) that it is ahead. However, given our ignorance we cannot rationally give zero probability to any of these possibilities, and probably not even give any of them less than 1% (since that is about the natural lowest error rate of humans on anything). Anybody saying one of them is less likely than one in a million is likely very overconfident. Yet a 1% risk of a future great filter implies a huge threat. It is a threat that not only reliably wipes out intelligent life, but also does it to civilizations aware of its potential existence!

We then have a slightly odd reason to be slightly concerned with experiments like CUORE. We know there is some probability that intelligence gets reliably wiped out. We know intelligence is likely to explore conditions not found in the natural universe. So a potential explanation could be that there is some threat in this exploration. The probability is not enormous – we might think the filter is behind us or the universe is teeming with aliens, and even if there is a future filter there are many possibilities for what it could be besides low-temperature physics – but nearly any non-infinitesimal probability multiplied by the value of our species (at least 7 billion lives) tends to lead to an unacceptably large risk.
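To make that multiplication explicit (the numbers here are purely illustrative, not estimates tied to CUORE): take the 1% floor on a future great filter mentioned above, and grant only a one-in-ten-thousand conditional chance that the filter works through this kind of out-of-envelope experimentation. The expected loss is still

$$
\mathbb{E}[\text{lives lost}] \gtrsim 0.01 \times 10^{-4} \times 7\times 10^{9} = 7{,}000 \text{ lives},
$$

orders of magnitude beyond what we would tolerate as collateral risk from an ordinary experiment, which is why even very speculative mechanisms are worth a moment's thought.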

Precaution?

At this point the precautionary principle rears its stupid head (the ugly head is asleep). The stupid head argues that we should hence never do anything that is outside the natural envelope.

The ugly head would argue we should investigate before doing anything risky; but since in this case the empirical study is what causes the risk, the head would hence advise just trying out theoretical risk scenarios – not very useful, given that we are dealing with something where all potential risk comes from scenarios unconstrained by evidence!

We cannot obey the stupid head much, since most human activity is about pushing the envelope. We are trying to have more and happier people than have ever existed in the universe before. Maybe that is risky (compare Stapledon’s Last and First Men, where it turned out to be dangerous to have too much intelligence in one spot), but it is practically hard to prevent, and this kind of open-ended “let’s not do anything that has not happened in the past” seems unreasonable given that most events are new ones and generally do not lead to disasters. But pushing the envelope into radically new directions does carry undefinable risk. We cannot avoid that. What we can do is to discuss whether we are willing to take on such hard-to-pin-down risk.

However, this example also shows a way precaution can break down. Nobody has, to my knowledge, worried about cooling down matter besides me. There is no concerned group urging precaution, since there is no empirical or normative reason to think there is anything wrong specifically with CUORE: we only have a general, Fermi paradox-induced, inchoate worry. Yet proper precaution requires considering weak possibilities. I suspect that most future big new disasters will turn out to have escaped precautionary consideration just because there was no obvious reason to invoke the principle.

Conclusion?

Many people are scared more by uncertainty than actual risk. But we cannot escape it. Especially if we want to reduce existential risk, which tends to be more uncertain than most. This little essay is about some of the really tricky limits to what we can know about new risks. We should expect them to be unexpected. And we should expect that the standard decision methods will not behave sensibly.

As for the CUORE team, I wish them the best of luck in finding neutrinoless double beta decay. But they should keep an eye open for weird anomalies too – they have a chance to peek outside the envelope of the natural in a well-controlled setting, and that is valuable.

Pass the pith helmet, we are going to do epistemology!

Euronews has a series on explorers. Most are the kind of people you expect – characters who go off to harsh locations. But they also interviewed me, since I do a kind of exploration too: Anders Sandberg: Explorer of the mind.

“Explorer of the mind” sounds pretty awesome. Although the actual land I try to explore is the abstract and ill-defined spaces of the future, ethics, epistemology and emerging technology.

When I gave the interview I noticed how easy it was to slip into the explorer metaphor: we have a pretty clear cultural idea of what explorers are supposed to be and how their adventures look. Explaining how you do something very abstract like come up with robust decision procedures for judging emerging technology is hard, so it is very easy (and ego-stroking) to describe it as exploration. I think there is some truth in the metaphor, though.

Exploration is basically about trying to gather information about a domain. Some exploration is about the nature of the domain itself, some about its topography/topology, some about the contents of the domain. Sometimes it is about determining the existence of a location or not. In philosophical and mathematical exploration we are partially creating the domain as we go, but because of consistency (and, sometimes, the need to fit with known facts about the world) it isn’t arbitrary. We might say it is procedurally generated (by a procedure we really would like to know more about!). Since the implications of any logical statement can go infinitely far and we have both limited mental resources and limited logical reach (as per Gödel), there will always be unknown and unknowable things out there. However, most of the unknown is boring and random. Real explorers try to find the important, useful, unique or just aesthetic things – something which again is really hard.

One of the things that fascinates me most about intellectual effort is that different domains have different “topographies”. Solving problems in discrete mathematics is very different from exploring probability or ethics. We know some corners are tough and others easy. Part of it is experience: people have been trying to understand consciousness or number theory for a long time, and we see that they have moved less far than the people in geometry. But part of it is also a “feel” for how the landscape works. Getting from one useful result to another requires different amounts of effort in logic (in my mind a mesa landscape where there are many plateaus of easy walking separated by immense canyons and deserts requiring real genius) and in future studies (a thick jungle of fog, mud and creepers where you cannot see far and it is a huge slog to even move, but there are fascinating organisms everywhere within arm’s reach). Maybe category theory is like an Arctic vista of abstraction where one can move far but there is almost nothing to see. I don’t know, I keep to the mathematical tropics of calculus and geometry.

Another angle of exploration is how much exploitation to do. We want to learn things because of some value of knowledge. Understanding the topography of a domain helps us to direct efforts, so it is valuable at the very least for that (we might of course also value the knowledge about the domain itself). In some domains like engineering or surgery exploitation is so valuable that it tends to dominate: inventors or exploratory engineers/surgeons are rare. I suspect that this means these domains are seriously under-explored: were more people to investigate their limits, topography and nature we would probably learn some very valuable things. Maybe this is the curse of being rich in resources: there is little need to go far, and domains that are less useful get explored more widely. However, when such a broadly explored domain becomes useful it might be colonized on a huge scale (consider the shifts from being just philosophy to becoming proper somewhat mapped disciplines like natural science, economics, psychology etc.)

Of course, some domains are underexplored simply because the tools and opportunities for exploration are expensive or few. We cannot try wild surgical ideas on that many patients, and space engineering is still rather expensive. Coming up with a way of reducing these limitations and opening up their explorative frontiers ought to have big effects. We have seen this happening in scientific disciplines when new instruments arrive (think of the microscope, telescope or computer), or when costs come down (think computers, sequencing). If we could do something similar in abstract domains we might discover awesome things.

One of the best reasons to go exploring is to recognize how fantastic the stuff we already know is. Out there in the unknown there are likely equally fantastic things waiting to be discovered – and there is much more unknown than known.

Anthropic negatives

Stuart Armstrong has come up with another twist on the anthropic shadow phenomenon. If existential risk needs two kinds of disasters to coincide in order to kill everybody, then observers will notice the disaster types to be anticorrelated.

The minimal example would be if each risk had a 50% independent chance of happening: then the observable correlation coefficient would be -0.5 (not -1, since there is a 1/3 chance of getting neither risk; the possible outcomes are: no event, risk A, and risk B). If the probability of no disaster happening is N/(N+2) and each single risk has probability 1/(N+2), then the correlation will be -1/(N+1).
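For completeness, here is the short calculation behind that formula. Treat A and B as indicator variables of the two risks, with the observable (post-conditioning) outcome probabilities given above:

$$
\begin{aligned}
E[A] = E[B] &= \tfrac{1}{N+2}, \qquad E[AB] = 0,\\
\mathrm{Cov}(A,B) &= 0 - \tfrac{1}{(N+2)^2} = -\tfrac{1}{(N+2)^2},\\
\mathrm{Var}(A) = \mathrm{Var}(B) &= \tfrac{1}{N+2}\Big(1-\tfrac{1}{N+2}\Big) = \tfrac{N+1}{(N+2)^2},\\
\rho &= \frac{\mathrm{Cov}(A,B)}{\sqrt{\mathrm{Var}(A)\,\mathrm{Var}(B)}} = -\frac{1}{N+1}.
\end{aligned}
$$

Setting N = 1 recovers the 50/50 example, where each of the three surviving outcomes has probability 1/3 and the correlation is -1/2.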

I tried a slightly more elaborate model. Assume X and Y to be independent power-law distributed disasters (say war and pestilence outbreaks), and that if X+Y is larger than seven billion no observers will remain to see the outcome. If we ramp up their size (by multiplying X and Y by some constant) we get the following behaviour (for alpha=3):

(Top) Correlation between observed power-law distributed independent variables multiplied by an increasing multiplier, where observation is contingent on their sum being smaller than 7 billion. Each point corresponds to 100,000 trials. (Bottom) Fraction of trials where observers were wiped out.
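Below is a minimal sketch of this kind of simulation (my own reconstruction, not the original script): X and Y are drawn from a Pareto-type power law, scaled by a multiplier, and the correlation is computed only over the "surviving" trials where X + Y stays below seven billion. The exact parametrization of the power law (here pdf proportional to x^-alpha above the multiplier) is an assumption on my part.

```python
import numpy as np

def shadowed_correlation(multiplier, alpha=3.0, trials=100_000, seed=1):
    """Correlation between two independent power-law disasters,
    conditioned on their sum staying below 7 billion (observer survival)."""
    rng = np.random.default_rng(seed)
    # Pareto-type variables with pdf ~ x^-alpha, minimum value = multiplier.
    x = multiplier * (1.0 + rng.pareto(alpha - 1.0, trials))
    y = multiplier * (1.0 + rng.pareto(alpha - 1.0, trials))
    survived = (x + y) < 7e9
    corr = np.corrcoef(x[survived], y[survived])[0, 1]
    wiped_out = 1.0 - survived.mean()
    return corr, wiped_out

for mult in [1e7, 1e8, 5e8, 1e9, 2e9]:
    corr, dead = shadowed_correlation(mult)
    print(f"multiplier {mult:.0e}: correlation {corr:+.3f}, wiped out {dead:.2%}")

# As the multiplier grows, more trials end in extinction and the surviving
# (observable) trials show an increasingly negative correlation.
```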

As the situation gets more deadly the correlation becomes more negative. This also happens when allowing the exponent to run from the very fat (alpha=1) to the thinner (alpha=3):

(Top) Correlation between observed independent power-law distributed variables (where observability requires their sum to be smaller than seven billion) for different exponents. (Bottom) Fraction of trials ending in existential disaster. Multiplier = 500 million.

The same thing also happens if we multiply X and Y.

I like the phenomenon: it gives us a way to look for anthropic effects by looking for suspicious anticorrelations. In particular, for the same variable the correlation ought to shift from near zero for small cases to negative for large cases. One prediction might be that periods of high superpower tension would be anticorrelated with mishaps in the nuclear weapon control systems. Of course, getting the data might be another matter. We might start by looking at extant companies with multiple risk factors like insurance companies and see if capital risk becomes anticorrelated with insurance risk at the high end.

Galactic duck and cover

How much do gamma ray bursts (GRBs) produce a “galactic habitable zone”? Recently the preprint “On the role of GRBs on life extinction in the Universe” by Piran and Jimenez has made the rounds, arguing that we are near (in fact, inside) the inner edge of the zone due to plentiful GRBs causing mass extinctions too often for intelligence to arise.

This is somewhat similar to James Annis and Milan Cirkovic’s phase transition argument, where a declining rate of supernovae and GRBs causes global temporal synchronization of the emergence of intelligence. However, that argument has a problem: energetic explosions are random, and the difference in extinctions between lucky and unlucky parts of the galaxy can be large – intelligence might well erupt in a lucky corner long before the rest of the galaxy is ready.

I suspect the same problem is true for the Piran and Jimenez paper, but spatially. GRBs are believed to be highly directional, with beams typically a few degrees across. If we have random GRBs with narrow beams, how much of the center of the galaxy do they miss?

I made a simple model of the galaxy, with a thin disk, thick disk and bar population. The model used cubical cells 250 parsecs on a side; somewhat crude, but likely good enough. Sampling random points based on star density, I generated GRBs. Based on Frail et al. 2001 I gave them lognormal energies and power-law distributed jet angles, directed randomly. Like Piran and Jimenez I assumed that a fluence above 100 kJ/m^2 would be extinction level. The rate of GRBs in the Milky Way is uncertain, but a high estimate seems to be one every 100,000 years. Running 1000 GRBs would hence correspond to 100 million years.
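To show the core of such a calculation, here is a heavily stripped-down sketch (my own toy reconstruction in Python, not the script behind the figures, which samples positions from the thin disk + thick disk + bar model on a 250 pc grid): it draws burst positions, energies, jet half-angles and axes, and flags target stars that lie inside a burst's double cone and receive more than 100 kJ/m^2. The exponential-disk geometry, the lognormal energy parameters and the jet-angle distribution are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
KPC = 3.086e19        # metres per kiloparsec
FLUENCE_LIMIT = 1e5   # 100 kJ/m^2, the assumed extinction-level fluence

def sample_disk(n, scale_radius=3.0, scale_height=0.3):
    """Placeholder position model: exponential disk, coordinates in kpc."""
    r = rng.exponential(scale_radius, n)
    phi = rng.uniform(0, 2 * np.pi, n)
    z = rng.laplace(0.0, scale_height, n)
    return np.column_stack([r * np.cos(phi), r * np.sin(phi), z])

def random_unit_vectors(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

n_grb, n_stars = 1000, 20_000                 # ~100 Myr of bursts, sampled targets
grb_pos = sample_disk(n_grb)
stars = sample_disk(n_stars)
energy = 10 ** rng.normal(44.0, 0.5, n_grb)   # joules; assumed lognormal energies
jet_deg = np.minimum(2.0 + 10.0 * rng.pareto(2.0, n_grb), 45.0)  # assumed, clipped
half_angle = np.radians(jet_deg)
axis = random_unit_vectors(n_grb)

hit = np.zeros(n_stars, dtype=bool)
for i in range(n_grb):
    d_vec = (stars - grb_pos[i]) * KPC                 # separation vectors in metres
    d = np.linalg.norm(d_vec, axis=1) + 1e-6
    cos_off = np.abs(d_vec @ axis[i]) / d              # |cos| covers both jet cones
    in_beam = cos_off > np.cos(half_angle[i])
    solid_angle = 2 * 2 * np.pi * (1 - np.cos(half_angle[i]))  # double cone, sr
    fluence = energy[i] / (solid_angle * d ** 2)       # J/m^2 inside the beam
    hit |= in_beam & (fluence > FLUENCE_LIMIT)

print(f"fraction of sampled stars sterilized at least once: {hit.mean():.3f}")
```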

Galactic model with gamma ray bursts (red) and density isocontours (blue).

If we look at the galactic plane we find that the variability close to the galactic centre is big: there are plenty of lucky regions with many stars.

Unaffected star density in the galactic plane.
Affected (red) and unaffected (blue) stars at different radii in the galactic plane.

Integrating around the entire galaxy to get a measure of risk at different radii and altitudes shows a rather messy structure:

Probability that a given volume would be affected by a GRB. Volumes are integrated around axisymmetric circles.

One interesting finding is that the most dangerous place may be above the galactic plane along the axis: while few GRBs happen there, those in the disk and bar can reach there (the chance of being inside a double cone is independent of distance to the burst, but along the axis one is within reach of the maximum number of GRBs).
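The geometric point in that parenthesis can be spelled out: a double cone with jet half-angle θ covers a fraction

$$
f = \frac{2 \times 2\pi\,(1-\cos\theta)}{4\pi} = 1-\cos\theta
$$

of the sky around the burst, so whether a given star lies inside a randomly oriented beam does not depend on how far it is from the burst; distance only enters through the received fluence and through how many bursts are close enough to deliver a lethal dose.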

Density of stars not affected by the GRBs.

Integrating the density of stars that are not affected as a function of radius and altitude shows that there is a mild galactic habitable zone hole within 4 kpc. That we are close to the peak is neat, but there is a significant number of stars very close to the center.

This is of course not a professional model; it is a slapdash Matlab script done in an evening to respond to some online debate. But I think it shows that directionality may matter a lot by increasing the variance of star fates. Nearby systems may be irradiated very differently, and merely averaging them will miss this.

If I understood Piran and Jimenez right they do not use directionality; instead they employ a scaled rate of observed GRBs, so they do not have to deal with the iffy issue of jet widths. This might be sound, but I suspect one should check the spatial statistics: correlations are tricky things (and were GRB axes even mildly aligned with the galactic axis the risk reduction would be huge). Another way of getting closer to their result is of course to bump up the number of GRBs: with enough, the centre of the galaxy will naturally be inhospitable. I did not do the same careful modelling of the link between metallicity and GRBs, nor the different sizes.

In any case, I suspect that GRBs are weak constraints on where life can persist and too erratic to act as a good answer to the Fermi question – even a mass extinction is forgotten within 10 million years.