Time travel system

By Stuart Armstrong

Introduction

I’ve been looking to develop a system of time travel in which it’s possible to actually have a proper time war. To make it consistent and interesting, I’ve listed some requirements here. I think I have a fun system that obeys them all.

Time travel/time war requirements:

  • It’s possible to change the past (and the future). These changes don’t just vanish.
  • It’s useful to travel both forwards and backwards in time.
  • You can’t win by just rushing back to the Big Bang, or to the end of the universe.
  • There’s no “orthogonal time” that time travellers follow; I can’t be leaving 2015 to go to 1502 “while” you’re leaving 3015 to arrive at the same place.
  • You can learn about the rules of time travel while doing it; however, time travel must be dangerous to the ignorant (and not just because machines could blow up, or locals could kill you).
  • No restrictions that make no physical sense, or that could be got round by a human or a robot with half a brain. Eg: “you can’t make a second time jump from the arrival point of a first.” However, a robot could build a second copy of a time machine and of itself, and that could then jump back; therefore that initial restriction doesn’t make any particular sense.
  • Similarly, no restrictions that are unphysical or purely narrative.
  • It must be useful to, for instance, leave arrays of computers calculating things for you and then jump to the end to get the answer.
  • Ideally, there would be only one timeline. If there are parallel universes, they must be simply describable, and allow time-travellers to interact with each other in ways they would care about.
  • A variety of different strategies must be possible for fighting the war.

Consistent time travel

Earlier, I listed some requirements for a system of time travel – mainly that it be both scientifically consistent and open to interesting conflicts that aren’t trivially one-sided. Here is my proposal for such a thing, within the general relativity framework.

So, suppose you build a time machine, and want to go back in time to kill Hitler, as one does. Your time machine is a 10m diameter sphere, which exchanges places with a similarly-sized sphere in 1930. What happens then? The graph here shows the time jump, and the “light-cones” for the departure (blue) and arrival (red) points; under the normal rules of causality, the blue point can only affect things in the grey cone, the red point can only affect things in the black cone.

[Figure: time_travel01]

The basic idea is that when you do a time jump like this, then you “fix” your points of departure and arrival. Hence the blue and red points cannot be changed, and the universe rearranges itself to ensure this. The big bang itself is also a fixed point.

All this “fixed point” idea is connected to entropy. Basically, we feel that time advances in one direction rather than the other. Many have argued that this is because entropy (roughly, disorder) increases in one direction, and that this direction points from the past to the future. Since most laws of physics are symmetric in the past and the future, I prefer to think of this as “every law of physics is time-symmetric, but the big bang is a fixed point of low entropy, hence the entropy increase as we go away from it.”

But here I’m introducing two other fixed points. What will that do?

Well, initially, not much. You go back in time, kill Hitler, and the second world war doesn’t happen (or maybe there’s a big war of most of Europe against the USSR, see configuration 2 in “A Landscape Theory of Aggregation”). Yay! That’s because, close to the red point, causality works pretty much as you’d expect.

However, close to the blue point, things are different.

[Figure: time_travel02]

Here, the universe starts to rearrange things so that the blue point is unchanged. Causality isn’t exactly going backwards, but it is being funnelled in a particular direction. People who descended from others who “should have died” in WW2 start suddenly dying off. Memories shift; records change. By the time you’re very close to the blue point, the world is essentially identical to what it would have been had there been no time travelling.

Does this mean that your time jump made no difference? Not at all. The blue fixed point only constrains what happens in the light cone behind it (hence the red-to-blue rectangle in the picture). Things outside the rectangle are unconstrained – in particular, the future of that rectangle. Now, close to the blue point, the events are “blue” (ie similar to the standard history), so the future of those events is also pretty blue (similar to what would have been without the time jump) – see the blue arrows. At the edge of the rectangle, however, the events are pretty red (the alternative timeline), so their future is also pretty red (ie changed) – see the red arrows. If the influence of the red areas converges back in towards the centre, the future will be radically different.

(Some people might wonder why there aren’t “changing arrows” extending from the rectangle into the past as well as the future. There might be, but remember we have a fixed point at the big bang, which reduces the impact of these backward changes – and the red point is also fixed, exerting a strong stabilising influence on events in its own backwards light-cone.)

So by time travelling, you can change the past, and you can change part of the future – but you can’t change the present.

But what would happen if you stayed alive from 1930, waiting and witnessing history up to the blue point again? This would be very dangerous; to illustrate, let’s change the scale, and assume we’ve only jumped a few minutes into the past.

[Figure: time_travel03]

Maybe there you meet your past self, have a conversation about how wise you are, try and have sex with yourself, or whatever time travellers do with past copies of themselves. But this is highly dangerous! Within a few minutes, all trace of future-you’s presence will be gone; your past self will have no memory of it, and there will be no physical or mental evidence remaining.

Obviously this is very dangerous for you! The easiest way for there to remain no evidence of you is for there to be no you. You might say “but what if I do this, or try and do that, or…” But all your plans will fail. You are fighting against causality itself. As you get closer to the blue dot, it’s as if time itself were running backwards, erasing your new timeline to restore the old one. Cleverness can’t protect you against an inversion of causality.

Your only real chance of survival (unless you do a second time jump to get out of there) is to rush away from the red point at near light-speed, getting yourself to the edge of the rectangle and ejecting yourself from the past of the blue point.

Right, that’s the basic idea!

Multiple time travellers

Ok, the previous section looked at a single time traveller. What happens when there are several? Say two time travellers (blue and green) are both trying to get to the red point (or places close to it). Who gets there “first”?

[Figure: time_travel04]

Here is where I define the second important concept for time travel, that of “priority”. Quite simply, a point with higher priority is fixed relative to one with lower priority. For instance, imagine that the blue and green time travellers appear in close proximity to each other:

[Figure: time_travel06]

This is a picture where the green time traveller has a higher priority than the blue one. The green arrival changes the timeline (the green cone) and the blue time traveller fits themselves into this new timeline.

If instead the blue traveller had higher priority, we get the following scenario:

[Figure: time_travel05]

Here the blue traveller arrives in the original (white) timeline, fixing their arrival point. The green time traveller arrives and generates their own future – but this has to merge back into the original white timeline in time for the arrival of the blue time traveller.

Being close to a time traveller with a high priority is thus very dangerous! The green time traveller may get erased if they don’t flee at almost light-speed.

Even arriving after a higher-priority time traveller is very dangerous – suppose that the green one has higher priority, and the blue one arrives after. Then suppose the green one realises they’re not exactly at the right place, and jumps forwards a bit; then you get:

[Figure: time_travel07]

(There’s another reason arriving after a higher-priority time traveller is dangerous, as we’ll see.)

So how do we determine priority? The simplest measure seems to be space-time distance: you start with a priority of zero, and this priority goes down in proportion to how far your jump goes.

What about doing a lot of short jumps? You don’t want to allow green to get higher priority by doing a series of jumps:

[Figure: time_travel08]

This picture suggests how to proceed. Your first jump brings you a priority of -70. Then the second adds another penalty of -70, bringing the total priority penalty to -140 (the yellow point is another time traveller, who will be relevant soon).

How can we formalise this? Well, a second jump is a time jump that would not have happened if the first jump hadn’t. So for each arrival in a time jump, you can trace it back to the original jump-point. Then your priority score is the negative of the volume of the space-time cone determined by the arrival and the original jump-point. Since this cone is the region where your influence is strongest, this makes sense (note for those who study special relativity: using this volume means that you can’t jump “left along a light-beam”, then “right along a light-beam” and arrive with a priority of 0, which you could do if we used distance travelled rather than volume).
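To make the rule concrete, here is a minimal sketch in Python, under one possible reading of it: the “cone determined by the two points” is taken to be the causal diamond between the arrival and the original jump-point, and its 4-volume (in units where c = 1, with an arbitrary overall scale) is the penalty. Both this interpretation and the numbers are illustrative assumptions, not part of the system described above.

import math

def interval_squared(a, b):
    # Squared Minkowski interval between events (t, x, y, z); positive if timelike.
    dt = b[0] - a[0]
    dx2 = sum((q - p) ** 2 for p, q in zip(a[1:], b[1:]))
    return dt * dt - dx2

def priority_penalty(arrival, original_jump_point):
    # Negative 4-volume of the causal diamond between the two events.
    # A single jump "along a light-beam" has a zero-volume diamond and costs
    # nothing, but chaining two such jumps still leaves you timelike-separated
    # from the original jump-point, hence a non-zero penalty -- which is why
    # volume rather than distance travelled does the work here.
    s2 = interval_squared(arrival, original_jump_point)
    if s2 <= 0:
        return 0.0                         # null or spacelike: zero-volume diamond
    tau = math.sqrt(s2)                    # proper time between the events
    return -math.pi * tau ** 4 / 24.0      # Alexandrov-interval volume in 3+1 D

# Example: a first jump 10 time units into the past, then a second jump of 5 more;
# the second jump is traced back to the original jump-point at the origin.
origin = (0.0, 0.0, 0.0, 0.0)
first_arrival = (-10.0, 0.0, 0.0, 0.0)
second_arrival = (-15.0, 0.0, 0.0, 0.0)
print(priority_penalty(first_arrival, origin))
print(priority_penalty(second_arrival, origin))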

Let’s look at that yellow time traveller again. If there was no other time traveller, they would jump from that place. But because of the arrival of the green traveller (at -70), the ripples cause them to leave from a different point in spacetime, the purple one (the red arrow shows that the arrival there prevents the green time jump and causes the purple time jump):

[Figure: time_travel09]

So what happens? Well, the yellow time jump will still happen. It has a priority of 0 (it happened without any influence of any time traveller), so the green arrival at -70 priority can’t change this fixed point. The purple time jump will also happen, but it will happen with a lower priority of -30, since it was caused by time jumps that can ultimately be traced back to the green 0 point. (Note: I’m unsure whether there’s any problem with allowing priority to rise as you get back closer to your point of origin; you might prefer to use the smallest cone that includes all jump points that affected you, so the purple point would have priority -70, just like the green point that brought it into existence.)

What other differences could there be between the yellow and the purple version? Well, for one, the yellow has no time jumps in their subjective past, while the purple has one – the green -70. So as time travellers whizz around, they create (potential) duplicate copies of themselves and other time travellers – but those with the highest priority, and hence the highest power, are those who have no evidence that time jumps work, and do short jumps. As your knowledge of time travel goes up, and as you explore more, your priority sinks, and you become more vulnerable.

So it’s very dangerous even having a conversation with someone of higher priority than yourself! Suppose Mr X talks with Mrs Y, who has higher priority than him. Any decision that Y makes subsequently has been affected by that conversation, so her priority sinks to X’s level (call her Y’). But now imagine that, had she not had that conversation, she would have done another time jump anyway. The Y who didn’t have the conversation is not affected by X, so she retains her higher priority.

So, imagine that Y would have done another time jump an hour after arrival. X arrives and convinces her not to do so (maybe there’s a good reason for that). But the “time jump in an hour” will still happen, because the unaffected Y has higher priority, and X can’t change that. So if X and Y’ talk or linger too long, they run the risk of getting erased as they get close to the “point where Y would have jumped if X hadn’t been there”. In graphical form, the blue-to-green square is the area in which X and Y’ can operate, unless they can escape into the white bands:

[Figure: time_travel10]

So the greatest challenge for a low-priority time traveller is to use their knowledge to evade erasure by higher-priority ones. They have a much better understanding of what’s going on, they may know where other time jumps likely end up or start, and they might have experience at “rushing at light speed to get out of the cone of danger while preserving most of their personality and memories” (or technology that helps them do so), but they are ever vulnerable. They can kill or influence higher-priority time travellers, but this will only work “until” the point where those travellers would have done a time jump otherwise (and the cone before that point).

So, have I succeeded in creating an interesting time-travel design? Is it broken in any obvious way? Can you imagine interesting stories and conflicts being fought there?

 

The Biosphere Code

Yesterday I contributed to a piece of manifesto writing, producing the Biosphere Code Manifesto. The Guardian has a version on its blog. Not quite as dramatic as Marinetti’s Futurist Manifesto but perhaps more constructive:

Principle 1. With great algorithmic powers come great responsibilities

Those implementing and using algorithms should consider the impacts of their algorithms.

Principle 2. Algorithms should serve humanity and the biosphere at large.

Algorithms should be considerate of human needs and the biosphere, and facilitate transformations towards sustainability by supporting ecologically responsible innovation.

Principle 3. The benefits and risks of algorithms should be distributed fairly

Algorithm developers should consider issues relating to the distribution of risks and opportunities more seriously. Developing algorithms that provide benefits to the few and present risks to the many is both unjust and unfair.

Principle 4. Algorithms should be flexible, adaptive and context-aware

Algorithms should be open, malleable and easy to reprogram if serious repercussions or unexpected results emerge. Algorithms should be aware of their external effects and be able to adapt to unforeseen changes.

Principle 5. Algorithms should help us expect the unexpected

Algorithms should be used in such a way that they enhance our shared capacity to deal with shocks and surprises – including problems caused by errors or misbehaviors in other algorithms.

Principle 6. Algorithmic data collection should be open and meaningful

Data collection should be transparent and respectful of public privacy. In order to avoid hidden biases, the datasets which feed into algorithms should be validated.

Principle 7. Algorithms should be inspiring, playful and beautiful

Algorithms should be used to enhance human creativity and playfulness, and to create new kinds of art. We should encourage algorithms that facilitate human collaboration, interaction and engagement – with each other, with society, and with nature.

The algorithmic world

The basic insight is that the geosphere, ecosphere, anthroposphere and technosphere are getting deeply entwined, and algorithms are becoming a key force in regulating this global system.

Some algorithms enable new activities (multimedia is impossible without FFT and CRC), change how activities are done (data centres happen because virtualization and MapReduce make them scale well), or enable faster algorithmic development (compilers and libraries). Algorithms used for decision support are particularly important. Logistics algorithms (routing, linear programming, scheduling, and optimization) affect the scope and efficiency of the material economy. Financial algorithms affect the scope and efficiency of the economy itself. Intelligence algorithms (data collection, warehousing, mining, network analysis, but also methods for combining human expert judgement), statistics gathering and risk models affect government policy. Recommender systems (“You May Also Enjoy…”) and advertising influence consumer demand.

Since these algorithms are shared, their properties will affect a multitude of decisions and individuals in the same way even if they think they are acting independently. There are spillover effects from algorithm-driven actions, reaching beyond the groups that use the algorithms to other stakeholders. And algorithms have a multitude of non-trivial failure modes: machine learning can create opaque bias or sudden emergent misbehaviour, human over-reliance on algorithms can cause accidents or large-scale misallocation of resources, some algorithms produce systemic risks, and others embody malicious behaviours. In short, code – whether in computers or as a formal praxis in an organisation – matters morally.

What is the point?

Could a code like the Biosphere Code actually do anything useful? Isn’t this yet another splashy “wouldn’t it be nice if everybody were moral and rational in engineering/politics/international relations?”

I think it is a first step towards something useful.

There are engineering ethics codes, even for software engineers. But algorithms are created in many domains, including by non-engineers. We cannot and should not prevent people from thinking, proposing, and trying new algorithms: that would be like attempts to regulate science, art, and thought. But we can, as societies, create incentives to do constructive things and avoid known destructive things. In order to do so, we should recognize that we need to work on the incentives and start gathering information.

Algorithms and their large-scale results must be studied and measured: we cannot rely on theory, despite its seductive power, since there are profound theoretical limitations on our predictive abilities in the world of algorithms, as well as obvious practical limitations. Algorithms also do not exist in a vacuum: the human or biosphere context is an active part of what is going on. An algorithm can be totally correct and yet be misused in a harmful way because of its framing.

But even in the small, if we can make one programmer think a bit more about what they are doing and choose a better algorithm than they otherwise would have done, the world is better off. In fact, a single programmer can have a surprisingly large impact.

I am more optimistic than that. Recognizing algorithms as the key building blocks that they are for our civilization, what peculiarities they have, and learning better ways of designing and using them has transformative power. There are disciplines dealing with parts of this, but the whole requires considering interdisciplinary interactions that are currently rarely explored.

Let’s get started!

Universal principles?

I got challenged on the extropian list, which is a fun reason to make a mini-lecture.

On 2015-10-02 17:12, William Flynn Wallace wrote:
> Anders says above that we have discovered universal timeless principles. I’d like to know what they are and who proposed them, because that’s chutzpah of the highest order. Oh boy – let’s discuss that one.

Here is one: a thing is identical to itself. (1)

Here is another one: “All human beings are born free and equal in dignity and rights.” (2)

Here is a third one: “Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.” (3)

(1) was first explicitly mentioned by Plato (in Theaetetus). I think you also agree with it – things that are not identical to themselves are unlikely to even be called “things”, and without the principle very little thinking makes sense.

I am not sure whether it is chutzpah of the highest order or a very humble observation.

(2) is from the UN Universal Declaration of Human Rights. This sentence needs enormous amounts of unpacking – “free”, “equal”, “dignity”, “rights”… these words can be (and are) used in very different ways. Yet I think it makes sense to say that according to a big chunk of Western philosophy this sentence is a true sentence (in the sense that ethical propositions are true), that it is universal (the truth is not contingent on when and where you are, although the applications may change), and we know historically that we have not known this principle forever. Now *why* it is true quickly branches out into different answers depending on what metaethical positions you hold, not to mention the big topic of what kind of truth moral truth actually is (if anything). The funny thing is that the universal part is way less contentious, because of the widely accepted (and rarely stated) formal ethical principle that if it is moral to P in situation X, then the location in time and space where X happens does not matter.

Chutzpah of the highest order? Totally. So is the UN.

(3) is Immanuel Kant, and he argued that any rational moral agent could, through pure reason, reach this principle. It is, in many ways like (1), almost a consistency requirement of moral will (not action, since he doesn’t actually care about the consequences – we cannot fully control those, but we can control what we decide to do). There is a fair bit of unpacking of the wording, but unlike the UN case he defines his terms fairly carefully in the preceding text. His principle is, if he is right, the supreme principle of morality.

Chuzpah auf höchstem Niveau? Total!

Note that (1) is more or less an axiom: there is no argument for why it is true, because there is little point in even trying. (3) is intended to be like a theorem in geometry: from some axioms and the laws of logic, we end up with the categorical imperative. It is just as audacious or normal as the Pythagorean theorem. (2) is a kind of compromise between different ethical systems: the Kantians would defend it based on their system, while consequentialists could make a rule utilitarian argument for why it is true, and contractualists would say it is true because the UN agrees on it. They agree on the mid-level meaning, but not on each other’s derivations. It is thick, messy and political, yet also represents fairly well what most educated people would conclude (of course, they would then show off by disagreeing loudly with each other about details, obscuring the actual agreement).

Philosopher’s views

Do people who think about these things actually believe in universal principles? One fun source is David Bourget and David J. Chalmers’ survey of professional philosophers (data). 56.4% of the respondents were moral realists (holding that there are moral facts and moral values, and that these are objective and independent of our views), and 65.7% were moral cognitivists (ethical sentences can be true or false); these positions were correlated at 0.562. 25.9% were deontologists, which means that they would hold somewhat Kant-like views that some actions are always or never right (some of the rest of course also believe in principles, but the survey cannot tell us anything more). 71.1% thought there was a priori knowledge (things we know by virtue of being thinking beings rather than through experience).

My views

Do I believe in timeless principles? Kind of. There are statements in physics that are invariant under translations, rotations, Lorentz boosts and other transformations, and of course math remains math. Whether physics and math are “out there” or just in minds is hard to tell (I lean towards at least physics being out there in some form), but clearly any minds that know some subset of correct, invariant physics and math can derive other correct conclusions from it. And other minds with the same information can make the same derivations and reach the same conclusions – no matter when or where. So there are knowable principles in these domains that every sufficiently informed and smart mind would know. Things get iffy with values, since they might be far more linked to the entities experiencing them, but clearly we can do game-theoretic analysis and make statements like “If agent A is trying to optimize X, agent B optimizes Y, and X and Y do not interact, then they can get more of X and Y by cooperating”. So I think we can get pretty close to universal principles in this framework, even if it turns out that they merely reside inside minds knowing about the outside world.

A clean well-lighted challenge: those eyes

On Extropy-chat my friend Spike suggested a fun writing challenge:

“So now I have a challenge for you.  Write a Hemmingway-esque story (or a you-esque story if you are better than Papa) which will teach me something, anything.  The Hemmingway story has memorable qualities, but taught me nada.  I am looking for a short story that is memorable and instructive, on any subject that interests you. Since there is so much to learn in this tragically short life, the shorter the story the better, but it should create memorable images like Hemmingway’s Clean, it must teach me something, anything. “

Here is my first attempt. (V 1.1, slightly improved from my list post and with some links). References and comments below.

Those eyes

“Customers!”
“Ah, yes, customers.”
“Cannot live with them, cannot live without them.”
“So, who?”
“The optics guys.”
“Those are the worst.”
“I thought that was the security guys.”
“Maybe. What’s the deal?”
“Antireflective coatings. Dirt repelling.”
“That doesn’t sound too bad.”
“Some of the bots need to have diffraction spread, some should not. Ideally determined just when hatching.”
“Hatching? Self-assembling bots?”
“Yes. Can not do proper square root index matching in those. No global coordination.”
“Crawly bugbots?”
“Yes. Do not even think about what they want them for.”
“I was thinking of insect eyes.”
“No. The design is not faceted. The optics people have some other kind of sensor.”
“Have you seen reflections from insect eyes?”
“If you shine a flashlight in the garden at night you can see jumping spiders looking back at you.”
“That’s their tapeta, like a cat’s. I was talking about reflections from the surface.”
“I have not looked, to be honest.”
“There aren’t any glints when light glances across fly eyes. And dirt doesn’t stick.”
“They polish them a lot.”
“Sure. Anyway, they have nipples on their eyes.”
“Nipples?”
“Nipple like nanostructures. A whole field of them on the cornea.”
“Ah, lotus coatings. Superhydrophobic. But now you get diffraction and diffraction glints.”
“Not if they are sufficiently randomly distributed.”
“It needs to be an even density. Some kind of Penrose pattern.”
“That needs global coordination. Think Turing pattern instead.”
“Some kind of tape?”
“That’s Turing machines. This is his last work from ’52, computational biology.”
“Never heard of it.”
“It uses two diffusing signal substances: one that stimulates production of itself and an inhibitor, and the inhibitor diffuses further.”
“So a blob of the first will be self-supporting, but have a moat where other blobs cannot form.”
“Yep. That is the classic case. It all depends on the parameters: spots, zebra stripes, labyrinths, even moving leopard spots and oscillating modes.”
“All generated by local rules.”
“You see them all over the place.”
“Insect corneas?”
“Yes. Some Russians catalogued the patterns on insect eyes. They got the entire Turing catalogue.”
“Changing the parameters slightly presumably changes the pattern?”
“Indeed. You can shift from hexagonal nipples to disordered nipples to stripes or labyrinths, and even over to dimples.”
“Local interaction, parameters easy to change during development or even after, variable optics effects.”
“Stripes or hexagons would do diffraction spread for the bots.”
“Bingo.”

References and comments

Blagodatski, A., Sergeev, A., Kryuchkov, M., Lopatina, Y., & Katanaev, V. L. (2015). Diverse set of Turing nanopatterns coat corneae across insect lineages. Proceedings of the National Academy of Sciences, 112(34), 10750-10755.

My old notes on models of development for a course, with a section on Turing patterns. There are many far better introductions, of course.

Nanostructured chitin can do amazing optics stuff, like the wings of the Morpho butterfly: P. Vukusic, J.R. Sambles, C.R. Lawrence, and R.J. Wootton (1999). “Quantified interference and diffraction in single Morpho butterfly scales”. Proceedings of the Royal Society B 266 (1427): 1403–11.

Another cool example of insect nano-optics: Land, M. F., Horwood, J., Lim, M. L., & Li, D. (2007). Optics of the ultraviolet reflecting scales of a jumping spider. Proceedings of the Royal Society of London B: Biological Sciences, 274(1618), 1583-1589.

One point Blagodatski et al. make is that the different eye patterns are scattered all over the insect phylogenetic tree: since it is easy to change parameters, one can get whatever surface is needed by just turning a few genetic knobs (much as with snake skin patterns or the number of digits in mammals). I found a local paper looking at figuring out phylogenies based on maximum likelihood inference from pattern settings. While that paper was pretty optimistic about being able to figure out phylogenies this way, I suspect the Blagodatski paper shows that the patterns can change so quickly that this will only be applicable to closely related species.

It is fun to look at how the Fourier transform changes as the parameters of the pattern change:
[Figures: leopard spot, random spot, zebra stripe, and hexagonal dimple patterns, each with its Fourier transform]

In this case I move the parameter b up from a low value to a higher one. At first I get “leopard spots” that divide and repel each other (very fun to watch), arraying themselves to fit within the boundary. This produces the vertical and horizontal stripes in the Fourier transform. As b increases the spots form a more random array, and there is no particular direction favoured in the transform: there is just an annulus around the center, representing the typical inter-blob distance. As b increases more, the blobs merge into stripes. For these parameters they snake around a bit, producing an annulus of uneven intensity. At higher values they merge into a honeycomb, and now the annulus collapses to six peaks (plus artefacts from the low resolution).
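For anyone who wants to play with this, here is a minimal sketch of the kind of simulation involved, assuming a Gierer–Meinhardt-style activator–inhibitor system with b as the activator decay rate; the actual model and parameter values behind the images above may well differ.

import numpy as np

def laplacian(f):
    # 5-point Laplacian with periodic (wrap-around) boundaries
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)

def turing_pattern(n=100, b=1.0, Da=0.05, Dh=1.0, dt=0.05, steps=5000, seed=0):
    # Activator a and faster-diffusing inhibitor h, starting from small noise.
    rng = np.random.default_rng(seed)
    a = 1.0 + 0.1 * rng.standard_normal((n, n))
    h = 1.0 + 0.1 * rng.standard_normal((n, n))
    for _ in range(steps):
        a += dt * (a * a / h - b * a + Da * laplacian(a))
        h += dt * (a * a - h + Dh * laplacian(h))
        np.clip(a, 1e-6, 1e6, out=a)   # keep the explicit integration bounded
        np.clip(h, 1e-6, 1e6, out=h)
    return a

def power_spectrum(pattern):
    # Centred 2D FFT magnitude: an annulus means a characteristic spacing with no
    # preferred direction, discrete peaks mean stripe or hexagonal order.
    return np.abs(np.fft.fftshift(np.fft.fft2(pattern - pattern.mean())))

for b in (0.6, 0.9, 1.2):   # sweep the decay parameter and watch the spectrum change
    spec = power_spectrum(turing_pattern(b=b))
    print(f"b={b}: strongest mode at index {np.unravel_index(spec.argmax(), spec.shape)}")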

Brewing bad policy

The New York Times reports that yeast has been modified to make THC. The paper describing the method uses precursor molecules, so it is not a hugely radical step forward. Still, it dovetails nicely with the recent paper in Science about full biosynthesis of opiates from sugar (still not very efficient compared to plants, though). Already this spring there was a comment piece in Nature about how to regulate the possibly imminent drug microbrewing, which I commented on at Practical Ethics.

Rob Carlson has an excellent commentary on the problems with the regulatory reflex towards new technology. He is basically arguing for a “first, do no harm” principle for technology policy.

Policy conversations at all levels regularly make these same mistakes, and the arguments are nearly uniform in structure. “Here is something we don’t know about, or are uncertain about, and it might be bad – really, really bad – so we should most certainly prepare policy options to prevent the hypothetical worst!” Exclamation points are usually just implied throughout, but they are there nonetheless. The policy options almost always involve regulation and restriction of a technology or process that can be construed as threatening, usually with little or no consideration of what that threatening thing might plausibly grow into, nor of how similar regulatory efforts have fared historically.

This is such a common conversation that in many fields like AI even bringing up that there might be a problem makes practitioners think you are planning to invoke regulation. It fits with the hyperbolic tendency of many domains. For the record, if there is one thing we in the AI safety research community agree on, it is that more research is needed before we can give sensible policy recommendations.

Figuring out what policies can work requires understanding what the domain actually is about (including what it can actually do, what it likely will be able to do one day, and what it cannot do), how different policy options have actually worked in the past, and what policy options actually exist in policy-making. This requires a fair bit of interdisciplinary work between researchers and policy professionals. Clearly we need more forums where this can happen.

And yes, even existential risks need to be handled carefully like this. If their importance overshadows everything, then getting policies that actually reduce the risk is a top priority: dramatic, fast policies don’t guarantee working risk reduction, and once a policy is in place it is hard to shift. For most low-probability threats we do not gain much survival by rushing policies into place compared to getting better policies.

Why Cherry 2000 should not be banned, Terminator should, and what this has to do with Oscar Wilde

[This is what happens when I blog after two glasses of wine. Trigger warning for possibly stupid cultural criticism and misuse of Oscar Wilde.]

From robots to artificiality

On practical ethics I discuss what kind of robots we ought to campaign against. I have signed up against autonomous military robots, but I think sex robots are fine. The dividing line is that the harm done (if any) is indirect and victimless, and best handled through sociocultural means rather than legislation.

I think the campaign against sex robots has a point in that there are some pretty creepy ideas floating around in the world of current sex bots. But I also think it assumes these ideas are the only possible motivations. As I pointed out in my comments on another practical ethics post, there are likely people turned on by pure artificiality – human sexuality can be far queerer than most think.

Going off on a tangent, I am reminded of Oscar Wilde’s epigram

“The first duty in life is to be as artificial as possible. What the second duty is no one has as yet discovered.”

Being artificial is not the same thing as being an object. As noted by Barris, Wilde’s artificiality actually fits in with pluralism and liberalism. Things could be different. Yes, in the artificial world nothing is absolutely given; everything is the result of some design choices. But assuming some eternal Essence/Law/God is necessary for meaning or morality exposes one to a fruitless search for that Thing (or worse, a premature assumption that one has found It, typically when looking in the mirror). Indeed, as Dorian Gray muses, “Is insincerity such a terrible thing? I think not. It is merely a method by which we can multiply our personalities.” We are not single personas with unitary identities and well-defined destinies, and this is most clearly visible in our social plays.

Sex, power and robots

Continuing on my Wildean binge, I encountered another epigram:

“Everything in the world is about sex except sex. Sex is about power.”

I think this cuts close to the Terminator vs. Cherry 2000 debate. Most modern theorists of gender and sex are of course power-obsessed (let’s blame Foucault). The campaign against sex robots clearly sees the problem as the robots embodying and perpetuating a problematic, unequal power structure. I detect a whiff of paternalism there, where women and children – rather than people – seem to be assumed to be the victims, in need of being saved from this new technology (at least it is not going as far as some other campaigns that fully assume they are also suffering from false consciousness and must be saved from themselves, the poor things). But sometimes a cigar is just a cigar… I mean sex is sex: it is important to recognize that one of the reasons for sex robots (and indeed prostitution) is the desire for sex and the sometimes awkward social or biological constraints on experiencing it.

The problem with autonomous weapons is that power really comes out of a gun. (Must resist making a Zardoz reference…) It might be wielded arbitrarily by an autonomous system with unclear or bad orders, or it might be wielded far too efficiently by an automated armed force perfectly obedient to its commanders – removing the constraint that soldiers might turn against their rulers if aimed against the citizenry. Terminator is far more about unequal and dangerous power than sex (although I still have fond memories of seeing a naked Arnie back in 1984). The cultural critic may argue that the power games in the bedroom are more insidious and affect more of our lives than some remote gleaming gun-metal threat, but I think I’d rather have sexism than killing and automated totalitarianism. The uniforms of the killer robots are not even going to look sexy.

It is for your own good

Trying to ban sex robots is about trying to shape society in an appealing way – the goal of the campaign is to support “development of ethical technologies that reflect human principles of dignity, mutuality and freedom” and the right for everybody to have their subjectivity recognized without coercion. But while these are liberal principles when stated like this, I suspect the campaign, or groups like it, will have a hard time keeping out of our bedrooms. After all, they need to ensure that there is no lack of mutuality or creepy sex robots there. The liberal respect for mutuality can become a very non-liberal worship of Mutuality, embodied in requiring partners to sign consent forms, demanding trigger warnings, and treating everybody who does not respond right to its keywords as a suspect of future crimes. The fact that this absolutism comes from a very well-meaning impulse to protect something fine makes it even more vicious, since any criticism is easily mistaken for an attack on the very core Dignity/Mutuality/Autonomy of humanity (and hence any means of defence are OK). And now we have all the ingredients for a nicely self-indulgent power trip.

This is why Wilde’s pluralism is healthy. Superficiality, accepting the contrived and artificial nature of our created relationships, means that we become humble in asserting their truth and value. Yes, absolute relativism is stupid and self-defeating. Yes, we need to treat each other decently, but I think it is better to start from the Lockean liberalism that allows people to have independent projects rather than assume that society and its technology must be designed to embody the Good Values. Replacing “human dignity” with the word “respect” usually makes ethics clearer.

Instead of assuming we can figure out a priori how technology will change us and then select the right technology, we try and learn. We can make some predictions with reasonable accuracy, which is why trying to rein in autonomous weapons makes sense (the probability that they lead to a world of stability and peace seems remote). But predicting cultural responses to technology is not something we have any good track record of: most deliberate improvements of our culture have come from social means and institutions, not from banning technology.

“The fact is, that civilisation requires slaves. The Greeks were quite right there. Unless there are slaves to do the ugly, horrible, uninteresting work, culture and contemplation become almost impossible. Human slavery is wrong, insecure, and demoralising. On mechanical slavery, on the slavery of the machine, the future of the world depends.”

Living forever

Benjamin Zand has made a neat little documentary about transhumanism, attempts to live forever and the posthuman challenge. I show up, of course, as soon as ethics is mentioned.

Benjamin and I had a much, much longer (and very fun) conversation about ethics than could ever be squeezed into a TV documentary. Everything from personal identity to overpopulation to the meaning of life. Plus the practicalities of cryonics, transhuman compassion and how to test if brain emulation actually works.

I think the inequality and control issues are interesting to develop further.

Would human enhancement boost inequality?

There is a trivial sense in which just inventing an enhancement produces profound inequality since one person has it, and the rest of mankind lacks it. But this is clearly ethically uninteresting: what we actually care about is whether everybody gets to share something good eventually.

However, the trivial example shows an interesting aspect of inequality: it has a timescale. An enhancement that will eventually benefit everyone but is unequally distributed may be entirely OK if it is spreading fast enough. In fact, by being expensive at the start it might even act as a kind of early adopter/rich tax, since the first versions will pay for the R&D of consumer versions – compare computers and smartphones. While one could argue that it is bad to get temporary inequality, long-term benefits would outweigh this for most enhancements and most value theories: we should not sacrifice the poor of tomorrow for the poor of today by delaying the launch of beneficial technologies (especially since the R&D needed to make them truly cheap is unlikely to happen if technocrats keep the technology in their labs – making tech cheap and useful is actually one area where we know empirically that the free market is really good).

If the spread of some great enhancement could be faster though, then we may have a problem.

I often encounter people who think that the rich will want to keep enhancements to themselves. I have never encountered any evidence for this being actually true except for status goods or elites in authoritarian societies.

There are enhancements like height that are merely positional: it is good to be taller than others (if male, at least), but if everybody gets taller nobody benefits and everybody loses a bit (more banged heads and heart problems). Other enhancements are absolute: living healthily for longer or being smarter is good for nearly all people regardless of how long other people live or how smart they are (yes, there might be some coordination benefits if you live just as long as your spouse or have a society where you can participate intellectually, but these hardly negate the benefit of joint enhancement – in fact, they support it). Most of the interesting enhancements are in this category: while they might be great status goods at first, I doubt they will remain that for long since there are other reasons than status to get them. In fact, there are likely network effects from some enhancements like intelligence: the more smart people working together in a society, the greater the benefits.

In the video, I point out that limiting enhancement to the elite means the society as a whole will not gain the benefit. Since elites reap rents from their society, it is actually in their best interest to have a society growing richer and more powerful (as long as they are in charge). Societies that limit enhancement to their elites will lose out in the long run to other societies with broader spreads of enhancement. We know that widespread schooling, free information access and freedom to innovate tend to produce way wealthier and more powerful societies than those where only elites have access to these goods. I have strong faith in the power of diverse societies, despite their messiness.

My real worry is that enhancements may be like services rather than gadgets or pills (which come down exponentially in price). That would make them harder to reach, and might hold back adoption (especially since we have not been as good at automating services as manufacturing). Still, we do subsidize education at great cost, and if an enhancement is desirable, democratic societies are likely to scramble for a way of supplying it widely, even if it is only through an enhancement lottery.

However, even a world with unequal distribution is not necessarily unjust. Besides the standard Nozickian argument that a distribution is just if it was arrived at through just means, there is the Rawlsian argument that an unequal distribution is OK if it actually produces benefits for the weakest. This is likely very true for intelligence amplification and maybe brain emulation, since they are likely to cause strong economic growth and innovations that produce spillover effects – especially if there is any form of taxation or even mild redistribution.

Who controls what we become? Nobody, we/ourselves/us

The second issue is who gets a say in this.

As I respond in the interview, in a way nobody gets a say. Things just happen.

People innovate, adopt technologies and change, and attempts to control that mean controlling creativity, business and autonomy – you had better have a very powerful ethical case to argue for limitations on these, and an even better political case to implement any. A moral limitation of life extension needs to explain how it averts consequences worse than 100,000 dead people per day. Even if we all become jaded immortals, that seems less horrible than a daily pile of corpses 12.3 meters high and 68 meters across (assuming an angle of repose of 20 degrees – this was the most gruesome geometry calculation I have done so far). Saying we should control technology is a bit like saying society should control art: it might be more practically useful, but it springs from the same well of creativity, and limiting it is as suffocating as limiting what may be written or painted.
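For the morbidly curious, the geometry is easy to check; here is a minimal sketch, assuming an effective volume of roughly 0.15 cubic metres per body (that per-body figure, including packing gaps, is a rough assumption of mine):

import math

deaths_per_day = 100_000              # the figure used above
volume_per_body = 0.15                # m^3, assumed effective volume incl. gaps
theta = math.radians(20)              # angle of repose

pile_volume = deaths_per_day * volume_per_body
# Cone with slope angle theta: V = (1/3) * pi * r^2 * h, with h = r * tan(theta)
radius = (3 * pile_volume / (math.pi * math.tan(theta))) ** (1 / 3)
height = radius * math.tan(theta)
print(f"daily pile: {height:.1f} m high, {2 * radius:.0f} m across")
# prints roughly 12.4 m high and 68 m across, close to the figures above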

Technological determinism is often used as an easy out for transhumanists: the future will arrive no matter what you do, so the choice is just between accepting or resisting it. But this is not the argument I am making. That nobody is in charge doesn’t mean the future is not changeable.

The very creativity, economics and autonomy that creates the future is by its nature something individual and unpredictable. While we can relatively safely assume that if something can be done it will be done, what actually matters is whether it will be done early or late, and seldom or often. We can try to hurry beneficial or protective technologies so they arrive before the more problematic ones. We can try to favour beneficial directions over more problematic ones. We can create incentives that make fewer people want to use the bad ones. And so on. The “we” in this paragraph is not so much a collective coordinated “us” as the sum of individuals, companies and institutions, “ourselves”: there is no requirement to get UN permission before you set out to make safe AI or develop life extension. It just helps if a lot of people support your aims.

John Stuart Mill’s harm principle allows society to step in and limit freedom when it causes harm to others, but most enhancements look unlikely to produce easily recognizable harms. This is not a ringing endorsement: as Nick Bostrom has pointed out, there are some bad directions of evolution we might not want to go down, yet it is individually rational for each of us to go slightly in that direction. And existential risk is so dreadful that it actually does provide a valid reason to stop certain human activities if we cannot find alternative solutions. So while I think we should not try to stop people from enhancing themselves, we should want to improve our collective ability to coordinate and restrain ourselves. This is the “us” part. Restraint does not just have to happen in the form of rules: we already restrain ourselves using socialization, reputations, and incentive structures. Moral and cognitive enhancement could add restraints we currently do not have: if you can clearly see the consequences of your actions it becomes much harder to do bad things. The long-term outlook fostered by radical life extension may also make people more risk-averse and willing to plan for long-term sustainability.

One could dream of some enlightened despot or technocrat deciding. A world government filled with wise, disinterested and skilled members planning our species’ future. But this suffers from essentially the economic calculation problem: while a central body might have a unified goal, it will lack information about the preferences and local states among the myriad agents in the world. Worse, the cognitive abilities of the technocrat will be far smaller than the total cognitive abilities of the other agents. This is why rules and laws tend to get gamed – there are many diverse entities thinking about ways around them. But there are also fundamental uncertainties and emergent phenomena that will bubble up from the surrounding agents and mess up the technocratic plans. As Virginia Postrel noted, the typical solution is to try to browbeat society into a simpler form that can be managed more easily… which might be acceptable if the stakes are the very survival of the species, but otherwise just removes what makes a society worth living in. So we had better maintain our coordination ourselves, all of us, in our diverse ways.

 

ET, phone for you!

I have been in the media recently since I became the accidental spokesperson for UKSRN at the British Science Festival in Bradford:

BBC / The Telegraph / The Guardian / Iol SciTech / The Irish Times / Bt.com

(As well as BBC 5 Live, BBC Newcastle and BBC Berkshire… so my comments also get sent to space as a side effect).

My main message is that we are going to send in something for the Breakthrough Message initiative: a competition to write a good message to be sent to aliens. The total pot is a million dollars (it seems that was misunderstood in some reporting: it is likely not going to be one huge prize, but rather several smaller ones). The message will not actually be sent to the stars: this is an intellectual exercise rather than a practical one.

(I also had some comments about the link between Langsec and SETI messages – computer security is actually a bit of an issue for fun reasons. Watch this space.)

Should we?

One interesting issue is whether there are any good reasons not to signal. Stephen Hawking famously argued against it (but he is a strong advocate of SETI), as does David Brin. A recent declaration argues that we should not signal unless there is widespread agreement about it. Yet others have made the case that we should signal, perhaps a bit cautiously. In fact, an eminent astronomer just told me he could not take concerns about sending a message seriously.

Some of the arguments are (in no particular order):

Pro:

  • SETI will not work if nobody speaks.
  • ETI is likely to be far more advanced than us and could help us.
  • Knowing if there is intelligence out there is important.
  • Hard to prevent transmissions.
  • Radio transmissions are already out there.
  • Maybe they are waiting for us to make the first move.

Con:

  • Malign ETI.
  • Past meetings between different civilizations have often ended badly.
  • Giving away information about ourselves may expose us to accidental or deliberate hacking.
  • Waste of resources.
  • If the ETI is quiet, it is for a reason.
  • We should listen carefully first, then transmit.

It is actually an interesting problem: how do we judge the risks and benefits in a situation like this? Normal decision theory runs into trouble (not that it stops some of my colleagues). The problem here is that the probability and potential gain/loss are badly defined. We may have our own personal views on the likelihood of intelligence within radio reach and its nature, but we should be extremely uncertain given the paucity of evidence.

[Even the silence in the sky is some evidence, but it is somewhat tricky to interpret, given that it is compatible with no intelligence (because of rarity or danger), intelligence not communicating or not looking in the spectra we see, cultural convergence towards quietness (the zoo hypothesis, everybody hiding, everybody becoming Jupiter brains), or even the simulation hypothesis. The first category is at least somewhat concise, while the latter categories have endless room for speculation. One could argue that since the latter categories can fit any kind of evidence they are epistemically weak and we should not trust them much.]

Existential risks also tend to take precedence over almost anything. If we can avoid doing something that could cause existential risk, the maxiPOK principle tells us not to do it: we can avoid sending, and sending might bring down the star wolves on us, so we should avoid it.

There is also a unilateralist curse issue. It is enough that one group somewhere thinks transmitting is a good idea and does it for the consequences to follow, whatever they are. So the more groups that consider transmitting – even if they are all rational, well-meaning and consider the issue at length – the more likely it is that somebody will do it, even if it is a stupid thing to do. In situations like this we have argued it behoves us to be more conservative individually than we would otherwise have been – we should simply think twice just because sending messages is in the unilateralist curse category. We also argue in that paper that it is even better to share information and make collectively coordinated decisions.
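A toy calculation illustrates the curse; the 5% chance per group is purely illustrative:

p_single = 0.05      # assumed chance that any one group decides to transmit
for n_groups in (1, 10, 100):
    p_any = 1 - (1 - p_single) ** n_groups
    print(f"N={n_groups}: P(at least one group transmits) = {p_any:.2f}")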

Note that these arguments strengthen the con side, but largely independently of what the actual anti-message arguments are. They are general arguments that we should be careful, not final arguments.

Conversely, Alan Penny argued that given the high existential risk to humanity we may actually have little to lose: if our risk of extinction per century is 12–40%, then adding a small ETI risk has little effect on the overall risk level, yet a small chance of friendly ETI advice (“By the way, you might want to know about this…”) that decreases existential risk may be an existential hope. Suppose we think it is 50% likely that ETI is friendly, and that there is a 1% chance it is out there. If it is friendly it might give us advice that reduces our existential risk by 50%; otherwise it will eat us with 1% probability. So if we do nothing our risk is (say) 12%. If we signal, then the risk is 0.12*0.99 + 0.01*(0.5*0.12*0.5 + 0.5*(0.12*0.99+0.01)) = 11.9744% – a slight improvement. Like the Drake equation, one can of course plug in different numbers and get different effects.
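Spelling out the branches of that calculation, using exactly the numbers from the paragraph above:

p_eti = 0.01            # chance ETI is within earshot
p_friendly = 0.5        # chance it is friendly, given that it is there
baseline_risk = 0.12    # existential risk per century if we stay silent
p_eaten = 0.01          # chance an unfriendly ETI destroys us
advice_factor = 0.5     # friendly advice halves our baseline risk

risk_if_friendly = baseline_risk * advice_factor
risk_if_unfriendly = baseline_risk * (1 - p_eaten) + p_eaten
risk_if_signal = ((1 - p_eti) * baseline_risk +
                  p_eti * (p_friendly * risk_if_friendly +
                           (1 - p_friendly) * risk_if_unfriendly))
print(f"{risk_if_signal:.6%}")   # 11.974400%, versus 12.000000% for staying silent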

Truth to the stars

Considering the situation over time, sending a message now may also be irrelevant since we could wipe ourselves out before any response arrives. That brings to mind a discussion we had at the press conference yesterday about what the point of sending messages far away would be: wouldn’t humanity be gone by then? Also, we were discussing what to present to ETI: an honest or whitewashed version of ourselves? (My co-panelist Dr Jill Stuart made some great points about the diversity issues in past attempts.)

My own view is that I’d rather have an honest epitaph for our species than a polished but untrue one. This is both relevant to us, since we may want to be truthful beings even if we cannot experience the consequences of the truth, and relevant to ETI, who may find the truth more useful than whatever our culture currently would like to present.

Ethics of brain emulations, New Scientist edition

I have an opinion piece in New Scientist about the ethics of brain emulation. The content is similar to what I was talking about at IJCNN and in my academic paper (and the comic about it). Here are a few things that did not fit the text:

Ethics that got left out

Due to length constraints I had to cut the discussion about why animals might be moral patients. That made the essay look positively Benthamite in its focus on pain. In fact, I am agnostic on whether experience is necessary for being a moral patient. Here is the cut section:

Why should we care about how real animals are treated? Different philosophers have given different answers. Immanuel Kant did not think animals matter in themselves, but our behaviour towards them matters morally: a human who kicks a dog is cruel and should not do it. Jeremy Bentham famously argued that thinking does not matter, but the capacity to suffer does: “…the question is not, Can they reason? nor, Can they talk? but, Can they suffer?” Other philosophers have argued that it matters that animals experience being subjects of their own life, with desires and goals that make sense to them. While there is a fair bit of disagreement about what this means for our responsibilities to animals and what we may use them for, there is widespread agreement that they are moral patients, something we ought to treat with some kind of care.

This is of course a super-quick condensation of a debate that fills bookshelves. It also leaves out Christine Korsgaard’s interesting Kantian work on animal rights, which as far as I can tell does not need to rely on particular accounts of consciousness and pain but rather on interests. Most people would say that without consciousness or experience there is nobody that is harmed, but I am not entirely certain unconscious systems cannot be regarded as moral patients. There are, for example, people working in environmental ethics who ascribe moral patient-hood and partial rights to species or natural environments.

Big simulations: what are they good for?

Another interesting thing that had to be left out is comparisons of different large scale neural simulations.

(I am a bit uncertain about where the largest model in the Human Brain Project is right now; they are running more realistic models, so they will be smaller in terms of neurons. But they clearly have the ambition to best the others in the long run.)

Of course, one can argue about which approach matters. Spaun is a model of cognition using low-resolution neurons, while the slightly larger (in neurons) simulation from the Lansner lab was just a generic piece of cortex, showing some non-trivial alpha and gamma rhythms, and the even larger ones showed some interesting emergent behavior despite the lack of biological complexity in the neurons. Conversely, Cotterill’s CyberChild that I worry about in the opinion piece had just 21 neurons in each region, but they formed a fairly complex network with many brain regions that in a sense is more meaningful as an organism than the near-disembodied problem-solver Spaun. Meanwhile SpiNNaker is running rings around the others in terms of speed, essentially running in real-time while the others have slowdowns by a factor of a thousand or worse.

The core of the matter is defining what one wants to achieve. Lots of neurons, biological realism, non-trivial emergent behavior, modelling a real neural system, purposeful (even conscious) behavior, useful technology, or scientific understanding? Brain emulation aims at getting purposeful, whole-organism behavior from running a very large, very complete, biologically realistic simulation. Many robotics and AI people are happy without the biological realism and would prefer as small a simulation as possible. Neuroscientists and cognitive scientists care about what they can learn and understand based on the simulations, rather than their completeness. They are each pursuing something useful, but it is very different between the fields. As long as they remember that others are not pursuing the same aim they can get along.

What I hope: more honest uncertainty

What I hope happens is that computational neuroscientists think a bit about the issue of suffering (or moral patient-hood) in their simulations rather than slip into the comfortable “It is just a simulation, it cannot feel anything” mode of thinking by default.

It is easy to tell oneself that simulations do not matter because not only do we know how they work when we make them (giving us the illusion that we actually know everything there is to know about the system – obviously not true since we at least need to run them to see what happens), but institutionally it is easier to regard them as non-problems in terms of workload, conflicts and complexity (let’s not rock the boat at the planning meeting, right?) And once something is in the “does not matter morally” category it becomes painful to move it out of it – many will now be motivated to keep it there.

I would rather have people keep an open mind about these systems. We do not understand experience. We do not understand consciousness. We do not understand brains and organisms as wholes, and there is much we do not understand about the parts either. We do not have agreement on moral patient-hood. Hence the rational thing to do, even when one is pretty committed to a particular view, is to be open to the possibility that it might be wrong. The rational response to this uncertainty is to get more information if possible, to hedge our bets, and to try to avoid actions we might regret in the future.