Ethics of brain emulations, New Scientist edition

I have an opinion piece in New Scientist about the ethics of brain emulation. The content is similar to what I was talking about at IJCNN and in my academic paper (and the comic about it). Here are a few things that did not fit the text:

Ethics that got left out

Due to length constraints I had to cut the discussion about why animals might be moral patients. That made the essay look positively Benthamite in its focus on pain. In fact, I am agnostic on whether experience is necessary for being a moral patient. Here is the cut section:

Why should we care about how real animals are treated? Different philosophers have given different answers. Immanuel Kant did not think animals matter in themselves, but our behaviour towards them matters morally: a human who kicks a dog is cruel and should not do it. Jeremy Bentham famously argued that what matters is not thinking but the capacity to suffer: “…the question is not, Can they reason? nor, Can they talk? but, Can they suffer?”. Other philosophers have argued that it matters that animals experience being subjects of their own life, with desires and goals that make sense to them. While there is a fair bit of disagreement about what this means for our responsibilities to animals and what we may use them for, there is widespread agreement that they are moral patients, something we ought to treat with some kind of care.

This is of course a super-quick condensation of a debate that fills bookshelves. It also leaves out Christine Korsgaard’s interesting Kantian work on animal rights, which as far as I can tell does not need to rely on particular accounts of consciousness and pain but rather on interests. Most people would say that without consciousness or experience there is nobody that is harmed, but I am not entirely certain unconscious systems cannot be regarded as moral patients. There are, for example, people working in environmental ethics who ascribe moral patient-hood and partial rights to species or natural environments.

Big simulations: what are they good for?

Another interesting thing that had to be left out is a comparison of different large-scale neural simulations.

(I am a bit uncertain about where the largest model in the Human Brain Project is right now; they are running more realistic models, so they will be smaller in terms of neurons. But they clearly have the ambition to best the others in the long run.)

Of course, one can argue about which approach matters most. Spaun is a model of cognition using low-resolution neurons, while the slightly larger (in neurons) simulation from the Lansner lab was just a generic piece of cortex, showing some non-trivial alpha and gamma rhythms, and the even larger ones showed some interesting emergent behavior despite the lack of biological complexity in the neurons. Conversely, Cotterill’s CyberChild that I worry about in the opinion piece had just 21 neurons in each region, but they formed a fairly complex network with many brain regions that in a sense is more meaningful as an organism than the near-disembodied problem-solver Spaun. Meanwhile SpiNNaker is running rings around the others in terms of speed, essentially running in real time while the others have slowdowns by a factor of a thousand or worse.

The core of the matter is defining what one wants to achieve. Lots of neurons, biological realism, non-trivial emergent behavior, modelling a real neural system, purposeful (even conscious) behavior, useful technology, or scientific understanding? Brain emulation aims at getting purposeful, whole-organism behavior from running a very large, very complete, biologically realistic simulation. Many robotics and AI people are happy without the biological realism and would prefer as small a simulation as possible. Neuroscientists and cognitive scientists care about what they can learn and understand from the simulations, rather than their completeness. Each field is pursuing something useful, but the aims are very different. As long as they remember that others are not pursuing the same aim, they can get along.

What I hope: more honest uncertainty

What I hope happens is that computational neuroscientists think a bit about the issue of suffering (or moral patient-hood) in their simulations rather than slip into the comfortable “It is just a simulation, it cannot feel anything” mode of thinking by default.

It is easy to tell oneself that simulations do not matter, partly because we know how they work when we make them (giving us the illusion that we know everything there is to know about the system – obviously not true, since we at least need to run them to see what happens), and partly because it is institutionally easier to regard them as non-problems in terms of workload, conflicts and complexity (let’s not rock the boat at the planning meeting, right?). And once something is in the “does not matter morally” category it becomes painful to move it out of it – many will now be motivated to keep it there.

I would rather have people keep an open mind about these systems. We do not understand experience. We do not understand consciousness. We do not understand brains and organisms as wholes, and there is much we do not understand about the parts either. We do not have agreement on moral patient-hood. Hence the rational thing to do, even when one is pretty committed to a particular view, is to be open to the possibility that it might be wrong. The rational response to this uncertainty is to get more information if possible, to hedge our bets, and to try to avoid actions we might regret in the future.

The limits of the in vitro burger

Stepping on toes everywhere in our circles, Ben Levinstein and I have a post at Practical Ethics about the limitations of in vitro meat for reducing animal suffering.

The basic argument is that while factory farming produces a lot of suffering, a post-industrial world would likely contain very few members of the species involved. It would be better if they had better lives and larger populations instead. So, on at least some views of consequentialism, the ethical good of in vitro meat is reduced from a clear win to possibly even a second best to humane farming.

An analogy can be made with horses, whose population has declined precipitously from the pre-tractor, pre-car days. Current horses live (I guess) nicer lives than the more work-oriented horses of 1900, but there are far fewer of them. So even if the current 3 million horses in the US have lives (say) twice as good as those of the 25 million horses in the 1920s, the total value has still declined. However, factory farmed animals may have lives that are not worth living, holding negative value. If we assume the roughly 50 billion chickens in the world all have lives of value -1 each, then replacing them with in vitro meat would make the world 50 billion units better. But this could also be achieved by making their lives one unit better (and why stop there? maybe they could get two units more). Whether it matters how many entities are experiencing depends on your approach, as does whether it is an extra value to have a chicken species around rather than not.
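To make the arithmetic explicit, here is the simple total-welfare sum the comparison relies on (the population and per-life value figures are the post’s illustrative guesses, not data):

```latex
\[
  W \;=\; \sum_i v_i \;=\; N \cdot \bar v
\]
\[
  \text{Horses: } \underbrace{25\,\mathrm{M} \times 1}_{1920\text{s}} = 25\,\mathrm{M}
  \;>\; \underbrace{3\,\mathrm{M} \times 2}_{\text{today}} = 6\,\mathrm{M}
  \qquad \text{(total value down despite better individual lives).}
\]
\[
  \text{Chickens: replacing } 50\,\mathrm{B} \text{ lives at } v=-1 \text{ with none: } \Delta W = 50\,\mathrm{B};
  \quad \text{raising each life to } v=0\text{: also } \Delta W = 50\,\mathrm{B}.
\]
```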

Now, I am not too troubled by this since I think in vitro meat is also very good from a health perspective, a climate perspective, and an existential risk reduction perspective (it is good for space colonization and for survival if sunlight is interrupted). But I think most people come to in vitro meat from an ethical angle. And given just that perspective, we should not be too complacent that the future will be postagricultural: it may take time, and it might actually not increase total welfare as much as we expected.

 

Ethics for neural networks

I am currently attending IJCNN 2015 in Killarney. Yesterday I gave an invited talk “Ethics and large-scale neural networks: when do we need to start caring for neural networks, rather than about them?” The bulk of the talk was based on my previous WBE ethics paper, looking at the reasons we cannot be certain neural networks have experience or not, leading to my view that we hence ought to handle them with the same care as the biological originals they mimic. Yup, it is the one T&F made a lovely comic about – which incidentally gave me an awesome poster at the conference.

When I started, I looked a bit at ethics in neural network science/engineering. As I see it, there are three categories of ethical issues specific to the topic rather than being general professional ethics issues:

  • First, the issues surrounding applications such as privacy, big data, surveillance, killer robots etc.
  • Second, the issue that machine learning allows machines to learn the wrong things.
  • Third, machines as moral agents or patients.

The first category is important, but I leave that for others to discuss. It is not necessarily linked to neural networks per se, anyway. It is about responsibility for technology and what one works on.

Learning wrong

The second category is fun. Learning systems are not fully specified by their creators – which is the whole point! This means that their actual performance is open-ended (within the domain of possible responses). And from that it follows that they can learn things we do not want.

One example is inadvertent discrimination, where the network learns something that would be called racism, sexism or something similar if it happened in a human. Consider a credit rating neural network trained on customer data to estimate the probability of a customer defaulting. It may develop an internal representation that gets activated by the customer’s race and is linked to a negative rating. There is no deliberate programming of racism, just something that emerges from the data – where the race–economy link may well be due to factors in society that are structurally racist.
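As an illustration (my own sketch, not anything from an actual credit scorer), here is how a model that never sees race can still end up penalizing one group via correlated proxy features; all variable names and numbers below are made-up assumptions:

```python
# Minimal sketch of proxy discrimination: the protected attribute is never an input,
# but correlated features carry it into the model's predictions anyway.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (never shown to the model).
group = rng.integers(0, 2, size=n)

# Structural correlations in society: income and postcode correlate with group.
income = rng.normal(50 - 10 * group, 10, size=n)
postcode = rng.normal(group, 0.5, size=n)      # crude stand-in for residential segregation

# Default risk driven by income only.
p_default = 1 / (1 + np.exp(0.1 * (income - 45)))
default = rng.random(n) < p_default

X = np.column_stack([income, postcode])        # race is absent from the features
model = LogisticRegression().fit(X, default)

# Audit: predicted default rates differ by group even though race was never used.
pred = model.predict_proba(X)[:, 1]
for g in (0, 1):
    print(f"group {g}: mean predicted default {pred[group == g].mean():.2f}")
```

Auditing predictions by protected group, even when that attribute is not an input, is one of the simplest checks one can run against this failure mode.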

A similar, real case is advertising algorithms selecting ads online for users in ways that show some ads to some groups but not others – which, in the case of education, may serve to perpetuate disadvantages or prejudices.

A recent example was the Google Photo captioning system, which captioned a black couple as gorillas. Obvious outrage ensued, and a Google representative tweeted that this was “high on my list of bugs you *never* want to see happen ::shudder::”. The misbehaviour was quickly fixed.

Mislabelling somebody or something else might merely have been amusing: calling some people gorillas will often be met with laughter. But it becomes charged and ethically relevant in a culture like the current American one. This is nothing the recognition algorithm knows about: from its perspective mislabelling chairs is as bad as mislabelling humans. Adding a culturally sensitive loss function to the training is nontrivial. Ad hoc corrections against particular cases – like this one – only help once a scandalous mislabelling has already occurred: we will not know what counts as misbehaviour until we see it.

[ Incidentally, this suggests a way for automatic insult generation: use computer vision to find matching categories, and select the one that is closest but has the lowest social status (perhaps detected using sentiment analysis). It will be hilarious for the five seconds until somebody takes serious offence. ]

It has been suggested that the behavior was due to training data being biased towards white people, making the model subtly biased. If there are few examples of a category it might be suppressed or overused as a response. This can be very hard to fix, since many systems and data sources have a patchy spread in social space. But maybe we need to pay more attention to whether data is socially diverse enough. It is worth recognizing that since a machine learning system may be used by very many users once it has been trained, it has the power to project its biased view of the world to many: getting things right in a universal system, rather than in something used by a few, may be far more important than it looks. We may also need enough online learning over time that such systems update their worldview as culture evolves.

Moral actors, proxies and patients

Making machines that act in a moral context is even iffier.

My standard example is of course the autonomous car, which may find itself in situations that would count as moral choices for a human. Here the issue is who sets the decision scheme: presumably they would be held accountable insofar as they could predict the consequences of their code or be identified. I have argued that it is good to have the car try to behave as its “driver” would, but it will still be limited by the sensory and cognitive abilities of the vehicle. Moral proxies are doable, even if they are not moral agents.

The manufacture and behavior of killer robots is of course even more contentious. Even if we think they can be acceptable in principle and have a moral system that we think would be the right one to implement, actually implementing it for certain may prove exceedingly hard. Verification of robotics is hard; verification of morally important actions based on real-world data is even worse. And one cannot shirk the responsibility to do so if one deploys the system.

Note that none of this presupposes real intelligence or truly open-ended action abilities. They just make an already hard problem tougher. Machines that can only act within a well-defined set of constraints can be further constrained not to go into parts of state- or action-space we know are bad (but as discussed above, even captioning images involves a big enough space that we will find surprising bad actions).

As I mentioned above, the bulk of the talk was my argument that whole brain emulation attempts can produce systems we have good reasons to be careful with: we do not know if they are moral agents, but they are intentionally architecturally and behaviourally close to moral agents.

A new aspect I got the chance to discuss is the problem about non-emulation neural networks. When do we need to consider them? Brian Tomasik has written a paper about whether we should regard reinforcement learning agents as moral patients (see also this supplement). His conclusion is that these programs mimic core motivation/emotion cognitive systems that almost certainly matter for real moral patients’ patient-hood (an organism without a reward system or learning would presumably lose much or all of its patient-hood), and there is a nonzero chance that they are fully or partially sentient.
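For readers who have not met these systems, here is a minimal sketch of the kind of agent being discussed – a toy tabular Q-learner whose behaviour is entirely shaped by a scalar reward signal (my illustration, not code from Tomasik’s paper; the environment is invented):

```python
# A tiny reinforcement learning agent: the "reward" below is the core
# motivation-like signal the moral-patienthood argument is about.
import random

n_states, n_actions = 5, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def step(state, action):
    """Toy environment: moving right (action 1) towards the last state gives reward 1."""
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

state = 0
for _ in range(10_000):
    # Epsilon-greedy action selection.
    action = random.randrange(n_actions) if random.random() < epsilon \
             else max(range(n_actions), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    # Temporal-difference update: behaviour is gradually shaped by reward,
    # loosely analogous to a reinforcement/motivation system in animals.
    Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
    state = next_state

print(Q)
```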

But things get harder for other architectures. A deep learning network with just a feedforward architecture is presumably unable to be conscious, since many theories of consciousness presuppose some forms of feedback – and that is not possible in that architecture. But at the conference there have been plenty of recurrent networks that have all sorts of feedback. Whether they can have experiential states appears tricky to answer. In some cases we may argue they are too small to matter, but again we do not know if level of consciousness (or moral considerability) necessarily has to follow brain size.

They also inhabit a potentially alien world where their representations could be utterly unrelated to what we humans understand or can express. One might say, paraphrasing Wittgenstein, that if a neural network could speak we would not understand it. However, there might be ways of making their internal representations less opaque. Methods such as inceptionism, deep visualization, or t-SNE can actually help discern some of what is going on inside. If we were to discover a set of concepts similar to human or animal concepts, we may have reason to tread a bit more carefully – especially if there were concepts linked to some of them in the same way “suffering concepts” may be linked to other concepts. This looks like a very relevant research area, both for debugging our learning systems and for mapping out the structures of animal, human and machine minds.
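As a concrete (and deliberately simple) example of the kind of inspection I have in mind, one can embed a network’s hidden-layer activations with t-SNE and look for clusters; the dataset and network below are stand-ins of my own choosing, not anything from the talk:

```python
# Peek at an internal representation: project hidden-layer activations to 2D
# with t-SNE and see whether they fall into recognisable concept clusters.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
net = MLPClassifier(hidden_layer_sizes=(64,), activation="relu",
                    max_iter=500, random_state=0).fit(X, y)

# Recompute the hidden-layer activations by hand from the fitted weights.
hidden = np.maximum(0, X @ net.coefs_[0] + net.intercepts_[0])

# Points that land close together are treated as "the same kind of thing"
# by the network; colouring by y shows whether its concepts match ours.
embedding = TSNE(n_components=2, random_state=0).fit_transform(hidden)
print(embedding.shape)   # (n_samples, 2); plot and colour by y to inspect clusters
```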

In the end, if we want safe and beneficial smart systems, we had better start figuring out how to understand them better.

Harming virtual bodies

I was recently interviewed by Anna Denejkina for Vertigo, and references to the article seem to be circulating. Given the hot-button topic – transhumanism and virtual rape – I thought it might be relevant to bring out what I said in the email interview.

(Slightly modified for clarity, grammar and links)

> How are bioethicists and philosophers coping with the ethical issues which may arise from transhumanist hacking, and what would be an outcome of hacking into the likes of full body haptic suit, a smart sex toy, e-spot implant, i.e.: would this be considered act of kidnapping, or rape, or another crime?

There is some philosophy of virtual reality and augmented reality, and a lot more about the ethics of cyberspace. The classic essay is this 1998 one, dealing with a text-based rape in the mid-90s.

My personal view is that our bodies are the interfaces between our minds and the world. The evil of rape is that it involves violating our ability to interact with the world in a sensual manner: it involves both coercion of bodies and inflicting a mental violation. So from this perspective it does not matter much if the rape happens to a biological body, or a virtual body connected via a haptic suit, or some brain implant. There might of course be lesser violations if the coercion is limited (you can easily log out) or the violation milder (a hacked sex toy might infringe on privacy and one’s sexual integrity, but it is not able to coerce): the key issue is that somebody is violating the body–mind interface system, and we are especially vulnerable when this involves our sexual, emotional and social sides.

Widespread use of virtual sex will no doubt produce many tricky ethical situations. (What about recording the activities and replaying them without the partner’s knowledge? What if the partner is not who I think it is? What about mapping the sexual encounter onto virtual or robot bodies that look like children or animals? What about virtual sexual encounters that break the laws in one country but not another?)

Much of this will sort itself out like with any new technology: we develop norms for it, sometimes after much debate and anguish. I suspect we will become much more tolerant of many things that are currently weird and taboo. The issue ethicists may worry about is whether we would also become blasé about things that should not be accepted. I am optimistic about it: I think that people actually do react to things that are true violations.

> If such a violation was to occur, what can be done to ensure that today’s society is ready to treat this as a real criminal issue?

Criminal law tends to react slowly to new technology, and usually tries to map new crimes onto old ones (if I steal your World of Warcraft equipment I might be committing fraud rather than theft, although different jurisdictions have very different views – some even treat this as gambling debts). This is especially true for common law systems like the US and UK. In civil law systems like most of Europe laws tend to get passed when enough people convince politicians that There Ought To Be a Law Against It (sometimes unwisely).

So to sum up, look at whether people involuntarily suffer real psychological anguish, loss of reputation, or loss of control over important parts of their exoselves due to the actions of other people. If they do, then at least something immoral has happened. Whether laws, better software security, social norms or something else (virtual self-defence? built-in safewords?) is the best remedy may depend on the technology and culture.

I think there is an interesting issue in what role the body plays here. As I said, the body is an interface between our minds and the world around us. It is also a nontrivial thing: it has properties and states of its own, and these affect how we function. Even if one takes a nearly cybergnostic view that we are merely minds interfacing with the world, rather than a richer embodiment view, the body plays an important role. If I have a large, small, hard or vulnerable body, it will affect how I can act in the world – and this will undoubtedly affect how I think of myself. Our representations of ourselves are strongly tied to our bodies and the relationship between them and our environment. Our somatosensory cortex maps itself to how touch distributes itself on our skin, and our parietal cortex not only represents the body–environment geometry but seems involved in our actual sense of self.

This means that hacking the body is more serious than hacking other kinds of software or possessions. Currently it is our only way of existing in the world. Even in an advanced VR/transhuman society where people can switch bodies simply and freely, infringing on bodies has bigger repercussions than changing other software outside the mind – especially if it is subtle. The violations discussed in the article are crude, overt ones. But subtle changes to ourselves may fly under the radar of outrage, yet do harm.

Most people are no doubt more interested in the titillating combination of sex and tech – there is a 90’s cybersex vibe coming off this discussion, isn’t there? The promise of new technology to give us new things to be outraged or dream about. But the philosophical core is about the relation between the self, the other, and what actually constitutes harm – very abstract, and not truly amenable to headlines.

 

Baby interrupted

Francesca Minerva and I have a new paper out: Cryopreservation of Embryos and Fetuses as a Future Option for Family Planning Purposes (Journal of Evolution and Technology – Vol. 25 Issue 1 – April 2015 – pgs 17-30).

Basically, we analyse the ethics of cryopreserving fetuses, especially as an alternative to abortion. While we do not yet have any technological means to bring a separated (let alone cryopreserved) fetus to term, it is not inconceivable that advances in ectogenesis (artificial wombs) or biotechnological production of artificial placentas allowing reimplantation could be achieved. And a cryopreserved fetus would have all the time in the world, just like an adult cryonics patient.

It is interesting to see how the standard ethical arguments against abortion fare when dealing with cryopreservation. There is no killing, personhood is not affected, there is no loss of value of the future – just a long delay. One might be concerned that fetuses will not be reimplanted but just left in limbo forever, but clearly this is a better state than being irreversibly aborted: cryopreservation can (eventually) be reversed. I think our paper shows that (regardless of what one thinks of cryonics) irreversibility is the key ethical issue in abortion.

In the end, it will likely take a long time before this is a viable option. But there seem to be good reasons to consider cryopreservation and reimplantation of fetuses: animal husbandry, space colonisation, various medical treatments (consider “interrupting” an ongoing pregnancy because the mother needs cytostatic treatment), and now this family planning reason.

Crispy embryos

Researchers at Sun Yat-sen University in Guangzhou have edited the germline genome of human embryos (paper). They used the ever more popular CRISPR/Cas9 method to try to modify the gene involved in beta-thalassaemia in non-viable leftover embryos from a fertility clinic.

As usual there is a fair bit of handwringing, especially since there was a recent call for a moratorium on this kind of thing from one set of researchers, and a more liberal (yet cautious) response from another set. As noted by ethicists, many of the ethical concerns are actually somewhat confused.

That germline engineering can have unpredictable consequences for future generations is just as true of normal reproduction. More strongly, somebody making the case that (say) race mixing should be hindered because of unknown future effects would be condemned as a racist: we have overarching reasons to allow people to live and procreate freely that morally overrule worries about their genetic endowment – even if there actually were genetic issues (as far as I know all branches of the human family are equally interfertile, but this might just be a historical contingency). For a possible future effect to matter morally it needs to be pretty serious and we need to have some real reason to think it is more likely to happen because of the actions we take now. A vague unease or a mere possibility is not enough.

However, the paper actually gives a pretty good argument for why we should not try this method in humans. They found that the efficiency of the repair was about 50%, but more worryingly that there were off-target mutations and that a similar gene was accidentally modified. These are good reasons not to try it. Not unexpected, but very helpful in that we can actually make informed decisions both about whether to use it (clearly not until the problems have been fixed) and about what needs to be investigated (how can it be done well? why does it work worse here than advertised?).

The interesting thing is that the paper’s fairly negative results, which should reduce interest in human germline changes, are nevertheless denounced as unethical. It is hard to make this claim stick, unless one buys into the view that germline changes of human embryos are intrinsically bad. The embryos could not develop into persons and would have been discarded by the fertility clinic, so there was no possible future person being harmed (if one thinks fertilized but non-viable embryos deserve moral protection one has other big problems). The main fear seems to be that if the technology is demonstrated many others will follow, but an early negative result would seem to weaken this slippery slope argument.

I think the real reason people think there is an ethical problem is the association of germline engineering with “designer babies”, and the conditioning that designer babies are wrong. But they can’t be wrong for no reason: there has to be an ethics argument for their badness. There is no shortage of such arguments in the literature, ranging from ideas of the natural order, human dignity, accepting the given, and the importance of an open-ended life to issues of equality, just to mention a few. But none of these are widely accepted as slam-dunk arguments that conclusively show designer babies are wrong: each of them also faces vigorous criticism. One can believe one or more of them to be true, but it would be rather premature to claim that settles the debate. And even then, most of these designer baby arguments are irrelevant to the case at hand.

All in all, it was a useful result that probably will reduce both risky and pointless research and focus on what matters. I think that makes it quite ethical.

Do we want the enhanced military?

Some notes on Practical Ethics inspired by Jonathan D. Moreno’s excellent recent talk.

My basic argument is that enhancing the capabilities of military forces (or any other form of state power) is risky if the probability that they can be misused (or the amount of expected/maximal damage in such cases) does not decrease more strongly. Reducing that risk would likely correspond to some form of moral enhancement, but even a morally enhanced army may act badly if the values guiding it, or the state commanding it, are bad: moral enhancement as we normally think about it is all about coordination, the ability to act according to given values, and the ability to reflect on those values. But since moral enhancement itself is agnostic about the right values, those values will be provided by the state or society. So we need to ensure that states and societies have good values, and that they are able to make their forces implement them. A malicious or stupid head commanding a genius army is truly dangerous. As are tails wagging dogs, or keeping the head unaware (in the name of national security) of what is going on.

In other news: an eclipse in a teacup.

Consequentialist world improvement

I just rediscovered an old response to the Extropians List that might be worth reposting. Slight edits.

Communal values

On 06/10/2012 16:17, Tomaz Kristan wrote:

>> If you want to reduce death tolls, focus on self-driving cars.
> Instead of answering terror attacks, just mend you cars?

Sounds eminently sensible. Charlie makes a good point: if we want to make the world better, it might be worth prioritizing fixing the stuff that makes it worse in proportion to the damage it actually does. Toby Ord and I have been chatting quite a bit about this.

Death

In terms of death (~57 million people per year), the big causes are cardiovascular disease (29%), infectious and parasitic diseases (23%) and cancer (12%). At least the first and last are to a sizeable degree caused or worsened by ageing, which is a massive hidden problem. It has been argued that malnutrition is similarly indirectly involved in 15-60% of the total number of deaths: often not the direct cause, but weakening people so they become vulnerable to other risks. Anything that makes a dent in these saves lives on a scale that is simply staggering; any threat to our ability to treat them (like resistance to antibiotics or anthelmintics) is correspondingly bad.

Unintentional injuries are responsible for 6% of deaths, just behind respiratory diseases at 6.5%. Road traffic alone is responsible for 2% of all deaths: making cars even 1% safer would save 11,400 lives per year. If everybody reached Swedish safety levels (2.9 deaths per 100,000 people per year) it would save around 460,000 lives per year – one Antwerp per year.

Now, intentional injuries are responsible for 2.8% of all deaths. Of these, suicide accounts for 1.53% of the total death rate, violence for 0.98% and war for 0.3%. Yes, all wars combined killed about the same number of people as meningitis, and slightly more than died of syphilis. In terms of absolute numbers we might be much better off improving antibiotic treatments and suicide hotlines than trying to stop the wars. And terrorism is so small that it doesn’t really show up: even the highest estimates put the median fatalities per year in the low thousands.
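As a back-of-envelope check of these comparisons (a quick sketch of my own, using only the ~57 million total and the percentages quoted above):

```python
# Back-of-envelope arithmetic behind the comparisons in this post.
total_deaths = 57e6   # ~deaths per year worldwide, as quoted above

shares = {
    "cardiovascular disease": 0.29,
    "infectious & parasitic": 0.23,
    "cancer":                 0.12,
    "respiratory diseases":   0.065,
    "unintentional injuries": 0.06,
    "road traffic":           0.02,
    "suicide":                0.0153,
    "violence":               0.0098,
    "war":                    0.003,
}

for cause, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{cause:24s} ~{share * total_deaths:>12,.0f} deaths/year")

# Making road traffic just 1% safer:
print(f"1% of road deaths: ~{0.01 * shares['road traffic'] * total_deaths:,.0f} lives/year")
```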

So in terms of deaths, fixing (or even denting) ageing, malnutrition, infectious diseases and lifestyle causes is a far more important activity than winning wars or stopping terrorists. Hypertension, tobacco, STDs, alcohol, indoor air pollution and sanitation are all far, far more pressing in terms of saving lives. If we had a choice between ending all wars in the world and fixing indoor air pollution the rational choice would be to fix those smoky stoves: they kill nine times more people.

Existential risk

There is of course more to improving the world than just saving lives. First there is the issue of outbreak distributions: most wars are local and small affairs, but some become global. Same thing for pandemic respiratory disease. We actually do need to worry about them more than their median sizes suggest (and again the influenza totally dominates all wars). Incidentally, the exponent for the power law distribution of terrorism is safely strongly negative at -2.5, so it is less of a problem than ordinary wars with exponent -1.41 (where the expectation diverges: wait long enough and you get a war larger than any stated size).
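For the record, the divergence claim follows from a standard property of power laws; this assumes the quoted exponents refer to the probability density of event sizes:

```latex
\[
  p(x) \propto x^{-\alpha} \ \text{on } [x_{\min},\infty)
  \quad\Rightarrow\quad
  \mathbb{E}[X] = \int_{x_{\min}}^{\infty} x\,p(x)\,dx < \infty
  \iff \alpha > 2 .
\]
\[
  \alpha_{\text{war}} \approx 1.41 < 2 \;\Rightarrow\; \text{mean diverges};
  \qquad
  \alpha_{\text{terrorism}} \approx 2.5 > 2 \;\Rightarrow\; \text{mean finite}
  \ (\text{though the variance still diverges for } \alpha \le 3).
\]
```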

There are reasons to think that existential risk should be weighed extremely strongly: even a tiny risk that we lose all our future is much worse than many standard risks (since the future could be inconceivably grand and involve very large numbers of people). This has convinced me that fixing the safety of governments needs to be boosted a lot: democides have been larger killers than wars in the 20th century and both seem to carry most of the tail risk, especially when you start thinking about nukes. It is likely a far more pressing problem than climate change, and quite possibly (depending on how you analyse xrisk weighting) beats disease.

How to analyse xrisk, especially future risks, in this kind of framework is a big part of our ongoing research at FHI.

Happiness

If instead of lives lost we look at the impact on human stress and happiness, wars (and violence in general) look worse: they traumatize people, and terrorism by its nature is all about causing terror. But again, they happen to a small set of people. So in terms of happiness it might be more important to make the bulk of people happier. Life satisfaction correlates at 0.7 with health and 0.6 with wealth and basic education. Boost those a bit, and it outweighs the horrors of war.

In fact, when looking at the value of better lives, it looks like an enhancement in life quality might be worth much more than fixing a lot of the deaths discussed above: make everybody’s life 1% better, and it corresponds to more quality-adjusted life years than are lost to death every year! So improving our wellbeing might actually matter far, far more than many diseases. Maybe we ought to spend more resources on applied hedonism research than on trying to cure Alzheimer’s.

Morality

The real reason people focus so much on terrorism is of course the moral outrage. Somebody is responsible, people are angry and want revenge. Same thing for wars. And the horror tends to strike certain people: my kind of global calculation might make sense on the global scale, but most of us think that the people suffering the worst have a higher priority. While it might make more utilitarian sense to make everybody 1% happier rather than stop the carnage in Syria, I suspect most people would say morality is on the other side (exactly why is a matter of some interesting ethical debate, of course). Deontologists might think we have moral duties we must implement no matter what the cost. I disagree: burning villages in order to save them doesn’t make sense. It makes sense to risk lives in order to save lives, both directly and indirectly (by reducing future conflicts).

But this requires proportionality: going to war in order to avenge X deaths by causing 10X deaths is not going to be sustainable or moral. The total moral weight of one unjust death might be high, but it is finite. Given the typical civilian casualty ratio of 10:1, any war will almost certainly produce far more collateral unjust deaths than the justified deaths of enemy soldiers: avenging X deaths by killing exactly X enemies will still lead to around 10X unjust deaths. So achieving proportionality is very, very hard (and the Just War Doctrine is broken anyway, according to the war ethicists I talk to). This means that if you want to leave the straightforward utilitarian approach and add some moral/outrage weighting, you risk making the problem far worse by your own account. In many cases it might indeed be the moral thing to turn the other cheek… ideally armoured and barbed with suitable sanctions.

Conclusion

To sum up, this approach of just looking at consequences and ignoring who is who is, of course, a bit too cold for most people. Most people have Tetlockian sacred values and get very riled up if somebody thinks about cost-effectiveness in fighting terrorism (typical US bugaboo) or development (typical warmhearted donor bugaboo) or healthcare (typical European bugaboo). But if we did, we would make the world a far better place.

Bring on the robot cars and happiness pills!

Born this way

On Practical Ethics I blog about the ethics of attempts to genetically select sexual preferences.

Basically, selection can only tilt probabilities, and people develop preferences in individual and complex ways. I am not convinced selection is inherently bad, but it can embody bad societal norms. However, those norms are better dealt with on a societal/cultural level than by trying to regulate technology. This essay is very much a tie-in with our brave new love paper.

My pet problem: Kim

Sometimes a pet selects you – or perhaps your home – and moves in. In my case, I have been adopted by a small tortoiseshell butterfly (Aglais urticae).

When it arrived last week I did the normal thing and opened the window, trying to shoo the little thing out. It refused. I tried harder. I caught it on my hand and tried to wave it out: I have never experienced a butterfly holding on for dear life like that. It very clearly did not want to fly off into the rainy cold of British autumn. So I relented and let it stay.

I call it Kim, since I cannot tell whether it is a male or female. It seems to only have four legs. Yes, I know this is probably the gayest possible pet.

Over the past days I have occasionally opened the window when it has been fluttering against it, but it has always quickly settled down on the windowsill when it felt the open air. It is likely planning to hibernate in my flat.

This poses an interesting ethical problem: I know that if it hibernates at my home it will likely not survive, since the environment is far too warm and dry for it. Yet it looks like it is making a deliberate decision to stay. In the case of a human I would have tried to inform them of the problems with their choice, but then would generally have accepted their decision under informed consent (well, maybe not letting them live in my home, but you get the idea, dear reader). But butterflies have just a few hundred thousand neurons: they do not ‘know’ many things. Their behaviour is largely preprogrammed instinct with little flexibility. So there is not any choice to be respected, just behaviour. I am a superintelligence relative to Kim, and I know what would be best for it. I ought to overcome my anthropomorphising of its behaviour and release it into the wild.

Yet if I buy this argument, what value does Kim have? Kim’s “life projects” are simple programs that do not have much freedom (beyond some chaotic behaviour) or complexity. So what does it matter whether they will fail? It might matter in regards to me: I might show the virtue of compassion by making the gesture of saving it – except that it is not clear that it matters whether I do it by letting it out or by feeding it orange juice. I might be benefiting in an abstract way from the aesthetic or intellectual pleasure of this tricky encounter – indeed, by blogging about it I am turning a simple butterfly life into something far beyond itself.

Another approach is of course to consider pain or other forms of suffering. Maybe insect welfare does matter (I sincerely hope it does not, since it would turn Earth into a hell-world). But again either choice is problematic: outside, Kim would likely become bird- or spider-food, or die from exposure. Inside, it will likely die from a failed hibernation. In terms of suffering both seem about equally bad. If I were more pessimistic I might consider that killing Kim painlessly might be the right course of action. But while I do think we should minimize unnecessary suffering I suspect – given the structure of the insect nervous system – that there is not much integrated experience going on there. Pain, quite likely, but not much phenomenology.

So where does this leave me? I cannot defend any particular line of action. So I just fall back on a behavioural program myself, the pet program – adopting individuals of other species, no doubt based on overly generalized child-rearing routines (which historically turned out to be a great boon to our species through domestication). I will give it fruit juice until it hibernates, and hope for the best.