Do we want the enhanced military?

Some notes on Practical Ethics inspired by Jonathan D. Moreno’s excellent recent talk.

My basic argument is that enhancing the capabilities of military forces (or any other form of state power) is risky if the probability that they will be misused (or the expected/maximal damage in such cases) does not decrease correspondingly. That would likely require some form of moral enhancement, but even a morally enhanced army may act badly because the values guiding it, or the state commanding it, are bad: moral enhancement as we normally think about it is all about coordination, the ability to act according to given values and to reflect on those values. Since moral enhancement itself is agnostic about the right values, those values will be provided by the state or society. So we need to ensure that states and societies have good values, and that they are able to make their forces implement them. A malicious or stupid head commanding a genius army is truly dangerous. So is the tail wagging the dog, or keeping the head unaware (in the name of national security) of what is going on.

In other news: an eclipse in a teacup.

Consequentialist world improvement

I just rediscovered an old response to the Extropians List that might be worth reposting. Slight edits.

Communal values

On 06/10/2012 16:17, Tomaz Kristan wrote:

>> If you want to reduce death tolls, focus on self-driving cars.
> Instead of answering terror attacks, just mend you cars?

Sounds eminently sensible. Charlie makes a good point: if we want to make the world better, it might be worth prioritizing fixing the things that make it worse according to the damage they actually do. Toby Ord and I have been chatting quite a bit about this.

Death

In terms of death (~57 million people per year), the big causes are cardiovascular disease (29%), infectious and parasitic diseases (23%) and cancer (12%). At least the first and last are to a sizeable degree caused or worsened by ageing, which is a massive hidden problem. It has been argued that malnutrition is similarly indirectly involved in 15-60% of the total number of deaths: often not the direct cause, but weakening people so they become vulnerable to other risks. Anything that makes a dent in these saves lives on a scale that is simply staggering; any threat to our ability to treat them (like resistance to antibiotics or anthelmintics) is correspondingly bad.

Unintentional injuries are responsible for 6% of deaths, just behind respiratory diseases at 6.5%. Road traffic alone is responsible for 2% of all deaths: even 1% safer cars would save 11,400 lives per year. If everybody reached Swedish safety levels (2.9 deaths per 100,000 people per year) it would save around 460,000 lives per year – one Antwerp per year.
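For the curious, here is the back-of-the-envelope arithmetic behind those figures as a minimal Python sketch – all inputs are the rough estimates quoted above, used purely for illustration:

```python
# Back-of-the-envelope check of the road traffic figures (rough 2012-era
# estimates from the text; nothing here is an independent data source).
total_deaths_per_year = 57e6                      # ~57 million deaths/year worldwide
road_share = 0.02                                 # road traffic ~2% of all deaths
road_deaths = total_deaths_per_year * road_share  # ~1.14 million/year

print(f"1% safer cars: ~{0.01 * road_deaths:,.0f} lives saved/year")  # ~11,400

# 'One Antwerp per year': the ~460,000 lives in the Swedish-safety scenario is
# roughly the population of Antwerp (about half a million people).
```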

Now, intentional injuries are responsible for 2.8% of all deaths. Of these, suicide accounts for 1.53% of the total death rate, violence 0.98% and war 0.3%. Yes, all wars together killed about the same number of people as meningitis, and slightly more than syphilis. In terms of absolute numbers we might be much better off improving antibiotic treatments and suicide hotlines than trying to stop the wars. And terrorism is so small that it doesn’t really show up: even the highest estimates put the median fatalities per year in the low thousands.

So in terms of deaths, fixing (or even denting) ageing, malnutrition, infectious diseases and lifestyle causes is a far more important activity than winning wars or stopping terrorists. Hypertension, tobacco, STDs, alcohol, indoor air pollution and poor sanitation are all far, far more pressing in terms of saving lives. If we had a choice between ending all wars in the world and fixing indoor air pollution, the rational choice would be to fix those smoky stoves: they kill nine times more people.

Existential risk

There is of course more to improving the world than just saving lives. First there is the issue of outbreak distributions: most wars are local and small affairs, but some become global. The same is true for pandemic respiratory disease. We actually do need to worry about them more than their median sizes suggest (and again, influenza totally dominates all wars). Incidentally, the exponent of the power-law distribution of terrorism is a safely steep -2.5, so it is less of a problem than ordinary wars with exponent -1.41 (where the expectation diverges: wait long enough and you will get a war larger than any stated size).
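To see why the exponent matters, here is a small simulation sketch. It assumes a pure power-law tail (a simplification on my part); the exponents are the ones quoted above:

```python
# Sample event sizes with p(x) ~ x^-alpha for x >= 1 and look at the sample mean.
# For alpha > 2 the mean settles down; for alpha <= 2 it keeps growing with n.
import numpy as np

rng = np.random.default_rng(0)

def sample_mean(alpha, n):
    """Mean of n draws from a power law with pdf exponent -alpha (alpha > 1)."""
    shape = alpha - 1                       # numpy's Pareto shape parameter
    return (rng.pareto(shape, n) + 1.0).mean()

for label, alpha in [("terrorism", 2.5), ("wars", 1.41)]:
    print(label, [round(sample_mean(alpha, n), 1) for n in (10**4, 10**5, 10**6)])
```

The terrorism means hover around a fixed value, while the war means just keep climbing as the sample grows – the simulated analogue of a diverging expectation.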

There are reasons to think that existential risk should be weighed extremely strongly: even a tiny risk that we lose our entire future is much worse than many standard risks (since the future could be inconceivably grand and involve very large numbers of people). This has convinced me that work on making governments safer needs to be boosted a lot: democides were bigger killers than wars in the 20th century, and both seem to carry most of the tail risk, especially once you start thinking about nukes. It is likely a far more pressing problem than climate change, and quite possibly (depending on how you analyse xrisk weighting) beats disease.

How to analyse xrisk, especially future risks, in this kind of framework is a big part of our ongoing research at FHI.

Happiness

If instead of lives lost we look at the impact on human stress and happiness, wars (and violence in general) look worse: they traumatize people, and terrorism by its nature is all about causing terror. But again, they happen to a small set of people. So in terms of happiness it might be more important to make the bulk of people happier. Life satisfaction correlates at about 0.7 with health and 0.6 with wealth and basic education. Boost those a bit, and it outweighs the horrors of war.

In fact, when looking at the value of better lives, it looks like an enhancement in life quality might be worth much more than fixing a lot of the deaths discussed above: make everybody’s life 1% better, and it corresponds to more quality-adjusted life years than are lost to death every year! So improving our wellbeing might actually matter far, far more than many diseases. Maybe we ought to spend more resources on applied hedonism research than on trying to cure Alzheimer’s.
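One way to read that arithmetic is a simple per-year accounting (the framing is my assumption; the population and death figures are the ones used above):

```python
# Toy per-year QALY accounting; all numbers are rough and purely illustrative.
world_population = 7e9        # assumed ~7 billion people
deaths_per_year = 57e6        # ~57 million deaths/year

qalys_gained = 0.01 * world_population   # everyone's year made 1% better
life_years_lost = deaths_per_year * 1.0  # at most ~1 life-year lost per death within that year

print(f"QALYs gained from a 1% boost: ~{qalys_gained / 1e6:.0f} million")        # ~70 million
print(f"Life-years lost to death that year: <~{life_years_lost / 1e6:.0f} million")  # ~57 million
```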

Morality

The real reason people focus so much on terrorism is of course the moral outrage. Somebody is responsible, people are angry and want revenge. The same goes for wars. And the horror tends to strike certain people: my kind of global calculation might make sense on the global scale, but most of us think that the people suffering the worst have a higher priority. While it might make more utilitarian sense to make everybody 1% happier rather than stop the carnage in Syria, I suspect most people would say morality is on the other side (exactly why is a matter of some interesting ethical debate, of course). Deontologists might think we have moral duties we must fulfil no matter what the cost. I disagree: burning villages in order to save them doesn’t make sense. It does make sense to risk lives in order to save lives, both directly and indirectly (by reducing future conflicts).

But this requires proportionality: going to war in order to avenge X deaths by causing 10X deaths is not going to be sustainable or moral. The total moral weight of one unjust death might be high, but it is finite. Given the typical civilian casualty ratio of 10:1, any war will almost certainly produce far more collateral unjust deaths than justified deaths of enemy soldiers: avenging X deaths by killing exactly X enemies will still lead to around 10X unjust deaths. So achieving proportionality is very, very hard (and the Just War Doctrine is broken anyway, according to the war ethicists I talk to). This means that if you want to leave the straightforward utilitarian approach and add some moral/outrage weighting, you risk making the problem far worse by your own account. In many cases it might indeed be the moral thing to turn the other cheek… ideally armoured and barbed with suitable sanctions.
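A quick illustration of the proportionality problem with purely hypothetical numbers (only the 10:1 ratio comes from the text):

```python
# Hypothetical example: avenging X deaths by killing exactly X enemy combatants.
x = 3_000                                         # deaths to be avenged (made-up figure)
enemy_combatants_killed = x
civilian_deaths = 10 * enemy_combatants_killed    # ~10:1 civilian-to-combatant ratio

print(f"Collateral unjust deaths: ~{civilian_deaths:,}")  # ~30,000, i.e. ~10X the original harm
```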

Conclusion

To sum up, this approach of just looking at consequences and ignoring who is who is of course a bit too cold for most people. Most people have Tetlockian sacred values and get very riled up if somebody thinks about cost-effectiveness in terrorism fighting (typical US bugaboo) or development (typical warmhearted donor bugaboo) or healthcare (typical European bugaboo). But if we did, we would make the world a far better place.

Bring on the robot cars and happiness pills!

Born this way

On Practical Ethics I blog about the ethics of attempts to genetically select sexual preferences.

Basically, genetic selection can only tilt probabilities, and people develop preferences in individual and complex ways. I am not convinced selection is inherently bad, but it can embody bad societal norms. However, those norms are better dealt with on a societal/cultural level than by trying to regulate technology. This essay is very much a tie-in with our brave new love paper.

My pet problem: Kim

Sometimes a pet selects you – or perhaps your home – and moves in. In my case, I have been adopted by a small tortoiseshell butterfly (Aglais urticae).

When it arrived last week I did the normal thing and opened the window, trying to shoo the little thing out. It refused. I tried harder. I caught it on my hand and tried to wave it out: I have never experienced a butterfly holding on for dear life like that. It very clearly did not want to fly off into the rainy cold of British autumn. So I relented and let it stay.

I call it Kim, since I cannot tell whether it is a male or female. It seems to only have four legs. Yes, I know this is probably the gayest possible pet.

Over the past days I have occasionally opened the window when it has been fluttering against it, but it has always quickly settled down on the windowsill when it felt the open air. It is likely planning to hibernate in my flat.

This poses an interesting ethical problem: I know that if it hibernates in my home it will likely not survive, since the environment is far too warm and dry for it. Yet it looks like it is making a deliberate decision to stay. In the case of a human I would have tried to inform them of the problems with their choice, but then would generally have accepted their decision under informed consent (well, maybe not letting them live in my home, but you get the idea, dear reader). But butterflies have just a few hundred thousand neurons: they do not ‘know’ many things. Their behaviour is largely preprogrammed instinct with little flexibility. So there is not any choice to be respected, just behaviour. I am a superintelligence relative to Kim, and I know what would be best for it. I ought to overcome my anthropomorphising of its behaviour and release it into the wild.

Yet if I buy this argument, what value does Kim have? Kim’s “life projects” are simple programs that do not have much freedom (beyond some chaotic behaviour) or complexity. So what does it matter whether they will fail? It might matter in regards to me: I might show the virtue of compassion by making the gesture of saving it – except that it is not clear that it matters whether I do it by letting it out or feeding it orange juice. I might be benefiting in an abstract way from the aesthetic or intellectual pleasure from this tricky encounter – indeed, by blogging about it I am turning a simple butterfly life into something far beyond itself.

Another approach is of course to consider pain or other forms of suffering. Maybe insect welfare does matter (I sincerely hope it does not, since that would turn Earth into a hell-world). But again, either choice is problematic: outside, Kim would likely become bird- or spider-food, or die from exposure. Inside, it will likely die from failed hibernation. In terms of suffering both options seem about equally bad. If I were more pessimistic I might consider that killing Kim painlessly might be the right course of action. But while I do think we should minimize unnecessary suffering, I suspect – given the structure of the insect nervous system – that there is not much integrated experience going on there. Pain, quite likely, but not much phenomenology.

So where does this leave me? I cannot defend any particular line of action. So I just fall back on a behavioural program myself, the pet program – adopting individuals of other species, no doubt based on overly generalized child-rearing routines (which historically turned out to be a great boon to our species through domestication). I will give it fruit juice until it hibernates, and hope for the best.

Objectively evil technology

George Dvorsky has a post on io9: 10 Horrifying Technologies That Should Never Be Allowed To Exist. It is a nice clickbaity overview of some very bad technologies:

  1. Weaponized nanotechnology (he mainly mentions ecophagy, but one can easily come up with other nasties like ‘smart poisons’ that creep up on you or gremlin devices that prevent technology – or organisms – from functioning)
  2. Conscious machines (making devices that can suffer is not a good idea)
  3. Artificial superintelligence (modulo friendliness)
  4. Time travel
  5. Mind reading devices (because of totalitarian uses)
  6. Brain hacking devices
  7. Autonomous robots programmed to kill humans
  8. Weaponized pathogens
  9. Virtual prisons and punishment
  10. Hell engineering (that is, effective production of super-adverse experiences; consider Iain M. Banks’ Surface Detail, or the various strange/silly/terrifying issues linked to Roko’s basilisk)

Some of these technologies already exist, like weaponized pathogens. Others might be impossible, like time travel. Some are embryonic, like mind reading (we can decode some brain states, but it requires spending a while in a big scanner while the input-output mapping is learned).

A commenter on the post asked “Who will have the responsibility of classifying and preventing “objectively evil” technology?” The answer is of course People Who Have Ph.D.s in Philosophy.

Unfortunately I haven’t got one, but that will not stop me.

Existential risk as evil?

I wonder what unifies this list. Let’s see: 1, 3, 7, and 8 are all about danger: either the risk of a lot of death, or the risk of extinction. 2, 9 and 10 are all about disvalue: the creation of very negative states of experience. 5 and 6 are threats to autonomy.

4, time travel, is the odd one out: George suggests it is dangerous, but this is based on fictional examples and on the claim that contact between different civilizations has never ended well (which is arguable: Japan). I can imagine that a consistent universe with time travel might be bad for people’s sense of free will, and if you have time loops you can do super-powerful computation (bringing back superintelligence risk), but I cannot think of any plausible physics where time travel itself is dangerous. Fiction just makes up dangers to move the plot along.

In the existential risk framework, it is worth noting that extinction is not the only kind of existential risk. We could mess things up so that humanity’s full potential never gets realized (for example by being locked into a perennial totalitarian system that is actually resistant to any change), or that we make the world hellish. These are axiological existential risks. So the unifying aspect of these technologies is that they could cause existential risk, or at least bad enough approximations.

Ethically, existential threats count a lot. They seem to have priority over mere disasters and other moral problems in a wide range of moral systems (not just consequentialism). So technologies that strongly increase existential risk without giving a commensurate benefit (for example by reducing other existential risks more – consider a global surveillance state, which might be a decent defence against people developing bio-, nano- and info-risks at the price of totalitarian risk) are indeed impermissible. In reality technologies have dual uses and the eventual risk impact can be hard to estimate, but the principle is reasonable even if implementation will be a nightmare.

Messy values

However, extinction risk is an easy category – even if some of the possible causes like superintelligence are weird and controversial, at least extinct means extinct. The value and autonomy risks are far trickier. First, we might be wrong about value: maybe suffering doesn’t actually count morally, we just think it does. So a technology that looks like it harms value badly, like hell engineering, actually doesn’t. This might seem crazy, but we should recognize that some things might be important even though we do not recognize them. Francis Fukuyama thought transhumanist enhancement might harm some mysterious ‘Factor X’ (i.e. a “soul”) that gives us a dignity which is not widely recognized. Nick Bostrom (while rejecting the Factor X argument) has suggested that there might be many “quiet values” important for dignity, taking second seat to the “loud” values like alleviation of suffering but still being important – a world where all quiet values disappear could be a very bad world even if there was no suffering (think Aldous Huxley’s Brave New World, for example). This is one reason why many superintelligence scenarios end badly: transmitting the full nuanced world of human values – many so quiet that we do not even recognize them ourselves before we lose them – is very hard. I suspect most people find it unlikely that loud values like happiness or autonomy are actually parochial and worthless, but we could be wrong. This means that there will always be a fair bit of moral uncertainty about axiological existential risks, and hence about technologies that may threaten value. Just consider the argument between Fukuyama and us transhumanists.

Second, autonomy threats are also tricky because autonomy might not be all it is cracked up to be in western philosophy. The atomic, free-willed individual is rather divorced from the actual creature embedded in its neural and social matrix. But even if one doesn’t buy autonomy as having intrinsic value, there are likely good cybernetic arguments for why maintaining individuals as individuals with their own minds is a good thing. I often point to David Brin’s excellent defence of the open society, where he points out that societies where criticism and error correction are not possible will tend to become corrupt, inefficient and increasingly run by the preferences of the dominant cadre. In the end they will work badly for nearly everybody and have a fair risk of crashing. Tools like surveillance, thought reading or mind control could potentially break this beneficial feedback by silencing criticism. They might also instil identical preferences, which seems to be a recipe for common-mode errors causing big breakdowns: monocultures are more vulnerable than richer ecosystems. Still, it is not obvious that these benefits could not exist in (say) a group-mind where individuality is also part of a bigger collective mind.

Criteria and weasel-words

These caveats aside, I think the criteria for “objectively evil technology” could be

(1) It predictably increases existential risk substantially without commensurate benefits,

or,

(2) it predictably increases the amount of death, suffering or other forms of disvalue significantly without commensurate benefits.

There are unpredictable bad technologies, but they are not immoral to develop. However, developers do have a responsibility to think carefully about the possible implications or uses of their technology. And if your baby-tickling machine involves black holes you have a good reason to be cautious.

Of course, “commensurate” is going to be the tricky word here. Is a halving of nuclear weapons and biowarfare risk good enough to accept a doubling of superintelligence risk? Is a tiny-probability existential risk (say from a physics experiment) worth interesting scientific findings that will be known by humanity throughout its entire future? The MaxiPOK principle would argue that the benefits do not matter, or weigh only lightly. The current gain-of-function debate shows that we can have profound disagreements – but also that we can try to construct institutions and methods that regulate the balance, or inventions that reduce the risk. This also shows the benefit of looking at larger systems than the technology itself: a potentially dangerous technology wielded responsibly can be OK if the responsibility is reliable enough, and if we can bring a safeguard technology into place before the risky technology, the latter might no longer be unacceptable.
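A toy sketch of why “commensurate” is so hard to pin down – every probability below is made up purely to show the structure of the trade-off, not to estimate any real risk:

```python
# Compare naive additive existential-risk totals before and after a hypothetical
# technology that halves nuclear and biowarfare risk but doubles AI risk.
baseline  = {"nuclear": 0.010, "biowarfare": 0.010, "superintelligence": 0.010}
with_tech = {"nuclear": 0.005, "biowarfare": 0.005, "superintelligence": 0.020}

print("baseline total risk: ", sum(baseline.values()))   # 0.030
print("with-tech total risk:", sum(with_tech.values()))  # 0.030

# On this naive accounting it is a wash, which is exactly why the answer hinges
# on how the risks are modelled and weighted rather than on the raw numbers.
```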

The second weasel word is “significantly”. Do landmines count? I think one can make the case. According to the UN they kill 15,000 to 20,000 people per year. The number of traffic fatalities per year worldwide is about 1.2 million – but we might think cars are actually so beneficial that this outweighs the many deaths.

Intention?

Landmines are intended to harm (yes, the ideal use is to make people rationally stay the heck away from mined areas, but the harming is inherent in the purpose) while cars are not. This might lead to an amendment of the second criterion:

(2′) The technology intentionally increases the amount of death, suffering or other forms of disvalue significantly without commensurate benefits.

This gets closer to how many would view things: technologies intended to cause harm are inherently evil. But being a consequentialist, I think it lets designers off the hook. Dr Guillotine believed his invention would reduce suffering (and it might have), but it also led to a lot more death. Dr Gatling invented his gun to “reduce the size of armies and so reduce the number of deaths by combat and disease, and to show how futile war is.” So the intention part is problematic.

Some people are concerned with autonomous weapons because they are non-moral agents making life-and-death decisions over people; they would use deontological principles to argue that making such amoral devices is wrong. But a landmine that has been designed to try to identify civilians and not blow up around them seems a better device than an indiscriminate one: the amorality of the decision-making is less of a problem than the general harmfulness of the device.

I suspect trying to bake in intentionality or other deontological concepts will be problematic. Just as human dignity (another obvious concept – “Devices intended to degrade human dignity are impermissible”) is likely a non-starter. They are still useful heuristics, though. We do not want too much brainpower spent on inventing better ways of harming or degrading people.

Policy and governance: the final frontier

In the end, this exercise can be continued indefinitely. And no doubt it will.

Given the general impotence of ethical argument to change policy (ethics usually picks up the pieces and explains what went wrong once something does go wrong), a more relevant question might be how a civilization can avoid developing things it has good reason to suspect are a bad idea. I suspect the answer is going to involve not just improvements in coordination and the ability to predict consequences, but some real innovations in governance under empirical and normative uncertainty.

But that is for another day.

Somebody think of the electrons!

Brian Tomasik has a fascinating essay: Is there suffering in fundamental physics?

He admits from the start that “Any sufficiently advanced consequentialism is indistinguishable from its own parody.” And it would be easy to dismiss this as taking compassion way too far: not just caring about plants or rocks, but the possible suffering of electrons and positrons.

I think he has enough arguments to show that the idea is not entirely crazy: we do not understand the ontology of phenomenal experience well enough to easily rule out small systems having such states, panpsychism is a view held by some rational people, it seems a priori unlikely that some mid-sized systems have all the value in the universe rather than the largest or the smallest scales, we have strong biases towards our kind of system, and information physics might actually link consciousness with physics.

None of these are great arguments, but there are many of them. And the total number of atoms or particles is huge: assigning even a tiny fraction of human moral consideration to them, or a tiny probability of them mattering morally, will create a large expected moral value. The moral consideration or the probability needs to be small far beyond our normal reasoning comfort zone: if you assign a probability lower than 10^{-10^{56}} to a possibility, you need amazingly strong reasons given normal human epistemic uncertainty.
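The structure of that expected-value argument, with every number deliberately made up just to show how the product behaves:

```python
# Expected moral value = (number of particles) x (probability they matter) x (weight per particle).
n_particles = 1e80    # order-of-magnitude particle count of the observable universe
p_matters   = 1e-20   # assumed tiny probability that particles matter morally at all
weight      = 1e-40   # assumed tiny moral weight per particle, in 'human-equivalents'

print(n_particles * p_matters * weight)   # 1e+20 human-equivalents -- still enormous

# To make the product negligible, p_matters * weight has to be pushed far below
# ~1 / n_particles, i.e. far outside our normal epistemic comfort zone.
```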

I suspect most readers will regard this as beyond their “ultraviolet cutoff” for strange theories: just as physicists successfully invented/discovered a quantum cutoff to solve the ultraviolet catastrophe, most people have a limit where things become too silly or strange to count. Exactly how to draw it rationally (rather than just basing it on conformism or surface characteristics) is a hard problem when choosing among the near infinity of odd but barely possible theories.

One useful heuristic is to check whether the opposite theory is equally likely or important: in that case they balance each other (yes, the world could be destroyed by me dropping a pen – but it could also be destroyed by my not dropping it). In this case giving greater weight to suffering than to neutral states breaks the symmetry: we ought to investigate this possibility, since the theory that there is no moral considerability in elementary physics implies that no particular value is gained from discovering this fact, while the suffering theory implies it may matter a lot if we found out (and could do something about it). The heuristic is limited, but at least a start.

Another way of getting a cutoff for theories of suffering is of course to argue that there must be a lower limit on the systems that can suffer (this is, after all, how physics very successfully solved the classical UV catastrophe). This gets tricky when we try to apply it to insects, small brains, or other information-processing systems. But in physics there might be a better argument: if suffering happens on the elementary particle level, it is going to be quantum suffering. There would be literal superpositions of suffering/non-suffering of the same system. Normal suffering is classical: either it exists or it does not for some experiencing system, and hence there either is or isn’t a moral obligation to do something. It is not obvious how to evaluate quantum suffering. Maybe we ought to perform a quantum action that moves the wavefunction to a pure non-suffering state (a bit like quantum game theory: just as game theory might have ties to morality, quantum game theory might link to quantum morality), but this is constrained by the tough limits quantum mechanics puts on what can be sensed and done. Quantum suffering might simply be something different from suffering, just as quantum states do not have classical counterparts. Hence our classical moral obligations would not relate to it.

But who knows how molecules feel?

Cryonics: too rational, hence fair game?

On Practical Ethics I blog about cryonics acceptance: Freezing critique: privileged views and cryonics. My argument is that cryonics tries to be a rational scientific approach, which means it is fair game for criticism. Meanwhile many traditional and anti-cryonic views are either directly religious or linked to religious views, which means people refrain from criticising them back. Since views that are criticised are seen as more questionable than non-criticised (if equally strange) views, this makes cryonics look less worth respecting.

Is love double-blind?

On Practical Ethics, I blog about whether there is a difference in the ethics of the dating site OKCupid and Facebook manipulating their users. My tentative conclusion is that there is a form of informal consent in using certain sites – when you use a dating site you assume there will be some magic algorithms matchmaking you, and that your responses will affect these algorithms. This consent might be enough to make many experiments ethically OK.