Consequentialist world improvement

I just rediscovered an old response to the Extropians List that might be worth reposting. Slight edits.

Communal values

On 06/10/2012 16:17, Tomaz Kristan wrote:

>> If you want to reduce death tolls, focus on self-driving cars.
> Instead of answering terror attacks, just mend you cars?

Sounds eminently sensible. Charlie makes a good point: if we want to make the world better, it might be worth prioritizing fixing the things that make it worse, in proportion to the damage they actually do. Toby Ord and I have been chatting quite a bit about this.

Death

In terms of death (~57 million people per year), the big causes are cardiovascular disease (29%), infectious and parasitic diseases (23%) and cancer (12%). At least the first and last are to a sizeable degree caused or worsened by ageing, which is a massive hidden problem. It has been argued that malnutrition is similarly indirectly involved in 15-60% of the total number of deaths: often not the direct cause, but weakening people so they become vulnerable to other risks. Anything that makes a dent in these saves lives on a scale that is simply staggering; any threat to our ability to treat them (like resistance to antibiotics or anthelmintics) is correspondingly bad.

Unintentional injuries are responsible for 6% of deaths, just behind respiratory diseases at 6.5%. Road traffic alone is responsible for 2% of all deaths: making cars even 1% safer would save about 11,400 lives per year. If everybody reached Swedish levels of road safety (2.9 deaths per 100,000 people per year) it would save around 460,000 lives per year – one Antwerp per year.
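As a back-of-envelope check of those figures (a minimal sketch; the assumption that lives saved scale linearly with how much safer cars get is mine, not something from the original list post):

```python
# Back-of-envelope arithmetic for the road-safety figures above.
# Assumption (mine, for illustration): lives saved scale linearly with "% safer".
total_deaths_per_year = 57_000_000       # ~57 million deaths per year
road_share = 0.02                        # road traffic: ~2% of all deaths

road_deaths = total_deaths_per_year * road_share   # ~1.14 million per year
saved_by_1pct_safer = 0.01 * road_deaths           # ~11,400 lives per year

print(f"Road deaths per year: {road_deaths:,.0f}")
print(f"Lives saved by 1% safer cars: {saved_by_1pct_safer:,.0f}")
```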

Now, intentional injuries are responsible for 2.8% of all deaths. Of these, suicide accounts for 1.53% of the total death rate, violence 0.98% and war 0.3%. Yes, all wars combined killed about the same number of people as meningitis, and slightly more people than died of syphilis. In terms of absolute numbers we might be much better off improving antibiotic treatments and suicide hotlines than trying to stop the wars. And terrorism is so small that it doesn’t really show up: even the highest estimates put the median fatalities per year in the low thousands.

So in terms of deaths, fixing (or even denting) ageing, malnutrition, infectious diseases and lifestyle causes is a far more important activity than winning wars or stopping terrorists. Hypertension, tobacco, STDs, alcohol, indoor air pollution and sanitation are all far, far more pressing in terms of saving lives. If we had a choice between ending all wars in the world and fixing indoor air pollution, the rational choice would be to fix those smoky stoves: they kill nine times more people.

Existential risk

There is of course more to improving the world than just saving lives. First, there is the issue of outbreak distributions: most wars are local and small affairs, but some become global. The same goes for pandemic respiratory disease. We actually need to worry about them more than their median sizes suggest (and again, influenza totally dominates all wars). Incidentally, the exponent of the power-law distribution of terrorism fatalities is safely steep at -2.5, so it is less of a problem than ordinary wars with exponent -1.41, where the expectation diverges: wait long enough and you get a war larger than any stated size.
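A rough way to see why the war distribution is so much more dangerous (a minimal sketch; treating -2.5 and -1.41 as exponents of a Pareto density with minimum event size 1 is my reading, not something stated in the post): for a power-law density p(x) ∝ x^(-α), the mean is finite only when α > 2, so simulated sample means settle down for the terrorism-like exponent but keep being dragged upward by rare huge events for the war-like one.

```python
import numpy as np

rng = np.random.default_rng(0)

def pareto_samples(alpha, n, xmin=1.0):
    """Draw n samples from a power-law density p(x) ~ x^(-alpha), x >= xmin."""
    u = rng.random(n)
    return xmin * (1.0 - u) ** (-1.0 / (alpha - 1.0))

for alpha, label in [(2.5, "terrorism-like"), (1.41, "war-like")]:
    x = pareto_samples(alpha, 1_000_000)
    # Running means: converge when alpha > 2, keep growing otherwise.
    means = [x[:n].mean() for n in (10**3, 10**4, 10**5, 10**6)]
    print(label, [f"{m:.1f}" for m in means])
```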

There are reasons to think that existential risk should be weighed extremely strongly: even a tiny risk that we lose all our future is much worse than many standard risks (since the future could be inconceivably grand and involve very large numbers of people). This has convinced me that making governments safer needs to be boosted a lot: democides were bigger killers than wars in the 20th century, and both seem to carry most of the tail risk, especially once you start thinking about nukes. It is likely a far more pressing problem than climate change, and quite possibly (depending on how you analyse xrisk weighting) beats disease.
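To make that weighting concrete (a toy calculation; the number of future people and the size of the risk reduction are purely illustrative assumptions of mine, not figures from the post):

```python
# Toy expected-value comparison for existential risk (illustrative numbers only).
future_people = 1e16        # assumed number of people the future could hold
risk_reduction = 1e-6       # assumed reduction in extinction probability
expected_lives = future_people * risk_reduction

print(f"Expected future lives saved: {expected_lives:.1e}")   # 1e10
print(f"Deaths per year today:       {57e6:.1e}")             # 5.7e7
```

Even a one-in-a-million reduction in extinction risk then outweighs centuries of present-day mortality, which is why the weighting question matters so much.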

How to analyse xrisk, especially future risks, in this kind of framework is a big part of our ongoing research at FHI.

Happiness

If instead of lives lost we look at the impact on human stress and happiness, wars (and violence in general) look worse: they traumatize people, and terrorism by its nature is all about causing terror. But again, they happen to a small set of people. So in terms of happiness it might be more important to make the bulk of people happier. Life satisfaction correlates at about 0.7 with health and 0.6 with wealth and basic education. Boost those a bit, and it outweighs the horrors of war.

In fact, when looking at the value of better lives, it looks like an enhancement in life quality might be worth much more than fixing a lot of the deaths discussed above: make everybody’s life 1% better, and it corresponds to more quality-adjusted life years than are lost to death every year! So improving our wellbeing might actually matter far, far more than many diseases. Maybe we ought to spend more resources on applied hedonism research than on trying to cure Alzheimer’s.
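A minimal sketch of that arithmetic (the ~7 billion population figure is mine, and equating each death with roughly one lost quality-adjusted year in the year it occurs is a crude assumption of my own; weighting deaths by full remaining life expectancy would change the comparison considerably):

```python
# Crude quality-vs-mortality comparison (assumptions flagged above).
world_population = 7_000_000_000
quality_boost = 0.01                              # everybody's life 1% better
qalys_gained = world_population * quality_boost   # ~70 million QALYs per year

deaths_per_year = 57_000_000                      # from the death statistics above
print(f"QALYs gained per year: {qalys_gained:,.0f}")
print(f"Deaths per year:       {deaths_per_year:,.0f}")
```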

Morality

The real reason people focus so much on terrorism is of course the moral outrage. Somebody is responsible, people are angry and want revenge. The same goes for wars. And the horror tends to strike certain people: my kind of global calculation might make sense on the global scale, but most of us think that the people suffering the worst have a higher priority. While it might make more utilitarian sense to make everybody 1% happier rather than stop the carnage in Syria, I suspect most people would say morality is on the other side (exactly why is a matter of some interesting ethical debate, of course). Deontologists might think we have moral duties we must carry out no matter what the cost. I disagree: burning villages in order to save them doesn’t make sense. It makes sense to risk lives in order to save lives, both directly and indirectly (by reducing future conflicts).

But this requires proportionality: going to war to avenge X deaths by causing 10X deaths is not going to be sustainable or moral. The total moral weight of one unjust death might be high, but it is finite. Given the typical civilian casualty ratio of 10:1, any war will also almost certainly produce far more collateral unjust deaths than justified deaths of enemy soldiers: avenging X deaths by killing exactly X enemies will still lead to around 10X unjust deaths. So achieving proportionality is very, very hard (and the Just War Doctrine is broken anyway, according to the war ethicists I talk to). This means that if you want to leave the straightforward utilitarian approach and add some moral/outrage weighting, you risk making the problem far worse by your own account. In many cases it might indeed be the moral thing to turn the other cheek… ideally armoured and barbed with suitable sanctions.

Conclusion

To sum up, this approach of just looking at consequences and ignoring who is who is of course a bit too cold for most people. Most people have Tetlockian sacred values and get very riled up if somebody thinks about cost-effectiveness in terrorism fighting (typical US bugaboo) or development (typical warmhearted donor bugaboo) or healthcare (typical European bugaboo). But if we did, we would make the world a far better place.

Bring on the robot cars and happiness pills!

Somebody think of the electrons!

Brian Tomasik has a fascinating essay: Is there suffering in fundamental physics?

He admits from the start that “Any sufficiently advanced consequentialism is indistinguishable from its own parody.” And it would be easy to dismiss this as taking compassion way too far: not just caring about plants or rocks, but the possible suffering of electrons and positrons.

I think he has enough arguments to show that the idea is not entirely crazy: we do not understand the ontology of phenomenal experience well enough to easily rule out small systems having such states, panpsychism is a view held by some rational people, it seems a priori unlikely that mid-sized systems should have all the value in the universe rather than the largest or the smallest scales, we have strong biases towards our own kind of system, and information physics might actually link consciousness with physics.

None of these are great arguments, but there are many of them. And the total number of atoms or particles is huge: even assigning a tiny fraction of human moral consideration to them, or a tiny probability of them mattering morally, will create a large expected moral value. The smallness of the moral consideration or the probability needs to be far outside our normal reasoning comfort zone: if you assign a probability lower than 10^{-10^{56}} to a possibility, you need amazingly strong reasons given normal human epistemic uncertainty.
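A toy version of that expected-value argument (all three numbers are illustrative assumptions of mine – particle count, probability and per-particle weight are not taken from the essay):

```python
# Toy expected moral value of particle suffering (illustrative numbers only).
particles = 1e80       # rough order of magnitude of particles in the observable universe
p_matters = 1e-20      # an assumed, "absurdly small" probability that they matter morally
weight = 1e-15         # assumed per-particle moral weight relative to a human, if they do

expected_value = particles * p_matters * weight
print(f"{expected_value:.1e} human-equivalents of expected moral weight")   # ~1e45
```

The point is just that with something like 10^80 particles around, only extraordinarily small probabilities or weights keep the expected value negligible.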

I suspect most readers will regard this as outside their “ultraviolet cutoff” for strange theories: just as physicists successfully invented/discovered a quantum cutoff to solve the ultraviolet catastrophe, most people have a limit where things are too silly or strange to count. Exactly how to draw it rationally (rather than basing it on conformism or surface characteristics) is a hard problem when choosing among the near infinity of odd but barely possible theories.

One useful heuristic is to check whether the opposite theory is equally likely or important: in that case they balance each other (yes, the world could be destroyed by me dropping a pen – but it could also be destroyed by my not dropping it). In this case giving greater weight to suffering than to neutral states breaks the symmetry: we ought to investigate this possibility, since the theory that there is no moral considerability in elementary physics implies no particular value is gained from discovering that fact, while the suffering theory implies it may matter a lot if we found out (and could do something about it). The heuristic is limited, but at least it is a start.

Another way of getting a cutoff for theories of suffering is of course to argue that there must be a lower limit on the size of system that can suffer (this is, after all, how physics very successfully solved the classical UV catastrophe). This gets tricky when we try to apply it to insects, small brains, or other information processing systems. But in physics there might be a better argument: if suffering happens at the elementary particle level, it is going to be quantum suffering. There would be literal superpositions of suffering and non-suffering of the same system. Normal suffering is classical: either it exists for some experiencing system or it does not, and hence there either is or isn’t a moral obligation to do something. It is not obvious how to evaluate quantum suffering. Maybe we ought to perform a quantum action that moves the wavefunction to a pure non-suffering state (a bit like quantum game theory: just as game theory might have ties to morality, quantum game theory might link to quantum morality), but this is constrained by the tough limits in quantum mechanics on what can be sensed and done. Quantum suffering might simply be something different from suffering, just as quantum states do not have classical counterparts. Hence our classical moral obligations do not relate to it.

But who knows how molecules feel?