Consequentialist world improvement

I just rediscovered an old response to the Extropians List that might be worth reposting. Slight edits.

Communal values

On 06/10/2012 16:17, Tomaz Kristan wrote:

>> If you want to reduce death tolls, focus on self-driving cars.
> Instead of answering terror attacks, just mend you cars?

Sounds eminently sensible. Charlie makes a good point: if we want to make the world better, it might be worth prioritizing fixes according to the damage each problem actually causes. Toby Ord and I have been chatting quite a bit about this.

Death

In terms of death (~57 million people per year), the big causes are cardiovascular disease (29%), infectious and parasitic diseases (23%) and cancer (12%). At least the first and last are to a sizeable degree caused or worsened by ageing, which is a massive hidden problem. It has been argued that malnutrition is similarly indirectly involved in 15-60% of the total number of deaths: often not the direct cause, but weakening people so they become vulnerable to other risks. Anything that makes a dent in these saves lives on a scale that is simply staggering; any threat to our ability to treat them (like resistance to antibiotics or anthelmintics) is correspondingly bad.

Unintentional injuries are responsible for 6% of deaths, just behind respiratory diseases (6.5%). Road traffic alone is responsible for 2% of all deaths: making cars even 1% safer would save 11,400 lives per year. If everybody reached Swedish safety levels (2.9 deaths per 100,000 people per year) it would save around 460,000 lives per year – one Antwerp per year.
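A quick back-of-the-envelope check of these figures, as a sketch in Python (the 57 million total and the 2% road-traffic share are the numbers quoted above):

```python
# Back-of-the-envelope check of the road safety figures above.
total_deaths = 57_000_000                  # approximate global deaths per year
road_share = 0.02                          # road traffic's share of all deaths
road_deaths = total_deaths * road_share    # ~1.14 million per year

# Lives saved by making cars 1% safer:
saved_by_1_percent = road_deaths * 0.01

print(f"Road deaths per year: {road_deaths:,.0f}")
print(f"Saved by 1% safer cars: {saved_by_1_percent:,.0f}")  # ~11,400
```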

Now, intentional injuries are responsible for 2.8% of all deaths. Of these, suicide accounts for 1.53% of all deaths, violence 0.98% and war 0.3%. Yes: all wars combined killed about the same number of people as meningitis did, and slightly more than died of syphilis. In terms of absolute numbers we might be much better off improving antibiotic treatments and suicide hotlines than trying to stop the wars. And terrorism is so small that it doesn’t really show up: even the highest estimates put the median fatalities per year in the low thousands.

So in terms of deaths, fixing (or even denting) ageing, malnutrition, infectious diseases and lifestyle causes is a far more important activity than winning wars or stopping terrorists. Hypertension, tobacco, STDs, alcohol, indoor air pollution and sanitation are all far, far more pressing in terms of saving lives. If we had a choice between ending all wars in the world and fixing indoor air pollution the rational choice would be to fix those smoky stoves: they kill nine times more people.

Existential risk

There is of course more to improving the world than just saving lives. First there is the issue of outbreak distributions: most wars are local and small affairs, but some become global. The same goes for pandemic respiratory disease. We actually do need to worry about them more than their median sizes suggest (and again, influenza totally dominates all wars). Incidentally, the exponent for the power law distribution of terrorism is a safely steep -2.5, so it is less of a problem than ordinary wars with exponent -1.41 (where the expectation diverges: wait long enough and you get a war larger than any stated size).
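The divergence claim can be checked directly. For a power-law density p(x) ∝ x^(-a) normalized on [1, M], the truncated mean has a closed form, and it converges as M grows only when a > 2. A sketch in Python using the exponents quoted above (terrorism ≈ 2.5, wars ≈ 1.41; the closed form is my own derivation from the standard Pareto integrals):

```python
def truncated_mean(a, M):
    """Mean of a power-law density p(x) ~ x**(-a) truncated to [1, M].

    Valid for a > 1, a != 2. As M -> infinity this converges iff a > 2.
    """
    norm = 1 - M ** (1 - a)                           # probability mass on [1, M]
    numerator = (a - 1) / (2 - a) * (M ** (2 - a) - 1)  # integral of x * p(x)
    return numerator / norm

for M in (1e2, 1e4, 1e6):
    print(f"M={M:9.0e}  terrorism (a=2.5): {truncated_mean(2.5, M):6.2f}"
          f"   wars (a=1.41): {truncated_mean(1.41, M):10.1f}")
# The a=2.5 column settles near 3; the a=1.41 column keeps growing with M,
# which is exactly the diverging expectation described above.
```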

There are reasons to think that existential risk should be weighed extremely strongly: even a tiny risk that we lose all our future is much worse than many standard risks (since the future could be inconceivably grand and involve very large numbers of people). This has convinced me that fixing the safety of governments needs to be boosted a lot: democides have been larger killers than wars in the 20th century, and both seem to carry most of the tail risk, especially when you start thinking nukes. It is likely a far more pressing problem than climate change, and quite possibly (depending on how you analyse xrisk weighting) beats disease.

How to analyse xrisk, especially future risks, in this kind of framework is a big part of our ongoing research at FHI.

Happiness

If instead of lives lost we look at the impact on human stress and happiness, wars (and violence in general) look worse: they traumatize people, and terrorism by its nature is all about causing terror. But again, they happen to a small set of people. So in terms of happiness it might be more important to make the bulk of people happier. Life satisfaction correlates at 0.7 with health and 0.6 with wealth and basic education. Boost those a bit, and it outweighs the horrors of war.

In fact, when looking at the value of better lives, it looks like an enhancement in life quality might be worth much more than fixing a lot of the deaths discussed above: make everybody’s life 1% better, and it corresponds to more quality adjusted life years than are lost to death every year! So improving our wellbeing might actually matter far, far more than many diseases. Maybe we ought to spend more resources on applied hedonism research than trying to cure Alzheimer’s.
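The arithmetic behind this, as a hedged sketch (assuming ~7 billion people, and counting each death as roughly one lost life-year in that year’s ledger – a deliberate simplification, since a death really removes many future years):

```python
population = 7_000_000_000
deaths_per_year = 57_000_000

# A 1% quality-of-life improvement for everyone, expressed in QALYs per year:
qaly_gain = population * 0.01   # 70 million QALYs per year

print(f"QALYs gained per year: {qaly_gain:,.0f}")
print(f"Deaths per year:       {deaths_per_year:,}")
# 70 million QALYs gained vs 57 million deaths: the comparison made above.
```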

Morality

The real reason people focus so much on terrorism is of course the moral outrage. Somebody is responsible, people are angry and want revenge. The same goes for wars. And the horror tends to strike certain people: my kind of global calculation might make sense on the global scale, but most of us think that the people suffering the worst have a higher priority. While it might make more utilitarian sense to make everybody 1% happier rather than stop the carnage in Syria, I suspect most people would say morality is on the other side (exactly why is a matter of some interesting ethical debate, of course). Deontologists might think we have moral duties we must implement no matter what the cost. I disagree: burning villages in order to save them doesn’t make sense. It makes sense to risk lives in order to save lives, both directly and indirectly (by reducing future conflicts).

But this requires proportionality: going to war in order to avenge X deaths by causing 10X deaths is not going to be sustainable or moral. The total moral weight of one unjust death might be high, but it is finite. Given the typical civilian casualty ratio of 10:1, any war will also almost certainly produce far more collateral unjust deaths than the justified deaths of enemy soldiers: avenging X deaths by killing exactly X enemies will still lead to around 10X unjust deaths. So achieving proportionality is very, very hard (and the Just War Doctrine is broken anyway, according to the war ethicists I talk to). This means that if you want to leave the straightforward utilitarian approach and add some moral/outrage weighting, you risk making the problem far worse by your own account. In many cases it might indeed be the moral thing to turn the other cheek… ideally armoured and barbed with suitable sanctions.

Conclusion

To sum up, this approach of just looking at consequences and ignoring who is who is of course a bit too cold for most people. Most people have Tetlockian sacred values and get very riled up if somebody thinks about cost-effectiveness in terrorism fighting (typical US bugaboo) or development (typical warmhearted donor bugaboo) or healthcare (typical European bugaboo). But if we did, we would make the world a far better place.

Bring on the robot cars and happiness pills!

9 thoughts on “Consequentialist world improvement”

  1. Re Happiness, I would not just count the number of deaths. Road deaths and war deaths, for example, come with roughly ten times as many serious injuries with lifetime consequences. It is a bit misleading (though a convenient statistic) to analyse by number of deaths alone.
    Similarly terrorism causes few deaths, but just look at the non-fatal consequences – TSA, billions wasted on security theatre, total surveillance of millions in the hope that two or three terrorists might be detected, etc.
    Just weighting risks by number of deaths might lead to millions more living lives of misery.

    1. Yes, one has to be careful in calculating. It is also nontrivial how much a life is worsened by an impairment (or improved by an enhancement): people tend to return to their mood setpoints remarkably quickly even after major disabilities. A person who is permanently crippled with a 50% loss of life quality but lives for decades can be equivalent to somebody surviving half the time – and it may be far preferable to be in the crippled state than to be dead.

      One of the problems with many anti-terrorism efforts is that they actually do more harm than good. The classic observation is that the post-9/11 increase in car travel claimed more victims than the attacks themselves. And the low-grade fear, paranoia and government waste in the war on terror is doing far more harm than the actual terrorism going on: just removing security theatre and ignoring most terrorism would actually make us better off and equally safe.
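      The equivalence claimed above is just QALY arithmetic – a sketch, with the lifespans (40 vs 20 years) chosen purely for illustration; only the 50% quality weight comes from the comment:

      ```python
      # QALYs = years lived x quality weight (1.0 = full health)
      crippled = 40 * 0.5    # 40 years at 50% quality -> 20 QALYs
      shortened = 20 * 1.0   # 20 years at full quality -> 20 QALYs

      print(crippled == shortened)  # equivalent on paper, though most
                                    # people would prefer the first state
      ```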

    2. According to IHME, war caused just 0.03% of total deaths in 2010. However, the ratio of disability-adjusted life years (DALYs) between ageing-related conditions such as cancer and cardiovascular disease and war is lower than the corresponding ratio of deaths, probably because war tends to cause disability and death at a much younger age (but then, DALYs are computed relative to an assumed maximal life expectancy in today’s world, and this would perhaps not be relevant if we could cure ageing-related health problems more effectively).

  2. When calculating ethical goodness by lives lost, it seems pretty relevant to remember humans are not a finite resource. Instead of saving more people, we can make more people. Modulo the cost of replacement, bereavement and the dying process, both economic and hedonic.

    It also seems crazy to make hedonic-utilitarian arguments and ignore nonhuman sentient systems, which dwarf humans in number. There is some implicit deontological extra baggage underneath that.

    I am also still not convinced we know how to ethically weigh agony and the worst-case sufferings, as well as the risk that spreading utilitarianism might be harmful: In unfriendly AI, a near-miss might be far worse than not aiming, i.e. ill-designed systems that care about things like “human happiness” might be far more dangerous than random other systems, in terms of misery generation.

  3. Given the advances in medicine forced by wars – antibiotics, surgical techniques and so on – perhaps on balance more people exist than would in the absence of wars. If WW1 hadn’t happened, would the world population be 7+ billion?

    1. The tech-stimulating effect of wars is debatable: the biggest general purpose technologies of the late 20th century – computers, the Internet, biotechnology – were not developed as a direct response to any war. Sure, there was some Cold War motivation for the first two, but most development happened broadly and uncorrelated with any conflict.

  4. If you don’t mind me ignoring the forest for one of the trees . . .

    I was playing around with the -1.41 power law you give for wars, and noticed that for certain levels of growth, the risk can be made arbitrarily small. In particular, if population is growing as time-cubed (say, fixed-velocity 3-dimensional expansion), and if a civilization can either start out large enough or last long enough, then the chance of succumbing to a sufficiently big war can be made arbitrarily low. That’s true all the way down to -1&1/3. However, with 2-D expansion, the chance of eventual death is always 100% (you need a power law better than -1.5 if you’re going to survive with 2-dimensional expansion). And finally, with exponential growth, anything above -1.0 is fair game.

    I was just wondering if the above has been explored in the existential risk literature. (and could you be so kind as to provide a pointer?)
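    A sketch of the criterion I read this comment as using (an assumption on my part: per-period extinction probability scales as N(t)^-(a-1) for a war-size pdf exponent a and population N(t) ∝ t^d; survival probability stays positive iff the per-period risks are summable, i.e. d·(a-1) > 1):

    ```python
    def can_survive(d, a):
        """True if the sum over t of t**(-d*(a-1)) converges, i.e. eventual
        extinction is not certain, given population growth ~ t**d and a
        war-size power law with pdf exponent a (a > 1)."""
        return d * (a - 1) > 1

    print(can_survive(3, 1.41))   # True: 3-D growth beats the -1.41 law
    print(can_survive(2, 1.41))   # False: 2-D growth does not
    print(can_survive(2, 1.51))   # True: the "-1.5" threshold for 2-D
    ```

    Note that for d = 3 the threshold is a > 4/3, matching the -1&1/3 figure; for exponential growth the per-period risk decays geometrically, so any a > 1 is summable, matching the -1.0 claim.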
