Happy Petrov Day!

On Practical Ethics I blog about Petrov Day: the anniversary of an avoided nuclear cataclysm.

The lovely thing about this incident is that there is a person to focus on, making existential risk dramatically real. The LessWrong community has developed a ritual to commemorate the event and make our individual responsibility for reducing existential risk more vivid.

Averted disasters are hard to see, so we need more and bigger monuments to people who averted things.

Rational fractal distributions

Most of the time we encounter probability distributions over the reals, the positive reals, or integers. But one can use the rational numbers as a probability space too.

Recently I found the paper Vladimir Trifonov, Laura Pasqualucci, Riccardo Dalla-Favera & Raul Rabadan, “Fractal-like Distributions over the Rational Numbers in High-throughput Biological and Clinical Data”, Scientific Reports 1:191, DOI: 10.1038/srep00191. They discuss the distribution of ratios of the number of reads from the same spot of DNA that come from each chromosome in a pair: since the number of reads is an integer, the ratio is rational. Empirically they get a peaky, self-similar distribution, and the paper explains why.

If you take positive independent integers a and b from some distribution f(n) and generate ratios q=a/(a+b), then those ratios will have a distribution that is a convolution over the rational numbers: g(q) = g(a/(a+b)) = \sum_{m=1}^\infty \sum_{n=1}^\infty f(m) f(n) \delta \left(\frac{a}{a+b} - \frac{m}{m+n} \right ) = \sum_{t=1}^\infty f(ta)f(tb), where a/b is taken in lowest terms.
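This is easy to check numerically: sample the ratios as exact fractions and compare the empirical mass at some rational with the sum \sum_t f(ta)f(tb). A minimal sketch, assuming a geometric input distribution (the function names are my own, not from the paper):

```python
from fractions import Fraction
from collections import Counter
import random

random.seed(0)

P = 0.3  # success probability of the assumed geometric input distribution

def sample_f():
    # Geometric on {1, 2, ...}: number of trials until first success
    n = 1
    while random.random() > P:
        n += 1
    return n

def f(n):
    # Probability mass function of the same geometric distribution
    return (1 - P) ** (n - 1) * P

# Empirical distribution of q = a/(a+b), binned by exact rational value
N = 200_000
counts = Counter()
for _ in range(N):
    a, b = sample_f(), sample_f()
    counts[Fraction(a, a + b)] += 1

def g(q, terms=200):
    # Theoretical mass g(a/(a+b)) = sum_t f(t*a) f(t*b), a/b in lowest terms
    a, b = q.numerator, q.denominator - q.numerator
    return sum(f(t * a) * f(t * b) for t in range(1, terms + 1))

q = Fraction(1, 2)
print(counts[q] / N, g(q))  # the two estimates agree closely
```

The same sampler works for any integer distribution; swapping in other distributions gives the patterns discussed below.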

One can of course do the same for non-independent and different distributions of the integers. Oh, and by the way: this whole thing has little to do with ratio distributions (alias slash distributions), which is what happens in the real case.

The authors found closed form solutions for integers distributed as a power-law with an exponential cut-off and for the uniform distribution; unfortunately the really interesting case, the Poisson distribution, doesn’t seem to have a neat closed form solution.

In the case of a uniform distribution on the set \{1,2,\ldots , L\} they get g(a/(a+b)) = (1/L^2) \lfloor L/\max(a,b) \rfloor (again with a/b in lowest terms).

The rational distribution g(a/(a+b))=1/max(a,b) of Trifonov et al.

They note that this is similar to Thomae’s function, a somewhat well-known (and multiply named) counterexample in real analysis. That function is defined as f(p/q)=1/q (where the fraction is in lowest terms). In fact, both graphs have the same fractal dimension of 1.5.
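Thomae's function is easy to evaluate exactly in code, since Python's Fraction type automatically reduces to lowest terms; a tiny illustrative sketch:

```python
from fractions import Fraction

def thomae(x: Fraction) -> Fraction:
    # Thomae's function: f(p/q) = 1/q for p/q in lowest terms
    # (Fraction has already reduced x, so x.denominator is the reduced q)
    return Fraction(1, x.denominator)

# All rationals in (0, 1) with denominator up to 8, and their Thomae values
qs = sorted({Fraction(p, q) for q in range(2, 9) for p in range(1, q)})
for x in qs:
    print(x, thomae(x))
```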

It is easy to generate other rational distributions this way. Using a power law as an input produces a sparser pattern, since the integers going into the ratio tend to be small numbers, putting more probability at simple ratios:

The rational distribution g(a/(a+b))=C(ab)^-2 (rational convolution of two index -2 power-law distributed integers).

If we use exponential distributions the pattern is fairly similar, but we can of course change the rate parameter to get something that ranges over many integers, putting more probability at non-simple ratios p/q where p+q \gg 1:

The rational distribution of two convolved Exp[0.1] distributions.
Not everything has to be neat and symmetric. Taking the ratio of two unequal Poisson distributions can produce a rather appealing pattern:

Rational distribution of ratio between a Poisson[10] and a Poisson[5] variable.
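The Poisson pattern can be reproduced with a short simulation. The standard library has no Poisson sampler, so the sketch below uses Knuth's method (fine for small means); the structure of the code is my own, not from the paper:

```python
import math
import random
from fractions import Fraction
from collections import Counter

random.seed(1)

def poisson(lam):
    # Knuth's method: count uniform draws until their product drops below e^-lam
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# Sample exact rational ratios a/(a+b) of a Poisson[10] and a Poisson[5] variable
counts = Counter()
N = 100_000
for _ in range(N):
    a, b = poisson(10), poisson(5)
    if a + b > 0:  # a/(a+b) is undefined when both draws are zero
        counts[Fraction(a, a + b)] += 1

# The probability mass piles up at simple ratios near the mean ratio 10/15 = 2/3
print(counts.most_common(5))
```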
Of course, full generality would include ratios of non-positive numbers. Taking ratios of normal variates rounded to the nearest integer produces a fairly sparse distribution since high numerators or denominators are rare.

Rational distribution of a/(a+b) ratios of normal variates rounded to nearest integer.

But multiplying the variates by 10 produces a nice distribution.

Rational distribution of a/(a+b) ratios of normal variates that have been multiplied by 10 and rounded.

This approaches the Cauchy distribution as the discretisation gets finer. But note the fun microstructure (very visible in the Poisson case above too), where each peak at a simple ratio is surrounded by a “moat” of low probability. This is reminiscent of the behaviour of roots of random polynomials with integer coefficients (see also John Baez's page on the topic).

The rational numbers do tend to induce a fractal recursive structure on things, since most measures on them will put more mass at simple ratios than at complex ratios, but when plotting the value of the ratio everything gets neatly folded together. The lower approximability of numbers near the simple ratios produces the moats. This also suggests a question to ponder further: what role does the über-unapproximable golden ratio have in distributions like these?

Objectively evil technology

George Dvorsky has a post on io9: 10 Horrifying Technologies That Should Never Be Allowed To Exist. It is a nice clickbaity overview of some very bad technologies:

  1. Weaponized nanotechnology (he mainly mentions ecophagy, but one can easily come up with other nasties like ‘smart poisons’ that creep up on you or gremlin devices that prevent technology – or organisms – from functioning)
  2. Conscious machines (making devices that can suffer is not a good idea)
  3. Artificial superintelligence (modulo friendliness)
  4. Time travel
  5. Mind reading devices (because of totalitarian uses)
  6. Brain hacking devices
  7. Autonomous robots programmed to kill humans
  8. Weaponized pathogens
  9. Virtual prisons and punishment
  10. Hell engineering (that is, effective production of super-adverse experiences; consider Iain M. Banks’ Surface Detail, or the various strange/silly/terrifying issues linked to Roko’s basilisk)

Some of these technologies exist, like weaponized pathogens. Others might be impossible, like time travel. Some are embryonic, like mind reading (we can decode some brain states, but it requires spending a while in a big scanner while the input-output mapping is learned).

A commenter on the post asked “Who will have the responsibility of classifying and preventing “objectively evil” technology?” The answer is of course People Who Have Ph.D.s in Philosophy.

Unfortunately I haven’t got one, but that will not stop me.

Existential risk as evil?

I wonder what unifies this list. Let’s see: 1, 3, 7, and 8 are all about danger: either the risk of a lot of death, or the risk of extinction. 2, 9 and 10 are all about disvalue: the creation of very negative states of experience. 5 and 6 are threats to autonomy.

4, time travel, is the odd one out: George suggests that it is dangerous, but this is based on fictional examples and on the claim that contact between different civilizations has never ended well (which is arguable: consider Japan). I can imagine that a consistent universe with time travel might be bad for people’s sense of free will, and if you have time loops you can do super-powerful computation (creating superintelligence risk), but I cannot think of any plausible physics where time travel itself is dangerous. Fiction just makes up dangers to move the plot along.

In the existential risk framework, it is worth noting that extinction is not the only kind of existential risk. We could mess things up so that humanity’s full potential never gets realized (for example by being locked into a perennial totalitarian system that is actually resistant to any change), or we could make the world hellish. These are axiological existential risks. So the unifying aspect of these technologies is that they could cause existential risk, or at least close enough approximations of it.

Ethically, existential threats count a lot. They seem to have priority over mere disasters and other moral problems in a wide range of moral systems (not just consequentialism). So technologies that strongly increase existential risk without giving a commensurate benefit (for example by reducing other existential risks more – consider a global surveillance state, which might be a decent defence against people developing bio-, nano- and info-risks at the price of totalitarian risk) are indeed impermissible. In reality technologies have dual uses and the eventual risk impact can be hard to estimate, but the principle is reasonable even if implementation will be a nightmare.

Messy values

However, extinction risk is an easy category – even if some of the possible causes like superintelligence are weird and controversial, at least extinct means extinct. The value and autonomy risks are far trickier. First, we might be wrong about value: maybe suffering doesn’t actually count morally, we just think it does. So a technology that looks like it harms value badly, like hell engineering, actually doesn’t. This might seem crazy, but we should recognize that some things might be important even though we do not recognize them as such. Francis Fukuyama thought transhumanist enhancement might harm some mysterious ‘Factor X’ (i.e. a “soul”) giving us a dignity that is not widely recognized. Nick Bostrom (while rejecting the Factor X argument) has suggested that there might be many “quiet values” important for dignity, taking a back seat to the “loud” values like alleviation of suffering but still being important – a world where all quiet values disappear could be a very bad world even if there was no suffering (think Aldous Huxley’s Brave New World, for example). This is one reason why many superintelligence scenarios end badly: transmitting the full nuanced world of human values – many so quiet that we do not even recognize them ourselves before we lose them – is very hard. I suspect that most people find it unlikely that loud values like happiness or autonomy actually are parochial and worthless, but we could be wrong. This means that there will always be a fair bit of moral uncertainty about axiological existential risks, and hence about technologies that may threaten value. Just consider the argument between Fukuyama and us transhumanists.

Second, autonomy threats are also tricky because autonomy might not be all that it is cracked up to be in western philosophy. The atomic free-willed individual is rather divorced from the actual creature embedded in its neural and social matrix. But even if one doesn’t buy autonomy as having intrinsic value, there are likely good cybernetic arguments for why maintaining individuals as individuals with their own minds is a good thing. I often point to David Brin’s excellent defence of the open society, where he points out that societies where criticism and error correction are not possible will tend to become corrupt, inefficient and increasingly run by the preferences of the dominant cadre. In the end they will work badly for nearly everybody and have a fair risk of crashing. Tools like surveillance, thought reading or mind control would potentially break this beneficial feedback by silencing criticism. They might also instil identical preferences, which seems to be a recipe for common mode errors causing big breakdowns: monocultures are more vulnerable than richer ecosystems. Still, it is not obvious that these benefits could not exist in (say) a group-mind where individuality is also part of a bigger collective mind.

Criteria and weasel-words

These caveats aside, I think the criteria for “objectively evil technology” could be

(1) it predictably increases existential risk substantially without commensurate benefits,

(2) it predictably increases the amount of death, suffering or other forms of disvalue significantly without commensurate benefits.

There are unpredictable bad technologies, but they are not immoral to develop. However, developers do have a responsibility to think carefully about the possible implications or uses of their technology. And if your baby-tickling machine involves black holes you have a good reason to be cautious.

Of course, “commensurate” is going to be the tricky word here. Is a halving of nuclear weapons and biowarfare risk good enough to accept a doubling of superintelligence risk? Is a tiny probability of existential risk (say from a physics experiment) worth interesting scientific findings that will be known by humanity through the entire future? The MaxiPOK principle would argue that the benefits do not matter or weigh rather lightly. The current gain-of-function debate shows that we can have profound disagreements – but also that we can try to construct institutions and methods that regulate the balance, or inventions that reduce the risk. This also shows the benefit of looking at larger systems than the technology itself: a potentially dangerous technology wielded responsibly can be OK if the responsibility is reliable enough, and if we can bring a safeguard technology into place before the risky technology arrives it might no longer be unacceptable.

The second weasel word is “significantly”. Do landmines count? I think one can make the case. According to the UN they kill 15,000 to 20,000 people per year. The number of traffic fatalities per year worldwide is about 1.2 million – but we might think cars are so beneficial that this outweighs the many deaths.


Landmines are intended to harm (yes, the ideal use is to make people rationally stay the heck away from mined areas, but the harm is inherent in the purpose) while cars are not. This might lead to an amendment of the second criterion:

(2′) The technology intentionally increases the amount of death, suffering or other forms of disvalue significantly without commensurate benefits.

This gets closer to how many would view things: technologies intended to cause harm are inherently evil. But being a consequentialist I think it lets designers off the hook. Dr Guillotin believed his invention would reduce suffering (and it might have) but it also led to a lot more death. Dr Gatling invented his gun to “reduce the size of armies and so reduce the number of deaths by combat and disease, and to show how futile war is.” So the intention part is problematic.

Some people are concerned with autonomous weapons because they are non-moral agents making life-and-death decisions over people; they would use deontological principles to argue that making such amoral devices is wrong. But a landmine that has been designed to try to identify civilians and not blow up around them seems to be a better device than an indiscriminate one: the amorality of the decision-making is less problematic than the general harmfulness of the device.

I suspect trying to bake in intentionality or other deontological concepts will be problematic. Just as human dignity (another obvious concept – “Devices intended to degrade human dignity are impermissible”) is likely a non-starter. They are still useful heuristics, though. We do not want too much brainpower spent on inventing better ways of harming or degrading people.

Policy and governance: the final frontier

In the end, this exercise can be continued indefinitely. And no doubt it will.

Given the general impotence of ethical arguments to change policy (it usually picks up the pieces and explains what went wrong once it does go wrong) a more relevant question might be how a civilization can avoid developing things it has a good reason to suspect are a bad idea. I suspect the answer to that is going to be not just improvements in coordination and the ability to predict consequences, but some real innovations in governance under empirical and normative uncertainty.

But that is for another day.

Plotting morality

Pew Research has posted their Morality Interactive Topline Results for their spring 2013 and winter 2013–2014 survey of moral views around the world. These are national samples, so for each moral issue the survey gives how many think it is morally unacceptable, morally acceptable, not a moral issue, or that it depends on the situation.

Plotting countries by whether issues are morally acceptable, morally unacceptable or morally irrelevant gives the following distributions.

Triangular plot of Pew Morality Survey

Overall, there are many countries that are morally against everything, and a tail pointing towards some balance between the acceptable and the morally irrelevant.

The situation-dependence scores tended to be low: most people do think there are moral absolutes. The highest situation-dependency scores tended to be in the middle between the morally unacceptable point and the OK side; I suspect there was just a fair bit of confusion going on.


Looking at the correlations between morally unacceptable answers suggested that unmarried sex and homosexuality stand out: views there were firmly correlated with each other but not strongly influenced by views on other things. I regard this as a “sex for fun” factor. However, it should be noted that almost everything is firmly correlated: if a country is against X, it is likely against Y too. Looking at correlations between acceptable or no-issue answers did not show any clear picture.

2D and 3D PCA plots of the Pew morality data.

The real sledgehammer is of course principal component analysis. Running it for the whole data produces a firm conclusion: the key factor is something we could call “moral conservatism”, which explains 73% of the variance. Countries that score high find unmarried sex, homosexuality, alcohol, gambling, abortion and divorce unacceptable.

The second factor, explaining 9%, seems to denote whether things are morally acceptable or simply morally not an issue. However, it has some unexpected interaction with whether unmarried sex is unacceptable. This links to the third factor, explaining 7%, which seems to be linked to views on divorce and contraception. Looking at the 3D plot of the data, it becomes clear that for countries scoring low on the moral conservatism scale (“modern countries”) there is a negative correlation between these two factors, while for conservative countries there is a positive correlation.
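For those who want to replicate this kind of analysis outside Matlab, here is a minimal sketch of PCA via SVD in Python. The data matrix below is a synthetic stand-in (a random rank-one “conservatism” signal plus noise), not the actual Pew numbers, so all names and values are hypothetical:

```python
import numpy as np

# Hypothetical stand-in for the Pew data: rows = countries, columns = issues,
# entries = fraction answering "morally unacceptable". Replace with real data.
rng = np.random.default_rng(0)
conservatism = rng.uniform(0, 1, size=40)   # latent per-country factor
loadings = rng.uniform(0.5, 1.0, size=8)    # per-issue sensitivity to the factor
X = np.clip(np.outer(conservatism, loadings)
            + rng.normal(0, 0.05, size=(40, 8)), 0, 1)

# PCA via SVD of the mean-centred data matrix
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)   # fraction of variance per component

scores = Xc @ Vt.T   # country coordinates in principal-component space
print("variance explained by PC1: %.0f%%" % (100 * explained[0]))
```

With real survey data the first component would be the “moral conservatism” axis described above; with this synthetic rank-one input it simply recovers the planted factor.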

Plotting the most conservative (red) and least conservative (blue) countries supports this. The lower blue corner holds the typical Western countries (France, Canada, US, Australia) while the upper blue corner holds more traditionalist (?) countries (Czech Republic, Chile, Spain). The lower red corner has Ghana, Uganda, Pakistan and Nigeria, while the upper red is clearly Arab: Egypt, the Palestinian territories, Jordan.

In the end, I guess the data doesn’t tell us that much truly new. A large part of the world holds traditional conservative moral views. Perhaps the most interesting part is that the things people regard as morally salient or not interact in a complicated manner with local culture. There are also noticeable differences even within the same cultural sphere: Tunisia has very different views from Egypt on divorce.

For those interested, here is my somewhat messy Matlab code and data to generate these pictures.