Objectively evil technology

George Dvorsky has a post on io9: 10 Horrifying Technologies That Should Never Be Allowed To Exist. It is a nice clickbaity overview of some very bad technologies:

  1. Weaponized nanotechnology (he mainly mentions ecophagy, but one can easily come up with other nasties like ‘smart poisons’ that creep up on you or gremlin devices that prevent technology – or organisms – from functioning)
  2. Conscious machines (making devices that can suffer is not a good idea)
  3. Artificial superintelligence (modulo friendliness)
  4. Time travel
  5. Mind reading devices (because of totalitarian uses)
  6. Brain hacking devices
  7. Autonomous robots programmed to kill humans
  8. Weaponized pathogens
  9. Virtual prisons and punishment
  10. Hell engineering (that is, effective production of super-adverse experiences; consider Iain M. Banks’ Surface Detail, or the various strange/silly/terrifying issues linked to Roko’s basilisk)

Some of these technologies already exist, like weaponized pathogens. Others might be impossible, like time travel. Some are embryonic, like mind reading (we can decode some brain states, but it requires spending a while in a big scanner while the input-output mapping is learned).

A commenter on the post asked “Who will have the responsibility of classifying and preventing “objectively evil” technology?” The answer is of course People Who Have Ph.D.s in Philosophy.

Unfortunately I haven’t got one, but that will not stop me.

Existential risk as evil?

I wonder what unifies this list. Let’s see: 1, 3, 7, and 8 are all about danger: either the risk of a lot of death, or the risk of extinction. 2, 9 and 10 are all about disvalue: the creation of very negative states of experience. 5 and 6 are threats to autonomy.

4, time travel, is the odd one out: George suggests it is dangerous, but this is based on fictional examples and on the claim that contact between different civilizations has never ended well (which is arguable: consider Japan). I can imagine that a consistent universe with time travel might be bad for people’s sense of free will, and if you have time loops you can do super-powerful computation (which brings back superintelligence risk), but I cannot think of any plausible physics where time travel itself is dangerous. Fiction just makes up dangers to move the plot along.

In the existential risk framework, it is worth noting that extinction is not the only kind of existential risk. We could mess things up so that humanity’s full potential never gets realized (for example by being locked into a perennial totalitarian system that is actually resistant to any change), or we could make the world hellish. These are axiological existential risks. So the unifying aspect of these technologies is that they could cause existential risk, or at least bad enough approximations of it.

Ethically, existential threats count a lot. They seem to have priority over mere disasters and other moral problems in a wide range of moral systems (not just consequentialism). So technologies that strongly increase existential risk without giving a commensurate benefit (for example by reducing other existential risks more – consider a global surveillance state, which might be a decent defence against people developing bio-, nano- and info-risks at the price of totalitarian risk) are indeed impermissible. In reality technologies have dual uses and the eventual risk impact can be hard to estimate, but the principle is reasonable even if implementation will be a nightmare.

Messy values

However, extinction risk is an easy category – even if some of the possible causes, like superintelligence, are weird and controversial, at least extinct means extinct. The value and autonomy risks are far trickier. First, we might be wrong about value: maybe suffering doesn’t actually count morally, we just think it does. So a technology that looks like it harms value badly, like hell engineering, actually doesn’t. This might seem crazy, but we should recognize that some things might be important even though we do not recognize them. Francis Fukuyama thought transhumanist enhancement might harm some mysterious ‘Factor X’ (i.e. a “soul”) giving us a dignity that is not widely recognized. Nick Bostrom (while rejecting the Factor X argument) has suggested that there might be many “quiet values” important for dignity, taking a back seat to the “loud” values like alleviation of suffering but still being important – a world where all the quiet values disappear could be a very bad world even if there was no suffering (think Aldous Huxley’s Brave New World, for example). This is one reason why many superintelligence scenarios end badly: transmitting the full nuanced world of human values – many so quiet that we do not even recognize them ourselves before we lose them – is very hard. I suspect most people find it unlikely that loud values like happiness or autonomy are actually parochial and worthless, but we could be wrong. This means there will always be a fair bit of moral uncertainty about axiological existential risks, and hence about technologies that may threaten value. Just consider the argument between Fukuyama and us transhumanists.

Second, autonomy threats are also tricky because autonomy might not be all that it is cracked up to be in western philosophy. The atomic, free-willed individual is rather divorced from the actual creature embedded in a neural and social matrix. But even if one doesn’t buy autonomy as having intrinsic value, there are likely good cybernetic arguments for why maintaining individuals as individuals with their own minds is a good thing. I often point to David Brin’s excellent defence of the open society, in which he points out that societies where criticism and error correction are not possible tend to become corrupt, inefficient and increasingly run by the preferences of the dominant cadre. In the end they work badly for nearly everybody and have a fair risk of crashing. Tools like surveillance, thought reading or mind control could break this beneficial feedback by silencing criticism. They might also instil identical preferences, which seems to be a recipe for common-mode errors causing big breakdowns: monocultures are more vulnerable than richer ecosystems. Still, it is not obvious that these benefits could not exist in (say) a group-mind where individuality is also part of a bigger collective mind.

Criteria and weasel-words

These caveats aside, I think the criteria for “objectively evil technology” could be

(1) It predictably increases existential risk substantially without commensurate benefits,

or,

(2) it predictably increases the amount of death, suffering or other forms of disvalue significantly without commensurate benefits.

There are unpredictable bad technologies, but they are not immoral to develop. However, developers do have a responsibility to think carefully about the possible implications or uses of their technology. And if your baby-tickling machine involves black holes you have a good reason to be cautious.

Of course, “commensurate” is going to be the tricky word here. Is a halving of nuclear weapons and biowarfare risk good enough to accept a doubling of superintelligence risk? Is a tiny-probability existential risk (say from a physics experiment) worth interesting scientific findings that will be known by humanity through the entire future? The MaxiPOK principle would argue that the benefits do not matter, or weigh rather lightly. The current gain-of-function debate shows that we can have profound disagreements – but also that we can try to construct institutions and methods that regulate the balance, or inventions that reduce the risk. This also shows the benefit of looking at larger systems than the technology itself: a potentially dangerous technology wielded responsibly can be OK if the responsibility is reliable enough, and if we can bring a safeguard technology into place before the risky one arrives, the latter might no longer be unacceptable.
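To make the weighing problem concrete, here is a minimal sketch in Python. The baseline probabilities are entirely made up for illustration (my assumptions, not estimates from the post or any source); the sketch only shows how one might compare total existential risk before and after a “halve bio and nuclear risk, double superintelligence risk” package:

```python
# Toy comparison of total existential risk before and after adopting a
# hypothetical technology package. All probabilities are invented for
# illustration; none of them come from the post.

def total_risk(risks):
    """Chance that at least one of several independent risks is realized."""
    p_survive = 1.0
    for p in risks.values():
        p_survive *= 1.0 - p
    return 1.0 - p_survive

before = {"nuclear": 0.10, "bio": 0.10, "superintelligence": 0.05}
after = {"nuclear": 0.05, "bio": 0.05, "superintelligence": 0.10}  # halved vs. doubled

print(f"total risk before: {total_risk(before):.2f}")  # ~0.23
print(f"total risk after:  {total_risk(after):.2f}")   # ~0.19
```

Under these invented numbers the package looks like a net reduction, but the verdict flips as soon as the superintelligence baseline is assumed to be larger than the other risks; this is exactly why “commensurate” carries so much weight.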

The second weasel word is “significantly”. Do landmines count? I think one can make the case. According to the UN they kill 15,000 to 20,000 people per year. The number of traffic fatalities worldwide is about 1.2 million per year – but we might think cars are so beneficial that this outweighs the many deaths.

Intention?

The landmines are intended to harm (yes, the ideal use is to make people rationally stay the heck away from mined areas, but the harming is inherent in the purpose) while cars are not. This might lead to an amendment of the second criterion:

(2′) The technology intentionally increases the amount of death, suffering or other forms of disvalue significantly without commensurate benefits.

This gets closer to how many would view things: technologies intended to cause harm are inherently evil. But being a consequentialist, I think it lets designers off the hook. Dr Guillotin believed his invention would reduce suffering (and it might have), but it also led to a lot more death. Dr Gatling invented his gun to “reduce the size of armies and so reduce the number of deaths by combat and disease, and to show how futile war is.” So the intention part is problematic.

Some people are concerned with autonomous weapons because they are non-moral agents making life-and-death decisions over people; they would use deontological principles to argue that making such amoral devices is wrong. But a landmine designed to identify civilians and not blow up around them seems a better device than an indiscriminate one: the amorality of the decision-making is less of a problem than the general harmfulness of the device.

I suspect trying to bake intentionality or other deontological concepts into the criteria will be problematic, just as human dignity (another obvious candidate – “devices intended to degrade human dignity are impermissible”) is likely a non-starter. They are still useful heuristics, though. We do not want too much brainpower spent on inventing better ways of harming or degrading people.

Policy and governance: the final frontier

In the end, this exercise can be continued indefinitely. And no doubt it will.

Given the general impotence of ethical arguments to change policy (ethics usually picks up the pieces and explains what went wrong once things do go wrong), a more relevant question might be how a civilization can avoid developing things it has good reason to suspect are a bad idea. I suspect the answer is going to be not just improvements in coordination and the ability to predict consequences, but some real innovations in governance under empirical and normative uncertainty.

But that is for another day.

6 thoughts on “Objectively evil technology”

  1. I think 2 is weirdly biased. It would make sense coming from an antinatalist or negative utilitarian, but not from someone who wants humanity to flourish because [some positive thing about human consciousness].

    Perhaps artificial suffering could be much worse, or much more likely, than human suffering.

    But on the flip side, artificial consciousness could implement positive values like SWB (subjective well-being) much more effectively. Or it could implement human values with less average suffering than human brains.

    If we are willing to risk that future human babies will suffer, there is no obvious reason why the same tradeoffs shouldn’t apply to artificial consciousness.

    1. I agree that it is a tricky one. George argues that some of the technologies may have good sides, but the risks may simply outweigh them. So if one thinks negative states have moral priority (one does not have to be a negative utilitarian for that; a suitably convex utility curve is enough), then there might be an a priori argument for not bringing new conscious entities into existence. Especially if we have reason to think there are going to be a lot of these entities, perhaps even experiencing a million times faster than us.

      A friend also argued that making a super-positive entity might be bad: from a utilitarian perspective we ought to treat it like a utility monster, but from a deontological perspective (which was her position) that would be *bad*. I am not sure how far that argument actually gets (we never got around to writing the paper together), but it was amusing.

      Overall, creating entities that have moral value is not something one should do lightly. The total amount of value at stake may be so large[*] that small mistakes mean huge consequences, and we know we are bad at doing it right. So delaying the tech until a later date when we suspect we are likely to be better at doing the right thing might actually be the safe thing to do. Which is of course a deeply annoying conclusion to techno-optimists like me.

      [* Back of the envelope calculation: Cisco estimates 8.7 billion Internet-linked devices in 2012. So if the next generation of computers had a moral importance in some domain (like pain) on par with humans we would have to care for our device ecology about as much as our human ecology.

      Continuing Moore’s-law trends, we should expect the amount of experience-seconds per device to double every 18 months, while the installed base likely grows at around 2% per year (judging by sales and the overall economic growth rate). Together that means the amount of experience-seconds in machines grows like N0*2^(t/1.5)*(1.02)^t = N0*2^(0.69*t), a doubling time of about 17 months (the machine base grows far more slowly than the power of the new equipment). So in a decade machines become roughly 133 times more important than they originally were. If Moore’s law lasts three decades more we get a factor of about 2.3 million. Botching things even slightly means we might lose, say, 1% of that. That might be 23,000 times the total moral value of humanity today (a quick numeric version of this arithmetic is sketched after this footnote).

      Still, this kind of calculation mainly leads to anti-xrisk considerations à la the astronomical waste argument, rather than to the conclusion that we should not do things. We ought to safeguard this huge potential value, and delays in realizing it might be OK as long as it is not permanently reduced. ]
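      A quick numeric version of the footnote’s arithmetic, as a minimal Python sketch (the 18-month doubling and 2% base growth are the footnote’s assumptions; the rest is just arithmetic):

      ```python
      # Back-of-the-envelope check of the growth of machine "experience-seconds",
      # using the footnote's assumptions: per-device capacity doubles every
      # 18 months (Moore's law) and the installed base grows ~2% per year.
      import math

      moore_doubling_years = 1.5
      base_growth_per_year = 1.02

      # N(t) = N0 * 2**(t/1.5) * 1.02**t = N0 * 2**(k*t)
      k = 1 / moore_doubling_years + math.log2(base_growth_per_year)
      print(f"combined exponent k ~ {k:.3f} per year")   # ~0.70
      print(f"doubling time ~ {12 / k:.1f} months")       # ~17 months

      for years in (10, 30):
          factor = 2 ** (k * years)
          print(f"after {years} years: ~{factor:,.0f}x the initial amount")
      # ~10 years -> ~1.2e2 (the footnote quotes ~133x, using a rounded 17-month doubling)
      # ~30 years -> ~2e6 (the footnote quotes ~2.3 million)
      ```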

  2. Ummm I am pretty sure I left a comment here. I assume you deleted it because it was not directly related to the conversation (just saying a few good things about the blog) but still it left me wondering if I violated any posting regulation without knowing. Anyway, drop me a line saying why if that’s not too much trouble for you
    I am commenting again here because I could not find any contact address to send you an email. Feel free to delete. Thanks
    Giorgos (g.chaziris@gmail.com)

  3. That is not a list I would expect from a transhumanist.

    2) Suffering, or at least not having one’s wants met, goes along with being conscious at all, let alone self-aware. Do we seek to snuff out our own consciousness and that of our children? No? Then this is obviously nonsense.

    3) Artificial Intelligence: It was not even specified whether the author was speaking of AGI versus more narrow AI. The latter is ubiquitous and critical today. The former is much to be desired, as we require much more intelligence to resolve the challenges of our time, much less what is to come. Unfortunately many think AGI means near-instantaneous absolute power, so they get very afraid. One can no more, actually much, much less, guarantee “Friendly” AGI than one can guarantee one’s child will grow up to be a fine human being.

    5) Mind reading devices. Hmm. You mean like really good BCI, or eventually up-to-the-second brain scans for backups? That is an awful lot to give up out of fear of misuse.

    6) Brain hacking devices. So all those transhumanist dreams of self modification and improvement at a brain/mental level need to be avoided? What about speed learning machines?

    Some of the others I agree with. If we can build a virtual prison then we can also likely cure anything psychological/physiological or build fast personal growth environments towards cure. We may need some of (5) and (6) to do it.

    On (7) I fail to see the difference between humans programmed and trained to kill people and machines tasked with doing so. Both are equally repugnant except in defense against aggressors.

    1. Imagine giving computers a sense of frustration or impatience. It might improve their performance and ability to solve certain problems, but since most of the time they are waiting for users to do something, and errors occur frequently, there would be a lot of bad emotions. If there was no phenomenological consciousness, just appropriate behavior, then things would be fine. Adding consciousness to things that don’t need it seems stupid. We want our children to be conscious because we care about *them*. But it is rare that we buy tools because we want the individual tool itself; usually we want them for instrumental use.

      AGI safety: whether one can give safety guarantees is an unsolved problem. A lot of people have opinions, but more good research is needed. Assuming that it actually is simple (a lot of AI people do this) is just as stupid as assuming that it cannot be done. We simply don’t know. But we have fairly good reasons to suspect that it is an *important* problem because of the implications of AGI power.

      Mind reading: yes, how much is too much to give up for fear of misuse? What metrics do we use? The problem also has a converse: if I mention some horrible technology (say, a new kind of torture) that could be used for some good purposes (the classic bomb case, or maybe it has the side effect of curing cancer), how do you judge these apples and oranges? Some technologies are entirely OK when handled by responsible people, but not OK when accessible to random people. Yet we can also predict that most technologies will become available to anyone in the long run.

      Brain hacking: I think you are reflexively reading a banning mentality into George’s essay that is not really there. Just because we transhumanists like an area of technology in general doesn’t change the fact that (1) certain uses can be very bad, and (2) we might find their badness so significant that we rationally should try to avoid the field, or at least wait until we are wiser or have better oversight and safeguard technologies. Brain hacking is potentially far more problematic than mind reading, since it can seriously undermine our autonomy if used badly or maliciously. Governments have always wanted mind control monopolies, and if citizens’ views can be decided from the outside then we have a tough problem for democracy and freedom. Editing your own motivation seems likely to lead to instabilities, and we do not yet know how to handle that well. Maybe we can learn to handle it from experience with the technology (a not uncommon pattern), but if it looks like a large number of people will end up with ruined lives we might have a good reason to avoid developing it.
