Arguing against killer robot janissaries

Military robot being shown to families at New Scientist Live 2017.

I have a piece in Dagens Samhälle with Olle Häggström, Carin Ism, Max Tegmark and Markus Anderljung urging the Swedish parliament to consider banning lethal autonomous weapons.

This is of course mostly symbolic; the real debate is happening right now over in Geneva at the CCW. I also participated in a round-table with the Red Cross that led to their report on the issue, which is one of the working papers presented there.

I am not particularly optimistic that we will get a ban – nor that a ban would actually achieve much. However, I am much more optimistic that this debate may force a general agreement about the importance of getting meaningful human control. This is actually an area where most military and peace groups would agree: nobody wants systems that are unaccountable and impossible to control. Making sure there are international agreements that using such systems is irresponsible and maybe even a war crime would be a big win. But there are lots of devils in the details.

When it comes to arguments for why LAWs are morally bad, I am personally not so convinced that the bad comes from a machine making the decision to kill a person. Clearly some forms of machine decision-making could improve proportionality and reduce arbitrariness. Similarly, arguments about whether they would increase or reduce the risk of military action, and how this would play out in terms of human suffering and death, are interesting empirical arguments, but we should not be overconfident that we know the answers. Given that once LAWs are in use it will be hard to roll them back if the answers turn out to be bad, we might find it prudent to try to avoid them (but consider the opposing scenario where robots have fought our wars since time immemorial and somebody now suggests using humans too – there is a status quo bias here).

My main reason for being opposed to LAWs is not that they would be inherently immoral, nor that they would necessarily or even likely make war worse or more likely. My view is that the problem is that they give states too much power. Basically, they make the state's monopoly on violence independent of the wishes of the citizens. Once a sufficiently potent LAW military (or police force) exists, it will be able to exert coercive and lethal power as ordered, without any mediation through citizens. While having humans in the army certainly doesn't guarantee moral behavior, if ordered to turn against the citizenry or act in a grossly immoral way they can exert moral agency and resist (with varying levels of overtness). The LAW army will instead implement the orders as long as they are formally lawful (assuming there is at least a constraint against unlawful commands). States know that if they mistreat their population too much their army might side with the population, which is one reason some of the nastier governments make use of mercenaries or a special separate class of soldier to reduce the risk. If LAWs become powerful enough they might make dictatorships far more stable by removing a potentially risky key component of state power from internal politics.

Bans and moral arguments are unlikely to work against despots. But building a broad moral consensus on what is acceptable in war does have effects. If R&D emphasis is directed towards finding solutions for how to manage responsibility for autonomous device decisions, that will develop a lot of useful technologies for making such systems at least safer – and one can well imagine similar legal and political R&D into finding better solutions to citizen-independent state power.

In fact, far more important than LAWs is what to do about Lethal Autonomous States. Bad governance kills, many institutions/corporations/states behave just as badly as the worst AI risk visions and have a serious value alignment problem, and we do not have great mechanisms for handling responsibility in inter-state conflicts. The UN system is a first stab at the problem but obviously much, much more can be done. In the meantime, we can try avoiding going too quickly down a risky path while we try to find safe-making technologies and agreements.

Law-abiding robots?

Over on the Oxford Martin School blog I have an essay about law-abiding robots, triggered by a report to the EU committee of legal affairs. Basically, it asks what legal rules we want to have to make robots usable in society, in particular how to handle liability when autonomous machines do bad things.

(Dr Yueh-Hsuan Weng has an interview with the rapporteur)

Were robots thinking, moral beings, liability would be easy: they would presumably be legal subjects and handled like humans and corporations. But for now they have an uneasy position as legal objects, yet endowed with the ability to perform complex actions on behalf of others, or with emergent behaviors nobody can predict. The challenge may be to design not just the robots or the laws, but robots and laws that fit each other (and real social practices): social robotics.

But it is early days. It is actually hard to tell where robotics will truly shine or matter legally, and premature laws can stifle innovation. We also do not really know what principles we ought to use to underpin social robotics – more research is needed. And if you thought AI safety was hard, now consider getting machines to fit into the even less well defined human social landscape.

Why Cherry 2000 should not be banned, Terminator should, and what this has to do with Oscar Wilde

[This is what happens when I blog after two glasses of wine. Trigger warning for possibly stupid cultural criticism and misuse of Oscar Wilde.]

From robots to artificiality

On practical ethics I discuss what kind of robots we ought to campaign against. I have signed up against autonomous military robots, but I think sex robots are fine. The dividing line is that the harm done (if any) is indirect and victimless, and best handled through sociocultural means rather than legislation.

I think the campaign against sex robots has a point in that there are some pretty creepy ideas floating around in the world of current sex bots. But I also think it assumes these ideas are the only possible motivations. As I pointed out in my comments on another practical ethics post, there are likely people turned on by pure artificiality – human sexuality can be far queerer than most think.

Going off on a tangent, I am reminded of Oscar Wilde’s epigram

“The first duty in life is to be as artificial as possible. What the second duty is no one has as yet discovered.”

Being artificial is not the same thing as being an object. As noted by Barris, Wilde's artificiality actually fits in with pluralism and liberalism. Things could be different. Yes, in the artificial world nothing is absolutely given; everything is the result of some design choices. But assuming some eternal Essence/Law/God is necessary for meaning or morality exposes one to a fruitless search for that Thing (or worse, a premature assumption that one has found It, typically when looking in the mirror). Indeed, as Dorian Gray muses, "Is insincerity such a terrible thing? I think not. It is merely a method by which we can multiply our personalities." We are not single personas with unitary identities and well-defined destinies, and this is most clearly visible in our social plays.

Sex, power and robots

Continuing on my Wildean binge, I encountered another epigram:

“Everything in the world is about sex except sex. Sex is about power.”

I think this cuts close to the Terminator vs. Cherry 2000 debate. Most modern theorists of gender and sex are of course power-obsessed (let's blame Foucault). The campaign against sex robots clearly sees the problem as the robots embodying and perpetuating a problematic unequal power structure. I detect a whiff of paternalism there, where women and children – rather than people – seem to be assumed to be the victims and in need of being saved from this new technology (at least it is not going as far as some other campaigns that fully assume they are also suffering from false consciousness and must be saved from themselves, the poor things). But sometimes a cigar is just a cigar… I mean, sex is sex: it is important to recognize that one of the reasons for sex robots (and indeed prostitution) is the desire for sex and the sometimes awkward social or biological constraints on experiencing it.

The problem with autonomous weapons is that power really does come out of the barrel of a gun. (Must resist making a Zardoz reference…) It might be wielded arbitrarily by an autonomous system with unclear or bad orders, or it might be wielded far too efficiently by an automated armed force perfectly obedient to its commanders – removing the constraint that soldiers might turn against their rulers if aimed against their own citizenry. Terminator is far more about unequal and dangerous power than sex (although I still have fond memories of seeing a naked Arnie back in 1984). The cultural critic may argue that the power games in the bedroom are more insidious and affect more of our lives than some remote gleaming gun-metal threat, but I think I'd rather have sexism than killing and automated totalitarianism. The uniforms of the killer robots are not even going to look sexy.

It is for your own good

Trying to ban sex robots is about trying to shape society in a way one finds appealing – the goal of the campaign is to support "development of ethical technologies that reflect human principles of dignity, mutuality and freedom" and the right for everybody to have their subjectivity recognized without coercion. But while these are liberal principles when stated like this, I suspect the campaign, or groups like it, will have a hard time keeping out of our bedrooms. After all, they need to ensure that there is no lack of mutuality or creepy sex robots there. The liberal respect for mutuality can become a very non-liberal worship of Mutuality, embodied in requiring partners to sign consent forms, demanding trigger warnings, and treating everybody who does not respond right to its keywords as a suspect of future crimes. The fact that this absolutism comes from a very well-meaning impulse to protect something fine makes it even more vicious, since any criticism is easily mistaken for an attack on the very core Dignity/Mutuality/Autonomy of humanity (and hence any means of defence are OK). And now we have all the ingredients for a nicely self-indulgent power trip.

This is why Wilde's pluralism is healthy. Superficiality, accepting the contrived and artificial nature of our created relationships, means that we become humble in asserting their truth and value. Yes, absolute relativism is stupid and self-defeating. Yes, we need to treat each other decently, but I think it is better to start from the Lockean liberalism that allows people to have independent projects rather than assume that society and its technology must be designed to embody the Good Values. Replacing "human dignity" with the word "respect" usually makes ethics clearer.

Instead of assuming we can a priori figure out how technology will change us and then select the right technology, we should try and learn. We can make some predictions with reasonable accuracy, which is why trying to rein in autonomous weapons makes sense (the probability that they lead to a world of stability and peace seems remote). But predicting cultural responses to technology is not something we have any good track record of: most deliberate improvements of our culture have come from social means and institutions, not from banning technology.

“The fact is, that civilisation requires slaves. The Greeks were quite right there. Unless there are slaves to do the ugly, horrible, uninteresting work, culture and contemplation become almost impossible. Human slavery is wrong, insecure, and demoralising. On mechanical slavery, on the slavery of the machine, the future of the world depends.”

More robots, and how to take over the world with guaranteed minimum income

I was just watching "Humans Need Not Apply" by CGPGrey,

when I noticed a tweet from Wendy Grossman, with whom I participated in a radio panel about robotics (earlier notes on the discussion). She has some good points inspired by our conversation in her post, robots without software.

I think she has a key observation: much of the problem lies in the interaction between the automation and humans. On the human side, that means getting the right information and feedback into the machine side. On the machine side, it means figuring out what humans – those opaque and messy entities who change behaviour for internal reasons – want. At the point where the second demand is somehow resolved we will not only have really useful automation, but also essentially a way of resolving AI safety/ethics. But before that, we will have a situation of only partial understanding, and plenty of areas where either side will not be able to mesh well. This either forces humans to adapt to machines, or machines to get humans to think that what they really wanted was what they got served. That is risky.

Global GMI stability issues

Incidentally, I have noted that many people hearing the current version of the "machines will take our jobs" story bring up the idea of a guaranteed minimum income (GMI) as a remedy. If nobody has a job but there is a GMI we can still live a good life (especially since automation would make most things rather cheap). This idea has a long history, and Hans Moravec suggested it in his book Robot (1998) in regard to a future where AI-run corporations would be running the economy. It can be appealing even from a libertarian standpoint since it does away with a lot of welfare and tax bureaucracy (even Hayek might have been a fan).

I’m not enough of an economist to analyse it properly, but I suspect the real problem is stability when countries compete on tax: if Foobonia has a lower corporate tax rate than Baristan and the Democratic Republic of Baaz, then companies will move there – still making money by selling stuff to people in Baristan and Baaz. The more companies there are in Foobonia, the less taxes are needed to keep the citizens wealthy. In fact, as I mentioned in my earlier post, having fewer citizens might make the remaining more well off (things like this have happened on a smaller scale). The ideal situation would be to have the lowest taxes in the world and just one citizen. Or none, so the AI parliament can use the entire budget to improve the future prosperity and safety of Foobonia.
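To make the worry concrete, here is a minimal toy sketch rather than a real economic model: the country names come from the paragraph above, but the tax rates, populations and total profit figure are invented, and the rule that all mobile profits get booked in the lowest-tax jurisdiction is a deliberately extreme assumption.

```python
# Toy sketch of tax competition with fully mobile machine capital.
# Country names are from the text; all numbers are invented, and
# "all profits get booked in the lowest-tax country" is an extreme
# simplifying assumption.

countries = {
    "Foobonia": {"tax_rate": 0.05, "citizens": 1_000_000},
    "Baristan": {"tax_rate": 0.20, "citizens": 5_000_000},
    "Baaz":     {"tax_rate": 0.25, "citizens": 8_000_000},
}

total_corporate_profit = 1e12  # mobile corporate profit, in dollars

# Machine capital can be placed anywhere, so profits end up where
# the tax rate is lowest.
host = min(countries, key=lambda c: countries[c]["tax_rate"])

for name, c in countries.items():
    revenue = c["tax_rate"] * total_corporate_profit if name == host else 0.0
    gmi = revenue / c["citizens"]
    print(f"{name:10s} revenue ${revenue:>16,.0f}   GMI per citizen ${gmi:>9,.0f}")

# The perverse incentive: the host's per-citizen payout grows as the
# citizenry shrinks.
for citizens in (1_000_000, 10_000, 1):
    print(f"{citizens:>9,} citizens -> ${0.05 * total_corporate_profit / citizens:,.0f} each")
```

Even this crude sketch shows the pull of the incentive gradient: revenue concentrates where the rate is lowest, and the per-citizen payout grows as the number of citizens sharing it shrinks.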

In our current world tax competition is only one factor determining where companies go. Not every company moves to the Bahamas, Chile, Estonia or the UAE. One factor is other legal issues and logistics, but a big part is that you need to have people actually working in your company. Human capital is distributed very unevenly, and it is rarely where you want it (and the humans often do not want to move, for social reasons). But in an automated world machine capital will exist wherever you buy it, so it can be placed where taxes are lower. There will be a need to perform some services and transport goods in other areas, but unless they are taxed (hence driving up the price for your citizens) this is going to be a weaker constraint than now. How much weaker, I do not know – it would be interesting to see it investigated properly.

The core problem remains that if humans are largely living off the rents from a burgeoning economy, there had better exist stabilizing safeguards so these rents remain, and stabilizers that keep the safeguards stable. This is a non-trivial legal/economic problem, especially since one failure mode might be that some countries become zero-citizen countries with huge economic growth and gradually accumulating investments everywhere (a kind of robotic Piketty situation, where in the end everything ends up owned by the AI consortium/sovereign wealth fund with the strongest growth). In short, it seems to require something just as tricky to develop as the friendly superintelligence program.
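As a back-of-the-envelope illustration of why the robotic Piketty scenario compounds, here is a minimal sketch; the starting wealth shares and growth rates are invented purely for illustration.

```python
# Minimal sketch of the "robotic Piketty" worry: a fund that grows
# slightly faster than the rest of the world eventually owns nearly
# everything. Initial shares and growth rates are invented.

fast_start, slow_start = 1.0, 99.0   # fast grower starts with 1% of wealth
g_fast, g_slow = 0.06, 0.03          # assumed annual real growth rates

for year in range(0, 301, 50):
    fast = fast_start * (1 + g_fast) ** year
    slow = slow_start * (1 + g_slow) ** year
    print(f"year {year:3d}: fast grower owns {fast / (fast + slow):6.1%} of all wealth")
```

A three-percentage-point growth advantage takes the fast grower from 1% to the overwhelming majority of wealth within a few centuries; the point is only that small persistent differences in growth dominate in the long run, not that these particular numbers are predictions.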

In any case, I suspect much of the reason people suggest GMI is that it is an already existing idea and not too strange. Hence it is thinkable and proposable. But there might be far better ideas out there for how to handle a world with powerful automation. One should not just stick with a local optimum idea when there might be way more stable and useful ideas further out.

Risky and rewarding robots

Yesterday I participated in recording a radio program about robotics, and I noted that the participants were approaching the issue from several very different angles:

  • Robots as symbols: what we project onto them, what this says about humanity, how we change ourselves in respect to them, the role of hype and humanity in our thinking about them.
  • Robots as practical problem: how do you make a safe and trustworthy autonomous device that hangs around people? How do we handle responsibility for complex distributed systems that can generate ‘new’ behaviour?
  • Automation and jobs: what kinds of jobs are threatened or changed by automation? How does it change society, and how do we steer it in desirable directions – and what are they?
  • Long-term risks: how do we handle the potential risks from artificial general intelligence, especially given that many people think there is absolutely no problem while others are convinced that this could be existential if we do not figure out enough before it emerges?

In many cases the discussion got absurd because we talked past each other due to our different perspectives, but there were also some nice synergies. Trying to design automation without taking the anthropological and cultural aspects into account will lead to something that either does not work well with people or forces people to behave more machinelike. Not taking past hype cycles into account when trying to estimate future impact leads to overconfidence. Assuming that just because there has been hype in the past nothing will change is equally overconfident. The problems of trustworthiness and responsibility distribution become truly important when automating many jobs: when the automation is an essential part of the organisation, there need to be mechanisms to trust it and to avoid dissolution of responsibility. Currently robot ethics is more about how humans are impacted by robots than about ethics for robots, but the latter will become quite essential if we get closer to AGI.

Jobs

Robot on break

I focused on jobs, starting from the Future of Employment paper. Maarten Goos and Alan Manning pointed out that automation seems to lead to a polarisation into "lovely and lousy jobs": more non-routine manual jobs (lousy), more non-routine cognitive jobs (lovely). The paper strongly supports this, showing that a large chunk of occupations that rely on routine tasks might be possible to automate, but things requiring hand-eye coordination, human dexterity, social ability, creativity and intelligence – especially applied flexibly – are pretty safe.

Overall, the economist's view is relatively clear: automation that embodies skills and ability to do labour can only affect the distribution of jobs and how much certain skills are valued and paid compared with others. There is no rule that if task X can be done by a machine it will be done by a machine: handmade can still command a premium, and the law of comparative advantage might mean it is not worth using the machine to do X when it can do the even more profitable task Y. Still, being entirely dependent on doing X for your living is likely a bad situation.
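To spell out the comparative-advantage point with a toy example (all numbers invented): even if the machine is absolutely better at both tasks, total value is maximised by leaving X to the human when the machine's hour is worth far more spent on Y.

```python
# Toy comparative-advantage example with invented numbers: the machine
# is absolutely better at both X and Y, but with one machine-hour and
# one human-hour available, total value is highest when the human does X.

value_per_hour = {
    "machine": {"X": 10, "Y": 100},   # value produced per hour, common units
    "human":   {"X": 5,  "Y": 2},
}

def total_value(machine_task, human_task):
    return value_per_hour["machine"][machine_task] + value_per_hour["human"][human_task]

print("machine on X, human on Y:", total_value("X", "Y"))  # 10 + 2  = 12
print("machine on Y, human on X:", total_value("Y", "X"))  # 100 + 5 = 105
```

The machine's opportunity cost of doing X is the 100 units of Y it forgoes, while the human only forgoes 2, so the human keeps the X job despite being slower at it.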

Also, we often underestimate the impact of “small” parts of tasks that in formal analysis don’t seem to matter. Underwriters are on paper eminently replaceable… except that the ability to notice “Hey! Those numbers don’t make sense” or judge the reliability of risk models is quite hard to implement, and actually may constitute most of their value. We care about hard to automate things like social interaction and style. And priests, politicians, prosecutors and prostitutes are all fairly secure because their jobs might inherently require being a human or representing a human.

However, the development of AI ability is not a continuous, predictable curve. We get sudden surprises like autonomous cars (just a few years ago most people believed autonomous driving was a very hard, nearly impossible problem) or statistical translation. Confluences of technology conspire to change things radically (consider the digital revolution of printing, both big and small, in the 80s that upended the world for human printers). And since we know we are simultaneously overhyping and missing trends, this should not give us a sense of complacency at all. Just because we have always failed to automate X in the past doesn't mean X might not suddenly turn out to be automatable tomorrow: relying on X being stably in the human domain is a risky assumption, especially when thinking about career choices.

Scaling

Robin, supply, demand and robots

Robots also have another important property: we can make a lot of them if we have a reason. If there is a huge demand for humans doing X we need to retrain or have children who grow up to be Xers. That makes the price go up a lot. Robots can be manufactured relatively easily, and scaling up the manufacturing is cheaper: even if X-robots are fairly expensive, making a lot more X-robots might be cheaper than trying to get humans if X suddenly matters.
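A rough sketch of this asymmetry, with every figure invented purely for illustration: human supply responds through training throughput, while robot supply responds through manufacturing, here given a crude learning-curve assumption.

```python
import math

# Toy sketch of the scaling asymmetry; every number is invented.
demand = 100_000  # a sudden need for 100,000 X-workers

# Humans: supply responds through retraining, limited by throughput.
graduates_per_year = 5_000
print(f"Humans: ~{demand / graduates_per_year:.0f} years of training to fill the demand")

# Robots: supply responds through manufacturing. Assume a crude 90%
# learning curve: each doubling of cumulative output cuts unit cost 10%.
first_unit_cost = 200_000.0
learning = 0.9
last_unit_cost = first_unit_cost * learning ** math.log2(demand)
print(f"Robots: unit cost falls to ~${last_unit_cost:,.0f} by unit {demand:,}")
```

The exact figures do not matter; the contrast is that the human response is rate-limited over years and bids wages up in the meantime, while the manufactured response gets cheaper the more of it you do.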

This scaling is a bit worrisome, since robots implement somebody's action plan (maybe badly, maybe dangerously creatively): they are essentially an extension of somebody or something's preferences. So if we could make robot soldiers, the group or side that could make the most would have a potentially huge strategic advantage. Making innovations in fast manufacture becomes important, in turn leading to a situation where there is an incentive for an arms race in being able to get an army at the press of a button. This is where I think atomically precise manufacturing is potentially risky: it might enable very quick builds, and that is potentially destabilizing. But even ordinary automated production would matter (remember, this is a scenario where some robotics is good enough to implement useful military action, so manufacturing robotics will be advanced too). Also, in countries running mostly on exports of raw materials, automating production might leave little need for most of the population… An economist would say the population might be used for other profitable activities, but many nasty resource-driven governments do not invest in their human capital very much. In fact, they tend to see it as a security problem.

Of course, if we ever get to the level where intellectual tasks and services close to the human scale can be automated, the same might apply to more developed economies too. But at that point we are so close to automating the task of making robots and AI better that I expect an intelligence explosion to occur before any social explosions. A society where nobody needs to work might sound nice and might be very worth striving for, but in order to get there we need to at the very least get close to general AI and solve its safety problems.

See also this essay: commercializing the robot ecosystem in the anthropocene.