I have a piece in Dagens Samhälle with Olle Häggström, Carin Ism, Max Tegmark and Markus Anderljung urging the Swedish parliament to consider banning lethal autonomous weapons.
This is of course mostly symbolic; the real debate is happening right now over in Geneva at the CCW. I also participated in a round-table with the Red Cross that led to their report on the issue, which is one of the working papers presented there.
I am not particularly optimistic that we will get a ban – nor that a ban would actually achieve much. However, I am much more optimistic that this debate may force a general agreement on the importance of meaningful human control. This is actually an area where most military and peace groups would agree: nobody wants systems that are unaccountable and impossible to control. Making sure there is international agreement that using such systems is irresponsible, and maybe even a war crime, would be a big win. But there are lots of devils in the details.
When it comes to arguments for why LAWs are morally bad, I am personally not so convinced that the bad comes from a machine making the decision to kill a person. It is quite possible that machine decisionmaking could improve proportionality and reduce arbitrariness. Similarly, arguments about whether they would increase or reduce the risk of military action, and how this would play out in terms of human suffering and death, are interesting empirical arguments, but we should not be overconfident that we know the answers. Given that once LAWs are in use it will be hard to roll them back if the answers turn out to be bad, we might find it prudent to try to avoid them (but consider the opposing scenario where robots have fought our wars since time immemorial and somebody now suggests using humans too – there is a status quo bias here).
My main reason for being opposed to LAWs is not that they would be inherently immoral, nor that they would necessarily or even likely make war worse or more likely. My view is that the problem is that they give states too much power. Basically, they make the state's monopoly on violence independent of the wishes of its citizens. Once a sufficiently potent LAW military (or police force) exists, it will be able to exert coercive and lethal power as ordered without any mediation through citizens. While having humans in the army certainly doesn't guarantee moral behavior, if ordered to turn against the citizenry or act in a grossly immoral way they can exert moral agency and resist (with varying levels of overtness). The LAW army will instead implement the orders as long as they are formally lawful (assuming there is at least a constraint against unlawful commands). States know that if they mistreat their population too much their army might side with the population, which is a reason why some of the nastier governments make use of mercenaries or a special separate class of soldiers to reduce the risk. If LAWs become powerful enough they might make dictatorships far more stable by removing a potentially risky key component of state power from internal politics.
Bans and moral arguments are unlikely to work against despots. But building a broad moral consensus on what is acceptable in war does have effects. If R&D emphasis is directed towards solving how to manage responsibility for decisions made by autonomous devices, that will produce a lot of useful technologies for making such systems at least safer – and one can well imagine similar legal and political R&D into finding better solutions to the problem of citizen-independent state power.
In fact, far more important than LAWs is what to do about Lethal Autonomous States. Bad governance kills, many institutions/corporations/states behave just as badly as the worst AI risk visions and have a serious value alignment problem, and we do not have great mechanisms for handling responsibility in inter-state conflicts. The UN system is a first stab at the problem but obviously much, much more can be done. In the meantime, we can try avoiding going too quickly down a risky path while we try to find safe-making technologies and agreements.
Isn’t there a case for non-lethal ‘guard dog’ AI robots?
They would need a range of non-lethal weaponry in case the intruder(s) had protected themselves against some types of weapons. Also a range of defenses to protect against being disabled. Communication capability to report the attack or interrogate the intruders would also be required.
Unfortunately, the military would only need to replace the non-lethal weapons on these devices with lethal ones to create LAWs.
Somewhat late comment, but… There's no such thing as a “non-lethal weapon”; there are only “less-lethal weapons”.
Pepper sprays can cause complete breathing failure. Likewise all varieties of tear gas. Electric stunning can provoke heart failure. Etc, etc, etc…