Law-abiding robots?

Over on the Oxford Martin School blog I have an essay about law-abiding robots, triggered by a report to the EU Committee on Legal Affairs. Essentially, it asks what legal rules we want in order to make robots usable in society, and in particular how to handle liability when autonomous machines do bad things.

(Dr Yueh-Hsuan Weng has an interview with the rapporteur.)

Were robots thinking, moral beings, liability would be easy: they would presumably be legal subjects, handled much like humans and corporations. As it stands, they occupy an uneasy position as legal objects that are nonetheless able to perform complex actions on behalf of others, or to exhibit emergent behaviors nobody can predict. The challenge may be to design not just the robots or the laws, but robots and laws that fit each other (and real social practices): social robotics.

But it is early days. It is genuinely hard to tell where robotics will truly shine or matter legally, and premature laws can stifle innovation. We also do not really know what principles ought to underpin social robotics; more research is needed. And if you thought AI safety was hard, consider getting machines to fit into the even less well-defined human social landscape.

2 thoughts on “Law-abiding robots?”

  1. And then there are robots that are designed to do bad things.
    Kill suspected terrorists, armed criminals, suicide bombers, etc.
    Presumably ‘following orders’ would be a good defense until robot intelligence was deemed good enough that the Nuremberg defense would no longer apply.

    But then you have problems with robots that decide for themselves which orders to follow and to what extent. That makes the tool much less useful to the owner; in fact, high robot intelligence would be deliberately avoided for that reason. And if a robot were involved in criminal behavior, single-use robots and procedures to keep the ‘owner’ from being detected would be developed.

    1. Exactly. A badly behaving tool-robot is just an extension of whoever ordered it to do something, or the culpability lies with the manufacturer that made a misbehaving tool. A more autonomous system’s behavior is determined not only by its programming but also by its experience, and there culpability becomes a mess. I don’t think much complexity is needed before simple legal approaches, like distinguishing between “domestic” and “wild” robots, become the only practical possibility.
