Over on the Oxford Martin School blog I have an essay about law-abiding robots, triggered by a report to the EU Committee on Legal Affairs. Basically, it asks what legal rules we need to make robots usable in society, and in particular how to handle liability when autonomous machines do bad things.
(Dr Yueh-Hsuan Weng has an interview with the rapporteur.)
Were robots thinking, moral beings, liability would be easy: they would presumably be legal subjects and handled like humans and corporations. Instead they occupy an uneasy position as legal objects, yet ones endowed with the ability to perform complex actions on behalf of others, or with emergent behaviors nobody can predict. The challenge may be to design not just robots or laws, but robots and laws that fit each other (and real social practices): social robotics.
But it is early days. It is hard to tell where robotics will truly shine or matter legally, and premature laws can stifle innovation. Nor do we really know what principles ought to underpin social robotics; more research is needed. And if you thought AI safety was hard, consider getting machines to fit into the even less well-defined human social landscape.