Yesterday I participated in recording a radio program about robotics, and I noted that the participants were approaching the issue from several very different angles:
- Robots as symbols: what we project onto them, what this says about humanity, how we change ourselves in relation to them, and the role of hype and humanity in our thinking about them.
- Robots as a practical problem: how do you make a safe and trustworthy autonomous device that hangs around people? How do we handle responsibility for complex distributed systems that can generate ‘new’ behaviour?
- Automation and jobs: what kinds of jobs are threatened or changed by automation? How does it change society, and how do we steer it in desirable directions – and what are they?
- Long-term risks: how do we handle the potential risks from artificial general intelligence, especially given that many people think there is absolutely no problem while others are convinced that this could be existential if we do not figure out enough before it emerges?
In many cases the discussion became absurd because we talked past each other due to our different perspectives, but there were also some nice synergies. Trying to design automation without taking the anthropological and cultural aspects into account will lead to something that either does not work well with people or forces people to behave more machine-like. Not taking past hype cycles into account when estimating future impact leads to overconfidence; assuming that nothing will change just because there has been hype in the past is equally overconfident. The problems of trustworthiness and responsibility distribution become truly important when automating many jobs: when the automation is an essential part of the organisation, there need to be mechanisms for trusting it and for avoiding the dissolution of responsibility. Currently robot ethics is more about how humans are impacted by robots than about ethics for robots, but the latter will become quite essential if we get closer to AGI.
I focused on jobs, starting from the Future of Employment paper. Maarten Goos and Alan Manning pointed out that automation seems to lead to a polarisation into “lovely and lousy jobs”: more non-routine manual jobs (lousy) and more non-routine cognitive jobs (lovely). The paper strongly supports this, showing that a large chunk of occupations relying on routine tasks might be possible to automate, while tasks requiring hand-eye coordination, human dexterity, social ability, creativity and intelligence – especially applied flexibly – are pretty safe.
Overall, the economist’s view is relatively clear: automation that embodies skills and the ability to do labour can only affect the distribution of jobs and how much certain skills are valued and paid compared with others. There is no rule that if task X can be done by a machine it will be done by a machine: handmade can still command a premium, and the law of comparative advantage might mean it is not worth using the machine for X when it could be doing the even more profitable task Y. Still, being entirely dependent on doing X for your living is likely a bad situation.
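The comparative advantage point can be made concrete with a toy calculation. All the numbers below are invented purely for illustration; nothing here comes from the paper:

```python
# Toy illustration of comparative advantage (all numbers are made up).
# A machine can do task X or task Y; a human can only do X.
machine_output = {"X": 10, "Y": 50}  # value produced per machine-hour
human_output = {"X": 8}              # value produced per human-hour

# Scenario A: machine does Y, human does X.
value_a = machine_output["Y"] + human_output["X"]   # 50 + 8 = 58

# Scenario B: machine does X (where it is absolutely better, 10 > 8),
# human also does X.
value_b = machine_output["X"] + human_output["X"]   # 10 + 8 = 18

# Each machine-hour spent on X forgoes 40 units of Y-value, so even
# though the machine beats the human at X, it pays to leave X to the
# human and keep the machine on Y.
assert value_a > value_b
```

The human keeps the job not because the machine cannot do it, but because the machine has something more profitable to do.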
Also, we often underestimate the impact of “small” parts of tasks that do not seem to matter in a formal analysis. Underwriters are on paper eminently replaceable… except that the ability to notice “Hey! Those numbers don’t make sense” or to judge the reliability of risk models is quite hard to implement, and may actually constitute most of their value. We also care about hard-to-automate things like social interaction and style. And priests, politicians, prosecutors and prostitutes are all fairly secure, because their jobs may inherently require being a human or representing a human.
However, the development of AI ability is not a continuous predictable curve. We get sudden surprises like autonomous cars (just a few years ago most people believed autonomous driving was a very hard, nearly impossible problem) or statistical translation. Confluences of technology conspire to change things radically (consider the digital printing revolution of the 80s, both big and small, which upended the world for human printers). And since we know we are simultaneously overhyping and missing trends, this should not give us any sense of complacency. Just because we have always failed to automate X in the past does not mean X might not suddenly turn out to be automatable tomorrow: relying on X staying stably in the human domain is a risky assumption, especially when thinking about career choices.
Robots also have another important property: we can make a lot of them if we have a reason. If there is a huge demand for humans doing X, we need to retrain people or wait for children to grow up into Xers, which makes the price of X go up a lot. Robots can be manufactured relatively easily, and scaling up the manufacturing is cheaper: even if X-robots are fairly expensive, making a lot more X-robots might be cheaper than trying to get more humans if X suddenly matters.
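The asymmetry between the two supply curves can be sketched as a toy cost model. The functional forms and every number below are invented assumptions, chosen only to show the qualitative shape of the argument:

```python
# Toy cost model for scaling up supply of skill X (all numbers invented).

def human_cost(n):
    # Training humans: per-head cost rises steeply as demand outstrips
    # the pool of willing and able trainees (modelled here as a
    # quadratic scarcity premium on top of a base training cost).
    base, scarcity = 50_000, 5.0
    return sum(base + scarcity * i**2 for i in range(n))

def robot_cost(n):
    # Robots: a large fixed cost for the factory line, then a roughly
    # constant (slightly declining) marginal cost per unit as
    # manufacturing learns and scales.
    fixed, unit = 10_000_000, 100_000
    return fixed + sum(unit * 0.999**i for i in range(n))

# For small demand, humans are cheaper (the factory is not worth it);
# for a sudden large surge in demand for X, the robots win decisively.
assert human_cost(10) < robot_cost(10)
assert robot_cost(10_000) < human_cost(10_000)
```

The crossover point depends entirely on the made-up parameters, but the shape does not: human supply gets more expensive at the margin, while manufactured supply gets cheaper, which is exactly why a sudden demand for X favours robots.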
This scaling is a bit worrisome, since robots implement somebody’s action plan (maybe badly, maybe dangerously creatively): they are essentially an extension of somebody’s or something’s preferences. So if we could make robot soldiers, the group or side that could make the most would have a potentially huge strategic advantage. Innovation in fast manufacturing becomes important, in turn creating an incentive for an arms race in being able to raise an army at the press of a button. This is where I think atomically precise manufacturing is potentially risky: it might enable very quick builds, and that is potentially destabilizing. But even ordinary automated production poses this risk (remember, this is a scenario where robotics is good enough to carry out useful military action, so manufacturing robotics will be advanced too). Also, for countries running mostly on the export of raw materials, automating production might leave little need for most of the population… An economist would say the population might be used for other profitable activities, but many nasty resource-driven governments do not invest much in their human capital. In fact, they tend to see it as a security problem.
Of course, if we ever get to the level where intellectual tasks and services close to the human scale can be automated, the same might apply to more developed economies too. But at that point we are so close to automating the task of making robots and AI better that I expect an intelligence explosion to occur before any social explosions. A society where nobody needs to work might sound nice and might be very worth striving for, but in order to get there we need at the very least to get close to general AI and to solve its safety problems.
See also this essay: commercializing the robot ecosystem in the anthropocene.