I was just watching “Humans Need Not Apply” by CGPGrey
when I noticed a tweet from Wendy Grossman, with whom I recently participated in a radio panel about robotics (earlier notes on the discussion). She makes some good points inspired by our conversation in her post, robots without software.
I think she has a key observation: much of the problem lies in the interaction between the automation and humans. On the human side, that means getting the right information and feedback into the machine side. On the machine side, it means figuring out what humans – those opaque and messy entities who change behaviour for internal reasons – actually want. At the point where that second demand is somehow resolved we will not only have really useful automation, but also essentially a way of resolving AI safety/ethics. But before that, we will have a situation of only partial understanding, and plenty of areas where either side will not be able to mesh well. This either forces humans to adapt to the machines, or lets the machines convince humans that what they got served was what they really wanted. That is risky.
Global GMI stability issues
Incidentally, I have noted that many people hearing the current version of the “machines will take our jobs” story bring up the idea of a guaranteed minimum income (GMI) as a remedy. If nobody has a job but there is a GMI, we can still live a good life (especially since automation would make most things rather cheap). The idea has a long history, and Hans Moravec suggested it in his book Robot (1998) in regard to a future where AI-run corporations would be running the economy. It can be appealing even from a libertarian standpoint, since it does away with a lot of welfare and tax bureaucracy (even Hayek might have been a fan).
I’m not enough of an economist to analyse it properly, but I suspect the real problem is stability when countries compete on tax: if Foobonia has a lower corporate tax rate than Baristan and the Democratic Republic of Baaz, then companies will move there – still making money by selling stuff to people in Baristan and Baaz. The more companies there are in Foobonia, the lower the taxes needed to keep its citizens wealthy. In fact, as I mentioned in my earlier post, having fewer citizens might make the remaining ones better off (things like this have happened on a smaller scale). The ideal situation would be to have the lowest taxes in the world and just one citizen. Or none, so the AI parliament can use the entire budget to improve the future prosperity and safety of Foobonia.
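To make the incentive concrete, here is a minimal back-of-the-envelope sketch (the numbers and the `per_citizen_rent` helper are invented for illustration, not taken from any real model):

```python
# Toy model of tax competition (all rates, profits and names invented).
# Firms book profits in whichever country has the lowest corporate tax;
# each citizen's rent is their share of the taxes collected there.

def per_citizen_rent(tax_rate: float, profits_booked: float, citizens: int) -> float:
    """Rent each citizen receives from taxing locally booked profits."""
    return tax_rate * profits_booked / citizens

# Suppose Foobonia undercuts Baristan and Baaz and attracts 1000 firms,
# each booking 1.0 unit of profit. Even a tiny rate pays the citizens well:
print(per_citizen_rent(0.01, profits_booked=1000.0, citizens=100))  # 0.1 each
print(per_citizen_rent(0.01, profits_booked=1000.0, citizens=1))    # 10.0 for the sole citizen
```

In this toy arithmetic, cutting the tax rate attracts more profit to tax, and cutting the citizen count multiplies each remaining share – the perverse incentive described above.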
In our current world tax competition is only one factor determining where companies go. Not every company moves to the Bahamas, Chile, Estonia or the UAE. Part of the reason is other legal issues and logistics, but a big part is that you need to have people actually working in your company. Human capital is distributed very unevenly, and it is rarely where you want it (and humans often do not want to move, for social reasons). But in an automated world machine capital will exist wherever you buy it, so it can be placed where the taxes are lowest. There will still be a need to perform some services and transport goods in other jurisdictions, but unless those are taxed (hence driving up prices for your citizens) this is going to be a weaker constraint than now. How much weaker, I do not know – it would be interesting to see it investigated properly.
The core problem remains that if humans are largely living off the rents from a burgeoning economy, there had better exist stabilizing safeguards so that these rents remain, and stabilizers that keep the safeguards stable. This is a non-trivial legal/economic problem, especially since one failure mode might be that some countries become zero-citizen countries with huge economic growth and gradually accumulating investments everywhere (a kind of robotic Piketty situation, where everything in the end ends up owned by the AI consortium/sovereign wealth fund with the strongest growth). In short, it seems to require something just as tricky to develop as the friendly superintelligence program.
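The accumulation dynamic is just compound growth, which a small sketch can illustrate (the funds and growth rates below are made up):

```python
# Toy illustration of the "robotic Piketty" dynamic (growth rates invented):
# small differences in compound growth end in near-total ownership.

growth_rates = {"Foobonia fund": 1.08, "Baristan fund": 1.05, "Baaz fund": 1.03}
wealth = {name: 1.0 for name in growth_rates}  # everyone starts with equal wealth

for year in range(200):
    for name, rate in growth_rates.items():
        wealth[name] *= rate

total = sum(wealth.values())
for name, w in wealth.items():
    print(f"{name}: {w / total:.2%} of all wealth after 200 years")
# The 8%-growth fund ends up with over 99% of everything.
```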
In any case, I suspect much of the reason people suggest GMI is that it is an already existing idea and not too strange. Hence it is thinkable and proposable. But there might be far better ideas out there for how to handle a world with powerful automation. One should not just stick with a local optimum idea when there might be way more stable and useful ideas further out.