Additive, multiplicative, and exponential economics

(Based on this Twitter thread. Epistemic status: fairly plausible, I would teach this to kids)

A simple, and hardly unique, economic observation: when you are poor, money is additive. As you get more, it becomes multiplicative. And eventually exponential.

To a kid or a very poor person every coin is a treasure, worth something just by being itself.

When you get a few coins they instead sum into fungible numbers. These are additive – you add to or subtract from your wallet or account, and if the inflow is larger than the outflow the number grows. You dream of finding the big pile of gold. Saving makes sense.

Then loans, investment, and interest show up. You can buy something and sell it for more, and the more you can buy and resell, the greater the profit. You can use existing money as security to borrow more. Good-quality things you can now afford save money. It is multiplicative.

Eventually you get into the exponential domain, where the money keeps on growing since it is being invested and the time horizon is long. A short horizon means that compound interest or reinvestment has little effect, while a long one leads to fairly predictable growth.
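To make the contrast concrete, here is a minimal sketch (my own toy numbers, not from the thread) comparing pure saving with the same deposits compounding at a modest return:

```python
# Toy illustration (all numbers invented): $100/month either saved in a
# jar (additive) or reinvested at 5%/year (compounding).

def additive(monthly, months):
    """Pure saving: wealth is just the sum of the deposits."""
    return monthly * months

def compounding(monthly, months, annual_rate=0.05):
    """Each deposit is reinvested, so earlier money earns on itself."""
    r = annual_rate / 12
    w = 0.0
    for _ in range(months):
        w = w * (1 + r) + monthly
    return w

for years in (2, 10, 40):
    m = years * 12
    print(f"{years:2d} years: saved {additive(100, m):>9,.0f}, "
          f"invested {compounding(100, m):>9,.0f}")
```

Over two years the compounding adds a few percent; over forty it roughly triples the total. The regime change is in the horizon, not the mechanism.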

There are obvious complications from things like risk and uncertainty. Many people also move between these modes depending on context, saving cents on coffee filters while losing time and money driving across town to get them on sale… all while having massive invested savings.

There is also the “boots theory”: being poor increases your expenses, and this acts as a trap. As you get richer you get easier or free access to a lot of things (airport lounges, better service), not necessarily making up for the higher-priced goods you buy but reducing some forms of friction. The nature of the difficulties shifts from finding money, to maintaining an income stream, to managing a system of growth.

I think this division maps roughly to mindset and social class. Additive thinking is about getting more input or paying less, mercantilism, and avoiding going to zero. Everything is a zero-sum game. Very much a precariat situation.

Multiplicative thinking is about finding big multipliers – the realm of the smart business idea, the good investment, removing bottlenecks and improving efficiency. This is where win-win trades begin to make sense. Classical rising middle class.

Exponential thinking is all about maximizing growth rates (or their growth), long time horizons, boosting GDP growth. Modern upper class. (Premodern feudal upper classes were far closer to multiplicative or even additive modes – they were poor by our standards!)

Could there be a realm beyond this, an economic world of tetration? It might be about creating entirely new opportunities. This is the “from zero to one” VC/entrepreneur world, although in practice it descends into the previous modes most of the time. The cryptocurrency people think they are here, of course.

Note that in fiction and games the creators often want to keep things simple and understandable. Many (board and computer) games start out in the clear additive world: you gather coins and optimize what to buy, and the treasure is added to your wealth rather than increasing it by a percentage. Often they eventually move into the multiplicative world (better-equipped RPG characters can buy the +1 longsword of monster killing, giving higher revenues; in Saint Petersburg your workers leverage into buildings, which leverage into nobles). This is where computer games typically lose interest (unless they are trading games), and in board games it acts as an endgame, since the exponential state usually doesn’t make for interesting competitive dynamics. Still, MMORPGs often show heavyish tails of wealthy player characters.

What kind of “rich get richer” phenomena do the different domains produce? Obviously, exponential growth amplifies differences enormously: the growth rate of the wealthiest will be highest. But even in the additive domain wealth tends to concentrate. If people randomly meet and exchange random amounts, wealth settles into an exponential distribution. It seems that the power-law tails one gets in real economies are due to non-conservation of money: there is new wealth flowing in.
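A minimal simulation sketch of that conserved-money result (the model and parameters are my illustration in the spirit of kinetic exchange models, not something from the post):

```python
# Random-exchange wealth model: agents meet in random pairs, pool their
# money, and split the pot at a random point. Total money is conserved.
import random

N, STEPS, START = 1000, 200_000, 100.0
wealth = [START] * N

for _ in range(STEPS):
    i, j = random.sample(range(N), 2)     # pick two distinct agents
    pot = wealth[i] + wealth[j]
    cut = random.random()                 # random division of the pot
    wealth[i], wealth[j] = cut * pot, (1 - cut) * pot

wealth.sort(reverse=True)
print("richest agent:", round(wealth[0]))
print("top 10% share:", round(sum(wealth[:N // 10]) / sum(wealth), 2))
# The stationary distribution is close to exponential: lots of agents
# near zero and a thin tail of rich ones — inequality with no growth.
```

Starting from perfect equality, the top 10% typically end up holding around a third of everything, just from conserved random churn.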

Are there limiting factors to economic growth in general, or individually? In the additive world it is all set by the amount of resources – and your ability to grab them and hold on to them. Typically diminishing returns emerge soon. In the multiplicative world you want to boost productivity, and this goes even further in the exponential world. In an endogenous growth model you start out by using more labour to get higher yields, then add capital, and eventually boost the knowledge/technology factor… and typically this last step makes the model blow up to infinity if you can reinvest the yield. What actually happens is that the economy has a limited absorptive capacity, and diminishing returns or delays set in. Whether this always holds is an important question, but for practical everyday economics it is worth keeping in mind.
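A hedged sketch of that blow-up (a toy AK-style model with invented parameters, not a calibrated one):

```python
# Output Y = A*K. A fixed share s of output is reinvested, split between
# capital K (more machines) and the technology level A (better machines).
# Because reinvestment also raises A, the growth rate itself keeps rising.
s, tech_share = 0.2, 0.05
A, K = 1.0, 1.0
for year in range(1, 500):
    Y = A * K
    K += s * (1 - tech_share) * Y    # accumulate capital
    A += s * tech_share * Y          # accumulate knowledge
    if Y > 1e12:
        print(f"output passes a trillion in year {year}")
        break
# Swapping in diminishing returns (Y = A * K**0.7) or capping how fast A
# can grow (absorptive capacity) tames this into ordinary bounded growth.
```

With reinvestment feeding the technology term, growth is super-exponential and the run ends absurdly early; the two taming tweaks in the final comment are roughly what real absorptive-capacity limits amount to.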

Plus there is the very important issue of what you want the money for. Money is instrumentally useful, but not a final value. When you start treating it as the goal, Goodhart bites you.

More robots, and how to take over the world with guaranteed minimum income

I was just watching “Humans Need Not Apply” by CGPGrey, when I noticed a tweet from Wendy Grossman, with whom I participated in a radio panel about robotics (earlier notes on the discussion). She makes some good points inspired by our conversation in her post, robots without software.

I think she has a key observation: much of the problem lies in the interaction between the automation and humans. On the human side, that means getting the right information and feedback into the machine side. On the machine side, it means figuring out what humans – those opaque and messy entities who change behaviour for internal reasons – want. At the point where the second demand is somehow resolved we will not only have really useful automation, but also essentially a way of resolving AI safety/ethics. But before that, we will have a situation of only partial understanding, and plenty of areas where either side will not be able to mesh well. This either forces humans to adapt to machines, or machines to get humans to think that what they got served was what they really wanted. That is risky.

Global GMI stability issues

Incidentally, I have noted that many people hearing the current version of the “machines will take our jobs” story bring up the idea of a guaranteed minimum income as a remedy. If nobody has a job but there is a GMI we can still live a good life (especially since automation would make most things rather cheap). The idea has a long history; Hans Moravec suggested it in his book Robot (1998) for a future where AI-run corporations would be running the economy. It can be appealing even from a libertarian standpoint, since it does away with a lot of welfare and tax bureaucracy (even Hayek might have been a fan).

I’m not enough of an economist to analyse it properly, but I suspect the real problem is stability when countries compete on tax: if Foobonia has a lower corporate tax rate than Baristan and the Democratic Republic of Baaz, then companies will move there – still making money by selling stuff to people in Baristan and Baaz. The more companies there are in Foobonia, the lower the taxes needed to keep the citizens wealthy. In fact, as I mentioned in my earlier post, having fewer citizens might make the remaining ones better off (things like this have happened on a smaller scale). The ideal situation would be to have the lowest taxes in the world and just one citizen. Or none, so the AI parliament can use the entire budget to improve the future prosperity and safety of Foobonia.
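A toy arithmetic sketch of that incentive (all numbers invented): citizens live off redistributed corporate tax, so the payout per citizen is rate × profit base / citizens, and both a bigger mobile profit base and a smaller citizenry push the needed rate down.

```python
# Foobonia logic, toy numbers: how high a tax rate is needed to pay each
# citizen a target income, given the corporate profits booked in-country?
def required_rate(target_income, citizens, profit_base):
    return target_income * citizens / profit_base

# The more mobile profits Foobonia attracts, the lower the rate it needs:
for base in (1e11, 1e12, 1e13):
    r = required_rate(50_000, citizens=1_000_000, profit_base=base)
    print(f"profit base {base:.0e}: {r:.1%} tax rate needed")

# ...and a lower rate attracts yet more profits, feeding back. Shrinking
# the citizenry works too: with one citizen the needed rate is ~0%.
print(f"one citizen: {required_rate(50_000, 1, 1e13):.7%}")
```

The loop prints 50.0%, 5.0% and 0.5%: each tenfold gain in attracted profits cuts the needed rate tenfold, which is exactly the race-to-the-bottom pressure.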

In our current world tax competition is only one factor determining where companies go. Not every company moves to the Bahamas, Chile, Estonia or the UAE. One factor is other legal issues and logistics, but a big part is that you need to have people actually working in your company. Human capital is distributed very unevenly, and it is rarely where you want it (and the humans often do not want to move, for social reasons). But in an automated world machine capital will exist wherever you buy it, so it can be placed wherever taxes are lowest. There will still be a need to perform some services and transport goods in other areas, but unless these are taxed (hence driving up prices for your citizens) this is going to be a weaker constraint than now. How much weaker, I do not know – it would be interesting to see it investigated properly.

The core problem remains that if humans are largely living off the rents from a burgeoning economy, there had better exist stabilizing safeguards so that these rents remain – and stabilizers that keep the safeguards stable. This is a non-trivial legal/economic problem, especially since one failure mode might be that some countries become zero-citizen countries with huge economic growth and gradually accumulating investments everywhere (a kind of robotic Piketty situation, where everything eventually ends up owned by the AI consortium/sovereign wealth fund with the strongest growth). In short, it seems to require something just as tricky to develop as the friendly superintelligence program.

In any case, I suspect much of the reason people suggest GMI is that it is an already existing idea and not too strange: it is thinkable and proposable. But there might be far better options for handling a world with powerful automation. One should not stick with a local optimum when there might be far more stable and useful ideas further out.

Risky and rewarding robots

Robot playpen

Yesterday I participated in recording a radio program about robotics, and I noted that the participants were approaching the issue from several very different angles:

  • Robots as symbols: what we project onto them, what this says about humanity, how we change ourselves in relation to them, the role of hype and humanity in our thinking about them.
  • Robots as practical problem: how do you make a safe and trustworthy autonomous device that hangs around people? How do we handle responsibility for complex distributed systems that can generate ‘new’ behaviour?
  • Automation and jobs: what kinds of jobs are threatened or changed by automation? How does it change society, and how do we steer it in desirable directions – and what are they?
  • Long-term risks: how do we handle the potential risks from artificial general intelligence, especially given that many people think there is absolutely no problem while others are convinced that it could be existential if we do not figure out enough before it emerges?

In many cases the discussion got absurd because we talked past each other due to our different perspectives, but there were also some nice synergies. Trying to design automation without taking the anthropological and cultural aspects into account will lead to something that either does not work well with people or forces people to behave more machinelike. Not taking past hype cycles into account when trying to estimate future impact leads to overconfidence; assuming that just because there has been hype in the past nothing will change is equally overconfident. The problems of trustworthiness and responsibility distribution become truly important when automating many jobs: when the automation is an essential part of the organisation, there need to be mechanisms to trust it and to avoid dissolution of responsibility. Currently robot ethics is more about how humans are impacted by robots than about ethics for robots, but the latter will become quite essential if we get closer to AGI.

Jobs

Robot on break

I focused on jobs, starting from the Future of Employment paper. Maarten Goos and Alan Manning pointed out that automation seems to lead to a polarisation into “lovely and lousy jobs”: more non-routine manual jobs (lousy), more non-routine cognitive jobs (lovely). The paper strongly supports this, showing that a large chunk of occupations relying on routine tasks might be possible to automate, but things requiring hand-eye coordination, human dexterity, social ability, creativity and intelligence – especially applied flexibly – are pretty safe.

Overall, the economist’s view is relatively clear: automation that embodies skills and the ability to do labour can only affect the distribution of jobs and how much certain skills are valued and paid compared with others. There is no rule that if task X can be done by a machine it will be done by a machine: handmade can still command a premium, and the law of comparative advantage might mean it is not worth using the machine to do X when it can do the even more profitable task Y. Still, being entirely dependent on doing X for your living is likely a bad situation.
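A worked toy example of the comparative-advantage point (invented numbers): even a machine that is absolutely better at both tasks should be left on the task where its advantage is largest.

```python
# Units produced per hour; the machine beats the human at both tasks.
machine = {"X": 10, "Y": 100}
human   = {"X": 5,  "Y": 1}

# Opportunity cost of producing one unit of X, measured in forgone Y:
print("machine:", machine["Y"] / machine["X"], "Y forgone per X")  # 10.0
print("human:  ", human["Y"] / human["X"], "Y forgone per X")      # 0.2
# X is fifty times cheaper (in forgone Y) when the human makes it, so the
# machine stays on Y and the slower human keeps the X job.
```

This is why “a machine can do X” does not imply “a machine will do X” – though depending entirely on X remains risky.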

Also, we often underestimate the impact of “small” parts of tasks that in formal analysis don’t seem to matter. Underwriters are on paper eminently replaceable… except that the ability to notice “Hey! Those numbers don’t make sense” or to judge the reliability of risk models is quite hard to implement, and may actually constitute most of their value. We care about hard-to-automate things like social interaction and style. And priests, politicians, prosecutors and prostitutes are all fairly secure, because their jobs may inherently require being a human or representing a human.

However, the development of AI ability is not a continuous, predictable curve. We get sudden surprises like autonomous cars (just a few years ago most people believed autonomous driving was a very hard, nearly impossible problem) or statistical translation. Confluences of technologies conspire to change things radically (consider the digital revolution in printing, both big and small, in the 80s, which upended the world of human printers). And since we know we are simultaneously overhyping and missing trends, this should not give us any sense of complacency. Just because we have always failed to automate X in the past doesn’t mean X might not suddenly turn out to be automatable tomorrow: relying on X staying in the human domain is a risky assumption, especially when thinking about career choices.

Scaling

Robin, supply, demand and robots

Robots also have another important property: we can make a lot of them if we have a reason. If there is a huge demand for humans doing X, we need to retrain people or raise children who grow up to be Xers – both slow processes, so the price goes up a lot. Robots can be manufactured relatively easily, and scaling up the manufacturing is cheaper: even if X-robots are fairly expensive, making a lot more X-robots might be cheaper than trying to get humans if X suddenly matters.
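A toy supply-and-demand sketch of that asymmetry (crude linear pricing, invented numbers): after demand for skill X triples, human supply creeps up through retraining while robot supply doubles with each manufacturing cycle.

```python
# Price as a crude demand/supply ratio — enough to show the asymmetry.
def price(demand, supply):
    return demand / supply

human_supply = 1000.0    # trained workers; retraining adds ~5% a year
robot_supply = 1000.0    # units; factories can double output each year
demand = 3000.0          # sudden tripling of demand for X

for year in range(4):
    print(f"year {year}: human price {price(demand, human_supply):.2f}, "
          f"robot price {price(demand, robot_supply):.2f}")
    human_supply *= 1.05
    robot_supply *= 2.0
```

Within three years the robot price has collapsed below the pre-shock level while the human wage has barely moved – which is the strategic point: whoever can manufacture can scale.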

This scaling is a bit worrisome, since robots implement somebody’s action plan (maybe badly, maybe dangerously creatively): they are essentially an extension of somebody or something’s preferences. So if we could make robot soldiers, the group or side that could make the most would have a potentially huge strategic advantage. Innovation in fast manufacture becomes important, in turn leading to a situation where there is an incentive for an arms race in being able to raise an army at the press of a button. This is where I think atomically precise manufacturing is potentially risky: it might enable very quick builds, and that is potentially destabilizing. But even ordinary automated production would be enough (remember, this is a scenario where some robotics is good enough to implement useful military action, so manufacturing robotics will be advanced too). Also, in countries running mostly on exporting raw materials, automating production might leave little need for most of the population… An economist would say the population could be used for other profitable activities, but many nasty resource-driven governments do not invest much in their human capital. In fact, they tend to see it as a security problem.

Of course, if we ever get to the level where intellectual tasks and services can be automated at close to human level, the same might apply to more developed economies too. But at that point we are so close to automating the task of making robots and AI better that I expect an intelligence explosion to occur before any social explosions. A society where nobody needs to work might sound nice and might be very worth striving for, but in order to get there we need at the very least to get close to general AI and solve its safety problems.

See also this essay: commercializing the robot ecosystem in the anthropocene.