Popper vs. Macrohistory: what can we really say about the long-term future?

Talk I gave at the Oxford Karl Popper Society:

The quick summary: Physical eschatology, futures studies and macrohistory try to talk about the long-term future in different ways. Karl Popper launched a broadside against historicism, the approach to the social sciences that assumes historical prediction is their principal aim. While his main target was the historicism supporting socialism and fascism, the critique has also scared many away from looking at the future – a serious problem for making the social sciences useful. In the talk I look at various aspects of Popper’s critique and how damaging they are. Some parts are fairly unproblematic because they demand unrealistically high precision or determinism, and can be circumvented by a more Bayesian approach. His main point – that the growth of knowledge makes the future impossible to determine – still stands and is a major restriction on what we can say, yet there are some ways to reason about the future even under this restriction. The lack of ergodicity of history may be a new problem to recognize: we should not expect it to repeat if we re-ran it. That does not rule out local patterns, but the overall endpoint appears random… or perhaps selectable. Except that doing the selecting may turn out to be very, very hard.
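
To make the non-ergodicity point a bit more concrete, here is a minimal toy sketch (my own illustration, not something from the talk): a Pólya urn is a simple path-dependent process in which early chance events get amplified by positive feedback. Each run settles down to a stable mix, but the mix itself differs every time "history" is re-run – local regularity, globally random endpoints.

```python
import random

def polya_urn(steps=10_000, seed=None):
    """Pólya urn: draw a ball, return it plus one more of the same
    colour. Early chance events get locked in by positive feedback."""
    rng = random.Random(seed)
    red, blue = 1, 1
    for _ in range(steps):
        if rng.random() < red / (red + blue):
            red += 1
        else:
            blue += 1
    return red / (red + blue)

# Each run converges to a stable fraction of red, but re-running
# it from scratch gives a different stable fraction every time.
print([round(polya_urn(seed=s), 3) for s in range(5)])
```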

My main conclusions are that longtermist views like most of Effective Altruism are not much affected by the indeterminacy of Popper’s critique (or the non-ergodicity issue); here the big important issue is how much we can affect the future. That seems to be an open question, well worth pursuing. Macrohistory may be set for a comeback, especially if new methodologies in experimental history, big-data history, or even Popper’s own “technological social science” are developed. That one cannot reach certitude does not prevent relevant and reliable (enough) input to decisions in some domains. Knowing which domains those are is another key research issue. In futures studies the critique is largely internalized by now, but it might be worth telling other disciplines about it. To me the most intriguing conclusion is that physical eschatology needs to take the actions of intelligent life into account – and that means accepting some pretty far-reaching indeterminacy and non-ergodicity on vast scales.

Doing right and feeling good

My panel at Hay-on-Wye (me, Elaine Glaser, Peter Dews and Simon Baron-Cohen) talked about compassion, the sentiment model of morality, effective altruism and how to really help the world. Now available as video!

My view is largely that moral action is strongly driven and motivated by emotions rather than reason, but outside the world of the blindingly obvious or everyday human activity our intuitions and feelings are not great guides. We do not function well morally when the numbers get too big or the cognitive biases become maladaptive. Morality may be about the heart, but ethics is in the brain.

Fair brains?

Yesterday I gave a lecture at the London Futurists, “What is a fair distribution of brains?”:

My slides can be found here (PDF, 4.5 Mb).

My main take-home messages were:

Cognitive enhancement is potentially very valuable to individuals and society, not just in pure economic terms but also for living a good life. Intelligence protects against many bad things (from ill health to being a murder victim), increases opportunity, and allows you to create more for yourself and others. Cognitive ability interacts in a virtuous cycle with education, wealth and social capital.

That said, intelligence is not everything. Non-cognitive factors like motivation are also important. And societies that leave out people – due to sexism, racism, class divisions or other factors – will lose out on brain power. Giving these groups education and opportunities is a very cheap way of getting a big cognitive capital boost for society.

I was criticized for talking about “cognitive enhancement” when I could just have talked about “cognitive change”. Enhancement has a built-in assumption of some kind of improvement. However, a talk about fairness and cognitive change becomes rather anaemic: it just becomes a talk about what opportunities we should give people, not about whether these changes affect their relations to one another in a morally relevant way.

Distributive justice

Theories of distributive justice typically try to answer: what goods are to be distributed, among whom, and what is the proper distribution? In our case it would be cognitive enhancements, and the interested parties are at least existing people but could include future generations (especially if we use genetic means).

Egalitarian theories argue that there has to be some form of equality: either equality of opportunity (everybody gets to enhance if they want to) or equality of outcome (everybody ends up equally smart). Meritocratic theories would say the enhancement should be distributed by merit, presumably mainly to those who work hard at improving themselves or have already demonstrated great potential. Conversely, need-based theories and prioritarians argue we should prioritize those who are worst off or need the enhancement the most. Utilitarian justice requires maximizing the total or average welfare across all relevant individuals.

Most of these theories agree with Rawls that impartiality is important: it should not matter who you are. Rawls famously argued for two principles of justice: (1) “Each person is to have an equal right to the most extensive total system of equal basic liberties compatible with a similar system of liberty for all.”, and (2) “Social and economic inequalities are to be arranged so that they are both (a) to the greatest benefit of the least advantaged, consistent with the just savings principle, and (b) attached to offices and positions open to all under conditions of fair equality of opportunity.”

It should be noted that a random distribution is impartial: if we cannot afford to give enhancement to everybody, we could have a lottery (meritocrats, prioritarians and utilitarians might want this lottery to be biased by some merit/need weighting, or restricted to the people deemed relevant for the enhancement, while egalitarians would want everybody to be included), as in the sketch below.
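
As an aside, such a weighted lottery is easy to implement; a minimal sketch with made-up names and weights (a merit- or need-based weighting biases the draw without making it deterministic, and equal weights recover the egalitarian case):

```python
import random

# Hypothetical applicants with merit/need weights (invented for illustration).
applicants = {"Ann": 1.0, "Bo": 2.5, "Cyn": 0.5}

# Draw one winner; higher weight means higher chance, not certainty.
winner = random.choices(list(applicants), weights=list(applicants.values()), k=1)[0]
print(winner)
```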

Why should we even care about distributive justice? One argument is that we all have individual preferences and life goals we seek to achieve; if all relevant resources are in the hands of a few, there will be less preference satisfaction than if everybody had enough. In some cases there might be satiation, where we do not need more than a certain level of stuff to be satisfied and the distribution of the rest becomes irrelevant, but given the unbounded potential ambitions and desires of people it is unlikely to apply generally.

Many unequal situations are not seen as unjust because that is just the way the world is: it is a brute biological fact that males on average live shorter lives than females, and that cognitive ability is randomly distributed. But if we change the technological conditions, these facts become possible to change: now we can redistribute stuff to affect them. Ironically, transhumanism hopes/aims to change conditions so that some states which are not unjust at present will become unjust!

Some enhancements are absolute: they help you or society no matter what others do. Others are merely positional, and positional enhancements are a zero-sum game. However, applying the reversal test demonstrates that cognitive ability has absolute components: a world where everybody got a bit more stupid is not a better world, despite the unchanged relative rankings. There would be more accidents and mistakes, more risk that some joint threat could not be handled, and many life projects would become harder or impossible to achieve. And the Flynn effect demonstrates that we are unlikely to be at some particular optimum right now.

The Rawlsian principles are OK with enhancement of the best-off if that helps the worst-off. This is not unreasonable for cognitive enhancement: the extreme high performers have a disproportionate output (patents, books, lectures) that benefits the rest of society, and the network effects of a generally smarter society might benefit everyone living in it. However, less cognitively able people are also less able to make use of the opportunities created by this: intelligence is fundamentally a limit to equality of opportunity, and the more you have, the more you are able to select which opportunities and projects to aim for. So a Rawlsian would likely be fairly keen on giving more enhancement to the worst off.

Would a world where everybody had the same intelligence be better than the current one? Intuitively it seems emotionally neutral. The reason is that we have conveniently and falsely talked about intelligence as if it were one thing. As several audience members argued, there are many parts to intelligence. Even if one does not buy Gardner’s multiple intelligences theory, it is clear that there are different styles of problem-solving and problem-posing, even if measurements of the magnitude of mental abilities are fairly correlated. A world where everybody thought in the same way would be a bad place. We might not want bad thinking, but there are many forms of good thinking, and we benefit from a diversity of thinking styles. Different styles of cognition can make the world more unequal but not more unjust.

Inequality over time

As I have argued before, enhancements in the form of gadgets and pills are likely to come down in price and become easy to distribute, while service-based enhancements are more problematic since they will tend to remain expensive. Modelling the spread of enhancement suggests that enhancements that start out expensive but then become cheaper first lead to a growth of inequality and then a decrease. If there is a levelling-off effect, where it becomes harder to enhance beyond a certain point, this eventually leads to a more cognitively equal society as everybody catches up and ends up close to the efficiency boundary.
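
A toy version of this kind of model (my own simplified sketch, with invented parameters and no pretence of realism): the price of an enhancement falls exponentially, people with log-normally distributed wealth buy a unit per year once they can afford it, and enhancement saturates at a ceiling. The spread in enhancement first grows as the rich adopt early, then shrinks back as everybody catches up.

```python
import random, statistics

random.seed(1)
N = 1000
wealth = [random.lognormvariate(0, 1) for _ in range(N)]  # assumed wealth distribution
ability = [0.0] * N                                       # enhancement level per person
price, cap = 50.0, 5.0                                    # arbitrary starting price and ceiling

spread = []
for year in range(60):
    price *= 0.85                       # assumed exponential price decline
    for i in range(N):
        # Buy one unit per year if affordable and below the ceiling
        # (ignoring budget depletion for simplicity).
        if wealth[i] >= price and ability[i] < cap:
            ability[i] += 1.0
    spread.append(statistics.pstdev(ability))

# Inequality in enhancement rises as the rich adopt first,
# then falls towards zero as everyone reaches the ceiling.
print(round(max(spread), 2), round(spread[-1], 2))
```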

When considering inequality across time we should likely accept early inequality if it leads to later equality. After all, we should not treat spatially remote people differently from nearby people, and the same is true across time. As Claudio Tamburrini said, “Do not sacrifice the poor of the future for the poor of the present.”

The risk is if there is compounding: enhanced people can make more money, and use that to enhance themselves or their offspring more. I seriously doubt this works for biomedical enhancement, since there are limits to what biological brains can do (and human generation times are long compared to technological change), but it may be risky when it comes to outsourcing cognition to machines. If you can convert capital into cognitive ability just by buying more software, then things could become explosive if the payoffs from being smart in this way are large. However, then we are likely to have an intelligence explosion anyway, and the issue of social justice takes a back seat compared to the risks of a singularity. Another reason to think compounding is not strong is that geniuses are not all billionaires, and billionaires – while smart – are typically not the very most intelligent people. Piketty’s argument actually suggests that it is better to have a lot of money than a lot of brains, since you can always hire smart consultants.
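
The contrast between the two regimes can be caricatured in a few lines (again a toy sketch of the feedback loop, not a serious model; all parameters are invented): let cognitive ability generate income and income buy further ability, with and without a biological ceiling.

```python
def run(years, cap=None):
    """Feedback loop: ability earns income, income is converted
    back into more ability (parameters are arbitrary)."""
    ability, capital = 1.0, 1.0
    for _ in range(years):
        capital += 0.1 * ability         # returns to being smart
        ability += 0.05 * capital        # buy more cognition
        if cap is not None:
            ability = min(ability, cap)  # biological ceiling
    return ability

print(run(30, cap=2.0))  # biomedical: saturates at the ceiling
print(run(30))           # software: keeps compounding exponentially
```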

Francis Fukuyama famously argued that enhancement was bad for society because it risks making people fundamentally unequal. However, liberal democracy is already based on the idea of a common society of unequal individuals: they differ in ability, knowledge and power, yet are treated fairly and impartially as “one man, one vote”. There is a difference between moral equality and equality measured in wealth, IQ or anything else. We might be concerned about extreme inequalities in some of the latter factors leading to a shift in moral equality, or, more realistically, about those factors allowing manipulation of the system to the benefit of the best-off. This is why strengthening the “dominant cooperative framework” (to use Allen Buchanan’s term) is important: social systems are resilient, and we can make them more resilient to known or expected future challenges.

Conclusions

My main conclusions were:

  • Enhancing cognition can make society more or less unequal. Whether this is unjust depends on the technology, one’s theory of justice, and what policies are instituted.
  • Some technologies just affect positional goods, and they make everybody worse off. Some are win-win situations, and I think much of intelligence enhancement is in this category.
  • Cognitive enhancement is likely to individually help the worst off, but make the best off compete harder.
  • Controlling mature technologies is hard, since there are both vested interests and social practices around them. We have an opportunity to affect the development of cognitive enhancement now, before it becomes very mainstream and hard to change.
  • Strengthening the “dominant cooperative framework” of society is a good idea in any case.
  • Individual morphological freedom must be safeguarded.
  • Speeding up progress and diffusion is likely to reduce inequality over time – and promote diversity.
  • Different parts of the world are likely to approach cognitive enhancement differently and at different speeds.

As transhumanists, what do we want?

The transhumanist declaration makes wide access a point, not just on fairness or utilitarian grounds but also for the sake of learning more. We have a limited perspective and cannot know well beforehand where the best paths are, so it is better to let people pursue their own inquiry. There may also be intrinsic value in freedom, autonomy and open-ended life projects: not giving many people the chance at these may lose much value.

Existential risk overshadows inequality: achieving equality by dying out is not a good deal. So if some enhancements increase existential risk we should avoid them. Conversely, if enhancements look like they reduce existential risk (maybe some moral or cognitive enhancements) they may be worth pursuing even if they are bad for (current) inequality.

We will likely end up with a diverse world that will contain different approaches, none universal. Some areas will prohibit enhancement, others allow it. No view is likely to become dominant quickly (without rather nasty means or some very surprising philosophical developments). That strongly speaks for the need to construct a tolerant world system.

If we have morphological freedom, then preventing cognitive enhancement requires pointing at a very clear social harm. If the social harm is less than that of existing practices like schooling, then there is no legitimate reason to limit enhancement. There are also costs of restrictions: opportunity costs, international competition, black markets, inequality, losses in redistribution, and public choice issues where regulators become self-serving. Controlling technology is like controlling art: it is an attempt to control human creativity and exploration, and should be done very cautiously.

Manufacturing love cheaply, slowly and in an evidence based way

Since I am getting married tomorrow, it is fitting that the Institute of Art and Ideas TV has just put my lecture from Hay-on-Wye this year online: Manufacturing love.

It was a lovely opportunity to sit in a very comfy armchair and feel like a real philosopher. Of course, armchair philosophy is what it is: tomorrow I will do an empirical experiment with N=2 (with ethics approval from mother-in-law). We’ll see how it works out.

While I suspect my theoretical understanding is limited and the biomedical tools I have written about are not available, there is actually some nice empirical research on what makes good wedding vows. My husband and I also went for a cheap, simple wedding with just the closest friends and family, which seems to be a good sign (the rest of my friends are coming to an informal party the day after). And it is probably a good sign that we got together in a slow and gradual way: we have very compatible personalities.

A fun project.