Benjamin Zand has made a neat little documentary about transhumanism, attempts to live forever and the posthuman challenge. I show up, of course, as soon as ethics is mentioned.
Benjamin and I had a much, much longer (and very fun) conversation about ethics than could ever be squeezed into a TV documentary: everything from personal identity to overpopulation to the meaning of life, plus the practicalities of cryonics, transhuman compassion and how to test whether brain emulation actually works.
I think the inequality and control issues are interesting to develop further.
Would human enhancement boost inequality?
There is a trivial sense in which just inventing an enhancement produces profound inequality since one person has it, and the rest of mankind lacks it. But this is clearly ethically uninteresting: what we actually care about is whether everybody gets to share something good eventually.
However, the trivial example shows an interesting aspect of inequality: it has a timescale. An enhancement that will eventually benefit everyone but is unequally distributed may be entirely OK if it spreads fast enough. In fact, by being expensive at the start it might even act as a kind of early adopter/rich tax, since the first versions will pay for the R&D of consumer versions – compare computers and smartphones. While one could argue that temporary inequality is bad, long-term benefits would outweigh it for most enhancements and most value theories: we should not sacrifice the poor of tomorrow for the poor of today by delaying the launch of beneficial technologies (especially since the R&D needed to make them truly cheap is unlikely to happen while technocrats keep the technology in their labs – making tech cheap and useful is one area where we know empirically that the free market is really good).
If the spread of some great enhancement could be faster though, then we may have a problem.
I often encounter people who think that the rich will want to keep enhancements to themselves. I have never encountered any evidence that this is actually true, except for status goods or elites in authoritarian societies.
There are enhancements like height that are merely positional: it is good to be taller than others (if male, at least), but if everybody gets taller nobody benefits and everybody loses a bit (more banged heads and heart problems). Other enhancements are absolute: living healthy longer or being smarter is good for nearly all people regardless of how long other people live or how smart they are (yes, there might be some coordination benefits if you live just as long as your spouse or have a society where you can participate intellectually, but these hardly negate the benefit of joint enhancement – in fact, they support it). Most of the interesting enhancements are in this category: while they might be great status goods at first, I doubt they will remain that for long, since there are other reasons than status to get them. In fact, there are likely network effects from some enhancements like intelligence: the more smart people working together in a society, the greater the benefits.
In the video, I point out that limiting enhancement to the elite means society as a whole will not gain the benefit. Since elites reap rents from their society, it is actually in their interest to have that society grow richer and more powerful (as long as they remain in charge). Elites that hoard enhancement will hence lose out in the long run to other societies where enhancement spreads more broadly. We know that widespread schooling, free information access and freedom to innovate tend to produce far wealthier and more powerful societies than those where only elites have access to these goods. I have strong faith in the power of diverse societies, despite their messiness.
My real worry is that enhancements may be like services rather than gadgets or pills (which come down exponentially in price). That would keep them harder to reach, and might hold back adoption (especially since we have not been as good at automating services as manufacturing). Still, we do subsidize education at great cost, and if an enhancement is desirable democratic societies are likely to scramble for a way of supplying it widely, even if it is only through an enhancement lottery.
However, even a world with unequal distribution is not necessarily unjust. Besides the standard Nozickian argument that a distribution is just if it was arrived at through just means, there is the Rawlsian argument that an unequal distribution is OK if it actually produces benefits for the weakest. This is likely true for intelligence amplification and maybe brain emulation, since they are likely to cause strong economic growth and innovations that produce spillover effects – especially if there is any form of taxation or even mild redistribution.
Who controls what we become? Nobody, we/ourselves/us
The second issue is who gets a say in this.
As I respond in the interview, in a way nobody gets a say. Things just happen.
People innovate, adopt technologies and change, and attempts to control that mean controlling creativity, business and autonomy – you had better have a very powerful ethical case to argue for limiting these, and an even better political case to implement any limits. A moral case for limiting life extension needs to explain how it averts consequences worse than 100,000 dead people per day. Even if we all become jaded immortals, that seems less horrible than a daily pile of corpses 12.3 meters high and 68 meters across (assuming an angle of repose of 20 degrees – the most gruesome geometry calculation I have done so far). Saying we should control technology is a bit like saying society should control art: technology might be more practically useful, but it springs from the same well of creativity, and limiting it is as suffocating as limiting what may be written or painted.
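For the curious, the pile geometry is easy to check with a few lines of Python. The figure of ~0.15 m³ of pile volume per body is my own assumption (bodies stacked loosely do not pack densely), chosen to illustrate the calculation rather than taken from the text; the key relation is that a pile at its angle of repose forms a cone whose slope equals that angle.

```python
import math

# A minimal sketch of the corpse-pile geometry, under assumed inputs:
# ~100,000 deaths per day and ~0.15 m^3 of effective (loosely packed)
# pile volume per body; the 20-degree angle of repose is from the text.
BODIES_PER_DAY = 100_000
VOLUME_PER_BODY_M3 = 0.15        # assumption, not from the source
ANGLE_OF_REPOSE_DEG = 20.0

def cone_dimensions(volume_m3, angle_deg):
    """Return (height, diameter) of a cone of the given volume whose
    slope equals the angle of repose, i.e. h = r * tan(angle)."""
    t = math.tan(math.radians(angle_deg))
    # V = (1/3) * pi * r^2 * h = (1/3) * pi * r^3 * tan(angle)
    r = (3.0 * volume_m3 / (math.pi * t)) ** (1.0 / 3.0)
    return r * t, 2.0 * r

height, diameter = cone_dimensions(
    BODIES_PER_DAY * VOLUME_PER_BODY_M3, ANGLE_OF_REPOSE_DEG)
print(f"Daily pile: {height:.1f} m high, {diameter:.0f} m across")
# → roughly 12.4 m high and 68 m across, matching the figures above
```

Under these assumptions the result reproduces the 12.3 m by 68 m pile; a different packing estimate would scale both dimensions by the cube root of the volume ratio.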
Technological determinism is often used as an easy out for transhumanists: the future will arrive no matter what you do, so the choice is just between accepting or resisting it. But this is not the argument I am making. That nobody is in charge doesn’t mean the future is not changeable.
The very creativity, economics and autonomy that create the future are by their nature individual and unpredictable. While we can fairly safely assume that if something can be done, it will be done, what actually matters is whether it is done early or late, seldom or often. We can try to hurry beneficial or protective technologies so they arrive before the more problematic ones. We can try to favour beneficial directions over more problematic ones. We can create incentives that make fewer people want to use the bad ones. And so on. The “we” in this paragraph is not so much a collective, coordinated “us” as the sum of individuals, companies and institutions – “ourselves”: there is no requirement to get UN permission before you set out to make safe AI or develop life extension. It just helps if a lot of people support your aims.
John Stuart Mill’s harm principle allows society to step in and limit freedom when it causes harm to others, but most enhancements look unlikely to produce easily recognizable harms. This is not a ringing endorsement: as Nick Bostrom has pointed out, there are some bad directions of evolution we might not want to go down, yet it is individually rational for each of us to move slightly in that direction. And existential risk is so dreadful that it does provide a valid reason to stop certain human activities if we cannot find alternative solutions. So while I think we should not try to stop people from enhancing themselves, we should want to improve our collective ability to coordinate and restrain ourselves. This is the “us” part. Restraint does not have to take the form of rules: we already restrain ourselves through socialization, reputations and incentive structures. Moral and cognitive enhancement could add restraints we currently lack: if you can clearly see the consequences of your actions, it becomes much harder to do bad things. The long-term outlook fostered by radical life extension may also make people more risk averse and willing to plan for long-term sustainability.
One could dream of some enlightened despot or technocrat deciding: a world government filled with wise, disinterested and skilled members planning our species' future. But this suffers from essentially the economic calculation problem: while a central body might have a unified goal, it will lack information about the preferences and local conditions of the myriad agents in the world. Worse, the cognitive abilities of the technocrat will be far smaller than the total cognitive abilities of the other agents. This is why rules and laws tend to get gamed – there are many diverse entities thinking about ways around them. But there are also fundamental uncertainties and emergent phenomena that will bubble up from the surrounding agents and mess up the technocratic plans. As Virginia Postrel noted, the typical solution is to try to browbeat society into a simpler form that can be managed more easily… which might be acceptable if the stakes are the very survival of the species, but otherwise just removes what makes a society worth living in. So we had better maintain our coordination ourselves, all of us, in our diverse ways.