Can you be a death positive transhumanist?

I recently came across the concept of “death positivity”, expressed as the idea that we should accept the inevitability of death and embrace the diversity of attitudes and customs surrounding it. Looking a bit deeper, I found the Order of the Good Death and their statement.

That got me thinking about transhumanist attitudes to death and how they are perceived.

While the brief Kotaku description makes it sound as though death positivity is perhaps about celebrating death, the Order of the Good Death is mainly about acknowledging death and dying. That we hide it behind closed doors and avoid public discussion (or even thinking about it) does harm to society and arguably to our own emotions. Fear and denial are not good approaches. Perhaps the best slogan-description is “Accepting that death itself is natural, but the death anxiety of modern culture is not.”

The Order aims at promoting more honest public discussion, curiosity, innovation and gatherings to discuss death-related topics. Much of this relates to practices in the “death industry”, some of which definitely should be discussed in terms of economic costs, environmental impact, ethics and legal rights.

Denying death as a bad thing?

There is an odd paradox here. Transhumanism is often described as death denying, and in the public debate this description is not meant as a compliment. Wanting to live forever is presented as immature, selfish or immoral. Yet we have an overall death denying society, so how can this be held to be bad?

Part of it is that the critique typically comes from a “purveyor of wisdom” (a philosopher, a public intellectual, the local preacher) who might no doubt scold society too, had the transhumanist not been a more convenient target.

This critique is rarely applied to established religions that are even more radically death denying – Christianity after all teaches the immortality of the soul, and in Hinduism and Buddhism ending the self is a nearly impossible struggle through countless reincarnations: talk about denying death! You rarely hear people asking how life could have meaning if there is an everlasting hereafter. (In fact, some, like Tolstoy, have argued that it is only because of such everlasting states that anything can have meaning.) Some of the lack of critique is due to social capital: major religions hold much of it, transhumanism less, so criticism tends to focus on the groups that have less impact. Not just because the “purveyor of wisdom” fears a response, but because they are themselves, consciously or not, embedded inside the norms and myths of these influential groups.

Another reason the immortalist position gets criticised is death denial itself. Immortalism, and its more plausible sibling longevism, directly breaks the taboo against discussing death honestly. It questions core ideas about what human existence is like, and by necessity it delves into the processes of ageing and death. It brings up uncomfortable subjects and does not accept the standard homilies about why life should be like it is and why we need to accept it. This second reason actually makes transhumanism and death positivity unlikely allies.

Naïve transhumanists sometimes try to recruit people by offering the hope of immortality. Often they are surprised and shocked by the negative reactions. Leaving the appearance of a Faustian bargain aside, people typically respond by shoring up their conventional beliefs and defending their existential views. Few transhumanist ideas cause stronger reactions than life extension – I have lectured about starting new human species, uploading minds, remaking the universe, enhancing love, and many extreme topics, but I rarely get as negative comments as when discussing the feasibility and ethics of longevity.

The reason for this is in my opinion very much fear of death (with a hefty dose of status quo bias mixed in). As we grow up we have to handle our mortality and we build a defensive framework telling us how to handle it – typically by downplaying the problem of death by ignoring it, explaining or hoping via a religious framework, or finding some form of existential acceptance. But since most people rarely are exposed to dissenting views or alternatives they react very badly when this framework is challenged. This is where death positivity would be very useful.

Why strict immortalism is a non-starter

Given our current scientific understanding death is unavoidable. This is not an issue of whether life extension is possible, but of the basic properties of our universe. Given the accelerating expansion of the universe we can only gain access to a finite amount of material resources. Using these resources is subject to thermodynamic inefficiencies that cannot be avoided. Basically, the third law of thermodynamics and Landauer’s principle imply that there is a finite number of information processing steps that can be undertaken in our future. Eventually the second law of thermodynamics wins (helped by proton decay and black hole evaporation) and nothing that can store information or perform the operations needed for any kind of life will remain. This means that no matter what strange means any being undertakes, as far as we understand physics it will eventually dissolve.
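The finite-operations claim can be put in back-of-the-envelope form. Landauer’s principle says erasing one bit at temperature $T$ dissipates at least $k_B T \ln 2$ of free energy, so a finite stock of free energy $E_{\text{free}}$ bounds the number of irreversible operations (a sketch only, ignoring reversible-computing subtleties and the changing temperature of an expanding universe):

```latex
E_{\text{bit}} \ge k_B T \ln 2
\qquad\Longrightarrow\qquad
N_{\text{ops}} \le \frac{E_{\text{free}}}{k_B T \ln 2}
```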

One should also not discount plain bad luck: finite beings in a universe where quantum randomness happens will sooner or later be subjected to a life-ending coincidence.
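This “sooner or later” can be made precise with a minimal survival model (an illustration, assuming a constant per-unit-time hazard rate $\lambda > 0$ from accidents and quantum flukes): survival probability decays exponentially, and any nonzero hazard drives it to zero.

```latex
P(\text{survive to } t) = e^{-\lambda t} \xrightarrow{\;t \to \infty\;} 0,
\qquad
\mathbb{E}[\text{lifespan}] = \frac{1}{\lambda}
```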

The Heat Death of the Universe and Quantum Murphy’s Law are very distant upper bounds. They are important because they force any transhumanist who doesn’t want to dump rationality overboard – insisting that the laws of physics must allow true immortality because it is desired – to acknowledge that they will eventually die. Perhaps aeons hence and in a vastly changed state, but at some point it will have happened (perhaps so subtly that nobody even noticed: shifts in identity also count).

To this the reasonable transhumanist responds with a shrug: we have more pressing mortality concerns today, when ageing, disease, accidents and existential risk are so likely that we can hardly expect to survive a century. We endlessly try to explain to interviewers that transhumanism is not really seeking capital-“I” Immortality but merely indefinitely long lifespans, and that we are actually interested in years of health and activity rather than just watching the clock tick as desiccated mummies. The point is, a reasonable transhumanist view will be focused on getting more and better life.

Running from death or running towards life?

One can strive to extend life because one is scared of dying – death as something deeply negative – or because life is worth living – remaining alive has a high value.

But if one can never avoid having death at some point in one’s lifespan, then the disvalue of death will always be present; it will not affect whether living one life is better than another.

An exception may be if one believes that the disvalue can be discounted by being delayed, but this merely affects the local situation in time: at any point one prefers the longest possible life, but the overall utility as seen from the outside when evaluating a life will always suffer the total disvalue.

I believe the death-apologist thinkers have made some good points about why death is not intensely negative (e.g. the Lucretian arguments). I do not think they are convincing when they claim it is a positive property of the world. If “death gives life meaning” then presumably divorce is what makes love meaningful. If it is a good thing that old people retire from positions of power, why not have mandatory retirement rather than the equivalent of random death-squads? In fact, defences of death as a positive tend to use remarkably weak reasons as motivation, reasons that would never be taken seriously if used to motivate complacency about a chronic or epidemic disease.

Life-affirming transhumanism, on the other hand, is not too worried about the inevitability of death. The question is rather how much and what kind of good life is possible. One can view it as a game of seeking to maximise a “score” of meaningfulness and value under risk. Some try to minimise the risk, others to get high points, still others want to figure out the rules or structure their life projects to make a meaningful structure across time.

Ending the game properly

This also includes ending life when it is no longer meaningful. Were one to regard death as extremely negative, then one should hang on even if there were nothing but pain and misery in the future. If death merely has zero value, then one can be in bad states where it is better to be dead than alive.

As we have argued in a recent paper, many of the anti-euthanasia arguments turn on their head when applied to cryonics: if one regards life as too precious a gift to be thrown away, and holds that the honourable thing is to continue to struggle on, then undergoing cryothanasia (being cryonically suspended well before one would otherwise have died) when suffering a terminal disease, in the rational hope that this improves one’s chances, clearly seems better than not taking the chance or allowing the disease to reduce one’s chances.

This also shows an important point where one kind of death positivity and transhumanism may part ways. One can frame accepting death as accepting that death exists and dealing with it. Another frame, equally compatible with the statement, is not struggling too much against it. The second frame is often what philosophers suggest as a means to equanimity. While possibly psychologically beneficial it clearly has limits: the person not going to the doctor with a treatable disease when they know it will develop into something untreatable (or not stepping out of the way of an approaching truck) is not just “not struggling” but being actively unreasonable. One can and should set some limit where struggle and interventions become unreasonable, but this is always going to be both individual and technology dependent. With modern medicine many previously lethal conditions (e.g. bacterial meningitis, many cancers) have become treatable to such an extent that it is unreasonable not to avail oneself of treatment.

Transhumanism places a greater value on longevity than is usual, partially because of its optimistic outlook (the future is likely to be good, technology is likely to advance), and this leads to a greater willingness to struggle on even when conventional wisdom says it is a good time to give up and become fatalistic. This is a reason transhumanists are far more comfortable than most people with radical attempts to stave off death, including cryonics.

Cryonics

Cryonics is another surprisingly death-positive aspect of transhumanism. It forces you to confront your mortality head on, and it does not offer very strong reassurance. Quite the opposite: it requires planning for one’s (hopefully temporary) demise, considering the various treatment/burial options, likely causes of death, and the risks and uncertainties involved in medicine. I have friends who seriously struggled with their dread of death when trying to sign up.

Talking about the cryonics choice with family is one of the hardest parts of the practice and has caused significant heartbreak, yet keeping silent and springing it as a surprise guarantees even more grief (and lawsuits). This is one area where better openness about death would be extremely helpful.

It is telling that members of the cryonics community seek each other out, since it is one of the few environments where these things can be discussed openly and without stigma. It seems likely that the death-positive and cryonics communities have more in common than they might think.

Cryonics also has to deal with the bureaucracy and logistics of death, with the added complication that it aims at something slightly different from conventional burial. To a cryonicist the patients are still patients even when they have undergone cardiac arrest, are legally declared dead, solid and immersed in liquid nitrogen: they need care and protection since they may only be temporarily dead. Or deanimated, if we want to reserve “death” as a word for the irreversibly non-living. (As a philosopher, I must say I find the cryosuspended state delightfully like a thought-experiment in a philosophy paper.)

Final words

I have argued that transhumanism should be death-positive, at least in the sense that discussing death and accepting its long-term inevitability is both healthy and realistic. Transhumanists will not generally support a positive value of death and will tend to react badly to that kind of statement. But assigning it a vastly negative value produces a timid outlook that is unlikely to work well with the other parts of the transhumanist idea complex. Rather, death is bad because life is good, but that doesn’t mean we should not think about it.

Indeed, transhumanists may want to become better at talking about death. Respected and liked people who have been part of the movement for a long time have died and we are often awkward about how to handle it. Transhumanists need to handle grief too. Even if the subject may be only temporarily dead in a cryonic tank.

Conversely, transhumanism and cryonics may represent an interesting challenge for the death positive movement in that they certainly represent an unusual take on attitudes and customs towards death. Seeing death as an engineering problem is rather different from how most people see it. Questioning the human condition is risky when dealing with fragile situations. And were transhumanism to be successful in some of its aims there may be new and confusing forms of death.

Review of the cyborg bill of rights 1.0

The Cyborg Bill of Rights 1.0 is out. Rich MacKinnon suggests the following rights:

FREEDOM FROM DISASSEMBLY
A person shall enjoy the sanctity of bodily integrity and be free from unnecessary search, seizure, suspension or interruption of function, detachment, dismantling, or disassembly without due process.

FREEDOM OF MORPHOLOGY
A person shall be free (speech clause) to express themselves through temporary or permanent adaptions, alterations, modifications, or augmentations to the shape or form of their bodies. Similarly, a person shall be free from coerced or otherwise involuntary morphological changes.

RIGHT TO ORGANIC NATURALIZATION
A person shall be free from exploitive or injurious 3rd party ownerships of vital and supporting bodily systems. A person is entitled to the reasonable accrual of ownership interest in 3rd party properties affixed, attached, embedded, implanted, injected, infused, or otherwise permanently integrated with a person’s body for a long-term purpose.

RIGHT TO BODILY SOVEREIGNTY
A person is entitled to dominion over intelligences and agents, and their activities, whether they are acting as permanent residents, visitors, registered aliens, trespassers, insurgents, or invaders within the person’s body and its domain.

EQUALITY FOR MUTANTS
A legally recognized mutant shall enjoy all the rights, benefits, and responsibilities extended to natural persons.

As a sometime philosopher with a bit of history of talking about rights regarding bodily modification, I of course feel compelled to comment.

What are rights?

First, what is a right? Clearly anybody can state that we have a right to X, but only some agents and X-rights make sense or have staying power.

One kind of right is the legal right, in its various forms. This can be international law, national law, or even informal national codes (for example the Swedish allemansrätten, which is actually not a moral/human right and is actually fairly recent). Here the agent has to be some legitimate law- or rule-maker. The US Bill of Rights is an example: the result of a political process that produced legal rights, with relatively little if any moral content. Legal rights need to be enforceable somehow.

Then there are normative moral principles such as fundamental rights (applicable to a person because they are a person), natural rights (applicable because of facts of the world) or divine rights (imposed by God). These are universal and egalitarian: applicable everywhere, everywhen, and the same for everybody. Bentham famously dismissed the idea of natural rights as “nonsense upon stilts” and there is a general skepticism today about rights being fundamental norms. But insofar as they do exist, anybody can discover and state them. Moral rights need to be doable.

While there may be doubts about the metaphysical nature of rights, if a society agrees on a right it will shape action, rules and thinking in an important way. It is like money: it only gets value by the implicit agreement that it has value and can be exchanged for goods. Socially constructed rights can be proposed by anybody, but they only become real if enough people buy into the construction. They might be unenforceable and impossible to perform (which may over time doom them).

What about the cyborg rights? There is no clear reference to moral principles, and only the last one refers to law. In fact, the preamble states:

Our process begins with a draft of proposed rights that are discussed thoroughly, adopted by convention, and then published to serve as model language for adoption and incorporation by NGOs, governments, and rights organizations.

That is, these rights are at present a proposal for social construction (quite literally) that hopefully will be turned into a convention (a weak international treaty) that eventually may become national law. This also fits with the proposal coming from MacKinnon rather than the General Secretary of the UN – we can all propose social constructions and urge the creation of conventions, treaties and laws.

But a key challenge is to come up with something that can become enforceable at some point. Cyborg bodies might be more finely divisible and transparent than human bodies, so that it becomes hard to regulate these rights. How do you enforce sovereignty against spyware?

Justification

Why is a right a right? There has to be a reason for a right (typically hinted at in preambles full of “whereas…”).

I have mostly been interested in moral rights. Patrick D. Hopkins wrote an excellent overview “Is enhancement worthy of being a right?” in 2008 where he looks at how you could motivate morphological freedom. He argues that there are three main strategies to show that a right is fundamental or natural:

  1. That the right conforms to human nature. This requires showing that it fits a natural end. That is, there are certain things humans should aim for, and rights help us live such lives. This is also the approach of natural law accounts.
  2. That the right is grounded in interests. Rights help us get the kinds of experiences or states of the world that we (rightly) care about. That is, there are certain things that are good for us (e.g.  “the preservation of life, health, bodily integrity, play, friendship, classic autonomy, religion, aesthetics, and the pursuit of knowledge”) and the right helps us achieve this. Why those things are good for us is another matter of justification, but if we agree on the laundry list then the right follows if it helps achieve them.
  3. That the right is grounded in our autonomy. The key thing is not what we choose but that we get to choose: without freedom of choice we are not moral agents. Much of rights by this account will be about preventing others from restricting our choices and not interfering with their choices. If something can be chosen freely and does not harm others, it has a good chance to be a right. However, this is a pretty shallow approach to autonomy; there are more rigorous and demanding ideas of autonomy in ethics (see SEP and IEP for more). This is typically how many fundamental rights get argued (I have a right to my body since if somebody can interfere with my body, they can essentially control me and prevent my autonomy).

One can do this in many ways. For example, David Miller writes on grounding human rights that one approach is to allow people from different cultures to live together as equals, or basing rights on human needs (very similar to interest accounts), or the instrumental use of them to safeguard other (need-based) rights. Many like to include human dignity, another tricky concept.

Social constructions can have a lot of reasons. Somebody wanted something, and this was recognized by others for some reason. Certain reasons are cultural universals, and that make it more likely that society will recognize a right. For example, property seems to be universal, and hence a right to one’s property is easier to argue than a right to paid holidays (but what property is, and what rules surround it, can be very different).

Legal rights are easier. They exist because there is a law or treaty, and the reasons for that are typically a political agreement on something.

It should be noted that many declarations of rights do not give any reasons, often because we would disagree on the reasons even if we agree on the rights. The UN declaration of human rights gives no hint of where these rights come from (compare the US declaration of independence, where it is “self-evident” that the creator has provided certain rights to all men). Still, this is somewhat unsatisfactory and leaves many questions unanswered.

So, how do we justify cyborg rights?

In the liberal rights framework I used for morphological freedom we can derive things rather straightforwardly: we have a fundamental right to life, and from this follows freedom from disassembly. We have a fundamental right to liberty, and together with the right to life this leads to a right to our own bodies, bodily sovereignty, freedom of morphology and the first half of the right to organic naturalization. We have a right to our property (typically derived from fundamental rights to seek our happiness and have liberty), and from this the second half of the organic naturalization right follows (we are literally mixing ourselves, rather than our work, with the value produced by the implants). Equality for mutants follows from having the same fundamental rights as humans (note that the bill talks about “persons”, and most ethical arguments try to be valid for whatever entities count as persons – this tends to be more than general enough to cover cyborg bodies). We still need some justification of the fundamental rights of life, liberty and happiness, but that is outside the scope of this exercise. Just use your favorite justifications.

The human nature approach would say that cyborg nature is such that these rights fit with it. This might be tricky to use as long as we do not have many cyborgs to study the nature of. In fact, since cyborgs are imagined as self-creating (or at least self-modifying) beings it might be hard to find any shared nature… except maybe the self-creation part. As I often like to argue, this is close to Mirandola’s idea of human dignity deriving from our ability to change ourselves.

The interest approach would ask how the cyborg interests are furthered by these rights. That seems pretty straightforward for most reasonably human-like interests. In fact, the above liberal rights framework is to a large extent an interest-based account.

The autonomy account is also pretty straightforward. All cyborg rights except the last are about autonomy.

Could we skip the ethics and these possibly empty constructions? Perhaps: we could see the cyborg bill of rights as a way of making a cyborg-human society possible to live in. We need to tolerate each other and set boundaries on allowed messing around with each other’s bodies. Universals of property lead to the naturalization right, and territoriality to the sovereignty right; the universal that actions under self-control are distinguished from those not under control might be taken as the root of autonomy-like motivations that then support the rest.

Which one is best? That depends. The liberal rights/interest system produces nice modular rules, although there will be much argument about what takes precedence. The human nature approach might be deep and poetic, but is potentially easy to disagree on. Autonomy is very straightforward (except when the cyborg starts messing with their brain). Social constructivism allows us to bring in issues of what actually works in a real society, not just what perfect isolated cyborgs (on a frictionless infinite plane) should do.

Parts of rights

One of the cool properties of rights is that they have parts – “the Hohfeldian incidents”, after Wesley Hohfeld (1879–1918), who discovered them. He was thinking of legal rights, but this applies to moral rights too. His system is descriptive – this is how rights work – rather than explaining why they came about or whether this is a good thing. The four parts are:

Privileges (alias liberties): I have a right to eat what I want. Someone with a driver’s licence has the privilege to drive. If you have a duty not to do something, then you have no privilege about it.

Claims: I have a claim on my employer to pay my salary. Children have a claim vis-a-vis every adult not to be abused. My employer is morally and legally dutybound to pay, since they agreed to do so. We are dutybound to refrain from abusing children since it is wrong and illegal.

These two are what most talk of rights deals with. In the bill, the freedom from disassembly and freedom of morphology are about privileges and claims. The next two are a bit meta, dealing with rights over the first two:

Powers: My boss has the power to order me to research a certain topic, and then I have a duty to do it. I can invite somebody to my home, and then they have the privilege of being there as long as I give it to them. Powers allow us to change privileges and claims, and sometimes powers (an admiral can relieve a captain of the power to command a ship).

Immunities: My boss cannot order me to eat meat. The US government cannot impose religious duties on citizens. These are immunities: certain people or institutions cannot change other incidents.

These parts are then combined into full rights. For example, my property rights to this computer involve the privilege to use the computer, a claim against others to not use the computer, the power to allow others to use it or to sell it to them (giving them the entire rights bundle), and an immunity against others altering these rights. Sure, in practice the software inside is of doubtful loyalty and there are law-enforcement and emergency exceptions, but the basic system is pretty clear. Licence agreements, by contrast, typically give you a far more restricted bundle.
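The bundle structure of the property example can be sketched as a tiny data model. This is purely illustrative (not an established formalism); all class and field names are invented for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Right:
    """A full right modelled as a bundle of Hohfeldian incidents."""
    holder: str
    subject: str
    privileges: set = field(default_factory=set)   # what the holder may do
    claims: set = field(default_factory=set)       # duties others owe the holder
    powers: set = field(default_factory=set)       # incidents the holder can change
    immunities: set = field(default_factory=set)   # incidents others cannot change

# The computer-property example from the text, expressed as one bundle:
my_computer = Right(
    holder="me",
    subject="this computer",
    privileges={"use the computer"},
    claims={"others must not use it"},
    powers={"permit others to use it", "sell it (transfer the whole bundle)"},
    immunities={"others may not alter these incidents"},
)
```

One design point the model makes visible: selling is a power that operates on the bundle itself, which is why the last two incidents are “a bit meta” relative to the first two.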

Sometimes we speak about positive and negative rights: if I have a negative right I am entitled to non-interference from others, while a positive right entitles me to some help or goods. My right to my body is a negative right in the sense that others may not prevent me from using or changing my body as I wish, but I do not have a positive right to demand that they help me with some weird bodymorphing. However, in practice there is a lot of blending going on: public healthcare systems give us positive rights to some (but not all) treatment, policing gives us a positive right of protection (whether we want it or not). If you are a libertarian you will tend to emphasize the negative rights as being the most important, while social democrats tend to emphasize state-supported positive rights.

The cyborg bill of rights starts by talking about privileges and claims. Freedom of morphology clearly expresses an immunity to forced bodily change. The naturalization right is about immunity from unwilling change of the rights of parts, and an expression of a kind of power over parts being integrated into the body. Sovereignty is all about power over entities getting into the body.

The right of bodily sovereignty seems to imply odd things about consensual sex – once there is penetration, there is dominion. And what about entities that are partially inside the body? I think this is because it is trying to reinvent some of the above incidents. The aim is presumably to cover pregnancy/abortion, what doctors may do, and other interventions at the same time. The doctor case is easy, since it is roughly what we agree on today: we have the power to allow doctors to work on our bodies, but we can also withdraw this whenever we want.

Some other thoughts

The recent case where the police subpoenaed the pacemaker data of a suspected arsonist brings some of these rights into relief. The subpoena occurred with due process, so it was allowed by the freedom from disassembly. In fact, since it is only information, and information that is copied, one can argue that there was no real “disassembly”. There have been cases where police wanted bullets lodged in people in order to do ballistics on them, but US courts have generally found that bodily integrity trumps the need for evidence. Maybe one could argue for a derived right to bodily privacy, but social needs can presumably trump this just as they trump normal privacy. Right now views on bodily integrity and privacy are still based on the assumption that bodies are integral and opaque. In a cyborg world this is no longer true, and the law may well move in a more invasive direction.

“Legally recognized mutant”? What about mutants denied legal recognition? Legal recognition makes sense for things that the law must differentiate between, not for things the law is blind to. Legally recognized mutants (whatever they are) would be a group that needs to be treated in some special way. If they are just like natural humans they do not need special recognition. We may have laws making it illegal to discriminate against mutants, but this is a law about a certain kind of behavior rather than the recipient. If I racially discriminate against somebody but happen to be wrong about their race, I am still guilty. So the legal recognition part does not do any work in this right.

And why just mutants? Presumably the aim here is to cover cyborgs, transhumans and other prefix-humans so they are recognized as legal and moral agents with the same standing. The issue is whether this is achieved by arguing that they were human and “mutated”, or are descended from humans, and hence should have the same standing, or whether this is due to them having the right kind of mental states to be persons. The first approach is really problematic: anencephalic infants are mutants but hardly persons, and basing rights on lineage seems ripe for abuse. The second is much simpler, and allows us to generalize to other beings like brain emulations, aliens, hypothetical intelligent moral animals, or the Swampman.

This links to a question that might deserve a section on its own: who are the rightsholders? Normal human rights typically deal with persons, which at least includes adults capable of moral thinking and acting (they are moral agents). Someone who is incapable, for example due to insanity or being a child, has reduced rights but is still a moral patient (someone we have duties towards). A child may not have full privileges and powers, but they do have claims and immunities. I like to argue that once you can comprehend and make use of a right you deserve to have it, since you have capacity relative to the right. Some people also think prepersons like fertilized eggs are persons and have rights; I think this does not make much sense since they lack any form of mind, but others think that having the potential for a future mind is enough to grant immunity. Tricky border cases like persistent vegetative states, cryonics patients, great apes and weird neurological states keep bioethicists busy.

In the cyborg case the issue is what properties make something a potential rightsholder and how to delineate the border of the being. I would argue that if you have a moral agent system it is a rightsholder no matter what it is made of. That is fine, except that cyborgs might have interchangeable parts: if cyborg A gives her arm to cyborg B, has anything changed? I would argue that the arm switched from being a part of/property of A to being a part of/property of B, but the individuals did not change since the parts that make them moral agents are unchanged (this is just how transplants don’t change identity). But what if A gave part of her brain to B? A turns into A’, B turns into B’, and these may be new agents. Or what if A has outsourced a lot of her mind to external systems running in the cloud or in B’s brain? We may still argue that rights adhere to being a moral agent and person rather than being the same person or a person that can easily be separated from other persons or infrastructure. But clearly we can make things really complicated through overlapping bodies and minds.

Summary

I have looked at the cyborg bill of rights and how it fits with rights in law, society and ethics. Overall it is a first stab at establishing social conventions for enhanced, modular people. It likely needs a lot of tightening up to work, and people need to actually understand and care about its contents for it to have any chance of becoming something legally or socially “real”. From an ethical standpoint one can motivate the bill in a lot of ways; for maximum acceptance one needs to use a wide and general set of motivations, but these will lead to trouble when we try to implement things practically since they give no way of trading one off against another one in a principled way. There is a fair bit of work needed to refine the incidences of the rights, not to mention who is a rightsholder (and why). That will be fun.

Living forever

Benjamin Zand has made a neat little documentary about transhumanism, attempts to live forever and the posthuman challenge. I show up, of course, as soon as ethics is mentioned.

Benjamin and I had a much, much longer (and very fun) conversation about ethics than could ever be squeezed into a TV documentary. Everything from personal identity to overpopulation to the meaning of life. Plus the practicalities of cryonics, transhuman compassion and how to test whether brain emulation actually works.

I think the inequality and control issues are interesting to develop further.

Would human enhancement boost inequality?

There is a trivial sense in which just inventing an enhancement produces profound inequality since one person has it, and the rest of mankind lacks it. But this is clearly ethically uninteresting: what we actually care about is whether everybody gets to share something good eventually.

However, the trivial example shows an interesting aspect of inequality: it has a timescale. An enhancement that will eventually benefit everyone but is unequally distributed may be entirely OK if it is spreading fast enough. In fact, by being expensive at the start it might even act as a kind of early adopter/rich tax, since the first versions will pay for the R&D of consumer versions – compare computers and smartphones. While one could argue that temporary inequality is bad, long-term benefits would outweigh it for most enhancements and most value theories: we should not sacrifice the poor of tomorrow for the poor of today by delaying the launch of beneficial technologies (especially since the R&D to make them truly cheap is unlikely to happen if technocrats keep the technology in their labs – making tech cheap and useful is actually one area where we know empirically that the free market is really good).

If the spread of some great enhancement could be faster though, then we may have a problem.

I often encounter people who think that the rich will want to keep enhancements to themselves. I have never encountered any evidence for this being actually true except for status goods or elites in authoritarian societies.

There are enhancements like height that are merely positional: it is good to be taller than others (if male, at least), but if everybody gets taller nobody benefits and everybody loses a bit (more banged heads and heart problems). Other enhancements are absolute: living healthy longer or being smarter is good for nearly all people regardless of how long other people live or how smart they are (yes, there might be some coordination benefits if you live just as long as your spouse or have a society where you can participate intellectually, but these hardly negate the benefit of joint enhancement – in fact, they support it). Most of the interesting enhancements are in this category: while they might be great status goods at first, I doubt they will remain so for long since there are other reasons than status to get them. In fact, there are likely network effects from some enhancements like intelligence: the more smart people working together in a society, the greater the benefits.

In the video, I point out that limiting enhancement to the elite means the society as a whole will not gain the benefit. Since elites reap rents from their society, it is from their perspective actually in their best interest to have a society growing richer and more powerful (as long as they stay in charge). Societies that hoard enhancement for their elites will lose out in the long run to societies with broader spreads of enhancement. We know that widespread schooling, free information access and freedom to innovate tend to produce far wealthier and more powerful societies than those where only elites have access to these goods. I have strong faith in the power of diverse societies, despite their messiness.

My real worry is that enhancements may be like services rather than gadgets or pills (which come down exponentially in price). That would keep them harder to reach, and might hold back adoption (especially since we have not been as good at automating services as manufacturing). Still, we do subsidize education at great cost, and if an enhancement is desirable democratic societies are likely to scramble for a way of supplying it widely, even if it is only through an enhancement lottery.

However, even a world with unequal distribution is not necessarily unjust. Besides the standard Nozickian argument that a distribution is just if it was arrived at through just means, there is the Rawlsian argument that an unequal distribution is OK if it actually produces benefits for the weakest. This is likely very true for intelligence amplification and maybe brain emulation, since they are likely to cause strong economic growth and innovations that produce spillover effects – especially if there is any form of taxation or even mild redistribution.

Who controls what we become? Nobody, we/ourselves/us

The second issue is who gets a say in this.

As I respond in the interview, in a way nobody gets a say. Things just happen.

People innovate, adopt technologies and change, and attempts to control that mean controlling creativity, business and autonomy – you had better have a very powerful ethical case to argue for limitations on these, and an even better political case to implement any. A moral limitation of life extension needs to explain how it averts consequences worse than 100,000 dead people per day. Even if we all become jaded immortals, that seems less horrible than a daily pile of corpses 12.3 meters high and 68 meters across (assuming an angle of repose of 20 degrees – this was the most gruesome geometry calculation I have done so far). Saying we should control technology is a bit like saying society should control art: it might be more practically useful, but it springs from the same well of creativity, and limiting it is as suffocating as limiting what may be written or painted.
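For the morbidly curious, the pile geometry can be sanity-checked with a short back-of-the-envelope script. This is a minimal sketch assuming a conical pile and roughly 0.15 m³ per body including packing gaps – the per-body volume is my assumption, not a figure from the text:

```python
import math

def corpse_pile(bodies, volume_per_body=0.15, repose_deg=20.0):
    """Dimensions of a conical pile with a given angle of repose.

    Cone volume: V = (1/3) * pi * r^2 * h, with h = r * tan(theta).
    """
    volume = bodies * volume_per_body  # total pile volume, m^3
    theta = math.radians(repose_deg)
    # Substitute h = r * tan(theta) and solve V = (1/3) * pi * r^3 * tan(theta) for r
    r = (3 * volume / (math.pi * math.tan(theta))) ** (1 / 3)
    h = r * math.tan(theta)
    return 2 * r, h  # diameter and height in metres

diameter, height = corpse_pile(100_000)
print(f"{diameter:.0f} m across, {height:.1f} m high")
```

With these assumptions the daily pile comes out to about 68 m across and 12.4 m high, close to the figures quoted above.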

Technological determinism is often used as an easy out for transhumanists: the future will arrive no matter what you do, so the choice is just between accepting or resisting it. But this is not the argument I am making. That nobody is in charge doesn’t mean the future is not changeable.

The very creativity, economics and autonomy that create the future are by their nature individual and unpredictable. While we can relatively safely assume that if something can be done it will be done, what actually matters is whether it will be done early or late, seldom or often. We can try to hurry beneficial or protective technologies so they arrive before the more problematic ones. We can try to favour beneficial directions over more problematic ones. We can create incentives that make fewer people want to use the bad ones. And so on. The “we” in this paragraph is not so much a collective coordinated “us” as the sum of individuals, companies and institutions, “ourselves”: there is no requirement to get UN permission before you set out to make safe AI or develop life extension. It just helps if a lot of people support your aims.

John Stuart Mill’s harm principle allows society to step in and limit freedom when its exercise harms others, but most enhancements look unlikely to produce easily recognizable harms. This is not a ringing endorsement: as Nick Bostrom has pointed out, there are some bad directions of evolution we might not want to go down, yet it is individually rational for each of us to go slightly in that direction. And existential risk is so dreadful that it actually does provide a valid reason to stop certain human activities if we cannot find alternative solutions. So while I think we should not try to stop people from enhancing themselves, we should want to improve our collective ability to coordinate and restrain ourselves. This is the “us” part. Restraint does not just have to happen in the form of rules: we already restrain ourselves using socialization, reputations, and incentive structures. Moral and cognitive enhancement could add restraints we currently do not have: if you can clearly see the consequences of your actions it becomes much harder to do bad things. The long-term outlook fostered by radical life extension may also make people more risk averse and willing to plan for long-term sustainability.

One could dream of some enlightened despot or technocrat deciding. A world government filled with wise, disinterested and skilled members planning our species future. But this suffers from essentially the economic calculation problem: while a central body might have a unified goal, it will lack information about the preferences and local states among the myriad agents in the world. Worse, the cognitive abilities of the technocrat will be far smaller than the total cognitive abilities of the other agents. This is why rules and laws tend to get gamed – there are many diverse entities thinking about ways around them. But there are also fundamental uncertainties and emergent phenomena that will bubble up from the surrounding agents and mess up the technocratic plans. As Virginia Postrel noted, the typical solution is to try to browbeat society into a simpler form that can be managed more easily… which might be acceptable if the stakes are the very survival of the species, but otherwise just removes what makes a society worth living in. So we better maintain our coordination ourselves, all of us, in our diverse ways.

 

Harming virtual bodies

I was recently interviewed by Anna Denejkina for Vertigo, and references to the article seem to be circulating. Given the hot-button topic – transhumanism and virtual rape – I thought it might be relevant to bring out what I said in the email interview.

(Slightly modified for clarity, grammar and links)

> How are bioethicists and philosophers coping with the ethical issues which may arise from transhumanist hacking, and what would be an outcome of hacking into the likes of full body haptic suit, a smart sex toy, e-spot implant, i.e.: would this be considered act of kidnapping, or rape, or another crime?

There is some philosophy of virtual reality and augmented reality, and a lot more about the ethics of cyberspace. The classic essay is this 1998 one, dealing with a text-based rape in the mid-90s.

My personal view is that our bodies are the interfaces between our minds and the world. The evil of rape is that it violates our ability to interact with the world in a sensual manner: it involves both coercion of bodies and the infliction of a mental violation. So from this perspective it does not matter much whether the rape happens to a biological body, a virtual body connected via a haptic suit, or some brain implant. There might of course be lesser wrongs where the coercion is limited (you can easily log out) or the violation milder (a hacked sex toy might infringe on privacy and one’s sexual integrity, but it cannot coerce): the key issue is that somebody is violating the body-mind interface system, and we are especially vulnerable when this involves our sexual, emotional and social sides.

Widespread use of virtual sex will no doubt produce many tricky ethical situations. (what about recording the activities and replaying them without the partner’s knowledge? what if the partner is not who I think it is? what about mapping the sexual encounter onto virtual or robot bodies that look like children and animals? what about virtual sexual encounters that break the laws in one country but not another?)

Much of this will sort itself out like with any new technology: we develop norms for it, sometimes after much debate and anguish. I suspect we will become much more tolerant of many things that are currently weird and taboo. The issue ethicists may worry about is whether we would also become blasé about things that should not be accepted. I am optimistic about it: I think that people actually do react to things that are true violations.

> If such a violation was to occur, what can be done to ensure that today’s society is ready to treat this as a real criminal issue?

Criminal law tends to react slowly to new technology, and usually tries to map new crimes onto old ones (if I steal your World of Warcraft equipment I might be committing fraud rather than theft, although different jurisdictions have very different views – some even treat this as gambling debts). This is especially true for common law systems like the US and UK. In civil law systems like most of Europe, laws tend to get passed when enough people convince politicians that There Ought To Be a Law Against It (sometimes unwisely).

So to sum up, look at whether people involuntarily suffer real psychological anguish, loss of reputation or loss of control over important parts of their exoselves due to the actions of other people. If they do, then at least something immoral has happened. Whether laws, better software security, social norms or something else (virtual self-defence? built-in safewords?) is the best remedy may depend on the technology and culture.

I think there is an interesting issue in what role the body plays here. As I said, the body is an interface between our minds and the world around us. It is also a nontrivial thing: it has properties and states of its own, and these affect how we function. Even if one takes a nearly cybergnostic view that we are merely minds interfacing with the world rather than a richer embodiment view this plays an important role. If I have a large, small, hard or vulnerable body, it will affect how I can act in the world – and this will undoubtedly affect how I think of myself. Our representations of ourselves are strongly tied to our bodies and the relationship between them and our environment. Our somatosensory cortex maps itself to how touch distributes itself on our skin, and our parietal cortex not only represents the body-environment geometry but seems involved in our actual sense of self.

This means that hacking the body is more serious than hacking other kinds of software or possessions. Currently it is our only way of existing in the world. Even in an advanced VR/transhuman society where people can switch bodies simply and freely, infringing on bodies has bigger repercussions than changing other software outside the mind – especially if it is subtle. The violations discussed in the article are crude, overt ones. But subtle changes to ourselves may fly under the radar of outrage, yet do harm.

Most people are no doubt more interested in the titillating combination of sex and tech – there is a 90s cybersex vibe coming off this discussion, isn’t there? The promise of new technology to give us new things to be outraged or dream about. But the philosophical core is about the relation between the self, the other, and what actually constitutes harm – very abstract, and not truly amenable to headlines.

 

Contraire de l’esprit de l’escalier: enhancement and therapy

Yesterday I participated in a round-table discussion with professor Miguel Benasayag about the therapy vs. enhancement distinction at the TransVision 2014 conference. Unfortunately I could not get a word in edgewise, so it was not much of a discussion. Here are the responses I wanted to make but did not get the chance to: in a way this post is the opposite of l’esprit de l’escalier.

Enhancement: top-down, bottom-up, or sideways?

Do enhancements – whether implanted or not – represent a top-down imposition of order on the biosystem? If one accepts that view, one ends up with a dichotomy between that and bottom-up approaches where biosystems are trained or placed in a smart context that produces the desired outcome: unless one thinks imposing order is a good thing, one becomes committed to some form of naturalistic conservatism.

But this ignores something Benasayag brought up himself: the body and brain are flexible and adaptable. The cerebral cortex can reorganize to become a primary cortex for any sense, depending on which input nerve is wired up to it. My friend Todd’s implanted magnet has likely reorganized a small part of his somatosensory cortex to represent his new sense. This enhancement is neither a top-down imposition of a desired cortical structure nor a pure bottom-up training of the biosystem.

Real enhancements integrate, they do not impose a given structure. This also addresses concerns of authenticity: if enhancements are entirely externally imposed – whether through implantation or external stimuli – they are less due to the person using them. But if their function is emergent from the person’s biosystem, the device itself, and how it is being used, then it will function in a unique, personal way. It may change the person, but that change is based on the person.

Complex enhancements

Enhancements are often described as simple, individualistic, atomic, things. But actual enhancements will be systems. A dramatic example was in my ears: since I am both French- and signing-impaired, I could listen to (and respond to) comments thanks to an enhancing system involving three skilled translators, a set of wireless headphones and microphones. This system was not just complex, but it was adaptive (translators know how to improvise, we the users learned how to use it) and social (micro-norms for how to use it emerged organically).

Enhancements need a social infrastructure to function – both a shared, distributed knowledge of how and when to use them (praxis) and possibly a distributed functioning itself. A brain-computer interface is of little use without anybody to talk to. In fact, it is the enhancements that affect communication abilities that are most powerful both in the sense of enhancing cognition (by bringing brains together) and changing how people are socially situated.

Cochlear implants and social enhancement

This aspect of course links to the issues in the adjacent debate about disability. Are we helping children by giving them cochlear implants, or are we undermining a vital deaf cultural community? The unique thing about cochlear implants is that they have this social effect and have to be used early in life for best results. In this case there is a tension between the need to integrate the enhancement with the hearing and language systems in an authentic way, a shift in which social community will be readily available, and concerns that this is just being used to top-down normalize away the problem of deafness. How do we resolve this?

The value of deaf culture is largely its value to members: there might be some intrinsic value to the culture, but this is true for every culture and subculture. I think it is safe to say there is a fairly broad consensus in western culture today that individuals should not sacrifice their happiness – and especially not be forced to do it – for the sake of the culture. It might be supererogatory: a good thing to do, but not something that can be demanded. Culture is for the members, not the other way around: people are ends, not means.

So the real issue is the social linkages and the normalisation. How do we judge the merits of being able to participate in social networks? One might be small but warm, another vast and mainstream. It seems that the one thing to avoid is not being able to participate in either. But this is not a technical problem as much as a problem of adaptation and culture. Once implants are good enough that learning to use them does not compete with learning signing the real issue becomes the right social upbringing and the question of personal choices. This goes way beyond implant technology and becomes a question of how we set up social adaptation processes – a thick, rich and messy domain where we need to do much more work.

It is also worth considering the next step. What if somebody offered a communications device that would enable an entirely new form of communication, and hence social connection? In a sense we are gaining that using new media, but one could also consider something direct, like Egan’s TAP. As that story suggests, there might be rather subtle effects if people integrate new connections – in his case merely epistemic ones, but one could imagine entirely new forms of social links. How do we evaluate them? Especially since having a few pioneers test them tells us less than for non-social enhancements. That remains a big question.

Justifying off-label enhancement

A somewhat fierce question I got (and did not get to respond to) was how I could justify occasionally taking modafinil, a drug intended for narcoleptics.

There seems to be a deontological or intention-oriented view behind the question: the intentions behind making the drug should be obeyed. But many drugs have been approved for one condition and then had their use expanded to other conditions. Presumably aspirin use for cardiovascular conditions is not unethical. And pharma companies largely intend to make money by making medicines, so the deep intention might be trivial to meet. More generally, claiming that the point of drugs is to help sick people (whom we have an obligation to help) does not work, since drug use by non-sick people obviously exists (sports medicine, for example). So unless many current practices are deeply unethical, this line of argument fails.

What I think was the real source of the question was the concern that my use somehow deprived a sick person of the drug. This is false, since I paid for it myself: the market is flexible enough to produce enough, and it was not a case of splitting a finite healthcare cake. The finiteness case might apply if we were talking about how much care my neighbours and I would get for our respective illnesses, and whether they had a claim on my behaviour through our shared healthcare cake. So unless my interlocutor thought my use was likely to cause health problems she would have to pay for, it seems that this line of reasoning fails too.

The deep issue is of course whether there is a normatively significant difference between therapy and enhancement. I deny it. I think the goal of healthcare should not be health but wellbeing. Health is just an enabling instrumental thing. And it is becoming increasingly individual: I do not need more muscles, but I do benefit from a better brain for my life project. Yours might be different. Hence there is no inherent reason to separate treatment and enhancement: both aim at the same thing.

That said, in practice people make this distinction and use it to judge what care they are willing to pay for on behalf of their fellow citizens. But this will shift as technology and society change, and as I said, I do not think it is a normative issue. A political issue, yes, a messy one, yes, but not foundational.

What do transhumanists think?

One of the greatest flaws of the term “transhumanism” is that it suggests there is something in particular all transhumanists believe. Benasayag made some rather sweeping claims about what transhumanists want (enhancement as embodying body-hate and a desire for control) that were most definitely not shared by the actual transhumanists in the audience or on stage. It is as problematic as claiming that all French intellectuals believe something: at best a loose generalisation, but most likely utterly misleading. But when you label a group – especially if they themselves are trying to maintain an official label – it becomes easier to claim that all transhumanists believe in something. Outsiders also do not see the sheer diversity inside, assuming everybody agrees on the few samples of writing they have read.

The fault here lies both in the laziness of outside interlocutors and in transhumanists not making their diversity clearer, perhaps by avoiding slapping the term “transhumanism” on every relevant issue: human enhancement is of interest to transhumanists, but we should be able to discuss it even if there were no transhumanists.