Review of the cyborg bill of rights 1.0

The Cyborg Bill of Rights 1.0 is out. Rich MacKinnon suggests the following rights:

A person shall enjoy the sanctity of bodily integrity and be free from unnecessary search, seizure, suspension or interruption of function, detachment, dismantling, or disassembly without due process.

A person shall be free (speech clause) to express themselves through temporary or permanent adaptions, alterations, modifications, or augmentations to the shape or form of their bodies. Similarly, a person shall be free from coerced or otherwise involuntary morphological changes.

A person shall be free from exploitive or injurious 3rd party ownerships of vital and supporting bodily systems. A person is entitled to the reasonable accrual of ownership interest in 3rd party properties affixed, attached, embedded, implanted, injected, infused, or otherwise permanently integrated with a person’s body for a long-term purpose.

A person is entitled to dominion over intelligences and agents, and their activities, whether they are acting as permanent residents, visitors, registered aliens, trespassers, insurgents, or invaders within the person’s body and its domain.

A legally recognized mutant shall enjoy all the rights, benefits, and responsibilities extended to natural persons.

As a sometime philosopher with a bit of history of talking about rights regarding bodily modification, I of course feel compelled to comment.

What are rights?

First, what is a right? Clearly anybody can state that we have a right to X, but only some agents and X-rights make sense or have staying power.

One kind of right is the legal right, which comes in various kinds: international law, national law, or even informal national codes (for example the Swedish allemansrätten, which is not a moral/human right and is actually fairly recent). Here the agent has to be some legitimate law- or rule-maker. The US Bill of Rights is an example: the result of a political process that produced legal rights, with relatively little if any moral content. Legal rights need to be enforceable somehow.

Then there are normative moral principles such as fundamental rights (applicable to a person since they are a person), natural rights (applicable because of facts of the world) or divine rights (imposed by God). These are universal and egalitarian: applicable everywhere, everywhen, and the same for everybody. Bentham famously dismissed the idea of natural rights as “nonsense on stilts” and there is a general skepticism today about rights being fundamental norms. But insofar as they do exist, anybody can discover and state them. Moral rights need to be doable.

While there may be doubts about the metaphysical nature of rights, if a society agrees on a right it will shape action, rules and thinking in an important way. It is like money: it only gets value by the implicit agreement that it has value and can be exchanged for goods. Socially constructed rights can be proposed by anybody, but they only become real if enough people buy into the construction. They might be unenforceable and impossible to perform (which may over time doom them).

What about the cyborg rights? There is no clear reference to moral principles, and only the last one refers to law. In fact, the preamble states:

Our process begins with a draft of proposed rights that are discussed thoroughly, adopted by convention, and then published to serve as model language for adoption and incorporation by NGOs, governments, and rights organizations.

That is, these rights are at present a proposal for social construction (quite literally) that hopefully will be turned into a convention (a weak international treaty) that eventually may become national law. This also fits with the proposal coming from MacKinnon rather than the General Secretary of the UN – we can all propose social constructions and urge the creation of conventions, treaties and laws.

But a key challenge is to come up with something that can become enforceable at some point. Cyborg bodies might be more finely divisible and transparent than human bodies, so that it becomes hard to regulate these rights. How do you enforce sovereignty against spyware?


Why is a right a right? There has to be a reason for a right (typically hinted at in preambles full of “whereas…”).

I have mostly been interested in moral rights. Patrick D. Hopkins wrote an excellent overview “Is enhancement worthy of being a right?” in 2008 where he looks at how you could motivate morphological freedom. He argues that there are three main strategies to show that a right is fundamental or natural:

  1. That the right conforms to human nature. This requires showing that it fits a natural end. That is, there are certain things humans should aim for, and rights help us live such lives. This is also the approach of natural law accounts.
  2. That the right is grounded in interests. Rights help us get the kinds of experiences or states of the world that we (rightly) care about. That is, there are certain things that are good for us (e.g.  “the preservation of life, health, bodily integrity, play, friendship, classic autonomy, religion, aesthetics, and the pursuit of knowledge”) and the right helps us achieve this. Why those things are good for us is another matter of justification, but if we agree on the laundry list then the right follows if it helps achieve them.
  3. That the right is grounded in our autonomy. The key thing is not what we choose but that we get to choose: without freedom of choice we are not moral agents. Much of rights by this account will be about preventing others from restricting our choices and not interfering with their choices. If something can be chosen freely and does not harm others, it has a good chance to be a right. However, this is a pretty shallow approach to autonomy; there are more rigorous and demanding ideas of autonomy in ethics (see SEP and IEP for more). This is typically how many fundamental rights get argued (I have a right to my body since if somebody can interfere with my body, they can essentially control me and prevent my autonomy).

One can do this in many ways. For example, David Miller writes on grounding human rights that one approach is to allow people from different cultures to live together as equals, or basing rights on human needs (very similar to interest accounts), or the instrumental use of them to safeguard other (need-based) rights. Many like to include human dignity, another tricky concept.

Social constructions can have a lot of reasons. Somebody wanted something, and this was recognized by others for some reason. Certain reasons are cultural universals, and that makes it more likely that society will recognize a right. For example, property seems to be universal, and hence a right to one’s property is easier to argue than a right to paid holidays (but what property is, and what rules surround it, can be very different).

Legal rights are easier. They exist because there is a law or treaty, and the reasons for that are typically a political agreement on something.

It should be noted that many declarations of rights do not give any reasons, often because we would disagree on the reasons even if we agree on the rights. The UN declaration of human rights gives no hint of where these rights come from (compare the US declaration of independence, where it is “self-evident” that the creator has provided certain rights to all men). Still, this is somewhat unsatisfactory and leaves many questions unanswered.

So, how do we justify cyborg rights?

In the liberal rights framework I used for morphological freedom we could derive things rather straightforwardly: we have a fundamental right to life, and from this follows freedom from disassembly. We have a fundamental right to liberty, and together with the right to life this leads to a right to our own bodies, bodily sovereignty, freedom of morphology and the first half of the right to organic naturalization. We have a right to our property (typically derived from fundamental rights to seek our happiness and have liberty), and from this the second half of the organic naturalization right follows (we are literally mixing ourselves rather than our work with the value produced by the implants). Equality for mutants follows from having the same fundamental rights as humans (note that the bill talks about “persons”, and most ethical arguments try to be valid for whatever entities count as persons – this tends to be more than general enough to cover cyborg bodies). We still need some justification of the fundamental rights to life, liberty and happiness, but that is outside the scope of this exercise. Just use your favorite justifications.

The human nature approach would say that cyborg nature is such that these rights fit with it. This might be tricky to use as long as we do not have many cyborgs to study the nature of. In fact, since cyborgs are imagined as self-creating (or at least self-modifying) beings it might be hard to find any shared nature… except maybe the self-creation part. As I often like to argue, this is close to Mirandola’s idea of human dignity deriving from our ability to change ourselves.

The interest approach would ask how the cyborg interests are furthered by these rights. That seems pretty straightforward for most reasonably human-like interests. In fact, the above liberal rights framework is to a large extent an interest-based account.

The autonomy account is also pretty straightforward. All cyborg rights except the last are about autonomy.

Could we skip the ethics and these possibly empty constructions? Perhaps: we could see the cyborg bill of rights as a way of making a cyborg-human society possible to live in. We need to tolerate each other and set boundaries on allowed messing around with each other’s bodies. Universals of property lead to the naturalization right, territoriality to the sovereignty right, and the universal that actions under self-control are distinguished from those not under control might be taken as the root for autonomy-like motivations that then support the rest.

Which one is best? That depends. The liberal rights/interest system produces nice modular rules, although there will be many arguments about what has precedence. The human nature approach might be deep and poetic, but potentially easy to disagree on. Autonomy is very straightforward (except when the cyborg starts messing with their brain). Social constructivism allows us to bring in issues of what actually works in a real society, not just what perfect isolated cyborgs (on a frictionless infinite plane) should do.

Parts of rights

One of the cool properties of rights is that they have parts – “the Hohfeldian incidents”, after Wesley Hohfeld (1879–1918), who discovered them. He was thinking of legal rights, but this applies to moral rights too. His system is descriptive – this is how rights work – rather than explaining why they came about or whether this is a good thing. The four parts are:

Privileges (alias liberties): I have a right to eat what I want. Someone with a driver’s licence has the privilege to drive. If you have a duty not to do something, then you have no privilege about it.

Claims: I have a claim on my employer to pay my salary. Children have a claim vis-a-vis every adult not to be abused. My employer is morally and legally dutybound to pay, since they agreed to do so. We are dutybound to refrain from abusing children since it is wrong and illegal.

These two are what most talk about rights deals with. In the bill, the freedom from disassembly and freedom of morphology are about privileges and claims. The next two are a bit meta, dealing with rights over the first two:

Powers: My boss has the power to order me to research a certain topic, and then I have a duty to do it. I can invite somebody to my home, and then they have the privilege of being there as long as I give it to them. Powers allow us to change privileges and claims, and sometimes powers (an admiral can relieve a captain of the power to command a ship).

Immunities: My boss cannot order me to eat meat. The US government cannot impose religious duties on citizens. These are immunities: certain people or institutions cannot change other incidents.

These parts are then combined into full rights. For example, my property rights to this computer involve the privilege to use the computer, a claim against others to not use the computer, the power to allow others to use it or to sell it to them (giving them the entire rights bundle), and an immunity of others altering these rights. Sure, in practice the software inside is of doubtful loyalty and there are law-enforcement and emergency situation exceptions, but the basic system is pretty clear. Licence agreements typically give you a far more limited bundle of these incidents.
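As a rough sketch (my own simplification for illustration, not a formal deontic logic), the bundle structure can be modelled as a record of the four incidents:

```python
from dataclasses import dataclass, field

# A right as a bundle of Hohfeldian incidents. The field names follow the
# four parts above; the concrete entries are invented for this example.
@dataclass
class RightsBundle:
    privileges: set = field(default_factory=set)   # acts the holder may perform
    claims: set = field(default_factory=set)       # duties owed by others
    powers: set = field(default_factory=set)       # abilities to alter incidents
    immunities: set = field(default_factory=set)   # incidents others cannot alter

# Full ownership of the computer, as described above:
ownership = RightsBundle(
    privileges={"use"},
    claims={"others_must_not_use"},
    powers={"permit_use", "sell"},
    immunities={"others_cannot_alter_these_rights"},
)

# A typical licence grants a far thinner bundle: use, but no power to sell.
licence = RightsBundle(privileges={"use"})
```

The point of the structure is that "having a right" decomposes: two bundles can share a privilege while differing completely in powers and immunities.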

Sometimes we speak about positive and negative rights: if I have a negative right I am entitled to non-interference from others, while a positive right entitles me to some help or goods. My right to my body is a negative right in the sense that others may not prevent me from using or changing my body as I wish, but I do not have a positive right to demand that they help me with some weird bodymorphing. However, in practice there is a lot of blending going on: public healthcare systems give us positive rights to some (but not all) treatment, policing gives us a positive right of protection (whether we want it or not). If you are a libertarian you will tend to emphasize the negative rights as being the most important, while social democrats tend to emphasize state-supported positive rights.

The cyborg bill of rights starts by talking about privileges and claims. Freedom of morphology clearly expresses an immunity to forced bodily change. The naturalization right is about immunity from unwilling change of the rights of parts, and an expression of a kind of power over parts being integrated into the body. Sovereignty is all about power over entities getting into the body.

The right of bodily sovereignty seems to imply odd things about consensual sex – once there is penetration, there is dominion. And what about entities that are partially inside the body? I think this is because it is trying to reinvent some of the above incidents. The aim is presumably to cover pregnancy/abortion, what doctors may do, and other interventions at the same time. The doctor case is easy, since it is roughly what we agree on today: we have the power to allow doctors to work on our bodies, but we can also withdraw this whenever we want.

Some other thoughts

The recent case where the police subpoenaed the pacemaker data of a suspected arsonist brings some of these rights into relief. The subpoena occurred with due process, so it was allowed by the freedom from disassembly. In fact, since it is only information and it is copied, one can argue that there was no real “disassembly”. There have been cases where police wanted bullets lodged in people in order to do ballistics on them, but US courts have generally found that bodily integrity trumps the need for evidence. Maybe one could argue for a derived right to bodily privacy, but social needs can presumably trump this just as they trump normal privacy. Right now views on bodily integrity and privacy are still based on the assumption that bodies are integral and opaque. In a cyborg world this is no longer true, and the law may well move in a more invasive direction.

“Legally recognized mutant”? What about mutants denied legal recognition? Legal recognition makes sense for things that the law must differentiate between, not for things the law is blind to. Legally recognized mutants (whatever they are) would be a group that needs to be treated in some special way. If they are just like natural humans they do not need special recognition. We may have laws making it illegal to discriminate against mutants, but this is a law about a certain kind of behavior rather than the recipient. If I racially discriminate against somebody but happen to be wrong about their race, I am still guilty. So the legal recognition part does not do any work in this right.

And why just mutants? Presumably the aim here is to cover cyborgs, transhumans and other prefix-humans so they are recognized as legal and moral agents with the same standing. The issue is whether this is achieved by arguing that they were human and “mutated”, or are descended from humans, and hence should have the same standing, or whether this is due to them having the right kind of mental states to be persons. The first approach is really problematic: anencephalic infants are mutants but hardly persons, and basing rights on lineage seems ripe for abuse. The second is much simpler, and allows us to generalize to other beings like brain emulations, aliens, hypothetical intelligent moral animals, or the Swampman.

This links to a question that might deserve a section of its own: who are the rightsholders? Normal human rights typically deal with persons, which at least includes adults capable of moral thinking and acting (they are moral agents). Someone who is incapable, for example due to insanity or being a child, has reduced rights but is still a moral patient (someone we have duties towards). A child may not have full privileges and powers, but they do have claims and immunities. I like to argue that once you can comprehend and make use of a right you deserve to have it, since you have capacity relative to the right. Some people also think prepersons like fertilized eggs are persons and have rights; I think this does not make much sense since they lack any form of mind, but others think that having the potential for a future mind is enough to grant immunity. Tricky border cases like persistent vegetative states, cryonics patients, great apes and weird neurological states keep bioethicists busy.

In the cyborg case the issue is what properties make something a potential rightsholder and how to delineate the border of the being. I would argue that if you have a moral agent system it is a rightsholder no matter what it is made of. That is fine, except that cyborgs might have interchangeable parts: if cyborg A gives her arm to cyborg B, has anything changed? I would argue that the arm switched from being a part of/property of A to being a part of/property of B, but the individuals did not change, since the parts that make them moral agents are unchanged (just as transplants do not change identity). But what if A gave part of her brain to B? A turns into A’, B turns into B’, and these may be new agents. Or what if A has outsourced a lot of her mind to external systems running in the cloud or in B’s brain? We may still argue that rights adhere to being a moral agent and person rather than being the same person, or a person that can easily be separated from other persons or infrastructure. But clearly we can make things really complicated through overlapping bodies and minds.


I have looked at the cyborg bill of rights and how it fits with rights in law, society and ethics. Overall it is a first stab at establishing social conventions for enhanced, modular people. It likely needs a lot of tightening up to work, and people need to actually understand and care about its contents for it to have any chance of becoming something legally or socially “real”. From an ethical standpoint one can motivate the bill in a lot of ways; for maximum acceptance one needs to use a wide and general set of motivations, but these will lead to trouble when we try to implement things practically, since they give no way of trading one off against another in a principled way. There is a fair bit of work needed to refine the incidents of the rights, not to mention who is a rightsholder (and why). That will be fun.

Solomon’s frozen judgement

A girl dying of cancer wanted to use cryonic preservation to have a chance at being revived in the future. While her mother supported her, the father disagreed; in a recent high court ruling, the judge found that she could be cryopreserved.

As the judge noted, the verdict was not a statement on the validity of cryonics itself, but about how to make decisions about prospective orders. In many ways the case would presumably have gone the same way if there had been a disagreement about whether the daughter could have catholic last rites. However, cryonics makes things fresh and exciting (I have been in the media all day thanks to this).

What is the ethics of parents disagreeing about the cryosuspension of their child?

Best interests

One obvious principle is that parents ought to act in the best interest of their children.

If the child is morally mature and gives informed consent, then they can clearly have a valid interest in taking a chance on cryonics: they might not be legally adult, but as in normal medical ethics their stated interests have strong weight. Conversely, one could imagine a case where a child would not want to be preserved, in which case I think most people would agree their preferences should dominate.

The general legal consensus in the West is that the child’s welfare is so important that it can overrule the objections of parents. In UK law parents have the right and the duty to give consent for a minor. Children can consent to medical treatment, overriding their parents, at 16. However, if they refuse treatment, their parents and the court can override them. This mostly comes into play in cases such as avoiding blood transfusions for religious reasons.

In this case the issue was that the parents were disagreeing and the child was not legally old enough.

If one thinks cryonics is reasonable, then one should clearly cryosuspend the child: it is in their best interest. But if one thinks cryonics is not reasonable, is it harming the interest of the child? This seems to require some theory of how cryonics is bad for the interests of the child.

As an analogy, imagine a case where one parent is a Jehovah’s Witness and wants to refuse a treatment involving blood transfusion: the child will die without the treatment, and it will be a close call even with it. Here the objecting parent may claim that undergoing the transfusion harms the child in an important spiritual way and refuse consent. The other parent disagrees. Here the law would come down on the side of the pro-transfusion parent.

On this account and if we agree the cases are similar, we might say that parents have a legal duty to consent to cryonics.

Weak and strong reasons

In practice the controversial status of cryonics may speak against this: many people disagree about cryonics being good for one’s welfare. However, most such arguments seem to be based on various far-fetched scenarios about how the future could be a bad place to end up in. Others bring up loss of social connections, or that personal identity would be disrupted. A more rational argument is that it is an unproven treatment of dubious efficacy, which would make it irrational to undertake if there were an alternative; however, since there isn’t any alternative, this argument has little power. The same goes for the risk of losing social connections or identity: had there been an alternative to death (which definitely severs connections and dissolves identity), that may have been preferable. If one seriously thinks that the future will be so dark that it is better not to get there, one should probably not have children.

In practice it is likely that the status of cryonics as a nonstandard treatment would make the law hesitate to overrule parents. We know blood transfusions work, and while spiritual badness might be respectable as a private view, we as a society do not accept it as a sufficient reason to let somebody die. But in the case of cryonics the unprovenness of the treatment means that hope for revival is on nearly the same epistemic level as spiritual badness: a respectable private view, but not strong enough to be a valid public reason. Cryonicists are doing their best to produce scientific evidence – tissue scans, memory experiments, protocols – that move the reasons to believe in cryonics from the personal faith level to the public evidence level. They already have some relevant evidence. As soon as lab mice are revived, or people become convinced the process saves the connectome, the reasons would be strengthened and cryonics would become more akin to blood transfusion.

The key difference is that weak private reasons are enough to allow an experimental treatment where there is no alternative but death, but they are generally not enough to go for an experimental treatment when there is some better treatment. When disallowing a treatment weak reasons may work well against unproven or uncertain treatments, but not when it is proven. However, disallowing a treatment with no alternative is equivalent to selecting death.

When two parents disagree about cryonics (and the child does not have a voice) it hence seems that they both have weak reasons, but the asymmetry between having a chance and dying tilts in favor of cryonics. If it was purely a matter of aesthetics or value (for example, arguing about the right kind of last rites) there would be no societal or ethical constraint. But here there is some public evidence, making it at least possible that the interests of the child might be served by cryonics. Better safe than sorry.

When the child also has a voice and can express its desires, then it becomes obvious which way to go.

King Solomon might have solved the question by cryosuspending the child straight away, promising the dissenting parent not to allow revival until they either changed their mind or there was enough public evidence to convince anybody that it would be in the child’s interest to be revived. The nicest thing about cryonics is that it buys you time to think things through.

AI, morality, ethics and metaethics

Next Sunday I will be debating AI ethics at Battle of Ideas. Here is a podcast where I talk AI, morality and ethics:

What distinguishes morals from ethics?

There is actually a shocking confusion about what the distinction between morals and ethics is. One dictionary says ethics is about rules of conduct produced by an external source, while morals are an individual’s own principles of right and wrong. Another says morals are the principles on which one’s own judgements of right and wrong are based (abstract, subjective and personal), while ethics are the principles of right conduct (practical, social and objective). Ian Welsh gives a soundbite: “morals are how you treat people you know. Ethics are how you treat people you don’t know.” Paul Walker and Terry Lovat say ethics leans towards decisions based on individual character and subjective understanding of right and wrong, while morals are about widely shared communal or societal norms – here ethics is individual assessment of something being good or bad, while morality is inter-subjective community assessment.

Wikipedia distinguishes between ethics as a research field and the common human ability to think critically about moral values and direct actions appropriately, or a particular person’s principles of value. Morality is the differentiation between things that are proper and improper, as well as a body of standards and principles derived from a code of conduct in some philosophy, religion or culture… or derived from a standard a person believes to be universal. Another dictionary regards ethics as a system of moral principles, the rules of conduct recognized in some human environment, an individual’s moral principles (and the branch of philosophy). Morality is about conforming to the rules of right conduct, having moral quality or character, a doctrine or system of morals, and a few other meanings. The Cambridge dictionary thinks ethics is the study of what is right or wrong, or the set of beliefs about it, while morality is a set of personal or social standards for good/bad behavior and character.

And so on.

I think most people try to include the distinction between shared systems of conduct and individual codes, and the distinction between things that are subjective, socially agreed on, and maybe objective. Plus, we all agree that ethics is a philosophical research field.

My take on it

I like to think of it as an AI issue. We have a policy function π(s, a) that maps state-action pairs to a probability of acting that way; this is set using a value function Q(s) where various states are assigned values. Morality in my sense is just the policy function and maybe the value function: they have been learned through interacting with the world in various ways.

Ethics in my sense is ways of selecting policies and values. We are able to not only change how we act but also how we evaluate things, and the information that does this change is not just reward signals that update value function directly, but also knowledge about the world, discoveries about ourselves, and interactions with others – in particular ideas that directly change the policy and value functions.

When I realize that lying rarely produces good outcomes (too much work) and hence reduce my lying, then I am doing ethics (similarly, I might be convinced about this by hearing others explain that lying is morally worse than I thought or convincing me about Kantian ethics). I might even learn that short-term pleasure is less valuable than other forms of pleasure, changing how I view sensory rewards.
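The toy framing above can be sketched in a few lines (a deliberately simplified illustration; the states, numbers and update rule are invented for this example, not a claim about how minds actually work):

```python
# "Morality": a value function over outcome states, plus a policy that
# acts by picking the action leading to the highest-valued state.
# All states and numbers here are made up for illustration.
values = {"tell_truth": 1.0, "lie": 0.5}

def policy(vals):
    """Act according to the current values: choose the best-rated action."""
    return max(vals, key=vals.get)

print(policy(values))  # the agent tells the truth

# "Ethics": updating the value function itself, not via a reward signal
# but via reflection ("lying rarely produces good outcomes") or argument.
def ethical_update(vals, state, new_value):
    updated = dict(vals)
    updated[state] = new_value
    return updated

values = ethical_update(values, "lie", -1.0)
print(policy(values))  # still tells the truth, now for stronger reasons
```

The distinction the text draws is visible in the code: acting on `values` is morality, while `ethical_update` changing `values` directly (rather than through experienced reward) is ethics.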

Academic ethics is all about the kinds of reasons and patterns we should use to update our policies and values, trying to systematize them. It shades over into metaethics, which is trying to understand what ethics is really about (and what metaethics is about: it is its own meta-discipline, unlike metaphysics that has metametaphysics, which I think is its own meta-discipline).

I do not think I will resolve any confusion, but at least this is how I tend to use the terminology. Morals is how I act and evaluate, ethics is how I update how I act and evaluate, metaethics is how I try to think about my ethics.

Doing right and feeling good

My panel at Hay-on-Wye (me, Elaine Glaser, Peter Dews and Simon Baron-Cohen) talked about compassion, the sentiment model of morality, effective altruism and how to really help the world. Now available as video!

My view is largely that moral action is strongly driven and motivated by emotions rather than reason, but outside the world of the blindingly obvious or everyday human activity our intuitions and feelings are not great guides. We do not function well morally when the numbers get too big or the cognitive biases become maladaptive. Morality may be about the heart, but ethics is in the brain.

Being reasonable

The ever readable Scott Alexander stimulated a post on Practical Ethics about defaults, status quo, and disagreements about sex. The gist of it: our culture sets defaults on who is reasonable or unreasonable when couples disagree, and these become particularly troubling when dealing with biomedical enhancements of love and sex. The defaults combine with status quo bias and our scepticism of biomedical interventions to cause biases that can block or push people towards certain interventions.

Universal principles?

I got challenged on the extropian list, which is a fun reason to make a mini-lecture.

On 2015-10-02 17:12, William Flynn Wallace wrote:
> Anders says above that we have discovered universal timeless principles. I’d like to know what they are and who proposed them, because that’s chutzpah of the highest order. Oh boy – let’s discuss that one.

Here is one: a thing is identical to itself. (1)

Here is another one: “All human beings are born free and equal in dignity and rights.” (2)

Here is a third one: “Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.” (3)

(1) was first explicitly mentioned by Plato (in Theaetetus). I think you also agree with it – things that are not identical to themselves are unlikely to even be called “things”, and without the principle very little thinking makes sense.

I am not sure whether it is chutzpah of the highest order or a very humble observation.

(2) is from the UN declaration of universal human rights. This sentence needs enormous amounts of unpacking – “free”, “equal”, “dignity”, “rights”… these words can be (and are) used in very different ways. Yet I think it makes sense to say that according to a big chunk of Western philosophy this sentence is a true sentence (in the sense that ethical propositions are true), that it is universal (the truth is not contingent on when and where you are, although the applications may change), and we know historically that we have not known this principle forever. Now *why* it is true quickly branches out into different answers depending on what metaethical positions you hold, not to mention the big topic of what kind of truth moral truth actually is (if anything). The funny thing is that the universal part is way less contentious, because of the widely accepted (and rarely stated) formal ethical principle that if it is moral to P in situation X, then the location in time and space where X happens does not matter.

Chutzpah of the highest order? Totally. So is the UN.

(3) is Immanuel Kant, and he argued that any rational moral agent could through pure reason reach this principle. It is in many ways like (1) almost a consistency requirement of moral will (not action, since he doesn’t actually care about the consequences – we cannot fully control those, but we can control what we decide to do). There is a fair bit of unpacking of the wording, but unlike the UN case he defines his terms fairly carefully in the preceding text. His principle is, if he is right, the supreme principle of morality.

Chuzpe auf höchstem Niveau? Total!

Note that (1) is more or less an axiom: there is no argument for why it is true, because there is little point in even trying. (3) is intended to be like a theorem in geometry: from some axioms and the laws of logic, we end up with the categorical imperative. It is just as audacious or normal as the Pythagorean theorem. (2) is a kind of compromise between different ethical systems: the Kantians would defend it based on their system, while consequentialists could make a rule utilitarian argument for why it is true, and contractualists would say it is true because the UN agrees on it. They agree on the mid-level meaning, but not on each other’s derivations. It is thick, messy and political, yet also represents fairly well what most educated people would conclude (of course, they would then show off by disagreeing loudly with each other about details, obscuring the actual agreement).

Philosopher’s views

Do people who think about these things actually believe in universal principles? One fun source is David Bourget and David J. Chalmers’ survey of professional philosophers (data). 56.4% of the respondents were moral realists (there are moral facts and moral values, and these are objective and independent of our views), 65.7% were moral cognitivists (ethical sentences can be true or false); these were correlated at 0.562. 25.9% were deontologists, which means that they would hold somewhat Kant-like views that some actions are always or never right (some of the rest of course also believe in principles, but the survey cannot tell us anything more). 71.1% thought there was a priori knowledge (things we know by virtue of being thinking beings rather than through experience).

My views

Do I believe in timeless principles? Kind of. There are statements in physics that are invariant under translations, rotations, Lorentz boosts and other transformations, and of course math remains math. Whether physics and math are “out there” or just in minds is hard to tell (I lean towards at least physics being out there in some form), but clearly any minds that know some subset of correct, invariant physics and math can derive other correct conclusions from it. And other minds with the same information can make the same derivations and reach the same conclusions – no matter when or where. So there are knowable principles in these domains every sufficiently informed and smart mind would know. Things get iffy with values, since they might be far more linked to the entities experiencing them, but clearly we can analyse game theory and make statements like “If agent A is trying to optimize X, agent B optimizes Y, and X and Y do not interact, then they can get more of X and Y by cooperating”. So I think we can get pretty close to universal principles in this framework, even if it turns out that they merely reside inside minds knowing about the outside world.
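The game-theoretic statement can be made concrete with a toy sketch (the endowments and utility functions below are made-up numbers of mine, just to exhibit the structure):

```python
# Toy illustration: agent A values only good X, agent B values only good Y,
# and the goods do not interact. Each starts with a mixed endowment, so
# swapping unwanted goods is a Pareto improvement -- both agents end up
# with strictly more of the thing they optimize.

endowment_A = {"X": 5, "Y": 3}   # A derives utility only from X
endowment_B = {"X": 4, "Y": 6}   # B derives utility only from Y

def utility_A(bundle): return bundle["X"]
def utility_B(bundle): return bundle["Y"]

# Without cooperation each agent just keeps its endowment.
solo_A, solo_B = utility_A(endowment_A), utility_B(endowment_B)

# Cooperation: A hands its Y to B, B hands its X to A.
coop_A = {"X": endowment_A["X"] + endowment_B["X"], "Y": 0}
coop_B = {"X": 0, "Y": endowment_B["Y"] + endowment_A["Y"]}

assert utility_A(coop_A) > solo_A   # 9 > 5
assert utility_B(coop_B) > solo_B   # 9 > 6
```

Any mind that knows the setup can rederive the same conclusion, which is the sense of “universal” I am after.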

Living forever

Benjamin Zand has made a neat little documentary about transhumanism, attempts to live forever and the posthuman challenge. I show up of course as soon as ethics is being mentioned.

Benjamin and I had a much, much longer (and very fun) conversation about ethics than could ever be squeezed into a TV documentary. Everything from personal identity to overpopulation to the meaning of life. Plus the practicalities of cryonics, transhuman compassion and how to test if brain emulation actually works.

I think the inequality and control issues are interesting to develop further.

Would human enhancement boost inequality?

There is a trivial sense in which just inventing an enhancement produces profound inequality since one person has it, and the rest of mankind lacks it. But this is clearly ethically uninteresting: what we actually care about is whether everybody gets to share something good eventually.

However, the trivial example shows an interesting aspect of inequality: it has a timescale. An enhancement that will eventually benefit everyone but is unequally distributed may be entirely OK if it is spreading fast enough. In fact, by being expensive at the start it might even act as a kind of early adopter/rich tax, since the first versions will pay for the R&D of consumer versions – compare computers and smartphones. While one could argue that it is bad to get temporary inequality, long-term benefits would outweigh this for most enhancements and most value theories: we should not sacrifice the poor of tomorrow for the poor of today by delaying the launch of beneficial technologies (especially since it is unlikely that R&D to make them truly cheap will happen just due to technocrats keeping technology in their labs – making tech cheap and useful is actually one area where we know empirically the free market is really good).

If the spread of some great enhancement could be faster though, then we may have a problem.

I often encounter people who think that the rich will want to keep enhancements to themselves. I have never encountered any evidence for this being actually true except for status goods or elites in authoritarian societies.

There are enhancements like height that are merely positional: it is good to be taller than others (if male, at least), but if everybody gets taller nobody benefits and everybody loses a bit (more banged heads and heart problems). Other enhancements are absolute: living healthy longer or being smarter is good for nearly all people regardless of how long other people live or how smart they are (yes, there might be some coordination benefits if you live just as long as your spouse or have a society where you can participate intellectually, but these hardly negate the benefit of joint enhancement – in fact, they support it). Most of the interesting enhancements are in this category: while they might be great status goods at first, I doubt they will remain that for long since there are other reasons than status to get them. In fact, there are likely network effects from some enhancements like intelligence: the more smart people working together in a society, the greater the benefits.

In the video, I point out that limiting enhancement to the elite means the society as a whole will not gain the benefit. Since elites actually reap rents from their society, this means that from their perspective it is actually in their best interest to have a society growing richer and more powerful (as long as they are in charge). This will mean they lose out in the long run to other societies that have broader spreads of enhancement. We know that widespread schooling, free information access and freedom to innovate tend to produce way wealthier and more powerful societies than those where only elites have access to these goods. I have strong faith in the power of diverse societies, despite their messiness.

My real worry is that enhancements may be like services rather than gadgets or pills (which come down exponentially in price). That would keep them harder to reach, and might hold back adoption (especially since we have not been as good at automating services as manufacturing). Still, we do subsidize education at great cost, and if an enhancement is desirable democratic societies are likely to scramble for a way of supplying it widely, even if it is only through an enhancement lottery.

However, even a world with unequal distribution is not necessarily unjust. Besides the standard Nozickian argument that a distribution is just if it was arrived at through just means, there is the Rawlsian argument that if the unequal distribution actually produces benefits for the weakest it is OK. This is likely very true for intelligence amplification and maybe brain emulation, since they are likely to cause strong economic growth and innovations that produce spillover effects – especially if there is any form of taxation or even mild redistribution.

Who controls what we become? Nobody, we/ourselves/us

The second issue is who gets a say in this.

As I respond in the interview, in a way nobody gets a say. Things just happen.

People innovate, adopt technologies and change, and attempts to control that mean controlling creativity, business and autonomy – you had better have a very powerful ethical case to argue for limitations on these, and an even better political case to implement any. A moral limitation of life extension needs to explain how it averts consequences worse than 100,000 dead people per day. Even if we all become jaded immortals that seems less horrible than a daily pile of corpses 12.3 meters high and 68 meters across (assuming an angle of repose of 20 degrees – this was the most gruesome geometry calculation I have done so far). Saying we should control technology is a bit like saying society should control art: it might be more practically useful, but it springs from the same well of creativity, and limiting it is as suffocating as limiting what may be written or painted.
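For the morbidly curious, the pile geometry can be checked with a few lines (my own back-of-envelope sketch, assuming a simple conical pile and the stated angle of repose):

```python
import math

# Back-of-envelope check of the corpse-pile figures in the text:
# a conical pile with angle of repose 20 degrees and height 12.3 m.
angle = math.radians(20)
height = 12.3
radius = height / math.tan(angle)           # ~33.8 m
diameter = 2 * radius                       # ~68 m across, as stated
volume = math.pi * radius**2 * height / 3   # cone volume, ~14,700 m^3

# At 100,000 deaths per day that is roughly 0.15 m^3 of pile per body
# (loosely packed), which is in the right ballpark for a human body.
per_body = volume / 100_000
```

The height and diameter are consistent: a 20-degree cone 68 meters across is indeed about 12.3 meters tall.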

Technological determinism is often used as an easy out for transhumanists: the future will arrive no matter what you do, so the choice is just between accepting or resisting it. But this is not the argument I am making. That nobody is in charge doesn’t mean the future is not changeable.

The very creativity, economics and autonomy that creates the future is by its nature something individual and unpredictable. While we can relatively safely assume that if something can be done it will be done, what actually matters is whether it will be done early or late, seldom or often. We can try to hurry beneficial or protective technologies so they arrive before the more problematic ones. We can try to aim at beneficial directions over more problematic ones. We can create incentives that make fewer want to use the bad ones. And so on. The “we” in this paragraph is not so much a collective coordinated “us” as the sum of individuals, companies and institutions, “ourselves”: there is no requirement to get UN permission before you set out to make safe AI or develop life extension. It just helps if a lot of people support your aims.

John Stuart Mill’s harm principle allows society to step in and limit freedom when it causes harms to others, but most enhancements look unlikely to produce easily recognizable harms. This is not a ringing endorsement: as Nick Bostrom has pointed out, there are some bad directions of evolution we might not want to go down, yet it is individually rational for each of us to go slightly in that direction. And existential risk is so dreadful that it actually does provide a valid reason to stop certain human activities if we cannot find alternative solutions. So while I think we should not try to stop people from enhancing themselves, we should want to improve our collective coordination ability to restrain ourselves. This is the “us” part. Restraint does not just have to happen in the form of rules: we restrain ourselves already using socialization, reputations, and incentive structures. Moral and cognitive enhancement could add restraints we currently do not have: if you can clearly see the consequences of your actions it becomes much harder to do bad things. The long-term outlook fostered by radical life extension may also make people more risk averse and willing to plan for long-term sustainability.

One could dream of some enlightened despot or technocrat deciding. A world government filled with wise, disinterested and skilled members planning our species’ future. But this suffers from essentially the economic calculation problem: while a central body might have a unified goal, it will lack information about the preferences and local states among the myriad agents in the world. Worse, the cognitive abilities of the technocrat will be far smaller than the total cognitive abilities of the other agents. This is why rules and laws tend to get gamed – there are many diverse entities thinking about ways around them. But there are also fundamental uncertainties and emergent phenomena that will bubble up from the surrounding agents and mess up the technocratic plans. As Virginia Postrel noted, the typical solution is to try to browbeat society into a simpler form that can be managed more easily… which might be acceptable if the stakes are the very survival of the species, but otherwise just removes what makes a society worth living in. So we had better maintain our coordination ourselves, all of us, in our diverse ways.


Ethics of brain emulations, New Scientist edition

I have an opinion piece in New Scientist about the ethics of brain emulation. The content is similar to what I was talking about at IJCNN and in my academic paper (and the comic about it). Here are a few things that did not fit the text:

Ethics that got left out

Due to length constraints I had to cut the discussion about why animals might be moral patients. That made the essay look positively Benthamite in its focus on pain. In fact, I am agnostic on whether experience is necessary for being a moral patient. Here is the cut section:

Why should we care about how real animals are treated? Different philosophers have given different answers. Immanuel Kant did not think animals matter in themselves, but our behaviour towards them matters morally: a human who kicks a dog is cruel and should not do it. Jeremy Bentham famously argued that thinking does not matter, but the capacity to suffer: “…the question is not, Can they reason? nor, Can they talk? but, Can they suffer?”. Other philosophers have argued that it matters that animals experience being subjects of their own life, with desires and goals that make sense to them. While there is a fair bit of disagreement about what this means for our responsibilities to animals and what we may use them for, there is widespread agreement that they are moral patients, something we ought to treat with some kind of care.

This is of course a super-quick condensation of a debate that fills bookshelves. It also leaves out Christine Korsgaard’s interesting Kantian work on animal rights, which as far as I can tell does not need to rely on particular accounts of consciousness and pain but rather on interests. Most people would say that without consciousness or experience there is nobody that is harmed, but I am not entirely certain unconscious systems cannot be regarded as moral patients. There are for example people working in environmental ethics who ascribe moral patient-hood and partial rights to species or natural environments.

Big simulations: what are they good for?

Another interesting thing that had to be left out is comparisons of different large scale neural simulations.

(I am a bit uncertain about where the largest model in the Human Brain Project is right now; they are running more realistic models, so they will be smaller in terms of neurons. But they clearly have the ambition to best the others in the long run.)

Of course, one can argue which approach matters. Spaun is a model of cognition using low resolution neurons, while the slightly larger (in neurons) simulation from the Lansner lab was just a generic piece of cortex, showing some non-trivial alpha and gamma rhythms, and the even larger ones showing some interesting emergent behavior despite the lack of biological complexity in the neurons. Conversely, Cotterill’s CyberChild that I worry about in the opinion piece had just 21 neurons in each region but they formed a fairly complex network with many brain regions that in a sense is more meaningful as an organism than the near-disembodied problem-solver Spaun. Meanwhile SpiNNaker is running rings around the others in terms of speed, essentially running in real-time while the others have slowdowns by a factor of a thousand or worse.

The core of the matter is defining what one wants to achieve. Lots of neurons, biological realism, non-trivial emergent behavior, modelling a real neural system, purposeful (even conscious) behavior, useful technology, or scientific understanding? Brain emulation aims at getting purposeful, whole-organism behavior from running a very large, very complete biologically realistic simulation. Many robotics and AI people are happy without the biological realism and would prefer as small a simulation as possible. Neuroscientists and cognitive scientists care about what they can learn and understand based on the simulations, rather than their completeness. They are each pursuing something useful, but it is very different between the fields. As long as they remember that others are not pursuing the same aim they can get along.

What I hope: more honest uncertainty

What I hope happens is that computational neuroscientists think a bit about the issue of suffering (or moral patient-hood) in their simulations rather than slip into the comfortable “It is just a simulation, it cannot feel anything” mode of thinking by default.

It is easy to tell oneself that simulations do not matter because not only do we know how they work when we make them (giving us the illusion that we actually know everything there is to know about the system – obviously not true since we at least need to run them to see what happens), but institutionally it is easier to regard them as non-problems in terms of workload, conflicts and complexity (let’s not rock the boat at the planning meeting, right?) And once something is in the “does not matter morally” category it becomes painful to move it out of it – many will now be motivated to keep it there.

I would rather have people keep an open mind about these systems. We do not understand experience. We do not understand consciousness. We do not understand brains and organisms as wholes, and there is much we do not understand about the parts either. We do not have agreement on moral patient-hood. Hence the rational thing to do, even when one is pretty committed to a particular view, is to be open to the possibility that it might be wrong. The rational response to this uncertainty is to get more information if possible, to hedge our bets, and to try to avoid actions we might regret in the future.

The limits of the in vitro burger

Stepping on toes everywhere in our circles, Ben Levinstein and I have a post at Practical Ethics about the limitations of in vitro meat for reducing animal suffering.

The basic argument is that while factory farming produces a lot of suffering, a post-industrial world would likely have very few lives of the involved species. It would be better if they had better lives and larger populations instead. So, at least in some views of consequentialism, the ethical good of in vitro meat is reduced from a clear win to possibly even a second best to humane farming.

An analogy can be made with horses, whose population has declined precipitously from the pre-tractor, pre-car days. Current horses live (I guess) nicer lives than the more work-oriented horses of 1900, but there are far fewer of those lives. So the current 3 million horses in the US might have lives (say) twice as good as the 25 million horses in the 1920s: the total value has still declined. However, factory farmed animals may have lives that are not worth living, holding negative value. If we assume the roughly 50 billion chickens in the world all have lives of value -1 each, then replacing them with in vitro meat would make the world 50 billion units better. But this could also be achieved by making their lives one unit better (and why stop there? maybe they could get two units more). Whether it matters how many entities are experiencing depends on your approach, as does whether it is an extra value to have a chicken species around rather than not.
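The arithmetic of the horse and chicken examples can be laid out explicitly (using the illustrative unit values from the text, which are of course just stand-ins for real welfare measures):

```python
# Horses: nicer lives now, but far fewer of them.
horses_now  = 3_000_000  * 2   # 3M horses, lives twice as good -> 6M units
horses_1920 = 25_000_000 * 1   # 25M horses at 1 unit each      -> 25M units
assert horses_now < horses_1920          # total value still declined

# Chickens: two routes to the same welfare gain.
chickens = 50_000_000_000
baseline     = chickens * -1             # factory-farmed lives at -1 each
in_vitro     = 0                         # population replaced by cultured meat
better_lives = chickens * (-1 + 1)       # same population, each life +1 unit
# Both routes gain the same 50 billion units over the baseline:
assert in_vitro - baseline == better_lives - baseline == 50_000_000_000
```

On a straight total-welfare tally the two interventions come out equal; they only come apart once one's view assigns weight to how many experiencers there are, or to the species existing at all.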

Now, I am not too troubled by this since I think in vitro meat is also very good from a health perspective, a climate perspective, and an existential risk reduction perspective (it is good for space colonization and survival if sunlight is interrupted). But I think most people come to in vitro meat from an ethical angle. And given just that perspective, we should not be too complacent that in the future we will become post-agricultural: it may take time, and it might actually not increase total welfare as much as we expected.


The moral responsibility of office software

On Practical Ethics Ben and I blog about user design ethics: when you make software that a lot of people use, even tiny flaws such as delays mean significant losses when summed over all users, and affordances can entice many people to do the wrong thing. So be careful and perfectionist!

This is in many ways the fundamental problem of the modern era. Since successful things get copied into millions or billions, the impact of a single choice can become tremendous. One YouTube clip or one tweet, and suddenly the attention of millions of people will descend on someone. One bug, and millions of computers are vulnerable. A clever hack, and suddenly millions can do it too.

We ought to be far more careful, yet that is hard to square with a free life. Most of the time, it also does not matter since we get lost in the noise with our papers, tweets or companies – the logic of the power law means the vast majority will never matter even a fraction as much as the biggest.