Obligatory Covid-19 blogging

SARS-CoV-2 spike ectodomain structure (open state). Source: https://3dprint.nih.gov/discover/3DPX-013160
Over at Practical Ethics I have blogged a bit:

The Unilateralist Curse and Covid-19, or Why You Should Stay Home: why we are in a unilateralist curse situation with regard to staying home, making it rational to stay home even when it seems irrational.

Taleb and Norman had a short letter, Ethics of Precaution: Individual and Systemic Risk, making a similar point: recognizing the type of situation and taking contagion dynamics into account is a reason to be more cautious. It differs from our version in the effect of individual action: we had a single actor causing the full consequences, while the letter has exponential scale-up. There are also far more actors – everyone, rather than just epistemic peers – and incentives are not aligned, since actors do not bear the full costs of their actions. The problem is finding strategies robust to stupid, selfish actors. Institutional channelling of collective rationality and coordination is likely the only way to get robustness here.

Never again – will we make Covid-19 a warning shot or a dud? deals with the fact that we are surprisingly good at forgetting harsh lessons (1918, 1962, Y2K, 2006…), giving us a moral duty to try to ensure appropriate collective memory of what needs to be recalled.

This is why Our World In Data, the Oxford COVID-19 Government Response Tracker and the IMF’s policy responses to Covid-19 are so important. The disjointed international responses act as natural experiments that will tell us important things about best practices and the strengths and weaknesses of various strategies.

And the pedestrians are off! Oh no, that lady is jaywalking!

In 1983 Swedish Television began an all-evening entertainment program named Razzel. It was centred around the state lottery draw, with music, sketch comedy, and television series interspersed between the blocks. Yes, this was back in the day when there were two TV channels to choose from and more or less everybody watched. The ice age had just about ended.

One returning feature consisted of camera footage of a pedestrian crossing in Stockholm. A sports commentator well known for his coverage of horse racing narrated the performance of the unknowing pedestrians as if they were competing in a race. In some cases I think he even showed up to deliver flowers to the “winner”. But you would get disqualified if you had a false start or stepped outside the stripes!

I suspect this feature noticeably improved traffic safety for a generation.

I was reminded of this childhood memory earlier today when discussing the use of face recognition in China to detect jaywalkers and display them on a billboard to shame them. The typical response from a Western audience is fear of what looks like a totalitarian social engineering program. The glee with which many responded to the news that the system had been confused by a bus ad, putting a celebrity on the board of shame, is telling.

Is there a difference?

But compare the Chinese system to the TV program. In the Chinese case the jaywalker may be publicly shamed from the billboard… but in the cheerful 80s TV program they were shamed in front of much of the nation.

The Chinese case is more personal, since it also displays the jaywalker's name, but no doubt friends and neighbours would recognize you if they saw you on TV (remember, this was back when we only had two television channels and a fair fraction of people watched TV on Friday evening). There may also be SMS messages involved in some versions of the system. These act differently: now it is you who gets told off when you misbehave.

A fundamental difference may be the valence of the framing. The TV show did this as happy entertainment, more of a parody of sport television than an attempt at influencing people. The Chinese system explicitly aims at discouraging misbehaviour. The TV show encouraged positive behaviour (if only accidentally).

So the dimensions here may be the extent of the social effect (locally, or nationwide), the degree the feedback is directly personal or public, and whether it is a positive or negative feedback. There is also a dimension of enforcement: is this something that happens every time you transgress the rules, or just randomly?

In terms of actually changing behaviour, making the social effect broad rather than close and personal might not matter much: we mostly care about our standing relative to our peers, so having the entire nation laugh at you is certainly worse than your friends laughing, but not orders of magnitude more mortifying. The personal message, on the other hand, sends a signal that you were observed; together with an expectation of effective enforcement this likely has a fairly clear deterrence effect (it is often not the size of the punishment that deters people from crime, but their expectation of getting caught). The negative stick of acting wrong and being punished is likely stronger than the positive carrot of a hypothetical bouquet of flowers.

Where is the rub?

From an ethical standpoint, is there a problem here? We are subject to norm enforcement from friends and strangers all the time. What is new is the application of media and automation. They scale up the stakes and add the possibility of automated enforcement. Shaming people for jaywalking is fairly minor, but some people have lost jobs or friends, or been physically assaulted, when their social transgressions went viral on social media. Automated enforcement makes the panopticon effect far stronger: instead of merely suspecting the possibility of being observed, observation becomes a near certainty. So the net effect is stronger, more pervasive norm enforcement…

…of norms that can be observed and accurately assessed. Jaywalking is transparent in a way being rude or selfish often isn’t. We may end up in a situation where we carefully obey some norms, not because they are the most important but because they can be monitored. I do not think there is anything in principle impossible about a rudeness detection neural network, but I suspect the error rates and lack of context sensitivity would make it worse than useless in preventing actual rudeness. Goodhart’s law may even make it backfire.

So, in the end, the problem is that automated systems encode a formalization of a social norm rather than the actual fluid social norm. Having a TV commentator narrate your actions is filtered through the actual norms of how to behave, while the face recognition algorithm looks for a pattern associated with transgression rather than actual transgression. Strong feedback may then lock in obedience to the hard-to-change formalization rather than to actual good behaviour.

Arguing against killer robot janissaries

Military robot being shown to families at New Scientist Live 2017.

I have a piece in Dagens Samhälle with Olle Häggström, Carin Ism, Max Tegmark and Markus Anderljung urging the Swedish parliament to consider banning lethal autonomous weapons.

This is of course mostly symbolic; the real debate is happening right now over in Geneva at the CCW. I also participated in a round-table with the Red Cross that led to their report on the issue, which is one of the working papers presented there.

I am not particularly optimistic that we will get a ban – nor that a ban would actually achieve much. However, I am much more optimistic that this debate may force a general agreement about the importance of meaningful human control. This is actually an area where most military and peace groups would agree: nobody wants systems that are unaccountable and impossible to control. Making sure there is international agreement that using such systems is irresponsible and maybe even a war crime would be a big win. But there are lots of devils in the details.

When it comes to arguments for why LAWs are morally bad, I am personally not convinced that the badness comes from a machine making the decision to kill a person. Clearly some possible machine decision-making does improve proportionality and reduce arbitrariness. Similarly, arguments about whether they would increase or reduce the risk of military action, and how this would play out in terms of human suffering and death, are interesting empirical arguments, but we should not be overconfident that we know the answers. Given that once LAWs are in use it will be hard to roll them back if the answers turn out badly, we might find it prudent to try to avoid them (but consider the opposing scenario where robots have fought our wars since time immemorial and somebody now suggests using humans too – there is a status quo bias here).

My main reason for being opposed to LAWs is not that they would be inherently immoral, nor that they would necessarily or even likely make war worse or more likely. My view is that the problem is that they give states too much power. Basically, they make the state's monopoly on violence independent of the wishes of the citizens. Once a sufficiently potent LAW military (or police force) exists, it will be able to exert coercive and lethal power as ordered, without any mediation through citizens. While having humans in the army certainly does not guarantee moral behaviour, if ordered to turn against the citizenry or act in a grossly immoral way they can exert moral agency and resist (with varying levels of overtness). The LAW army will instead implement the orders as long as they are formally lawful (assuming there is at least a constraint against unlawful commands). States know that if they mistreat their population too much their army might side with the population – a reason why some of the nastier governments make use of mercenaries or a special separate class of soldier to reduce the risk. If LAWs become powerful enough they might make dictatorships far more stable by removing a potentially risky key component of state power from internal politics.

Bans and moral arguments are unlikely to work against despots. But building a broad moral consensus on what is acceptable in war does have effects. If R&D emphasis is directed towards finding solutions to how to manage responsibility for the decisions of autonomous devices, that will develop a lot of useful technologies for making such systems at least safer – and one can well imagine similar legal and political R&D into finding better solutions to citizen-independent state power.

In fact, far more important than LAWs is what to do about Lethal Autonomous States. Bad governance kills; many institutions, corporations and states behave just as badly as the worst AI risk visions and have a serious value alignment problem, and we do not have great mechanisms for handling responsibility in inter-state conflicts. The UN system is a first stab at the problem, but obviously much, much more can be done. In the meantime, we can try to avoid going too quickly down a risky path while we look for safe-making technologies and agreements.

Review of the cyborg bill of rights 1.0

The Cyborg Bill of Rights 1.0 is out. Rich MacKinnon suggests the following rights:

FREEDOM FROM DISASSEMBLY
A person shall enjoy the sanctity of bodily integrity and be free from unnecessary search, seizure, suspension or interruption of function, detachment, dismantling, or disassembly without due process.

FREEDOM OF MORPHOLOGY
A person shall be free (speech clause) to express themselves through temporary or permanent adaptions, alterations, modifications, or augmentations to the shape or form of their bodies. Similarly, a person shall be free from coerced or otherwise involuntary morphological changes.

RIGHT TO ORGANIC NATURALIZATION
A person shall be free from exploitive or injurious 3rd party ownerships of vital and supporting bodily systems. A person is entitled to the reasonable accrual of ownership interest in 3rd party properties affixed, attached, embedded, implanted, injected, infused, or otherwise permanently integrated with a person’s body for a long-term purpose.

RIGHT TO BODILY SOVEREIGNTY
A person is entitled to dominion over intelligences and agents, and their activities, whether they are acting as permanent residents, visitors, registered aliens, trespassers, insurgents, or invaders within the person’s body and its domain.

EQUALITY FOR MUTANTS
A legally recognized mutant shall enjoy all the rights, benefits, and responsibilities extended to natural persons.

As a sometime philosopher with a bit of history of talking about rights regarding bodily modification, I of course feel compelled to comment.

What are rights?

First, what is a right? Clearly anybody can state that we have a right to X, but only some agents and X-rights make sense or have staying power.

One kind of right is the legal right, in its various forms. This can be international law, national law, or even informal national codes (for example the Swedish allemansrätten, the right of public access, which is not a moral/human right and is actually fairly recent). Here the agent has to be some legitimate law- or rule-maker. The US Bill of Rights is an example: the result of a political process that produced legal rights, with relatively little if any moral content. Legal rights need to be enforceable somehow.

Then there are normative moral principles such as fundamental rights (applicable to a person because they are a person), natural rights (applicable because of facts of the world) or divine rights (imposed by God). These are universal and egalitarian: applicable everywhere, everywhen, and the same for everybody. Bentham famously dismissed the idea of natural rights as “nonsense upon stilts”, and there is a general skepticism today about rights being fundamental norms. But insofar as they do exist, anybody can discover and state them. Moral rights need to be doable.

While there may be doubts about the metaphysical nature of rights, if a society agrees on a right it will shape action, rules and thinking in an important way. It is like money: it only gets value by the implicit agreement that it has value and can be exchanged for goods. Socially constructed rights can be proposed by anybody, but they only become real if enough people buy into the construction. They might be unenforceable and impossible to perform (which may over time doom them).

What about the cyborg rights? There is no clear reference to moral principles, and only the last one refers to law. In fact, the preamble states:

Our process begins with a draft of proposed rights that are discussed thoroughly, adopted by convention, and then published to serve as model language for adoption and incorporation by NGOs, governments, and rights organizations.

That is, these rights are at present a proposal for social construction (quite literally) that hopefully will be turned into a convention (a weak international treaty) that may eventually become national law. This also fits with the proposal coming from MacKinnon rather than the Secretary-General of the UN – we can all propose social constructions and urge the creation of conventions, treaties and laws.

But a key challenge is to come up with something that can become enforceable at some point. Cyborg bodies might be more finely divisible and transparent than human bodies, so that it becomes hard to regulate these rights. How do you enforce sovereignty against spyware?

Justification

Why is a right a right? There has to be a reason for a right (typically hinted at in preambles full of “whereas…”).

I have mostly been interested in moral rights. Patrick D. Hopkins wrote an excellent overview “Is enhancement worthy of being a right?” in 2008 where he looks at how you could motivate morphological freedom. He argues that there are three main strategies to show that a right is fundamental or natural:

  1. That the right conforms to human nature. This requires showing that it fits a natural end. That is, there are certain things humans should aim for, and rights help us live such lives. This is also the approach of natural law accounts.
  2. That the right is grounded in interests. Rights help us get the kinds of experiences or states of the world that we (rightly) care about. That is, there are certain things that are good for us (e.g. “the preservation of life, health, bodily integrity, play, friendship, classic autonomy, religion, aesthetics, and the pursuit of knowledge”) and the right helps us achieve them. Why those things are good for us is another matter of justification, but if we agree on the laundry list then the right follows if it helps achieve them.
  3. That the right is grounded in our autonomy. The key thing is not what we choose but that we get to choose: without freedom of choice we are not moral agents. Much of rights, on this account, is about preventing others from restricting our choices and about not interfering with theirs. If something can be chosen freely and does not harm others, it has a good chance of being a right. However, this is a pretty shallow approach to autonomy; there are more rigorous and demanding ideas of autonomy in ethics (see SEP and IEP for more). This is typically how many fundamental rights get argued (I have a right to my body since, if somebody can interfere with my body, they can essentially control me and prevent my autonomy).

One can do this in many ways. For example, David Miller, writing on grounding human rights, suggests one approach is whatever allows people from different cultures to live together as equals; another bases rights on human needs (very similar to interest accounts); a third uses rights instrumentally to safeguard other (need-based) rights. Many like to include human dignity, another tricky concept.

Social constructions can have a lot of reasons. Somebody wanted something, and this was recognized by others for some reason. Certain reasons are cultural universals, and that makes it more likely that society will recognize a right. For example, property seems to be universal, and hence a right to one's property is easier to argue for than a right to paid holidays (but what property is, and what rules surround it, can be very different).

Legal rights are easier. They exist because there is a law or treaty, and the reasons for that are typically a political agreement on something.

It should be noted that many declarations of rights do not give any reasons, often because we would disagree on the reasons even if we agree on the rights. The UN declaration of human rights gives no hint of where these rights come from (compare the US declaration of independence, where it is “self-evident” that the creator has provided certain rights to all men). Still, this is somewhat unsatisfactory and leaves many questions unanswered.

So, how do we justify cyborg rights?

In the liberal rights framework I used for morphological freedom we can derive things rather straightforwardly: we have a fundamental right to life, and from this follows freedom from disassembly. We have a fundamental right to liberty, and together with the right to life this leads to a right to our own bodies, bodily sovereignty, freedom of morphology and the first half of the right to organic naturalization. We have a right to our property (typically derived from the fundamental rights to seek our happiness and have liberty), and from this the second half of the organic naturalization right follows (we are literally mixing ourselves, rather than our work, with the value produced by the implants). Equality for mutants follows from having the same fundamental rights as humans (note that the bill talks about “persons”, and most ethical arguments try to be valid for whatever entities count as persons – this tends to be more than general enough to cover cyborg bodies). We still need some justification of the fundamental rights to life, liberty and happiness, but that is outside the scope of this exercise. Just use your favorite justifications.

The human nature approach would say that cyborg nature is such that these rights fit it. This might be tricky to use as long as we do not have many cyborgs whose nature we can study. In fact, since cyborgs are imagined as self-creating (or at least self-modifying) beings it might be hard to find any shared nature… except maybe the self-creation part. As I often like to argue, this is close to Pico della Mirandola's idea of human dignity deriving from our ability to change ourselves.

The interest approach would ask how the cyborg interests are furthered by these rights. That seems pretty straightforward for most reasonably human-like interests. In fact, the above liberal rights framework is to a large extent an interest-based account.

The autonomy account is also pretty straightforward. All cyborg rights except the last are about autonomy.

Could we skip the ethics and these possibly empty constructions? Perhaps: we could see the cyborg bill of rights as a way of making a cyborg–human society possible to live in. We need to tolerate each other and set boundaries on allowed messing around with each other's bodies. Universals of property lead to the naturalization right, territoriality leads to the sovereignty right, and the universal that actions under self-control are distinguished from those not under control might be taken as the root of autonomy-like motivations that then support the rest.

Which one is best? That depends. The liberal rights/interest system produces nice modular rules, although there will be much argument about what takes precedence. The human nature approach might be deep and poetic, but is potentially easy to disagree on. Autonomy is very straightforward (except when the cyborg starts messing with their own brain). Social constructivism allows us to bring in issues of what actually works in a real society, not just what perfect isolated cyborgs (on a frictionless infinite plane) should do.

Parts of rights

One of the cool properties of rights is that they have parts – “the Hohfeldian incidents”, after Wesley Hohfeld (1879–1918), who discovered them. He was thinking of legal rights, but this applies to moral rights too. His system is descriptive – this is how rights work – rather than explaining why they came about or whether this is a good thing. The four parts are:

Privileges (alias liberties): I have a right to eat what I want. Someone with a driver's licence has the privilege to drive. If you have a duty not to do something, then you have no privilege regarding it.

Claims: I have a claim on my employer to pay my salary. Children have a claim vis-à-vis every adult not to be abused. My employer is morally and legally duty-bound to pay, since they agreed to do so. We are duty-bound to refrain from abusing children since it is wrong and illegal.

These two are what most discussions of rights deal with. In the bill, the freedom from disassembly and freedom of morphology are about privileges and claims. The next two are a bit meta, dealing with rights over the first two:

Powers: My boss has the power to order me to research a certain topic, and then I have a duty to do it. I can invite somebody to my home, and then they have the privilege of being there as long as I give it to them. Powers allow us to change privileges and claims, and sometimes powers (an admiral can relieve a captain of the power to command a ship).

Immunities: My boss cannot order me to eat meat. The US government cannot impose religious duties on citizens. These are immunities: certain people or institutions cannot change other incidents.

These parts are then combined into full rights. For example, my property rights to this computer involve the privilege to use the computer, a claim against others not to use the computer, the power to allow others to use it or to sell it to them (giving them the entire rights bundle), and an immunity against others altering these rights. Sure, in practice the software inside is of doubtful loyalty and there are law-enforcement and emergency exceptions, but the basic system is pretty clear. Licence agreements typically give you a far more limited bundle: a privilege to use the software, but few powers over it and scant immunity.
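To make the bundle structure concrete, here is a minimal sketch in Python of a right as a bundle of Hohfeldian incidents, using the computer-ownership example above. Everything here – the Right class, its fields, the transfer method – is my own illustrative naming, not any standard legal ontology.

```python
from dataclasses import dataclass, field

# A right modelled as a bundle of Hohfeldian incidents (illustrative only).
@dataclass
class Right:
    holder: str
    thing: str
    privileges: set = field(default_factory=set)   # what the holder may do
    claims: set = field(default_factory=set)       # duties owed by others
    powers: set = field(default_factory=set)       # ability to alter incidents
    immunities: set = field(default_factory=set)   # incidents others cannot alter

    def transfer(self, new_holder: str) -> "Right":
        """Exercising the power to sell: the whole bundle moves to the buyer."""
        assert "transfer ownership" in self.powers, "no power to transfer"
        return Right(new_holder, self.thing, set(self.privileges),
                     set(self.claims), set(self.powers), set(self.immunities))

ownership = Right(
    holder="me", thing="computer",
    privileges={"use the computer"},
    claims={"others must not use it"},
    powers={"permit others to use it", "transfer ownership"},
    immunities={"others may not alter these incidents"},
)
sold = ownership.transfer("buyer")  # the buyer now holds the full bundle
```

A licence, in this representation, would simply be a Right holding the use privilege with nearly empty powers and immunities sets.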

Sometimes we speak about positive and negative rights: if I have a negative right I am entitled to non-interference from others, while a positive right entitles me to some help or goods. My right to my body is a negative right in the sense that others may not prevent me from using or changing my body as I wish, but I do not have a positive right to demand that they help me with some weird bodymorphing. However, in practice there is a lot of blending going on: public healthcare systems give us positive rights to some (but not all) treatment, policing gives us a positive right of protection (whether we want it or not). If you are a libertarian you will tend to emphasize the negative rights as being the most important, while social democrats tend to emphasize state-supported positive rights.

The cyborg bill of rights starts by talking about privileges and claims. Freedom of morphology clearly expresses an immunity against forced bodily change. The naturalization right is about immunity from unwilling changes to the rights over one's parts, and an expression of a kind of power over parts being integrated into the body. Sovereignty is all about power over entities getting into the body.

The right of bodily sovereignty seems to imply odd things about consensual sex – once there is penetration, there is dominion. And what about entities that are partially inside the body? I think this is because it is trying to reinvent some of the above incidents. The aim is presumably to cover pregnancy/abortion, what doctors may do, and other interventions at the same time. The doctor case is easy, since it is roughly what we agree on today: we have the power to allow doctors to work on our bodies, but we can also withdraw this power whenever we want.

Some other thoughts

The recent case where the police subpoenaed the pacemaker data of a suspected arsonist brings some of these rights into relief. The subpoena occurred with due process, so it was allowed by the freedom from disassembly. In fact, since it is only information, and it is copied, one can argue that there was no real “disassembly”. There have been cases where police wanted bullets lodged in people in order to do ballistics on them, but US courts have generally found that bodily integrity trumps the need for evidence. Maybe one could argue for a derived right to bodily privacy, but social needs can presumably trump this just as they trump normal privacy. Right now views on bodily integrity and privacy are still based on the assumption that bodies are integral and opaque. In a cyborg world this is no longer true, and the law may well move in a more invasive direction.

“Legally recognized mutant”? What about mutants denied legal recognition? Legal recognition makes sense for things the law must differentiate between, not for things the law is blind to. Legally recognized mutants (whatever they are) would be a group that needs to be treated in some special way. If they are just like natural humans they do not need special recognition. We may have laws making it illegal to discriminate against mutants, but this is a law about a certain kind of behavior rather than about the recipient. If I racially discriminate against somebody but happen to be wrong about their race, I am still guilty. So the legal recognition part does not do any work in this right.

And why just mutants? Presumably the aim here is to cover cyborgs, transhumans and other prefix-humans so they are recognized as legal and moral agents with the same standing. The issue is whether this is achieved by arguing that they were human and “mutated”, or are descended from humans, and hence should have the same standing, or whether this is due to them having the right kind of mental states to be persons. The first approach is really problematic: anencephalic infants are mutants but hardly persons, and basing rights on lineage seems ripe for abuse. The second is much simpler, and allows us to generalize to other beings like brain emulations, aliens, hypothetical intelligent moral animals, or the Swampman.

This links to a question that might deserve a section of its own: who are the rightsholders? Normal human rights typically deal with persons, which at least includes adults capable of moral thinking and acting (they are moral agents). Someone who is incapable, for example due to insanity or being a child, has reduced rights but is still a moral patient (someone we have duties towards). A child may not have full privileges and powers, but they do have claims and immunities. I like to argue that once you can comprehend and make use of a right you deserve to have it, since you have capacity relative to the right. Some people also think prepersons like fertilized eggs are persons and have rights; I think this does not make much sense since they lack any form of mind, but others think that having the potential for a future mind is enough to grant immunity. Tricky border cases like persistent vegetative states, cryonics patients, great apes and weird neurological states keep bioethicists busy.

In the cyborg case the issue is what properties make something a potential rightsholder and how to delineate the border of the being. I would argue that if you have a moral agent system it is a rightsholder no matter what it is made of. That is fine, except that cyborgs might have interchangeable parts: if cyborg A gives her arm to cyborg B, has anything changed? I would argue that the arm switched from being a part of/property of A to being a part of/property of B, but the individuals did not change since the parts that make them moral agents are unchanged (just as transplants don't change identity). But what if A gave part of her brain to B? A turns into A’, B turns into B’, and these may be new agents. Or what if A has outsourced a lot of her mind to external systems running in the cloud or in B’s brain? We may still argue that rights adhere to being a moral agent and a person, rather than to being the same person or a person that can easily be separated from other persons or infrastructure. But clearly we can make things really complicated through overlapping bodies and minds.

Summary

I have looked at the cyborg bill of rights and how it fits with rights in law, society and ethics. Overall it is a first stab at establishing social conventions for enhanced, modular people. It likely needs a lot of tightening up to work, and people need to actually understand and care about its contents for it to have any chance of becoming something legally or socially “real”. From an ethical standpoint one can motivate the bill in a lot of ways; for maximum acceptance one needs a wide and general set of motivations, but these will lead to trouble when we try to implement things practically, since they give no way of trading one off against another in a principled way. There is a fair bit of work needed to refine the incidents of the rights, not to mention who is a rightsholder (and why). That will be fun.

Solomon’s frozen judgement

A girl dying of cancer wanted to use cryonic preservation to have a chance at being revived in the future. Her mother supported her, but her father disagreed; in a recent high court ruling, the judge found that she could be cryopreserved.

As the judge noted, the verdict was not a statement on the validity of cryonics itself, but about how to make decisions about prospective orders. In many ways the case would presumably have gone the same way if there had been a disagreement about whether the daughter could have Catholic last rites. However, cryonics makes things fresh and exciting (I have been in the media all day thanks to this).

What is the ethics of parents disagreeing about the cryosuspension of their child?

Best interests

One obvious principle is that parents ought to act in the best interest of their children.

If the child is morally mature and can give informed consent, then they can clearly have a valid interest in taking a chance on cryonics: they might not be legally adult, but as in normal medical ethics their stated interests carry strong weight. Conversely, one could imagine a case where a child would not want to be preserved, in which case I think most people would agree their preferences should dominate.

The general legal consensus in the West is that the child's welfare is so important that it can overrule the objections of parents. In UK law parents have the right and the duty to give consent for a minor. Children can consent to medical treatment, overriding their parents, at 16. However, if they refuse treatment, parents and courts can override them. This mostly comes into play in cases such as avoiding blood transfusions for religious reasons.

In this case the issue was that the parents disagreed and the child was not legally old enough to decide.

If one thinks cryonics is reasonable, then one should clearly cryosuspend the child: it is in their best interest. But if one thinks cryonics is not reasonable, is it harming the interest of the child? This seems to require some theory of how cryonics is bad for the interests of the child.

As an analogy, imagine a case where one parent is a Jehovah's Witness and wants to refuse a treatment involving blood transfusion: the child will die without the treatment, and it will be a close call even with it. Here the objecting parent may claim that undergoing the transfusion harms the child in an important spiritual way and refuse consent. The other parent disagrees. Here the law would come down on the side of the pro-transfusion parent.

On this account and if we agree the cases are similar, we might say that parents have a legal duty to consent to cryonics.

Weak and strong reasons

In practice the controversial status of cryonics may speak against this: many people disagree that cryonics is good for one's welfare. However, most such arguments seem to be based on various far-fetched scenarios about how the future could be a bad place to end up in. Others bring up loss of social connections or that personal identity would be disrupted. A more rational argument is that it is an unproven treatment of dubious efficacy, which would make it irrational to undertake if there were an alternative; however, since there isn't any alternative this argument has little power. The same goes for the risk of loss of social connection or identity: had there been an alternative to death (which definitely severs connections and dissolves identity) it might have been preferable. If one seriously thinks that the future will be so dark that it is better not to get there, one should probably not have children.

In practice it is likely that the status of cryonics as a nonstandard treatment would make the law hesitate to overrule parents. We know blood transfusions work, and while spiritual badness might be respectable as a private view, we as a society do not accept it as a sufficient reason to let somebody die. But in the case of cryonics the unprovenness of the treatment means that hope of revival is on nearly the same epistemic level as spiritual badness: a respectable private view, but not strong enough to be a valid public reason. Cryonicists are doing their best to produce scientific evidence – tissue scans, memory experiments, protocols – that moves the reasons to believe in cryonics from the personal faith level to the public evidence level. They already have some relevant evidence. As soon as lab mice are revived, or people become convinced the process saves the connectome, the reasons would be strengthened and cryonics would become more akin to blood transfusion.

The key difference is that weak private reasons are enough to allow an experimental treatment when there is no alternative but death, but they are generally not enough to choose an experimental treatment when there is some better treatment. Conversely, weak reasons may suffice to disallow an unproven or uncertain treatment, but not a proven one. And disallowing a treatment with no alternative is equivalent to selecting death.

When two parents disagree about cryonics (and the child does not have a voice) it hence seems that both have weak reasons, but the asymmetry between having a chance and dying tilts in favor of cryonics. If it were purely a matter of aesthetics or values (for example, arguing about the right kind of last rites) there would be no societal or ethical constraint. But here there is some public evidence, making it at least possible that the interests of the child might be served by cryonics. Better safe than sorry.

When the child also has a voice and can express its desires, then it becomes obvious which way to go.

King Solomon might have solved the question by cryosuspending the child straight away, promising the dissenting parent not to allow revival until they either changed their mind or there was enough public evidence to convince anybody that it would be in the child’s interest to be revived. The nicest thing about cryonics is that it buys you time to think things through.

AI, morality, ethics and metaethics

Next Sunday I will be debating AI ethics at Battle of Ideas. Here is a podcast where I talk AI, morality and ethics: https://soundcloud.com/institute-of-ideas/battle-cry-anders-sandberg-on-ethical-ai

What distinguishes morals from ethics?

There is actually shocking confusion about the distinction between morals and ethics. Differen.com says ethics are rules of conduct produced by an external source, while morals are an individual's own principles of right and wrong. Grammarist.com says morals are the principles on which one's own judgement of right and wrong is based (abstract, subjective and personal), while ethics are principles of right conduct (practical, social and objective). Ian Welsh gives a soundbite: “morals are how you treat people you know. Ethics are how you treat people you don’t know.” Paul Walker and Terry Lovat say ethics leans towards decisions based on individual character and subjective understanding of right and wrong, while morality is about widely shared communal or societal norms – here ethics is individual assessment of something as good or bad, while morality is inter-subjective community assessment.

Wikipedia distinguishes between ethics as a research field and the common human ability to think critically about moral values and direct actions appropriately, or a particular person's principles of values. Morality is the differentiation between things that are proper and improper, as well as a body of standards and principles derived from a code of conduct in some philosophy, religion or culture… or from a standard a person believes to be universal.

Dictionary.com regards ethics as a system of moral principles, the rules of conduct recognized in some human environment, an individual's moral principles (and the branch of philosophy). Morality is about conforming to the rules of right conduct, having moral quality or character, a doctrine or system of morals, and a few other meanings. The Cambridge dictionary thinks ethics is the study of what is right or wrong, or a set of beliefs about it, while morality is a set of personal or social standards for good/bad behavior and character.

And so on.

I think most people try to include the distinction between shared systems of conduct and individual codes, and the distinction between things that are subjective, socially agreed on, and maybe objective. Plus, we all agree that ethics is a philosophical research field.

My take on it

I like to think of it as an AI issue. We have a policy function \pi(s,a) that maps state–action pairs to a probability of acting that way; this is set using a value function Q(s,a) that assigns values to states and actions. Morality in my sense is just the policy function and maybe the value function: they have been learned through interacting with the world in various ways.

Ethics in my sense is ways of selecting policies and values. We are able to change not only how we act but also how we evaluate things, and the information that drives this change is not just reward signals that update the value function directly, but also knowledge about the world, discoveries about ourselves, and interactions with others – in particular ideas that directly change the policy and value functions.

When I realize that lying rarely produces good outcomes (too much work) and hence reduce my lying, then I am doing ethics (similarly, I might be convinced about this by hearing others explain that lying is morally worse than I thought or convincing me about Kantian ethics). I might even learn that short-term pleasure is less valuable than other forms of pleasure, changing how I view sensory rewards.
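As a toy sketch of this distinction (everything here – the single state, the two actions, the softmax policy, the particular “insight” – is invented for illustration): ordinary moral learning nudges the value function through rewards, while an ethical update rewrites the values directly on the strength of an idea.

```python
import math
import random

ACTIONS = ["tell_truth", "lie"]

# Q(s, a): learned values for state-action pairs (the "morality" data).
q = {("asked_question", a): 0.0 for a in ACTIONS}

def policy(state, temperature=1.0):
    """Morality: map a state to an action via a softmax over the values."""
    weights = [math.exp(q[(state, a)] / temperature) for a in ACTIONS]
    return random.choices(ACTIONS, weights=weights)[0]

def reward_update(state, action, reward, lr=0.1):
    """Ordinary moral learning: reward signals nudge the value function."""
    q[(state, action)] += lr * (reward - q[(state, action)])

def ethical_update(insight):
    """Ethics: an idea or argument rewrites the values directly,
    without any new reward signal arriving from the world."""
    if insight == "lying rarely produces good outcomes":
        q[("asked_question", "lie")] -= 1.0

ethical_update("lying rarely produces good outcomes")
print(policy("asked_question"))  # now more likely to be "tell_truth"
```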

Academic ethics is all about the kinds of reasons and patterns we should use to update our policies and values, trying to systematize them. It shades over into metaethics, which tries to understand what ethics is really about (and what metaethics is about: it is its own meta-discipline, unlike metaphysics, which has metametaphysics – although I think metametaphysics is its own meta-discipline).

I do not think I will resolve any confusion, but at least this is how I tend to use the terminology. Morals is how I act and evaluate, ethics is how I update how I act and evaluate, metaethics is how I try to think about my ethics.

Doing right and feeling good

My panel at Hay-on-Wye (me, Elaine Glaser, Peter Dews and Simon Baron-Cohen) talked about compassion, the sentiment model of morality, effective altruism and how to really help the world. Now available as video!

My view is largely that moral action is strongly driven and motivated by emotions rather than reason, but outside the world of the blindingly obvious or everyday human activity our intuitions and feelings are not great guides. We do not function well morally when the numbers get too big or the cognitive biases become maladaptive. Morality may be about the heart, but ethics is in the brain.

Being reasonable

The ever-readable Scott Alexander stimulated a post on Practical Ethics about defaults, status quo, and disagreements about sex. The short of it: our culture sets defaults on who is reasonable or unreasonable when couples disagree, and these defaults become particularly troubling when dealing with biomedical enhancements of love and sex. They combine with status quo bias and our scepticism of biomedical interventions to create biases that can block some interventions or push people towards others.

Universal principles?

I got challenged on the extropian list, which is a fun reason to make a mini-lecture.

On 2015-10-02 17:12, William Flynn Wallace wrote:
> Anders says above that we have discovered universal timeless principles. I’d like to know what they are and who proposed them, because that’s chutzpah of the highest order. Oh boy – let’s discuss that one.

Here is one: a thing is identical to itself. (1)

Here is another one: “All human beings are born free and equal in dignity and rights.” (2)

Here is a third one: “Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.” (3)

(1) was first explicitly mentioned by Plato (in Theaetetus). I think you also agree with it – things that are not identical to themselves are unlikely to even be called “things”, and without the principle very little thinking makes sense.

I am not sure whether it is chutzpah of the highest order or a very humble observation.

(2) is from the UN declaration of universal human rights. This sentence needs an enormous amount of unpacking – “free”, “equal”, “dignity”, “rights”… these words can be (and are) used in very different ways. Yet I think it makes sense to say that according to a big chunk of Western philosophy this sentence is true (in the sense that ethical propositions are true), that it is universal (the truth is not contingent on when and where you are, although the applications may change), and we know historically that we have not known this principle forever. Now *why* it is true quickly branches out into different answers depending on what metaethical positions you hold, not to mention the big topic of what kind of truth moral truth actually is (if anything). The funny thing is that the universal part is way less contentious, because of the widely accepted (and rarely stated) formal ethical principle that if it is moral to P in situation X, then the location in time and space where X happens does not matter.
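One hedged way of writing that principle down (the notation M(P, X) for “it is moral to do P in situation X” is my own, not standard):

```latex
% Universality of moral propositions (illustrative formalization):
% if doing P in situation X is moral at one spacetime location, it is
% moral at every other, provided X itself is described non-indexically
% (without reference to particular times, places or persons).
\forall P, X:\quad
  M\big(P,\, X \text{ at } (t_1, \mathbf{x}_1)\big)
  \;\Leftrightarrow\;
  M\big(P,\, X \text{ at } (t_2, \mathbf{x}_2)\big)
  \qquad \text{for all } (t_1,\mathbf{x}_1),\,(t_2,\mathbf{x}_2).
```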

Chutzpah of the highest order? Totally. So is the UN.

(3) is Immanuel Kant, and he argued that any rational moral agent could through pure reason reach this principle. It is in many ways, like (1), almost a consistency requirement of the moral will (not of action, since he doesn't actually care about the consequences – we cannot fully control those, but we can control what we decide to do). There is a fair bit of unpacking of the wording, but unlike the UN case he defines his terms fairly carefully in the preceding text. His principle is, if he is right, the supreme principle of morality.

Chuzpah auf höchstem Niveau? Total!

Note that (1) is more or less an axiom: there is no argument for why it is true, because there is little point in even trying. (3) is intended to be like a theorem in geometry: from some axioms and the laws of logic, we end up with the categorical imperative. It is just as audacious or normal as the Pythagorean theorem. (2) is a kind of compromise between different ethical systems: Kantians would defend it based on their system, consequentialists could make a rule utilitarian argument for why it is true, and contractualists would say it is true because the UN agreed on it. They agree on the mid-level meaning, but not on each other's derivations. It is thick, messy and political, yet it also represents fairly well what most educated people would conclude (of course, they would then show off by disagreeing loudly with each other about details, obscuring the actual agreement).

Philosophers’ views

Do people who think about these things actually believe in universal principles? One fun source is David Bourget and David J. Chalmers' survey of professional philosophers (data). 56.4% of the respondents were moral realists (holding that there are moral facts and moral values, and that these are objective and independent of our views), and 65.7% were moral cognitivists (ethical sentences can be true or false); the two positions were correlated at 0.562. 25.9% were deontologists, which means they would hold somewhat Kant-like views that some actions are always or never right (some of the rest of course also believe in principles, but the survey cannot tell us anything more). 71.1% thought there was a priori knowledge (things we know by virtue of being thinking beings rather than through experience).

My views

Do I believe in timeless principles? Kind of. There are statements in physics that are invariant under translations, rotations, Lorentz boosts and other transformations, and of course math remains math. Whether physics and math are “out there” or just in minds is hard to tell (I lean towards physics at least being out there in some form), but clearly any mind that knows some subset of correct, invariant physics and math can derive other correct conclusions from it. And other minds with the same information can make the same derivations and reach the same conclusions – no matter when or where. So there are knowable principles in these domains that every sufficiently informed and smart mind would know. Things get iffy with values, since they might be far more linked to the entities experiencing them, but clearly we can analyse game theory and make statements like “if agent A is trying to optimize X, agent B optimizes Y, and X and Y do not interact, then they can get more of X and Y by cooperating”. So I think we can get pretty close to universal principles in this framework, even if it turns out that they merely reside inside minds knowing about the outside world.
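Here is a toy instance of that cooperation claim as a sketch (the production rates are made-up numbers; the only point is that the cooperative allocation dominates working alone):

```python
# Two agents with non-interacting goals: A only values X, B only values Y.
# Each has one unit of effort. If each happens to be better at producing
# the *other's* good, producing for each other and trading beats autarky.

rate = {"A": {"X": 1, "Y": 3},   # A is good at making Y (assumed numbers)
        "B": {"X": 3, "Y": 1}}   # B is good at making X

solo = {"A": rate["A"]["X"],     # A works alone for X: gets 1
        "B": rate["B"]["Y"]}     # B works alone for Y: gets 1

coop = {"A": rate["B"]["X"],     # B spends its effort making X for A: 3
        "B": rate["A"]["Y"]}     # A spends its effort making Y for B: 3

assert coop["A"] > solo["A"] and coop["B"] > solo["B"]
print(f"solo: {solo}, cooperation: {coop}")  # a Pareto improvement
```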

Living forever

Benjamin Zand has made a neat little documentary about transhumanism, attempts to live forever and the posthuman challenge. I show up, of course, as soon as ethics is mentioned.

Benjamin and I had a much, much longer (and very fun) conversation about ethics than could be squeezed into a TV documentary. Everything from personal identity to overpopulation to the meaning of life. Plus the practicalities of cryonics, transhuman compassion and how to test whether brain emulation actually works.

I think the inequality and control issues are interesting to develop further.

Would human enhancement boost inequality?

There is a trivial sense in which just inventing an enhancement produces profound inequality since one person has it, and the rest of mankind lacks it. But this is clearly ethically uninteresting: what we actually care about is whether everybody gets to share something good eventually.

However, the trivial example shows an interesting aspect of inequality: it has a timescale. An enhancement that will eventually benefit everyone but is unequally distributed may be entirely OK if it is spreading fast enough. In fact, by being expensive at the start it might even act as a kind of early adopter/rich tax, since the first versions will pay for the R&D of consumer versions – compare computers and smartphones. While one could argue that even temporary inequality is bad, long-term benefits would outweigh it for most enhancements and most value theories: we should not sacrifice the poor of tomorrow for the poor of today by delaying the launch of beneficial technologies (especially since the R&D to make them truly cheap is unlikely to happen if technocrats keep the technology in their labs – making tech cheap and useful is actually one area where we know empirically that the free market is really good).

If the spread of some great enhancement could be faster though, then we may have a problem.

I often encounter people who think that the rich will want to keep enhancements to themselves. I have never encountered any evidence for this being actually true except for status goods or elites in authoritarian societies.

There are enhancements like height that are merely positional: it is good to be taller than others (if male, at least), but if everybody gets taller nobody benefits and everybody loses a bit (more banged heads and heart problems). Other enhancements are absolute: living healthy longer or being smarter is good for nearly all people regardless of how long other people live or how smart they are (yes, there might be some coordination benefits if you live just as long as your spouse or have a society where you can participate intellectually, but these hardly negate the benefit of joint enhancement – in fact, they support it). Most of the interesting enhancements are in this category: while they might be great status goods at first, I doubt they will remain so for long since there are other reasons than status to get them. In fact, there are likely network effects from some enhancements like intelligence: the more smart people working together in a society, the greater the benefits.

In the video, I point out that limiting enhancement to the elite means society as a whole will not gain the benefit. Since elites reap rents from their society, it is actually in their best interest to have a society growing richer and more powerful (as long as they are in charge). Elite-only societies will lose out in the long run to other societies with a broader spread of enhancement. We know that widespread schooling, free information access and freedom to innovate tend to produce far wealthier and more powerful societies than those where only elites have access to these goods. I have strong faith in the power of diverse societies, despite their messiness.

My real worry is that enhancements may be like services rather than gadgets or pills (which come down exponentially in price). That would keep them harder to reach, and might hold back adoption (especially since we have not been as good at automating services as manufacturing). Still, we do subsidize education at great cost, and if an enhancement is desirable democratic societies are likely to scramble for a way of supplying it widely, even if it is only through an enhancement lottery.

However, even a world with unequal distribution is not necessarily unjust. Besides the standard Nozickian argument that a distribution is just if it was arrived at through just means, there is the Rawlsian argument that an unequal distribution is OK if it actually produces benefits for the weakest. This is likely very true for intelligence amplification and maybe brain emulation, since they are likely to cause strong economic growth and innovations that produce spillover effects – especially if there is any form of taxation or even mild redistribution.

Who controls what we become? Nobody, we/ourselves/us

The second issue is who gets a say in this.

As I respond in the interview, in a way nobody gets a say. Things just happen.

People innovate, adopt technologies and change, and attempts to control that mean controlling creativity, business and autonomy – you had better have a very powerful ethical case to argue for limitations on these, and an even better political case to implement any. A moral limitation of life extension needs to explain how it averts consequences worse than 100,000 dead people per day. Even if we all become jaded immortals, that seems less horrible than a daily pile of corpses 12.3 meters high and 68 meters across (assuming an angle of repose of 20 degrees – this was the most gruesome geometry calculation I have done so far). Saying we should control technology is a bit like saying society should control art: it might be more practically useful, but it springs from the same well of creativity, and limiting it is as suffocating as limiting what may be written or painted.
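For the morbidly curious, the pile geometry can be checked with a few lines of Python; the volume per body (including the void space between bodies) is my own ballpark assumption, and the rest is cone geometry:

```python
import math

bodies_per_day = 100_000
volume_per_body = 0.15      # m^3 per body including voids (my assumption)
theta = math.radians(20)    # angle of repose, as in the text

total_volume = bodies_per_day * volume_per_body  # m^3 of conical pile

# Cone with slope angle theta: r = h / tan(theta),
# V = (1/3) * pi * r^2 * h  =>  h = (3 * V * tan(theta)^2 / pi)^(1/3)
height = (3 * total_volume * math.tan(theta) ** 2 / math.pi) ** (1 / 3)
diameter = 2 * height / math.tan(theta)

print(f"height   ~ {height:.1f} m")    # ~12.4 m, close to the 12.3 m quoted
print(f"diameter ~ {diameter:.1f} m")  # ~68 m, matching the text
```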

Technological determinism is often used as an easy out for transhumanists: the future will arrive no matter what you do, so the choice is just between accepting or resisting it. But this is not the argument I am making. That nobody is in charge doesn’t mean the future is not changeable.

The very creativity, economics and autonomy that create the future are by their nature individual and unpredictable. While we can relatively safely assume that if something can be done it will be done, what actually matters is whether it will be done early or late, seldom or often. We can try to hurry beneficial or protective technologies so they arrive before the more problematic ones. We can try to favour beneficial directions over more problematic ones. We can create incentives so that fewer want to use the bad ones. And so on. The “we” in this paragraph is not so much a collective coordinated “us” as the sum of individuals, companies and institutions – “ourselves”: there is no requirement to get UN permission before you set out to make safe AI or develop life extension. It just helps if a lot of people support your aims.

John Stuart Mill's harm principle allows society to step in and limit freedom when it causes harm to others, but most enhancements look unlikely to produce easily recognizable harms. This is not a ringing endorsement: as Nick Bostrom has pointed out, there are some bad directions of evolution we might not want to go down, yet it is individually rational for each of us to go slightly in that direction. And existential risk is so dreadful that it actually does provide a valid reason to stop certain human activities if we cannot find alternative solutions. So while I think we should not try to stop people from enhancing themselves, we should want to improve our collective ability to coordinate and restrain ourselves. This is the “us” part. Restraint does not have to happen only in the form of rules: we already restrain ourselves using socialization, reputations, and incentive structures. Moral and cognitive enhancement could add restraints we currently do not have: if you can clearly see the consequences of your actions it becomes much harder to do bad things. The long-term outlook fostered by radical life extension may also make people more risk averse and willing to plan for long-term sustainability.

One could dream of some enlightened despot or technocrat deciding. A world government filled with wise, disinterested and skilled members planning our species' future. But this suffers from essentially the economic calculation problem: while a central body might have a unified goal, it will lack information about the preferences and local conditions among the myriad agents in the world. Worse, the cognitive abilities of the technocrat will be far smaller than the total cognitive abilities of the other agents. This is why rules and laws tend to get gamed – there are many diverse entities thinking about ways around them. But there are also fundamental uncertainties and emergent phenomena that will bubble up from the surrounding agents and mess up the technocratic plans. As Virginia Postrel noted, the typical solution is to try to browbeat society into a simpler form that can be managed more easily… which might be acceptable if the stakes are the very survival of the species, but otherwise just removes what makes a society worth living in. So we had better maintain our coordination ourselves, all of us, in our diverse ways.