Why-chain

By Anders Sandberg

I manifested on a hillside lit by the everlasting sunset, overlooking the dry western plains. A little girl was poking around a tangle of flowers in the terra cotta light. Her clothes were made of felt lizards, quietly and slowly moving around her body. She was manipulating the plants like a well-learned game or a housekeeping task.

She noticed my approach but did not look up from her pursuit: “Hello. What is your name?”

“I don’t have one. I am from out there.” I made a gesture towards the dark eastern sky, hoping she would understand my reference to the Dyson sphere or the wider galaxy.

“I will call you Madenḫu then.”

“What is your name?”

“Today I am Ritsa. Why are you here?”

It is hard to tell what kind of biological one is dealing with. This one was small and looked like a young one, but it could just as well be ancient with a designer body. Or some kind of group mind. I had turned off my omniscience before coming here, merely replacing it with perfect intuition: it is impractical to handle the conscious bandwidth and lag of full omniscience when manifesting. So I improvised.

“I am visiting everybody on the world today. I have some news you need to hear. Soon, we will adjust the sphere around the world and the sun. You will have to move or change.”

She did not look up, but continued spooning pollen from a flower into a gooey receptacle of a tubular plant.

“Why?”

“The sphere will reflect differently and this planet will become too hot to sustain life. You will have to become something different, or move somewhere else.”

“No, I wondered why you are changing the sphere.” She gently lifted a small insect from her plant and put it on a dark flower.

Even though I recognized the inevitable why chain coming up, before I manifested I had decided to be truthful and answer everything: “We will make it reflect sunlight in a particular direction so that the system will move.”

“Why?”

“In a long while the sun will pass near another star we have also moved, and they will change course. We are doing it with most stars across the whole galaxy.”

“Why?”

“We are reorganizing it and making it move. Binary stars will be flung past the core black hole and one in each pair expelled, making the entire galaxy move. Other stars will herd the halo so it stays in place.” I was assuming she understood the terms, but how can you tell with a biological?

“Why?”

“We need to move galaxies together into hyperclusters so they are not lost when the universe expands faster.”

“Why?”

“The big forms of mind need to hang together. They cannot do that if their parts run away from each other.”

She did not ask why. Instead she began to move milky sap to another plant using a leaf.

“Why do you have to change this place? It’s fine the way it is.”

I currently did not know why somebody had kept this tidally locked terrestrial around when they built the Dyson sphere around the M-dwarf. Maybe it once had some significance, or it was because of some forgotten aesthetic-financial game. That there was a biological civilization on it had been overlooked until right now. After all, biologicals were mostly on the same level as netlife, but far slower.

“The big plan needs this sun. There are many like it, but it would be hard to move this part of the galaxy without it.”

“But not impossible for you.”

“Probably not. There is some flexibility. But…”

“…we are not important enough. I know.”

“You are important. We want to save you.”

“But you have to do the big things for the big reasons. Small things need to be moved out of the way.”

“Yes. You understand perfectly.” I wished the other inhabitants were as amenable. I intuited that my other selves were having a far harder time.

“Would you save that one?” she asked, pointing at the insect that had returned to the first plant.

“If you agree to dematerialize we can bundle the small animals into the virtuality. If you move or adapt, I guess it will not make it.”

“So you don’t care about moving things that are too small. How close to the edge of smallness are we?”

I did not answer at first. There was no true answer, or at least no true simple answer. Outside, versions of me were having similar conversations with billions of beings on millions of worlds. But I could have tried to talk to trillions of slightly simpler beings on billions of worlds. Or quadrillions of smart things everywhere. The line was drawn more by politeness than by morality.

“I am closer to you than you are to that edge.”

It was a polite distortion, technically true: me-here was not far from a biological. But I also realized that compared to the big minds the whole of me was probably little more than a smart thing. I would rather be a galaxy held together by politeness than by force.

My intuition told me, somehow, that perhaps we are all small things held together by the politeness of the vast. I did not know what to make of it.

Obligatory Covid-19 blogging

SARS-CoV-2 spike ectodomain structure (open state). https://3dprint.nih.gov/discover/3DPX-013160
Over at Practical Ethics I have blogged a bit:

The Unilateralist Curse and Covid-19, or Why You Should Stay Home: why we are in a unilateralist curse situation with regard to staying home, making it rational to stay home even when it seems irrational.

Taleb and Norman had a short letter Ethics of Precaution: Individual and Systemic Risk making a similar point, noting that recognizing the situation type and taking contagion dynamics into account is a reason to be more cautious. It differs from our version in the effect of individual action: we had a single actor causing the full consequences, while the letter has exponential scale-up. There are also far more actors: everyone rather than just epistemic peers, and incentives that are not aligned, since actors do not bear the full costs of their actions. The problem is finding strategies robust to stupid, selfish actors. Institutional channeling of collective rationality and coordination is likely the only way to get robustness here.
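As a minimal sketch of the underlying arithmetic (my own illustrative numbers, not from either paper): if any single mistaken actor is enough to cause the outcome, the probability that somebody acts grows rapidly with the number of actors, which is why individual restraint matters more the larger the group.

```python
# Toy unilateralist-curse arithmetic: each of n actors independently
# misjudges with probability p, and one defector suffices to cause the
# outcome. Numbers are purely illustrative.

def p_someone_acts(n: int, p: float) -> float:
    """Probability that at least one of n independent actors acts."""
    return 1 - (1 - p) ** n

for n in (1, 10, 100, 1000):
    print(f"{n:>5} actors: {p_someone_acts(n, p=0.01):.3f}")
# 1 -> 0.010, 10 -> 0.096, 100 -> 0.634, 1000 -> ~1.000
```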

Never again – will we make Covid-19 a warning shot or a dud? deals with the fact that we are surprisingly good at forgetting harsh lessons (1918, 1962, Y2K, 2006…), giving us a moral duty to try to ensure appropriate collective memory of what needs to be recalled.

This is why Our World In Data, the Oxford COVID-19 Government Response Tracker and IMF’s policy responses to Covid-19 are so important. The disjointed international responses act as natural experiments that will tell us important things about best practices and the strengths/weaknesses of various strategies.

And the pedestrians are off! Oh no, that lady is jaywalking!

In 1983 Swedish Television began an all-evening entertainment program named Razzel. It was centred around the state lottery draw, with music, sketch comedy, and television series interspersed between the blocks. Yes, this was back in the day when there were two TV channels to choose from and more or less everybody watched. The ice age had just about ended.

One returning feature consisted of camera footage of a pedestrian crossing in Stockholm. A sports commentator well-known for his coverage of horse-racing narrated the performance of the unknowing pedestrians as if they were competing in a race. In some cases I think he even showed up to deliver flowers to the “winner”. But you would get disqualified if you had a false start or went outside the stripes!

I suspect this feature noticeably improved traffic safety for a generation.

I was reminded of this childhood memory earlier today when discussing the use of face recognition in China to detect jaywalkers and display them on a billboard to shame them. The typical response in a western audience is fear of what looks like a totalitarian social engineering program. The glee with which many responded to the news that the system had been confused by a bus ad, putting a celebrity on the board of shame, is telling.

Is there a difference?

But compare the Chinese system to the TV program. In the China case the jaywalker may be publicly shamed from the billboard… but in the cheerful 80s TV program they were shamed in front of much of the nation.

The Chinese case is more personal, since the system also displays the jaywalker’s name, but no doubt friends and neighbours would recognize you if they saw you on TV (remember, this was back when we only had two television channels and a fair fraction of people watched TV on Friday evening). There may also be SMS messages involved in some versions of the system. This acts differently: now it is you who gets told off when you misbehave.

A fundamental difference may be the valence of the framing. The TV show did this as happy entertainment, more of a parody of sport television than an attempt at influencing people. The Chinese system explicitly aims at discouraging misbehaviour. The TV show encouraged positive behaviour (if only accidentally).

So the dimensions here may be the extent of the social effect (locally, or nationwide), the degree the feedback is directly personal or public, and whether it is a positive or negative feedback. There is also a dimension of enforcement: is this something that happens every time you transgress the rules, or just randomly?

In terms of actually changing behaviour, making the social effect broad rather than close and personal might not have much effect: we mostly care about our standing relative to our peers, so having the entire nation laugh at you is certainly worse than your friends laughing, but still not orders of magnitude more mortifying. The personal message on the other hand sends a signal that you were observed; together with an expectation of effective enforcement this likely has a fairly clear deterrence effect (it is often not the size of the punishment that deters people from crime, but their expectation of getting caught). The negative stick of acting wrong and being punished is likely stronger than the positive carrot of a hypothetical bouquet of flowers.

Where is the rub?

From an ethical standpoint, is there a problem here? We are subject to norm enforcement from friends and strangers all the time. What is new is the application of media and automation. They scale up the stakes and add the possibility of automated enforcement. Shaming people for jaywalking is fairly minor, but some people have lost jobs or friends, or been physically assaulted, when their social transgressions have gone viral on social media. Automated enforcement makes the panopticon effect far stronger: instead of suspecting a possibility of being observed it is a near certainty. So the net effect is stronger, more pervasive norm enforcement…

…of norms that can be observed and accurately assessed. Jaywalking is transparent in a way being rude or selfish often isn’t. We may end up in a situation where we carefully obey some norms, not because they are the most important but because they can be monitored. I do not think there is anything in principle impossible about a rudeness detection neural network, but I suspect the error rates and lack of context sensitivity would make it worse than useless in preventing actual rudeness. Goodhart’s law may even make it backfire.

So, in the end, the problem is that automated systems encode a formalization of a social norm rather than the actual fluid social norm. Having a TV commentator narrate your actions is filtered through the actual norms of how to behave, while the face recognition algorithm looks for a pattern associated with transgression rather than actual transgression. The problem is that strong feedback may then lock in obedience to the hard-to-change formalization rather than actual good behaviour.

Arguing against killer robot janissaries

Military robot being shown to families at New Scientist Live 2017.

I have a piece in Dagens Samhälle with Olle Häggström, Carin Ism, Max Tegmark and Markus Anderljung urging the Swedish parliament to consider banning lethal autonomous weapons.

This is of course mostly symbolic; the real debate is happening right now over in Geneva at the CCW. I also participated in a round-table with the Red Cross that led to their report on the issue, which is one of the working papers presented there.

I am not particularly optimistic that we will get a ban – nor that a ban would actually achieve much. However, I am much more optimistic that this debate may force a general agreement about the importance of getting meaningful human control. This is actually an area where most military and peace groups would agree: nobody wants systems that are unaccountable and impossible to control. Making sure there are international agreements that using such systems is irresponsible and maybe even a war crime would be a big win. But there are lots of devils in the details.

When it comes to arguments for why LAWs are morally bad I am personally not so convinced that the badness comes from a machine making the decision to kill a person. Clearly some possible machine decisionmaking does improve proportionality and reduce arbitrariness. Similarly, arguments about whether they would increase or reduce the risk of military action, and how this would play out in terms of human suffering and death, are interesting empirical arguments, but we should not be overconfident that we know the answers. Given that once LAWs are in use it will be hard to roll them back if the answers are bad, we might find it prudent to try to avoid them (but consider the opposing scenario where since time immemorial robots have fought our wars and somebody now suggests using humans too – there is a status quo bias here).

My main reason for being opposed to LAWs is not that they would be inherently immoral, nor that they would necessarily or even likely make war worse or more likely. My view is that the problem is that they give states too much power: basically, they make the state’s monopoly on violence independent of the wishes of the citizens. Once a sufficiently potent LAW military (or police force) exists, it will be able to exert coercive and lethal power as ordered without any mediation through citizens. While having humans in the army certainly doesn’t guarantee moral behavior, if ordered to turn against the citizenry or act in a grossly immoral way they can exert moral agency and resist (with varying levels of overtness). The LAW army will instead implement the orders as long as they are formally lawful (assuming there is at least a constraint against unlawful commands). States know that if they mistreat their population too much their army might side with the population, a reason why some of the nastier governments make use of mercenaries or a special separate class of soldier to reduce the risk. If LAWs become powerful enough they might make dictatorships far more stable by removing a potentially risky key component of state power from internal politics.

Bans and moral arguments are unlikely to work against despots. But building broad moral consensuses on what is acceptable in war does have effects. If R&D emphasis is directed towards finding solutions to how to manage responsibility for autonomous device decisions, that will develop a lot of useful technologies for making such systems at least safer – and one can well imagine similar legal and political R&D into finding better solutions to citizen-independent state power.

In fact, far more important than LAWs is what to do about Lethal Autonomous States. Bad governance kills; many institutions, corporations and states behave just as badly as the worst AI risk visions and have a serious value alignment problem; and we do not have great mechanisms for handling responsibility in inter-state conflicts. The UN system is a first stab at the problem, but obviously much, much more can be done. In the meantime, we can try to avoid going too quickly down a risky path while we look for safe-making technologies and agreements.

Review of the cyborg bill of rights 1.0

The Cyborg Bill of Rights 1.0 is out. Rich MacKinnon suggests the following rights:

FREEDOM FROM DISASSEMBLY
A person shall enjoy the sanctity of bodily integrity and be free from unnecessary search, seizure, suspension or interruption of function, detachment, dismantling, or disassembly without due process.

FREEDOM OF MORPHOLOGY
A person shall be free (speech clause) to express themselves through temporary or permanent adaptions, alterations, modifications, or augmentations to the shape or form of their bodies. Similarly, a person shall be free from coerced or otherwise involuntary morphological changes.

RIGHT TO ORGANIC NATURALIZATION
A person shall be free from exploitive or injurious 3rd party ownerships of vital and supporting bodily systems. A person is entitled to the reasonable accrual of ownership interest in 3rd party properties affixed, attached, embedded, implanted, injected, infused, or otherwise permanently integrated with a person’s body for a long-term purpose.

RIGHT TO BODILY SOVEREIGNTY
A person is entitled to dominion over intelligences and agents, and their activities, whether they are acting as permanent residents, visitors, registered aliens, trespassers, insurgents, or invaders within the person’s body and its domain.

EQUALITY FOR MUTANTS
A legally recognized mutant shall enjoy all the rights, benefits, and responsibilities extended to natural persons.

As a sometime philosopher with a bit of history of talking about rights regarding bodily modification, I of course feel compelled to comment.

What are rights?

First, what is a right? Clearly anybody can state that we have a right to X, but only some agents and X-rights make sense or have staying power.

One kind is legal rights of various kinds. These can come from international law, national law, or even informal national codes (for example the Swedish allemansrätten, which is not a moral/human right and is actually fairly recent). Here the agent has to be some legitimate law- or rule-maker. The US Bill of Rights is an example: the result of a political process that produced legal rights, with relatively little if any moral content. Legal rights need to be enforceable somehow.

Then there are normative moral principles such as fundamental rights (applicable to a person since they are a person), natural rights (applicable because of facts of the world) or divine rights (imposed by God). These are universal and egalitarian: applicable everywhere, everywhen, and the same for everybody. Bentham famously dismissed the idea of natural rights as “nonsense on stilts” and there is a general skepticism today about rights being fundamental norms. But insofar as they do exist, anybody can discover and state them. Moral rights need to be doable.

While there may be doubts about the metaphysical nature of rights, if a society agrees on a right it will shape action, rules and thinking in an important way. It is like money: it only gets value by the implicit agreement that it has value and can be exchanged for goods. Socially constructed rights can be proposed by anybody, but they only become real if enough people buy into the construction. They might be unenforceable and impossible to perform (which may over time doom them).

What about the cyborg rights? There is no clear reference to moral principles, and only the last one refers to law. In fact, the preamble states:

Our process begins with a draft of proposed rights that are discussed thoroughly, adopted by convention, and then published to serve as model language for adoption and incorporation by NGOs, governments, and rights organizations.

That is, these rights are at present a proposal for social construction (quite literally) that hopefully will be turned into a convention (a weak international treaty) that eventually may become national law. This also fits with the proposal coming from MacKinnon rather than the General Secretary of the UN – we can all propose social constructions and urge the creation of conventions, treaties and laws.

But a key challenge is to come up with something that can become enforceable at some point. Cyborg bodies might be more finely divisible and transparent than human bodies, so that it becomes hard to regulate these rights. How do you enforce sovereignty against spyware?

Justification

Why is a right a right? There has to be a reason for a right (typically hinted at in preambles full of “whereas…”).

I have mostly been interested in moral rights. Patrick D. Hopkins wrote an excellent overview “Is enhancement worthy of being a right?” in 2008 where he looks at how you could motivate morphological freedom. He argues that there are three main strategies to show that a right is fundamental or natural:

  1. That the right conforms to human nature. This requires showing that it fits a natural end. That is, there are certain things humans should aim for, and rights help us live such lives. This is also the approach of natural law accounts.
  2. That the right is grounded in interests. Rights help us get the kinds of experiences or states of the world that we (rightly) care about. That is, there are certain things that are good for us (e.g.  “the preservation of life, health, bodily integrity, play, friendship, classic autonomy, religion, aesthetics, and the pursuit of knowledge”) and the right helps us achieve this. Why those things are good for us is another matter of justification, but if we agree on the laundry list then the right follows if it helps achieve them.
  3. That the right is grounded in our autonomy. The key thing is not what we choose but that we get to choose: without freedom of choice we are not moral agents. Much of rights by this account will be about preventing others from restricting our choices, and about not interfering with theirs. If something can be chosen freely and does not harm others, it has a good chance to be a right. However, this is a pretty shallow approach to autonomy; there are more rigorous and demanding ideas of autonomy in ethics (see SEP and IEP for more). This is typically how many fundamental rights get argued (I have a right to my body since if somebody can interfere with my body, they can essentially control me and prevent my autonomy).

One can do this in many ways. For example, David Miller, writing on grounding human rights, suggests one approach is to allow people from different cultures to live together as equals; another is to base rights on human needs (very similar to interest accounts); a third is their instrumental use in safeguarding other (need-based) rights. Many like to include human dignity, another tricky concept.

Social constructions can have a lot of reasons. Somebody wanted something, and this was recognized by others for some reason. Certain reasons are cultural universals, and that makes it more likely that society will recognize a right. For example, property seems to be universal, and hence a right to one’s property is easier to argue than a right to paid holidays (but what property is, and what rules surround it, can be very different).

Legal rights are easier. They exist because there is a law or treaty, and the reasons for that are typically a political agreement on something.

It should be noted that many declarations of rights do not give any reasons, often because we would disagree on the reasons even if we agree on the rights. The UN declaration of human rights gives no hint of where these rights come from (compare the US declaration of independence, where it is “self-evident” that the creator has provided certain rights to all men). Still, this is somewhat unsatisfactory and leaves many questions unanswered.

So, how do we justify cyborg rights?

In the liberal rights framework I used for morphological freedom we could derive things rather straightforwardly: we have a fundamental right to life, and from this follows freedom from disassembly. We have a fundamental right to liberty, and together with the right to life this leads to a right to our own bodies, bodily sovereignty, freedom of morphology and the first half of the right to organic naturalization. We have a right to our property (typically derived from fundamental rights to seek our happiness and have liberty), and from this the second half of the organic naturalization right follows (we are literally mixing ourselves rather than our work with the value produced by the implants). Equality for mutants follows from having the same fundamental rights as humans (note that the bill talks about “persons”, and most ethical arguments try to be valid for whatever entities count as persons – this tends to be more than general enough to cover cyborg bodies). We still need some justification of the fundamental rights to life, liberty and happiness, but that is outside the scope of this exercise. Just use your favorite justifications.

The human nature approach would say that cyborg nature is such that these rights fit with it. This might be tricky to use as long as we do not have many cyborgs to study the nature of. In fact, since cyborgs are imagined as self-creating (or at least self-modifying) beings it might be hard to find any shared nature… except maybe the self-creation part. As I often like to argue, this is close to Mirandola’s idea of human dignity deriving from our ability to change ourselves.

The interest approach would ask how the cyborg interests are furthered by these rights. That seems pretty straightforward for most reasonably human-like interests. In fact, the above liberal rights framework is to a large extent an interest-based account.

The autonomy account is also pretty straightforward. All cyborg rights except the last are about autonomy.

Could we skip the ethics and these possibly empty constructions? Perhaps: we could see the cyborg bill of rights as a way of making a cyborg-human society possible to live in. We need to tolerate each other and set boundaries on allowed messing around with each other’s bodies. Universals of property lead to the naturalization right, territoriality leads to the sovereignty right, and the universal that actions under self-control are distinguished from those not under control might be taken as the root for autonomy-like motivations that then support the rest.

Which one is best? That depends. The liberal rights/interest system produces nice modular rules, although there will be much argument about what has precedence. The human nature approach might be deep and poetic, but is potentially easy to disagree on. Autonomy is very straightforward (except when the cyborg starts messing with their brain). Social constructivism allows us to bring in issues of what actually works in a real society, not just what perfect isolated cyborgs (on a frictionless infinite plane) should do.

Parts of rights

One of the cool properties of rights is that they have parts – “the Hohfeldian incidents”, after Wesley Hohfeld (1879–1918), who discovered them. He was thinking of legal rights, but this applies to moral rights too. His system is descriptive – this is how rights work – rather than explaining why they came about or whether this is a good thing. The four parts are:

Privileges (alias liberties): I have a right to eat what I want. Someone with a driver’s licence has the privilege to drive. If you have a duty not to do something, then you have no privilege about it.

Claims: I have a claim on my employer to pay my salary. Children have a claim vis-a-vis every adult not to be abused. My employer is morally and legally dutybound to pay, since they agreed to do so. We are dutybound to refrain from abusing children since it is wrong and illegal.

These two are what most talk about rights deals with. In the bill, the freedom from disassembly and freedom of morphology are about privileges and claims. The next two are a bit meta, dealing with rights over the first two:

Powers: My boss has the power to order me to research a certain topic, and then I have a duty to do it. I can invite somebody to my home, and then they have the privilege of being there as long as I give it to them. Powers allow us to change privileges and claims, and sometimes powers (an admiral can relieve a captain of the power to command a ship).

Immunities: My boss cannot order me to eat meat. The US government cannot impose religious duties on citizens. These are immunities: certain people or institutions cannot change other incidents.

These parts are then combined into full rights. For example, my property rights to this computer involve the privilege to use the computer, a claim against others to not use the computer, the power to allow others to use it or to sell it to them (giving them the entire rights bundle), and an immunity of others altering these rights. Sure, in practice the software inside is of doubtful loyalty and there are law-enforcement and emergency situation exceptions, but the basic system is pretty clear. Licence agreements typically give you a far more limited bundle of incidents.
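As a toy illustration (a Python sketch of my own, with hypothetical names, not anything from Hohfeld), the incidents can be modeled as elements of a bundle, making the difference between full ownership and a licence explicit:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Incident(Enum):
    """Hohfeld's four incidents, as described above."""
    PRIVILEGE = auto()  # I may use it
    CLAIM = auto()      # others must not use it
    POWER = auto()      # I may alter these incidents (sell, lend, permit)
    IMMUNITY = auto()   # others may not alter my incidents

@dataclass
class Right:
    holder: str
    target: str
    incidents: set = field(default_factory=set)

# Full property rights bundle all four incidents:
ownership = Right("me", "computer", {Incident.PRIVILEGE, Incident.CLAIM,
                                     Incident.POWER, Incident.IMMUNITY})

# A typical licence grants only the privilege to use, with no power
# to transfer and no immunity against the licensor changing the terms:
licence = Right("me", "software", {Incident.PRIVILEGE})

print(ownership.incidents - licence.incidents)  # what the licence lacks
```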

Sometimes we speak about positive and negative rights: if I have a negative right I am entitled to non-interference from others, while a positive right entitles me to some help or goods. My right to my body is a negative right in the sense that others may not prevent me from using or changing my body as I wish, but I do not have a positive right to demand that they help me with some weird bodymorphing. However, in practice there is a lot of blending going on: public healthcare systems give us positive rights to some (but not all) treatment, policing gives us a positive right of protection (whether we want it or not). If you are a libertarian you will tend to emphasize the negative rights as being the most important, while social democrats tend to emphasize state-supported positive rights.

The cyborg bill of rights starts by talking about privileges and claims. Freedom of morphology clearly expresses an immunity to forced bodily change. The naturalization right is about immunity from involuntary change of the rights over parts, and an expression of a kind of power over parts being integrated into the body. Sovereignty is all about power over entities getting into the body.

The right of bodily sovereignty seems to imply odd things about consensual sex – once there is penetration, there is dominion. And what about entities that are partially inside the body? I think this is because it is trying to reinvent some of the above incidents. The aim is presumably to cover pregnancy/abortion, what doctors may do, and other interventions at the same time. The doctor case is easy, since it is roughly what we agree on today: we have the power to allow doctors to work on our bodies, but we can also withdraw this whenever we want.

Some other thoughts

The recent case where the police subpoenaed the pacemaker data of a suspected arsonist brings some of these rights into relief. The subpoena occurred with due process, so it was allowed by the freedom from disassembly. In fact, since it is only information, and it is copied, one can argue that there was no real “disassembly”. There have been cases where police wanted bullets lodged in people in order to do ballistics on them, but US courts have generally found that bodily integrity trumps the need for evidence. Maybe one could argue for a derived right to bodily privacy, but social needs can presumably trump this just as they trump normal privacy. Right now views on bodily integrity and privacy are still based on the assumption that bodies are integral and opaque. In a cyborg world this is no longer true, and the law may well move in a more invasive direction.

“Legally recognized mutant”? What about mutants denied legal recognition? Legal recognition makes sense for things that the law must differentiate between, not for things the law is blind to. Legally recognized mutants (whatever they are) would be a group that needs to be treated in some special way. If they are just like natural humans they do not need special recognition. We may have laws making it illegal to discriminate against mutants, but this is a law about a certain kind of behavior rather than the recipient. If I racially discriminate against somebody but happen to be wrong about their race, I am still guilty. So the legal recognition part does not do any work in this right.

And why just mutants? Presumably the aim here is to cover cyborgs, transhumans and other prefix-humans so they are recognized as legal and moral agents with the same standing. The issue is whether this is achieved by arguing that they were human and “mutated”, or are descended from humans, and hence should have the same standing, or whether this is due to them having the right kind of mental states to be persons. The first approach is really problematic: anencephalic infants are mutants but hardly persons, and basing rights on lineage seems ripe for abuse. The second is much simpler, and allows us to generalize to other beings like brain emulations, aliens, hypothetical intelligent moral animals, or the Swampman.

This links to a question that might deserve a section of its own: who are the rightsholders? Normal human rights typically deal with persons, which at least includes adults capable of moral thinking and acting (they are moral agents). Someone who is incapable, for example due to insanity or being a child, has reduced rights but is still a moral patient (someone we have duties towards). A child may not have full privileges and powers, but they do have claims and immunities. I like to argue that once you can comprehend and make use of a right you deserve to have it, since you have capacity relative to the right. Some people also think prepersons like fertilized eggs are persons and have rights; I think this does not make much sense since they lack any form of mind, but others think that having the potential for a future mind is enough to grant immunity. Tricky border cases like persistent vegetative states, cryonics patients, great apes and weird neurological states keep bioethicists busy.

In the cyborg case the issue is what properties make something a potential rightsholder and how to delineate the border of the being. I would argue that if you have a moral agent system it is a rightsholder no matter what it is made of. That is fine, except that cyborgs might have interchangeable parts: if cyborg A gives her arm to cyborg B, has anything changed? I would argue that the arm switched from being a part of/property of A to being a part of/property of B, but the individuals did not change since the parts that make them moral agents are unchanged (just as transplants don’t change identity). But what if A gave part of her brain to B? A turns into A’, B turns into B’, and these may be new agents. Or what if A has outsourced a lot of her mind to external systems running in the cloud or in B’s brain? We may still argue that rights adhere to being a moral agent and person rather than being the same person or a person that can easily be separated from other persons or infrastructure. But clearly we can make things really complicated through overlapping bodies and minds.

Summary

I have looked at the cyborg bill of rights and how it fits with rights in law, society and ethics. Overall it is a first stab at establishing social conventions for enhanced, modular people. It likely needs a lot of tightening up to work, and people need to actually understand and care about its contents for it to have any chance of becoming something legally or socially “real”. From an ethical standpoint one can motivate the bill in a lot of ways; for maximum acceptance one needs to use a wide and general set of motivations, but these will lead to trouble when we try to implement things practically, since they give no way of trading one off against another in a principled way. There is a fair bit of work needed to refine the incidents of the rights, not to mention who is a rightsholder (and why). That will be fun.

Solomon’s frozen judgement

A girl dying of cancer wanted to use cryonic preservation to have a chance at being revived in the future. While her mother supported her, her father disagreed; in a recent high court ruling, the judge found that she could be cryopreserved.

As the judge noted, the verdict was not a statement on the validity of cryonics itself, but about how to make decisions about prospective orders. In many ways the case would presumably have gone the same way if there had been a disagreement about whether the daughter could have Catholic last rites. However, cryonics makes things fresh and exciting (I have been in the media all day thanks to this).

What is the ethics of parents disagreeing about the cryosuspension of their child?

Best interests

One obvious principle is that parents ought to act in the best interest of their children.

If the child is morally mature and gives informed consent, then they clearly have a valid interest in taking a chance on cryonics: they might not be legally adult, but as in normal medical ethics their stated interests have strong weight. Conversely, one could imagine a case where a child would not want to be preserved, in which case I think most people would agree their preferences should dominate.

The general legal consensus in the West is that the child’s welfare is so important that it can overrule the objections of parents. In UK law parents have the right and the duty to give consent for a minor. Children can consent to medical treatment, overriding their parents, at 16. However, if they refuse treatment, parents and courts can override them. This mostly comes into play in cases such as avoiding blood transfusions for religious reasons.

In this case the issue was that the parents were disagreeing and the child was not legally old enough.

If one thinks cryonics is reasonable, then one should clearly cryosuspend the child: it is in their best interest. But if one thinks cryonics is not reasonable, is it harming the interest of the child? This seems to require some theory of how cryonics is bad for the interests of the child.

As an analogy, imagine a case where one parent is a Jehovah’s Witness and wants to refuse a treatment involving blood transfusion: the child will die without the treatment, and it will be a close call even with it. Here the objecting parent may claim that undergoing the transfusion harms the child in an important spiritual way and refuse consent. The other parent disagrees. Here the law would come down on the side of the pro-transfusion parent.

On this account and if we agree the cases are similar, we might say that parents have a legal duty to consent to cryonics.

Weak and strong reasons

In practice the controversialness of cryonics may speak against this: many people disagree about cryonics being good for one’s welfare. However, most such arguments seem to be based on various farfetched scenarios about how the future could be a bad place to end up in. Others bring up loss of social connections, or that personal identity would be disrupted. A more rational argument is that it is an unproven treatment of dubious efficacy, which would make it irrational to undertake if there were an alternative; however, since there isn’t any alternative, this argument has little power. The same goes for the risk of loss of social connection or identity: had there been an alternative to death (which definitely severs connections and dissolves identity) it might have been preferable. If one seriously thinks that the future will be so dark that it is better not to get there, one should probably not have children.

In practice it is likely that the status of cryonics as a nonstandard treatment would make the law hesitate to overrule parents. We know blood transfusions work, and while spiritual badness might be respectable as a private view, we as a society do not accept it as a sufficient reason to have somebody die. But in the case of cryonics the unprovenness of the treatment means that hope for revival is on nearly the same epistemic level as spiritual badness: a respectable private view, but not strong enough to be a valid public reason. Cryonicists are doing their best to produce scientific evidence – tissue scans, memory experiments, protocols – that move the reasons to believe in cryonics from the personal faith level to the public evidence level. They already have some relevant evidence. As soon as lab mice are revived or people become convinced the process saves the connectome, the reasons will be strengthened and cryonics will become more akin to blood transfusion.

The key difference is that weak private reasons are enough to allow an experimental treatment when there is no alternative but death, but they are generally not enough to choose an experimental treatment when some better treatment exists. Conversely, weak reasons may work well for disallowing an unproven or uncertain treatment, but not a proven one. However, disallowing a treatment with no alternative is equivalent to selecting death.

When two parents disagree about cryonics (and the child does not have a voice) it hence seems that they both have weak reasons, but the asymmetry between having a chance and dying tilts in favor of cryonics. If it was purely a matter of aesthetics or value (for example, arguing about the right kind of last rites) there would be no societal or ethical constraint. But here there is some public evidence, making it at least possible that the interests of the child might be served by cryonics. Better safe than sorry.

When the child also has a voice and can express its desires, then it becomes obvious which way to go.

King Solomon might have solved the question by cryosuspending the child straight away, promising the dissenting parent not to allow revival until they either changed their mind or there was enough public evidence to convince anybody that it would be in the child’s interest to be revived. The nicest thing about cryonics is that it buys you time to think things through.

AI, morality, ethics and metaethics

Next Sunday I will be debating AI ethics at Battle of Ideas. Here is a podcast where I talk about AI, morality and ethics: https://soundcloud.com/institute-of-ideas/battle-cry-anders-sandberg-on-ethical-ai

What distinguishes morals from ethics?

There is actually a shocking amount of confusion about what the distinction between morals and ethics is. Diffen.com says ethics are rules of conduct produced by an external source, while morals are an individual’s own principles of right and wrong. Grammarist.com says morals are principles on which one’s own judgement of right and wrong is based (abstract, subjective and personal), while ethics are principles of right conduct (practical, social and objective). Ian Welsh gives a soundbite: “morals are how you treat people you know. Ethics are how you treat people you don’t know.” Paul Walker and Terry Lovat say ethics leans towards decisions based on individual character and subjective understanding of right and wrong, while morality is about widely shared communal or societal norms – here ethics is individual assessment of something being good or bad, while morality is inter-subjective community assessment.

Wikipedia distinguishes between ethics as a research field and the common human ability to think critically about moral values and direct actions appropriately, or a particular person’s principles of values. Morality is the differentiation between things that are proper and improper, as well as a body of standards and principles derived from a code of conduct in some philosophy, religion or culture… or from a standard a person believes to be universal.

Dictionary.com regards ethics as a system of moral principles, the rules of conduct recognized in some human environment, an individual’s moral principles (and the branch of philosophy). Morality is about conforming to the rules of right conduct, having moral quality or character, a doctrine or system of morals and a few other meanings. The Cambridge dictionary thinks ethics is the study of what is right or wrong, or the set of beliefs about it, while morality is a set of personal or social standards for good/bad behavior and character.

And so on.

I think most people try to include the distinction between shared systems of conduct and individual codes, and the distinction between things that are subjective, socially agreed on, and maybe objective. Plus, we all agree that ethics is a philosophical research field.

My take on it

I like to think of it as an AI issue. We have a policy function π(s, a) that maps state-action pairs to a probability of acting that way; this is set using a value function Q(s) where various states are assigned values. Morality in my sense is just the policy function and maybe the value function: they have been learned through interacting with the world in various ways.

Ethics in my sense is ways of selecting policies and values. We are able to change not only how we act but also how we evaluate things, and the information that drives this change is not just reward signals that update the value function directly, but also knowledge about the world, discoveries about ourselves, and interactions with others – in particular ideas that directly change the policy and value functions.

When I realize that lying rarely produces good outcomes (too much work) and hence reduce my lying, I am doing ethics (similarly, I might be convinced of this by hearing others explain that lying is morally worse than I thought, or by being won over to Kantian ethics). I might even learn that short-term pleasure is less valuable than other forms of pleasure, changing how I view sensory rewards.
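A minimal Python sketch of this distinction (my own illustrative construction; it collapses the state dependence of Q for brevity): reward-driven updates stand in for moral learning, while a direct, idea-driven rewrite of the value function stands in for an ethical update.

```python
class Agent:
    """Toy agent: morality = current policy/values, ethics = updating them."""

    def __init__(self, actions):
        self.q = {a: 0.0 for a in actions}  # learned value estimates
        self.lr = 0.1                       # learning rate for reward updates

    def policy(self):
        # Morality in the above sense: act on current values.
        return max(self.q, key=self.q.get)

    def moral_learning(self, action, reward):
        # Ordinary learning: reward signals reshape the value function.
        self.q[action] += self.lr * (reward - self.q[action])

    def ethical_update(self, reappraisal):
        # Ethics in the above sense: ideas, arguments and self-knowledge
        # rewrite the value function directly, without any new reward.
        self.q.update(reappraisal)

agent = Agent(["lie", "tell_truth"])
agent.moral_learning("lie", reward=1.0)  # a lie that paid off once
print(agent.policy())                    # -> 'lie'

# Realizing that lying rarely produces good outcomes is an ethical update:
agent.ethical_update({"lie": -1.0, "tell_truth": 0.5})
print(agent.policy())                    # -> 'tell_truth'
```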

Academic ethics is all about the kinds of reasons and patterns we should use to update our policies and values, trying to systematize them. It shades over into metaethics, which is trying to understand what ethics is really about (and what metaethics is about: it is its own meta-discipline, unlike metaphysics that has metametaphysics, which I think is its own meta-discipline).

I do not think I will resolve any confusion, but at least this is how I tend to use the terminology. Morals is how I act and evaluate, ethics is how I update how I act and evaluate, metaethics is how I try to think about my ethics.

Doing right and feeling good

My panel at Hay-on-Wye (me, Elaine Glaser, Peter Dews and Simon Baron-Cohen) talked about compassion, the sentiment model of morality, effective altruism and how to really help the world. Now available as video!

My view is largely that moral action is strongly driven and motivated by emotions rather than reason, but outside the world of the blindingly obvious or everyday human activity our intuitions and feelings are not great guides. We do not function well morally when the numbers get too big or the cognitive biases become maladaptive. Morality may be about the heart, but ethics is in the brain.

Being reasonable

The ever readable Scott Alexander stimulated a post on Practical Ethics about defaults, status quo, and disagreements about sex. The quick of it: our culture sets defaults on who is reasonable or unreasonable when couples disagree, and these become particularly troubling when dealing with biomedical enhancements of love and sex. The defaults combine with status quo bias and our scepticism towards biomedical interventions to cause biases that can block people from certain interventions or push them towards others.

Universal principles?

I got challenged on the extropian list, which is a fun reason to make a mini-lecture.

On 2015-10-02 17:12, William Flynn Wallace wrote:
> Anders says above that we have discovered universal timeless principles. I’d like to know what they are and who proposed them, because that’s chutzpah of the highest order. Oh boy – let’s discuss that one.

Here is one: a thing is identical to itself. (1)

Here is another one: “All human beings are born free and equal in dignity and rights.” (2)

Here is a third one: “Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.” (3)

(1) was first explicitly mentioned by Plato (in Theaetetus). I think you also agree with it – things that are not identical to themselves are unlikely to even be called “things”, and without the principle very little thinking makes sense.

I am not sure whether it is chutzpah of the highest order or a very humble observation.

(2) is from the UN Universal Declaration of Human Rights. This sentence needs enormous amounts of unpacking – “free”, “equal”, “dignity”, “rights”… these words can be (and are) used in very different ways. Yet I think it makes sense to say that according to a big chunk of Western philosophy this sentence is a true sentence (in the sense that ethical propositions are true), that it is universal (the truth is not contingent on when and where you are, although the applications may change), and we know historically that we have not known this principle forever. Now *why* it is true quickly branches out into different answers depending on what metaethical positions you hold, not to mention the big topic of what kind of truth moral truth actually is (if anything). The funny thing is that the universal part is way less contentious, because of the widely accepted (and rarely stated) formal ethical principle that if it is moral to P in situation X, then the location in time and space where X happens does not matter.
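In my own ad hoc notation (nothing standard), that universality principle could be written roughly as:

```latex
% Hypothetical formalization: M(P, X) means "it is moral to do P in
% situation X", and T(X; t, x) is situation X relocated to time t and
% place x with everything morally relevant held fixed.
\forall t, x : \; M(P, X) \leftrightarrow M(P, \, T(X; t, x))
```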

Chutzpah of the highest order? Totally. So is the UN.

(3) is Immanuel Kant, and he argued that any rational moral agent could through pure reason reach this principle. It is in many ways like (1) almost a consistency requirement of moral will (not action, since he doesn’t actually care about the consequences – we cannot fully control those, but we can control what we decide to do). There is a fair bit of unpacking of the wording, but unlike the UN case he defines his terms fairly carefully in the preceding text. His principle is, if he is right, the supreme principle of morality.

Chuzpah auf höchstem Niveau? Total!

Note that (1) is more or less an axiom: there is no argument for why it is true, because there is little point in even trying. (3) is intended to be like a theorem in geometry: from some axioms and the laws of logic, we end up with the categorical imperative. It is just as audacious or normal as the Pythagorean theorem. (2) is a kind of compromise between different ethical systems: the Kantians would defend it based on their system, while consequentialists could make a rule utilitarian argument for why it is true, and contractualists would say it is true because the UN agrees on it. They agree on the mid-level meaning, but not on each other’s derivations. It is thick, messy and political, yet also represents fairly well what most educated people would conclude (of course, they would then show off by disagreeing loudly with each other about details, obscuring the actual agreement).

Philosopher’s views

Do people who think about these things actually believe in universal principles? One fun source is David Bourget and David J. Chalmers’ survey of professional philosophers (data). 56.4% of the respondents were moral realists (there are moral facts and moral values, and these are objective and independent of our views); 65.7% were moral cognitivists (ethical sentences can be true or false); the two positions were correlated at 0.562. 25.9% were deontologists, which means that they would hold somewhat Kant-like views that some actions are always or never right (some of the rest of course also believe in principles, but the survey cannot tell us anything more). 71.1% thought there was a priori knowledge (things we know by virtue of being thinking beings rather than through experience).

My views

Do I believe in timeless principles? Kind of. There are statements in physics that are invariant under translations, rotations, Lorentz boosts and other transformations, and of course math remains math. Whether physics and math are “out there” or just in minds is hard to tell (I lean towards at least physics being out there in some form), but clearly any minds that know some subset of correct, invariant physics and math can derive other correct conclusions from it. And other minds with the same information can make the same derivations and reach the same conclusions – no matter when or where. So there are knowable principles in these domains that every sufficiently informed and smart mind would know. Things get iffy with values, since they might be far more linked to the entities experiencing them, but clearly we can analyse game theory and make statements like “If agent A is trying to optimize X, agent B optimizes Y, and X and Y do not interact, then they can get more of X and Y by cooperating”. So I think we can get pretty close to universal principles in this framework, even if it turns out that they merely reside inside minds knowing about the outside world.
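As a toy illustration of that cooperation claim (the numbers and the shared fixed cost are entirely my own assumptions): if producing X or Y first requires paying some shared fixed cost, splitting that cost leaves both agents strictly better off on their own, non-interacting objective.

```python
# Toy model: A maximizes X, B maximizes Y, and the goods do not interact.
# Both must pay a fixed setup cost (e.g. building shared infrastructure)
# before any production. All numbers are made up for illustration.

HOURS = 10        # hours each agent has available
SETUP_COST = 4    # total hours the shared setup takes

# Working alone, each pays the full setup cost out of their own hours:
x_alone = HOURS - SETUP_COST      # A ends up with 6 units of X
y_alone = HOURS - SETUP_COST      # B ends up with 6 units of Y

# Cooperating, they split the setup cost between them:
x_coop = HOURS - SETUP_COST / 2   # A ends up with 8 units of X
y_coop = HOURS - SETUP_COST / 2   # B ends up with 8 units of Y

assert x_coop > x_alone and y_coop > y_alone  # both strictly gain
```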