Effective Altruism for Ghosts

Halloween is approaching, and that leads to spooky thoughts.

It is known that the dead outnumber the living by a ratio of about 13:1. Hence anything that affects the welfare of the dead can affect a large number of people, assuming that the dead are people and have welfare.

The traditional answer is to remember and honour ancestors, a near-universal practice. Assuming this improves ancestor well-being significantly, it would seem to be a very effective thing to do. Bigger, better and more frequent All Hallows' Eve and Día de los Muertos celebrations as a new cause area for philanthropists?

Not so fast. First, it is not entirely clear how much well-being is improved (cost-effectiveness may be low). More importantly, most ancestor veneration only goes back a finite number of generations. While there is some veneration of the dead in general, the focus is mostly on people who are remembered. Since cultural memory only lasts a few generations, only a fraction of the dead will benefit. Hence, at the very least, veneration of all the dead seems to scale better and treats each soul neutrally. In a prioritarian framework, veneration of the neglected dead is even more important.

However, a more serious issue is the general welfare state of the dead. If there are places of eternal punishment they are obviously major sources of disvalue (unless one thinks the punishments are just, in which case they might be positive) and should be abolished. Even improving a fairly dreary afterlife like the Greek one would seem to provide a potentially long-lasting benefit to a vast number. While clearly a neglected question, tractability appears low. Still, models ascribing near-pessimal suffering lasting eternally run into the fanaticism problem: improving this would always be the top-priority intervention, no matter how intractable. One can consider this a form of Pascal’s mugging.

Taking a longtermist perspective on the dead produces other interesting issues. Over the span of the future many people will die, producing a potentially vast number of future dead. If the dead have unlives worth living, this can become a dominant contribution to the overall good. If, on the other hand, the dead have unlives not worth living, it becomes a strong argument for either early extinction or radical life extension ensuring that future generations do not die. If the afterlife can be improved in the future, or future dead can be given unlives worth living, this can also outweigh the current issue.

One issue is whether the dead are resistant to proton decay and the heat death of the universe. If they are, and their state can be improved to be positive, then this might provide a massive existential hope.

Clearly these considerations are preliminary. We do not have a strong evidence base to estimate QAUYs (Quality-Adjusted Unlife Years) even to an order of magnitude. It is very possible that the dead have literally zero experience and well-being. But as the above considerations show, even a low credence of nonzero QAUYs provides, in expectation, a very strong reason to act in some way, if possible. Hence the value of information regarding the state of the dead is extremely high. This suggests that paranormal investigations should be regarded as a potentially valuable near-term cause area for effective altruism.
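The expected-value argument above can be made concrete with a toy calculation. Every number below is an illustrative assumption, not an estimate:

```python
# Toy expected-value sketch: even a tiny credence in the dead having
# nonzero welfare can dominate, because the population of dead is so large.
# All numbers are illustrative assumptions.

living = 8e9                     # rough current living population
dead = 13 * living               # the ~13:1 ratio mentioned above
credence_nonzero_qauy = 0.001    # assumed low credence that the dead have welfare
qauy_change_per_soul = 0.01      # assumed small improvement per soul per year

expected_qauys_per_year = dead * credence_nonzero_qauy * qauy_change_per_soul
print(f"{expected_qauys_per_year:.0f} expected QAUYs per year")  # about a million
```

Even with a one-in-a-thousand credence and a tiny per-soul effect, the sheer headcount pushes the expectation into the millions of QAUYs per year, which is exactly why the fanaticism worry bites.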

However, this might miss an even bigger opportunity: ghostly effective altruism. While dead people likely have a fairly weak ability to affect the physical world, if they have the abilities commonly ascribed to them (perceiving descendants’ lives, precognition, nudging things in an eerie way) they could, if they coordinated better, likely improve the lives of the living in many ways. Since there are many dead per living individual, that would give each living person a team that could enhance their life. Even if the past dead may not have been too effective, we should expect an increasing number of effective altruists in the afterlife. They may of course primarily choose to focus on the biggest risks, haunting nuclear weapons control systems, biowarfare labs, and sleep-depriving AI researchers whose commitment to safety is lacking.

So if you encounter something mysterious and frightening late at night, maybe it is just a nudge from the other side to increase the long-term flourishing of humanity.

Happy Halloween!

Obligatory Covid-19 blogging

SARS-CoV-2 spike ectodomain structure (open state) https://3dprint.nih.gov/discover/3DPX-013160
Over at Practical Ethics I have blogged a bit:

The Unilateralist Curse and Covid-19, or Why You Should Stay Home: why we are in a unilateralist curse situation in regards to staying home, making it rational to stay home even when it seems irrational.

Taleb and Norman had a short letter, Ethics of Precaution: Individual and Systemic Risk, making a similar point, noting that recognizing the situation type and taking contagion dynamics into account is a reason to be more cautious. It differs from our version in the effect of individual action: we had a single actor causing the full consequences, while the letter has exponential scale-up. There are also far more actors (everyone rather than just epistemic peers) and incentives that are not aligned, since actors do not bear the full costs of their actions. The problem is finding strategies robust to stupid, selfish actors. Institutional channelling of collective rationality and coordination is likely the only route to robustness here.

Never again – will we make Covid-19 a warning shot or a dud? deals with the fact that we are surprisingly good at forgetting harsh lessons (1918, 1962, Y2K, 2006…), giving us a moral duty to try to ensure appropriate collective memory of what needs to be recalled.

This is why Our World In Data, the Oxford COVID-19 Government Response Tracker and IMF’s policy responses to Covid-19 are so important. The disjointed international responses act as natural experiments that will tell us important things about best practices and the strengths/weaknesses of various strategies.

And the pedestrians are off! Oh no, that lady is jaywalking!

In 1983 Swedish Television began an all-evening entertainment program named Razzel. It was centred around the state lottery draw, with music, sketch comedy, and television series interspersed between the blocks. Yes, this was back in the day when there were two TV channels to choose from and more or less everybody watched. The ice age had just about ended.

One returning feature consisted of camera footage of a pedestrian crossing in Stockholm. A sports commentator well-known for his coverage of horse racing narrated the performance of the unknowing pedestrians as if they were competing in a race. In some cases I think he even showed up to deliver flowers to the “winner”. But you would get disqualified if you had a false start or went outside the stripes!

I suspect this feature noticeably improved traffic safety for a generation.

I was reminded of this childhood memory earlier today when discussing the use of face recognition in China to detect jaywalkers and display them on a billboard to shame them. The typical response in a western audience is fear of what looks like a totalitarian social engineering program. The glee with which many responded to the news that the system had been confused by a bus ad, putting a celebrity on the board of shame, is telling.

Is there a difference?

But compare the Chinese system to the TV program. In the Chinese case the jaywalker may be publicly shamed on the billboard… but in the cheerful 80s TV program they were shamed in front of much of the nation.

There is a greater degree of personal exposure in the Chinese case, since it also displays the jaywalker’s name, but no doubt friends and neighbours would recognize you if they saw you on TV (remember, this was back when we only had two television channels and a fair fraction of people watched TV on Friday evenings). There may also be SMS messages involved in some versions of the system. This acts differently: now it is you who gets told off when you misbehave.

A fundamental difference may be the valence of the framing. The TV show did this as happy entertainment, more of a parody of sport television than an attempt at influencing people. The Chinese system explicitly aims at discouraging misbehaviour. The TV show encouraged positive behaviour (if only accidentally).

So the dimensions here may be the extent of the social effect (locally, or nationwide), the degree the feedback is directly personal or public, and whether it is a positive or negative feedback. There is also a dimension of enforcement: is this something that happens every time you transgress the rules, or just randomly?

In terms of actually changing behaviour, making the social effect broad rather than close and personal might not have much effect: we mostly care about our standing relative to our peers, so having the entire nation laugh at you is certainly worse than your friends laughing, but still not orders of magnitude more mortifying. The personal message, on the other hand, sends a signal that you were observed; together with an expectation of effective enforcement this likely has a fairly clear deterrence effect (it is often not the size of the punishment that deters people from crime, but their expectation of getting caught). The negative stick of acting wrong and being punished is likely stronger than the positive carrot of a hypothetical bouquet of flowers.
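The point about expected enforcement is just expected-cost arithmetic. A minimal sketch, with entirely made-up numbers:

```python
# Expected penalty = probability of being caught x subjective cost of the
# punishment. Automated enforcement mainly raises the catch probability,
# not the punishment itself. All numbers are illustrative assumptions.

def expected_penalty(p_caught: float, cost: float) -> float:
    return p_caught * cost

manual = expected_penalty(p_caught=0.01, cost=10.0)     # occasional human enforcement
automated = expected_penalty(p_caught=0.95, cost=10.0)  # near-certain camera enforcement

print(manual, automated)  # 0.1 vs 9.5: same punishment, ~100x the deterrent
```

The same fine deters far more when being observed goes from a remote possibility to a near certainty, which is the panopticon effect discussed below.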

Where is the rub?

From an ethical standpoint, is there a problem here? We are subject to norm enforcement from friends and strangers all the time. What is new is the application of media and automation. They scale up the stakes and add the possibility of automated enforcement. Shaming people for jaywalking is fairly minor, but some people have lost jobs or friends, or been physically assaulted, when their social transgressions have gone viral on social media. Automated enforcement makes the panopticon effect far stronger: instead of suspecting a possibility of being observed, it is a near certainty. So the net effect is stronger, more pervasive norm enforcement…

…of norms that can be observed and accurately assessed. Jaywalking is transparent in a way being rude or selfish often isn’t. We may end up in a situation where we carefully obey some norms, not because they are the most important but because they can be monitored. I do not think there is anything in principle impossible about a rudeness detection neural network, but I suspect the error rates and lack of context sensitivity would make it worse than useless in preventing actual rudeness. Goodhart’s law may even make it backfire.

So, in the end, the problem is that automated systems encode a formalization of a social norm rather than the actual fluid social norm. Having a TV commentator narrate your actions is filtered through the actual norms of how to behave, while the face recognition algorithm looks for a pattern associated with transgression rather than actual transgression. The problem is that strong feedback may then lock in obedience to the hard-to-change formalization rather than actual good behaviour.

Arguing against killer robot janissaries

Military robot being shown to families at New Scientist Live 2017.

I have a piece in Dagens Samhälle with Olle Häggström, Carin Ism, Max Tegmark and Markus Anderljung urging the Swedish parliament to consider banning lethal autonomous weapons.

This is of course mostly symbolic; the real debate is happening right now over in Geneva at the CCW. I also participated in a round-table with the Red Cross that led to their report on the issue, which is one of the working papers presented there.

I am not particularly optimistic that we will get a ban – nor that a ban would actually achieve much. However, I am much more optimistic that this debate may force a general agreement about the importance of getting meaningful human control. This is actually an area where most military and peace groups would agree: nobody wants systems that are unaccountable and impossible to control. Making sure there are international agreements that using such systems is irresponsible and maybe even a war crime would be a big win. But there are lots of devils in the details.

When it comes to arguments for why LAWs are morally bad, I am personally not so convinced that the bad comes from a machine making the decision to kill a person. Clearly some possible machine decision-making does improve proportionality and reduce arbitrariness. Similarly, arguments about whether they would increase or reduce the risk of military action, and how this would play out in terms of human suffering and death, are interesting empirical arguments, but we should not be overconfident that we know the answers. Given that once LAWs are in use it will be hard to roll them back if the answers are bad, we might find it prudent to try to avoid them (but consider the opposing scenario where since time immemorial robots have fought our wars and somebody now suggests using humans too – there is a status quo bias here).

My main reason for being opposed to LAWs is not that they would be inherently immoral, nor that they would necessarily or even likely make war worse or more likely. My view is that the problem is that they give states too much power. Basically, they make the state’s monopoly on violence independent of the wishes of its citizens. Once a sufficiently potent LAW military (or police force) exists, it will be able to exert coercive and lethal power as ordered, without any mediation through citizens. While having humans in the army certainly doesn’t guarantee moral behaviour, if ordered to turn against the citizenry or act in a grossly immoral way they can exert moral agency and resist (with varying levels of overtness). The LAW army will instead implement the orders as long as they are formally lawful (assuming there is at least a constraint against unlawful commands). States know that if they mistreat their population too much their army might side with the population, which is a reason why some of the nastier governments make use of mercenaries or a special separate class of soldier to reduce the risk. If LAWs become powerful enough they might make dictatorships far more stable by removing a potentially risky key component of state power from internal politics.

Bans and moral arguments are unlikely to work against despots. But building a broad moral consensus on what is acceptable in war does have effects. If R&D emphasis is directed towards finding solutions for how to manage responsibility for autonomous device decisions, that will develop a lot of useful technologies for making such systems at least safer – and one can well imagine similar legal and political R&D into finding better solutions to citizen-independent state power.

In fact, far more important than LAWs is what to do about Lethal Autonomous States. Bad governance kills; many institutions/corporations/states behave just as badly as in the worst AI risk visions and have a serious value alignment problem; and we do not have great mechanisms for handling responsibility in inter-state conflicts. The UN system is a first stab at the problem, but obviously much, much more can be done. In the meantime, we can try avoiding going too quickly down a risky path while we try to find safe-making technologies and agreements.

Review of the cyborg bill of rights 1.0

The Cyborg Bill of Rights 1.0 is out. Rich MacKinnon suggests the following rights:

FREEDOM FROM DISASSEMBLY
A person shall enjoy the sanctity of bodily integrity and be free from unnecessary search, seizure, suspension or interruption of function, detachment, dismantling, or disassembly without due process.

FREEDOM OF MORPHOLOGY
A person shall be free (speech clause) to express themselves through temporary or permanent adaptions, alterations, modifications, or augmentations to the shape or form of their bodies. Similarly, a person shall be free from coerced or otherwise involuntary morphological changes.

RIGHT TO ORGANIC NATURALIZATION
A person shall be free from exploitive or injurious 3rd party ownerships of vital and supporting bodily systems. A person is entitled to the reasonable accrual of ownership interest in 3rd party properties affixed, attached, embedded, implanted, injected, infused, or otherwise permanently integrated with a person’s body for a long-term purpose.

RIGHT TO BODILY SOVEREIGNTY
A person is entitled to dominion over intelligences and agents, and their activities, whether they are acting as permanent residents, visitors, registered aliens, trespassers, insurgents, or invaders within the person’s body and its domain.

EQUALITY FOR MUTANTS
A legally recognized mutant shall enjoy all the rights, benefits, and responsibilities extended to natural persons.

As a sometime philosopher with a bit of history of talking about rights regarding bodily modification, I of course feel compelled to comment.

What are rights?

First, what is a right? Clearly anybody can state that we have a right to X, but only some agents and X-rights make sense or have staying power.

One kind of rights is legal rights of various kinds. This can be international law, national law, or even informal national codes (for example the Swedish allemansrätten, which is not a moral/human right and is actually fairly recent). Here the agent has to be some legitimate law- or rule-maker. The US Bill of Rights is an example: the result of a political process that produced legal rights, with relatively little if any moral content. Legal rights need to be enforceable somehow.

Then there are normative moral principles such as fundamental rights (applicable to a person since they are a person), natural rights (applicable because of facts of the world) or divine rights (imposed by God). These are universal and egalitarian: applicable everywhere, everywhen, and the same for everybody. Bentham famously dismissed the idea of natural rights as “nonsense on stilts” and there is a general skepticism today about rights being fundamental norms. But insofar as they do exist, anybody can discover and state them. Moral rights need to be doable.

While there may be doubts about the metaphysical nature of rights, if a society agrees on a right it will shape action, rules and thinking in an important way. It is like money: it only gets value by the implicit agreement that it has value and can be exchanged for goods. Socially constructed rights can be proposed by anybody, but they only become real if enough people buy into the construction. They might be unenforceable and impossible to perform (which may over time doom them).

What about the cyborg rights? There is no clear reference to moral principles, and only the last one refers to law. In fact, the preamble states:

Our process begins with a draft of proposed rights that are discussed thoroughly, adopted by convention, and then published to serve as model language for adoption and incorporation by NGOs, governments, and rights organizations.

That is, these rights are at present a proposal for social construction (quite literally) that hopefully will be turned into a convention (a weak international treaty) that eventually may become national law. This also fits with the proposal coming from MacKinnon rather than the Secretary-General of the UN – we can all propose social constructions and urge the creation of conventions, treaties and laws.

But a key challenge is to come up with something that can become enforceable at some point. Cyborg bodies might be more finely divisible and transparent than human bodies, so that it becomes hard to regulate these rights. How do you enforce sovereignty against spyware?

Justification

Why is a right a right? There has to be a reason for a right (typically hinted at in preambles full of “whereas…”).

I have mostly been interested in moral rights. Patrick D. Hopkins wrote an excellent overview “Is enhancement worthy of being a right?” in 2008 where he looks at how you could motivate morphological freedom. He argues that there are three main strategies to show that a right is fundamental or natural:

  1. That the right conforms to human nature. This requires showing that it fits a natural end. That is, there are certain things humans should aim for, and rights help us live such lives. This is also the approach of natural law accounts.
  2. That the right is grounded in interests. Rights help us get the kinds of experiences or states of the world that we (rightly) care about. That is, there are certain things that are good for us (e.g.  “the preservation of life, health, bodily integrity, play, friendship, classic autonomy, religion, aesthetics, and the pursuit of knowledge”) and the right helps us achieve this. Why those things are good for us is another matter of justification, but if we agree on the laundry list then the right follows if it helps achieve them.
  3. That the right is grounded in our autonomy. The key thing is not what we choose but that we get to choose: without freedom of choice we are not moral agents. Much of rights by this account will be about preventing others from restricting our choices and not interfering with their choices. If something can be chosen freely and does not harm others, it has a good chance to be a right. However, this is a pretty shallow approach to autonomy; there are more rigorous and demanding ideas of autonomy in ethics (see SEP and IEP for more). This is typically how many fundamental rights get argued (I have a right to my body since if somebody can interfere with my body, they can essentially control me and prevent my autonomy).

One can do this in many ways. For example, David Miller writes on grounding human rights that one approach is to allow people from different cultures to live together as equals, or basing rights on human needs (very similar to interest accounts), or the instrumental use of them to safeguard other (need-based) rights. Many like to include human dignity, another tricky concept.

Social constructions can have a lot of reasons. Somebody wanted something, and this was recognized by others for some reason. Certain reasons are cultural universals, and that make it more likely that society will recognize a right. For example, property seems to be universal, and hence a right to one’s property is easier to argue than a right to paid holidays (but what property is, and what rules surround it, can be very different).

Legal rights are easier. They exist because there is a law or treaty, and the reasons for that are typically a political agreement on something.

It should be noted that many declarations of rights do not give any reasons, often because we would disagree on the reasons even if we agree on the rights. The UN declaration of human rights gives no hint of where these rights come from (compare the US declaration of independence, where it is “self-evident” that the creator has provided certain rights to all men). Still, this is somewhat unsatisfactory and leaves many questions unanswered.

So, how do we justify cyborg rights?

In the liberal rights framework I used for morphological freedom we could derive things rather straightforwardly: we have a fundamental right to life, and from this follows freedom from disassembly. We have a fundamental right to liberty, and together with the right to life this leads to a right to our own bodies, bodily sovereignty, freedom of morphology and the first half of the right to organic naturalization. We have a right to our property (typically derived from fundamental rights to seek our happiness and have liberty), and from this the second half of the organic naturalization right follows (we are literally mixing ourselves rather than our work with the value produced by the implants). Equality for mutants follows from having the same fundamental rights as humans (note that the bill talks about “persons”, and most ethical arguments try to be valid for whatever entities count as persons – this tends to be more than general enough to cover cyborg bodies). We still need some justification of the fundamental rights of life, liberty and happiness, but that is outside the scope of this exercise. Just use your favorite justifications.

The human nature approach would say that cyborg nature is such that these rights fit with it. This might be tricky to use as long as we do not have many cyborgs to study the nature of. In fact, since cyborgs are imagined as self-creating (or at least self-modifying) beings it might be hard to find any shared nature… except maybe the self-creation part. As I often like to argue, this is close to Mirandola’s idea of human dignity deriving from our ability to change ourselves.

The interest approach would ask how the cyborg interests are furthered by these rights. That seems pretty straightforward for most reasonably human-like interests. In fact, the above liberal rights framework is to a large extent an interest-based account.

The autonomy account is also pretty straightforward. All cyborg rights except the last are about autonomy.

Could we skip the ethics and these possibly empty constructions? Perhaps: we could see the cyborg bill of rights as a way of making a cyborg-human society possible to live in. We need to tolerate each other and set boundaries on allowed messing around with each other’s bodies. Universals of property lead to the naturalization right, territoriality to the sovereignty right, and the universal that actions under self-control are distinguished from those not under control might be taken as the root for autonomy-like motivations that then support the rest.

Which one is best? That depends. The liberal rights/interest system produces nice modular rules, although there will be much argument about what has precedence. The human nature approach might be deep and poetic, but potentially easy to disagree on. Autonomy is very straightforward (except when the cyborg starts messing with their brain). Social constructivism allows us to bring in issues of what actually works in a real society, not just what perfect isolated cyborgs (on a frictionless infinite plane) should do.

Parts of rights

One of the cool properties of rights is that they have parts – “the Hohfeldian incidents“, after Wesley Hohfeld (1879–1918) who discovered them. He was thinking of legal rights, but this applies to moral rights too. His system is descriptive – this is how rights work – rather than explaining why they came about or whether this is a good thing. The four parts are:

Privileges (alias liberties): I have a right to eat what I want. Someone with a driver’s licence has the privilege to drive. If you have a duty not to do something, then you have no privilege regarding it.

Claims: I have a claim on my employer to pay my salary. Children have a claim vis-à-vis every adult not to be abused. My employer is morally and legally duty-bound to pay, since they agreed to do so. We are duty-bound to refrain from abusing children since it is wrong and illegal.

These two are what most talk about rights deals with. In the bill, the freedom from disassembly and freedom of morphology are about privileges and claims. The next two are a bit meta, dealing with rights over the first two:

Powers: My boss has the power to order me to research a certain topic, and then I have a duty to do it. I can invite somebody to my home, and then they have the privilege of being there as long as I give it to them. Powers allow us to change privileges and claims, and sometimes powers (an admiral can relieve a captain of the power to command a ship).

Immunities: My boss cannot order me to eat meat. The US government cannot impose religious duties on citizens. These are immunities: certain people or institutions cannot change other incidents.

These parts are then combined into full rights. For example, my property rights to this computer involve the privilege to use the computer, a claim against others to not use the computer, the power to allow others to use it or to sell it to them (giving them the entire rights bundle), and an immunity against others altering these rights. Sure, in practice the software inside is of doubtful loyalty and there are law-enforcement and emergency situation exceptions, but the basic system is pretty clear. Licence agreements typically give you a far more limited bundle of rights.
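The bundle structure described above can be sketched as a small data model. The class and field names are my own illustration, not anything from Hohfeld or the bill:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a Hohfeldian rights bundle: a full right is a
# combination of privileges, claims, powers and immunities.

@dataclass
class RightsBundle:
    holder: str
    privileges: set = field(default_factory=set)  # what the holder may do
    claims: set = field(default_factory=set)      # duties others owe the holder
    powers: set = field(default_factory=set)      # incidents the holder can alter
    immunities: set = field(default_factory=set)  # incidents others cannot alter

# The computer-ownership example from the text:
my_computer = RightsBundle(
    holder="me",
    privileges={"use the computer"},
    claims={"others must not use it"},
    powers={"lend it to others", "sell it (transferring the whole bundle)"},
    immunities={"others cannot alter these incidents"},
)
print(my_computer.holder, sorted(my_computer.privileges))
```

Selling the computer would then amount to exercising a power that transfers the entire bundle to a new holder; a licence agreement, by contrast, would hand over a bundle with most of the sets left nearly empty.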

Sometimes we speak about positive and negative rights: if I have a negative right I am entitled to non-interference from others, while a positive right entitles me to some help or goods. My right to my body is a negative right in the sense that others may not prevent me from using or changing my body as I wish, but I do not have a positive right to demand that they help me with some weird bodymorphing. However, in practice there is a lot of blending going on: public healthcare systems give us positive rights to some (but not all) treatment, policing gives us a positive right of protection (whether we want it or not). If you are a libertarian you will tend to emphasize the negative rights as being the most important, while social democrats tend to emphasize state-supported positive rights.

The cyborg bill of rights starts by talking about privileges and claims. Freedom of morphology clearly expresses an immunity to forced bodily change. The naturalization right is about immunity from unwilling change of the rights of parts, and an expression of a kind of power over parts being integrated into the body. Sovereignty is all about power over entities getting into the body.

The right of bodily sovereignty seems to imply odd things about consensual sex – once there is penetration, there is dominion. And what about entities that are partially inside the body? I think this is because it is trying to reinvent some of the above incidents. The aim is presumably to cover pregnancy/abortion, what doctors may do, and other interventions at the same time. The doctor case is easy, since it is roughly what we agree on today: we have the power to allow doctors to work on our bodies, but we can also withdraw this whenever we want.

Some other thoughts

The recent case where the police subpoenaed the pacemaker data of a suspected arsonist brings some of these rights into relief. The subpoena occurred with due process, so it was allowed by the freedom from disassembly. In fact, since it is only information and it is copied, one can argue that there was no real “disassembly”. There have been cases where police wanted bullets lodged in people in order to do ballistics on them, but US courts have generally found that bodily integrity trumps the need for evidence. Maybe one could argue for a derived right to bodily privacy, but social needs can presumably trump this just as they trump normal privacy. Right now views on bodily integrity and privacy are still based on the assumption that bodies are integral and opaque. In a cyborg world this is no longer true, and the law may well move in a more invasive direction.

“Legally recognized mutant”? What about mutants denied legal recognition? Legal recognition makes sense for things that the law must differentiate between, not for things the law is blind to. Legally recognized mutants (whatever they are) would be a group that needs to be treated in some special way. If they are just like natural humans they do not need special recognition. We may have laws making it illegal to discriminate against mutants, but this is a law about a certain kind of behaviour rather than the recipient. If I racially discriminate against somebody but happen to be wrong about their race, I am still guilty. So the legal recognition part does not do any work in this right.

And why just mutants? Presumably the aim here is to cover cyborgs, transhumans and other prefix-humans so they are recognized as legal and moral agents with the same standing. The issue is whether this is achieved by arguing that they were human and “mutated”, or are descended from humans, and hence should have the same standing, or whether this is due to them having the right kind of mental states to be persons. The first approach is really problematic: anencephalic infants are mutants but hardly persons, and basing rights on lineage seems ripe for abuse. The second is much simpler, and allows us to generalize to other beings like brain emulations, aliens, hypothetical intelligent moral animals, or the Swampman.

This links to a question that might deserve a section on its own: who are the rightsholders? Normal human rights typically deal with persons, which at least includes adults capable of moral thinking and acting (they are moral agents). Someone who is incapable, for example due to insanity or being a child, has reduced rights but is still a moral patient (someone we have duties towards). A child may not have full privileges and powers, but they do have claims and immunities. I like to argue that once you can comprehend and make use of a right you deserve to have it, since you have capacity relative to the right. Some people also think prepersons like fertilized eggs are persons and have rights; I think this does not make much sense since they lack any form of mind, but others think that having the potential for a future mind is enough to grant immunity. Tricky border cases like persistent vegetative states, cryonics patients, great apes and weird neurological states keep bioethicists busy.

In the cyborg case the issue is what properties make something a potential rightsholder and how to delineate the border of the being. I would argue that if you have a moral agent system it is a rightsholder no matter what it is made of. That is fine, except that cyborgs might have interchangeable parts: if cyborg A gives her arm to cyborg B, has anything changed? I would argue that the arm switched from being a part of/property of A to being a part of/property of B, but the individuals did not change, since the parts that make them moral agents are unchanged (this is just like how transplants do not change identity). But what if A gave part of her brain to B? A turns into A’, B turns into B’, and these may be new agents. Or what if A has outsourced a lot of her mind to external systems running in the cloud or in B’s brain? We may still argue that rights adhere to being a moral agent and person rather than being the same person, or a person that can easily be separated from other persons or infrastructure. But clearly we can make things really complicated through overlapping bodies and minds.

Summary

I have looked at the cyborg bill of rights and how it fits with rights in law, society and ethics. Overall it is a first stab at establishing social conventions for enhanced, modular people. It likely needs a lot of tightening up to work, and people need to actually understand and care about its contents for it to have any chance of becoming something legally or socially “real”. From an ethical standpoint one can motivate the bill in a lot of ways; for maximum acceptance one needs to use a wide and general set of motivations, but these will lead to trouble when we try to implement things practically, since they give no way of trading one off against another in a principled way. There is a fair bit of work needed to refine the incidents of the rights, not to mention who is a rightsholder (and why). That will be fun.

AI, morality, ethics and metaethics

Next Sunday I will be debating AI ethics at Battle of Ideas. Here is a podcast where I talk AI, morality and ethics: https://soundcloud.com/institute-of-ideas/battle-cry-anders-sandberg-on-ethical-ai

What distinguishes morals from ethics?

There is actually a shocking confusion about what the distinction between morals and ethics is. Diffen.com says ethics is about rules of conduct produced by an external source while morals are an individual’s own principles of right and wrong. Grammarist.com says morals are principles on which one’s own judgement of right and wrong are based (abstract, subjective and personal), while ethics are the principles of right conduct (practical, social and objective). Ian Welsh gives a soundbite: “morals are how you treat people you know. Ethics are how you treat people you don’t know.” Paul Walker and Terry Lovat say ethics leans towards decisions based on individual character and subjective understanding of right and wrong, while morals is about widely shared communal or societal norms – here ethics is individual assessment of something being good or bad, while morality is inter-subjective community assessment.

Wikipedia distinguishes between ethics as a research field and the common human ability to think critically about moral values and direct actions appropriately, or a particular person’s principles and values. Morality is the differentiation between things that are proper and improper, as well as a body of standards and principles derived from a code of conduct in some philosophy, religion or culture… or derived from a standard a person believes to be universal.

Dictionary.com regards ethics as a system of moral principles, the rules of conduct recognized in some human environment, an individual’s moral principles (and the branch of philosophy). Morality is about conforming to the rules of right conduct, having moral quality or character, a doctrine or system of morals and a few other meanings. The Cambridge dictionary thinks ethics is the study of what is right or wrong, or the set of beliefs about it, while morality is a set of personal or social standards for good/bad behavior and character.

And so on.

I think most people try to include the distinction between shared systems of conduct and individual codes, and the distinction between things that are subjective, socially agreed on, and maybe objective. Plus, we all agree that ethics is a philosophical research field.

My take on it

I like to think of it as an AI issue. We have a policy function \pi(s,a) that maps state-action pairs to a probability of acting that way; this is set using a value function Q(s) where various states are assigned values. Morality in my sense is just the policy function and maybe the value function: they have been learned through interacting with the world in various ways.

Ethics in my sense is ways of selecting policies and values. We are able to not only change how we act but also how we evaluate things, and the information that does this change is not just reward signals that update value function directly, but also knowledge about the world, discoveries about ourselves, and interactions with others – in particular ideas that directly change the policy and value functions.

When I realize that lying rarely produces good outcomes (too much work) and hence reduce my lying, then I am doing ethics (similarly, I might be convinced about this by hearing others explain that lying is morally worse than I thought or convincing me about Kantian ethics). I might even learn that short-term pleasure is less valuable than other forms of pleasure, changing how I view sensory rewards.
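The lying example can be put in code. Here is a minimal sketch of the distinction in this RL framing: the policy acts greedily on learned values (“morality”), ordinary reward signals tune those values, and “ethics” rewrites a value directly on the strength of an argument or realization. All state and action names are illustrative assumptions, and for simplicity the values live on state-action pairs rather than states alone.

```python
class Agent:
    """Toy agent: 'morality' is the learned policy/value function,
    'ethics' is the machinery that rewrites those values directly."""

    def __init__(self):
        # Q maps (state, action) pairs to learned values -- the "morality".
        # The states, actions and numbers below are purely illustrative.
        self.Q = {
            ("asked_question", "lie"): 0.6,
            ("asked_question", "tell_truth"): 0.5,
        }

    def policy(self, state):
        """pi(s, a): act greedily on current values (a softmax would also work)."""
        actions = [(a, q) for (s, a), q in self.Q.items() if s == state]
        return max(actions, key=lambda pair: pair[1])[0]

    def reward_update(self, state, action, reward, lr=0.1):
        """Ordinary value learning: nudge Q towards an experienced reward."""
        self.Q[(state, action)] += lr * (reward - self.Q[(state, action)])

    def ethical_update(self, state, action, new_value):
        """'Ethics': revise a value directly on the strength of an argument
        or a discovery about the world, not via a reward signal."""
        self.Q[(state, action)] = new_value


agent = Agent()
print(agent.policy("asked_question"))  # lie: currently the highest-valued action

# Realizing that lying rarely produces good outcomes, the agent rewrites
# the value directly rather than waiting for punishments to accumulate.
agent.ethical_update("asked_question", "lie", 0.1)
print(agent.policy("asked_question"))  # tell_truth
```

The design choice worth noting is that `reward_update` and `ethical_update` are separate channels into the same Q-table: slow experiential learning versus a one-shot revision by reasoning, which is exactly the morals/ethics split proposed above.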

Academic ethics is all about the kinds of reasons and patterns we should use to update our policies and values, trying to systematize them. It shades over into metaethics, which is trying to understand what ethics is really about (and what metaethics itself is about: metaethics is its own meta-discipline, unlike metaphysics, which has metametaphysics, itself arguably its own meta-discipline).

I do not think I will resolve any confusion, but at least this is how I tend to use the terminology. Morals is how I act and evaluate, ethics is how I update how I act and evaluate, metaethics is how I try to think about my ethics.

Doing right and feeling good

My panel at Hay-on-Wye (me, Elaine Glaser, Peter Dews and Simon Baron-Cohen) talked about compassion, the sentiment model of morality, effective altruism and how to really help the world. Now available as video!

My view is largely that moral action is strongly driven and motivated by emotions rather than reason, but outside the world of the blindingly obvious or everyday human activity our intuitions and feelings are not great guides. We do not function well morally when the numbers get too big or the cognitive biases become maladaptive. Morality may be about the heart, but ethics is in the brain.

Bring back the dead

I recently posted a brief essay on The Conversation about the ethics of trying to regenerate the brains of brain dead patients (earlier version posted later on Practical Ethics). Tonight I am giving interviews on BBC World radio about it.

The gist of it is that it will mess with our definitions of who happens to be dead, but that is mostly a matter of sorting out practice and definitions. It is also somewhat questionable who is benefiting: the original patient is unlikely to recover, but we might get a moral patient we need to care for even if they are not a person, or even a different person (or, most likely, just generally useful medical data but no surviving patient at all). The problem is that partial success might be worse than no success. But the only way of knowing is to try.

Limits of morphological freedom

My talk “Morphological freedom: what are the limits to transforming the body?” was essentially a continuation of my original morphological freedom talk from 2001. Now with added philosophical understanding and linking to some of the responses to the original paper. Here is a quick summary:

Enhancement and extensions

I began with a few cases: Liz Parrish self-experimenting with gene therapy to slow her ageing, Paul Erdős using drugs for cognitive enhancement, Todd Huffman exploring the realm of magnetic vision using an implanted magnet, Neil Harbisson gaining access to the realm of color using sonification, Stelarc doing body modification and extension as performance art, and Erik “The Lizardman” Sprague transforming into a lizard as an existential project.

It is worth noting that several of these are not typical enhancements amplifying an existing ability, but about gaining access to entirely new abilities (I call it “extension”). Their value is not instrumental, but lies in the exploration or self-transformation. They are by their nature subjective and divergent. While some argue enhancements will by their nature be convergent (I disagree), extensions definitely go in all directions – and in fact gain importance from being different.

Morphological freedom and its grounding

Morphological freedom, “The right to modify one’s body (or not modify) according to one’s desires”, can be derived from fundamental rights such as the right to life and the right to pursue happiness. If you are not free to control your own body, your right to life and freedom are vulnerable and contingent: hence you need to be allowed to control your body. But I argue this includes a right to change the body: morphological freedom.

One can argue about what rights are, or whether they exist. If there are such things, there is however a fair consensus that life and liberty are on the list. Similarly, morphological freedom seems to be so intrinsically tied together with personhood that it becomes inalienable: you cannot remove it from a person without removing an important aspect of what it means to be a person.

These arguments are about fundamental rights rather than civil and legal rights: while I think we should make morphological freedom legally protected, I do think there is more to it than just mutual agreement. Patrick Hopkins wrote an excellent paper analysing how morphological freedom could be grounded. He argued that there are three primary approaches: grounding it in individual autonomy, in human nature, or in human interests. Autonomy is very popular, but Hopkins thinks much of current discourse is a juvenile “I want to be allowed to do what I want” autonomy rather than the more rational or practical concepts of autonomy in deontological or consequentialist ethics. One pay-off is that these concepts do imply limits on using morphological freedom to undermine one’s own autonomy. Grounding in human nature requires a view of human nature. Transhumanists and many bioconservatives actually find themselves allies against the relativists and constructivists who deny any such nature: they merely disagree on what the sacrosanct parts of that nature are (and these define limits of morphological freedom). Transhumanists think most proposed enhancements are outside these parts; the conservatives think they cover nearly any enhancement. Finally, grounding in what makes humans truly flourish again produces some ethically relevant limits. However, the interest account has trouble with extensions: at best it can argue that we need exploration or curiosity.

One can motivate morphological freedom in many other ways. One is that we need to explore: both because there may be posthuman modes of existence of extremely high value, and because we cannot know the value of different changes without trying them – the world is too complex to be reliably predicted, and many valuable things are subjective in nature. One can also argue we have some form of duty to approach posthumanity, because this approach is intrinsically or instrumentally important (consider a transhumanist reading of Nietzsche, or cosmist ideas). This approach typically seems to require some non-person-affecting value. Another approach is to argue that morphological freedom is socially constructed within different domains; we have one kind of freedom in sport, another one in academia. I am not fond of this approach since it does not explain how to handle the creation of new domains or what to do between domains. Finally, there is the virtue approach: self-transformation can be seen as a virtue. By this perspective we are not only allowed to change ourselves, we ought to, since it is part of human excellence and authenticity.

Limits

Limits to morphological freedom can be roughly categorized as practical/prudential, issues of willingness to change/identity, the ethical limits, and the social limits.

Practical/prudential limits

Safety is clearly a constraint. If an enhancement is too dangerous, then the risk outweighs the benefit and it should not be done. This is tricky to evaluate for more subjective benefits. The real risk boundary might not be a risk/benefit trade-off, but whether risk is handled in a responsible manner. The difference between being a grinder and doing self-harm consists in whether one takes precautions and regards pain and harm as problems rather than the point of the exercise.

There are also obvious technological and biological limits. I did not have the time to discuss them, but I think one can use heuristics like the evolutionary optimality challenge to make judgements about feasibility and safety.

Identity limits

Even in a world where anything could be changed with no risk, economic cost or outside influence, it is likely that many traits would remain stable. We express ourselves through what we transform ourselves into, and this implies that we will not change what we consider to be prior to that. The Riis, Simmons and Goodwin study showed that surveyed students were much less willing to enhance traits that were regarded as more relevant to personal identity than peripheral traits. Rather than “becoming more than you are”, the surveyed students were interested in being who they are – but better at it. Morphological freedom may hence be strongly constrained by the desire to maintain a variant of the present self.

Ethical limits

Beside the limits coming from the groundings discussed above, there are the standard constraints of not harming or otherwise infringing on the rights of others, capacity (what do we do about prepersons, children or the deranged?) and informed consent. The problem here is not any disagreement about the existence of the constraints, but where they actually lie and how they actually play out.

Social limits

There are obvious practical social limits for some morphological freedom. Becoming a lizard affects your career choices and how people react to you – the fact that maybe it shouldn’t does not change the fact that it does.

There are also constraints from externalities: morphological freedom should not unduly externalize its costs on the rest of society.

My original paper has received a certain amount of flak from the direction of disability rights, since I argued morphological freedom is a negative right. You have a right to try to change yourself, but I do not need to help you – and vice versa. The criticism is that this is ableist: for it to be a true right, there must be social support for achieving the inherent freedom. To some extent my libertarian leanings made me favour a negative right, but it was also the less radical choice: I am actually delighted that others think we need to reshape society to help people self-transform, a far more radical view. I have some misgivings about the politics of this: prioritization tends to be a nasty business, it means that costs will be socially externalized, and in the literature there seem to be some odd views about who gets to say which bodies are authentic or not. But I am all in favour of a “commitment to the value, standing, and social legibility of the widest possible (and an ever-expanding) variety of desired morphologies and lifeways.”

Another interesting discourse has been about the control of the body. While medicine has done much work to normalize the body (slowly shifting towards achieving functioning in one’s own life), in science the growth of ethics review has put more and more control in the hands of appointed experts, while in performance art almost anything goes (and attempts to control it would be censorship). As Goodall pointed out, many of the body-oriented art pieces are as much experiments in ethics as they are artistic experiments. They push the boundaries in important ways.

Touch the limits

In the end, I think this is an important realization: we do not fully know the moral limits of morphological freedom. We should not expect all of them to be knowable through prior reasoning. This is a domain where much is unknown and hard for humans to reason about. Hence we need experiments and exploration to learn them. We should support this exploration since there is much of value to be found, and because it embodies much of what humanity is about. Even when we do not know it yet.