Ethics of brain emulations, New Scientist edition

I have an opinion piece in New Scientist about the ethics of brain emulation. The content is similar to what I was talking about at IJCNN and in my academic paper (and the comic about it). Here are a few things that did not fit the text:

Ethics that got left out

Due to length constraints I had to cut the discussion about why animals might be moral patients. That made the essay look positively Benthamite in its focus on pain. In fact, I am agnostic on whether experience is necessary for being a moral patient. Here is the cut section:

Why should we care about how real animals are treated? Different philosophers have given different answers. Immanuel Kant did not think animals matter in themselves, but our behaviour towards them matters morally: a human who kicks a dog is cruel and should not do it. Jeremy Bentham famously argued that what matters is not thinking but the capacity to suffer: “…the question is not, Can they reason? nor, Can they talk? but, Can they suffer?”. Other philosophers have argued that it matters that animals experience being subjects of their own life, with desires and goals that make sense to them. While there is a fair bit of disagreement about what this means for our responsibilities to animals and what we may use them for, there is widespread agreement that they are moral patients, something we ought to treat with some kind of care.

This is of course a super-quick condensation of a debate that fills bookshelves. It also leaves out Christine Korsgaard’s interesting Kantian work on animal rights, which as far as I can tell does not need to rely on particular accounts of consciousness and pain but rather on interests. Most people would say that without consciousness or experience there is nobody who is harmed, but I am not entirely certain unconscious systems cannot be regarded as moral patients. There are, for example, people working in environmental ethics who ascribe moral patient-hood and partial rights to species or natural environments.

Big simulations: what are they good for?

Another interesting thing that had to be left out is comparisons of different large scale neural simulations.

(I am a bit uncertain about where the largest model in the Human Brain Project is right now; they are running more realistic models, so they will be smaller in terms of neurons. But they clearly have the ambition to best the others in the long run.)

Of course, one can argue about which approach matters most. Spaun is a model of cognition using low-resolution neurons, while the slightly larger (in neuron count) simulation from the Lansner lab was just a generic piece of cortex, showing some non-trivial alpha and gamma rhythms, and the even larger ones showed some interesting emergent behavior despite the lack of biological complexity in their neurons. Conversely, Cotterill’s CyberChild, which I worry about in the opinion piece, had just 21 neurons in each region, but they formed a fairly complex network spanning many brain regions that is in a sense more meaningful as an organism than the near-disembodied problem-solver Spaun. Meanwhile SpiNNaker is running rings around the others in terms of speed, essentially running in real time while the others suffer slowdowns by a factor of a thousand or worse.

The core of the matter is defining what one wants to achieve. Lots of neurons, biological realism, non-trivial emergent behavior, modelling a real neural system, purposeful (even conscious) behavior, useful technology, or scientific understanding? Brain emulation aims at getting purposeful, whole-organism behavior from running a very large, very complete, biologically realistic simulation. Many robotics and AI people are happy without the biological realism and would prefer as small a simulation as possible. Neuroscientists and cognitive scientists care about what they can learn and understand from the simulations, rather than their completeness. Each field is pursuing something useful, but the aims are very different. As long as they remember that the others are not pursuing the same aim, they can get along.

What I hope: more honest uncertainty

What I hope happens is that computational neuroscientists think a bit about the issue of suffering (or moral patient-hood) in their simulations rather than slip into the comfortable “It is just a simulation, it cannot feel anything” mode of thinking by default.

It is easy to tell oneself that simulations do not matter, partly because we know how they work when we make them (giving us the illusion that we know everything there is to know about the system – obviously not true, since we at least need to run them to see what happens), and partly because it is institutionally easier to regard them as non-problems in terms of workload, conflicts and complexity (let’s not rock the boat at the planning meeting, right?). And once something is in the “does not matter morally” category it becomes painful to move it out – many will now be motivated to keep it there.

I would rather have people keep an open mind about these systems. We do not understand experience. We do not understand consciousness. We do not understand brains and organisms as wholes, and there is much we do not understand about the parts either. We do not have agreement on moral patient-hood. Hence the rational thing to do, even when one is pretty committed to a particular view, is to be open to the possibility that it might be wrong. The rational response to this uncertainty is to get more information if possible, to hedge our bets, and to try to avoid actions we might regret in the future.

The limits of the in vitro burger

Stepping on toes everywhere in our circles, Ben Levinstein and I have a post at Practical Ethics about the limitations of in vitro meat for reducing animal suffering.

The basic argument is that while factory farming produces a lot of suffering, a post-industrial world would likely have very few lives of the involved species. It would be better if they had better lives and larger populations instead. So, at least on some consequentialist views, the ethical good of in vitro meat is reduced from a clear win to possibly even a second best to humane farming.

An analogy can be made with horses, whose population has declined precipitously since the pre-tractor, pre-car days. Current horses live (I guess) nicer lives than the more work-oriented horses of 1900, but there are far fewer of them. So the current 3 million horses in the US might have lives (say) twice as good as those of the 25 million horses in the 1920s: the total value has still declined. However, factory-farmed animals may have lives that are not worth living, holding negative value. If we assume the roughly 50 billion chickens in the world all have lives of value -1 each, then replacing them with in vitro meat would make the world 50 billion units better. But this could also be achieved by making their lives one unit better (and why stop there? maybe they could get two units more). Whether it matters how many entities are experiencing depends on your approach, as does whether it is an extra value to have a chicken species around rather than not.
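For those who like the arithmetic explicit, here is the toy calculation; every number is the illustrative guess from the text, not data:

```python
# Toy total-welfare arithmetic from the text (illustrative guesses, not data).

def total_welfare(population, value_per_life):
    return population * value_per_life

# Horses: nicer lives today, but far fewer of them.
horses_1920 = total_welfare(25_000_000, 1.0)   # baseline life value 1
horses_now = total_welfare(3_000_000, 2.0)     # twice as good per life
assert horses_now < horses_1920                # total value still declined

# Chickens: 50 billion lives of value -1 each.
chickens = 50_000_000_000
baseline = total_welfare(chickens, -1.0)
in_vitro = total_welfare(0, 0.0)               # replace them all with in vitro meat
better_lives = total_welfare(chickens, 0.0)    # or make each life 1 unit better
# Both interventions improve the world by the same 50 billion units:
assert in_vitro - baseline == better_lives - baseline == chickens
```

The whole disagreement between total-value views and per-capita views sits in that last assertion: the interventions tie on total welfare, but differ completely in how many experiencers are left.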

Now, I am not too troubled by this since I think in vitro meat is also very good from a health perspective, a climate perspective, and an existential risk reduction perspective (it is good for space colonization and for survival if sunlight is interrupted). But I think most people come to in vitro meat from an ethical angle. And given just that perspective, we should not be too complacent that in the future we will become postagricultural: it may take time, and it might not increase total welfare as much as we expected.

 

The moral responsibility of office software

On Practical Ethics Ben and I blog about user design ethics: when you make software that a lot of people use, even tiny flaws such as delays add up to significant losses when summed over all users, and affordances can entice many people to do the wrong thing. So be careful and perfectionist!

This is in many ways the fundamental problem of the modern era. Since successful things get copied into millions or billions, the impact of a single choice can become tremendous. One YouTube clip or one tweet, and suddenly the attention of millions of people will descend on someone. One bug, and millions of computers are vulnerable. A clever hack, and suddenly millions can do it too.

We ought to be far more careful, yet that is hard to square with a free life. Most of the time, it also does not matter since we get lost in the noise with our papers, tweets or companies – the logic of the power law means the vast majority will never matter even a fraction as much as the biggest.

Ethics for neural networks

I am currently attending IJCNN 2015 in Killarney. Yesterday I gave an invited talk, “Ethics and large-scale neural networks: when do we need to start caring for neural networks, rather than about them?” The bulk of the talk was based on my previous WBE ethics paper, looking at the reasons we cannot be certain whether neural networks have experience, leading to my view that we hence ought to handle them with the same care as the biological originals they mimic. Yup, it is the one T&F made a lovely comic about – which incidentally gave me an awesome poster at the conference.

When I started, I looked a bit at ethics in neural network science/engineering. As I see it, there are three categories of ethical issues specific to the topic rather than being general professional ethics issues:

  • First, the issues surrounding applications such as privacy, big data, surveillance, killer robots etc.
  • Second, the issue that machine learning allows machines to learn the wrong things.
  • Third, machines as moral agents or patients.

The first category is important, but I leave that for others to discuss. It is not necessarily linked to neural networks per se, anyway. It is about responsibility for technology and what one works on.

Learning wrong

The second category is fun. Learning systems are not fully specified by their creators – which is the whole point! This means that their actual performance is open-ended (within the domain of possible responses). And from that follows that they can learn things we do not want.

One example is inadvertent discrimination, where the network learns something that would be called racism, sexism or something similar if it happened in a human. Consider a credit-rating neural network trained on customer data to estimate the probability of a customer defaulting. It may develop an internal representation that is activated by a customer’s race and linked to a negative evaluation of the rating. There is no deliberate programming of racism, just something that emerges from the data – where the race:economy link may well be due to factors in society that are structurally racist.
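The mechanism can be sketched in a few lines (synthetic data, and a deliberately crude stand-in for the trained network; all numbers are invented for illustration):

```python
# Minimal synthetic sketch: a model that never sees the protected attribute
# can still disadvantage one group through a correlated proxy.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)            # protected attribute, hidden from the model
# Structural inequality baked into the data: group 1 has lower income on average.
income = rng.normal(50 - 10 * group, 5, n)

# Stand-in for a trained network: score customers by income decile and
# reject the lowest deciles as bad credit risks.
deciles = np.quantile(income, np.linspace(0.1, 0.9, 9))
reject = np.digitize(income, deciles) < 3

# The "race-blind" model nonetheless rejects group 1 far more often.
rate0 = reject[group == 0].mean()
rate1 = reject[group == 1].mean()
assert rate1 > rate0
```

A real network would learn a subtler internal representation than an income decile, but the logic is the same: if the data encode a structural correlation, the model will pick it up without anyone programming it in.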

A similar, real case is advertising algorithms selecting ads online in ways that show some ads to some groups but not others – which, in the case of education, may serve to perpetuate disadvantages or prejudices.

A recent example was the Google Photo captioning system, which captioned a black couple as gorillas. Obvious outrage ensued, and a Google representative tweeted that this was “high on my list of bugs you *never* want to see happen ::shudder::”. The misbehaviour was quickly fixed.

Mislabelling somebody or something else might merely have been amusing: calling some people gorillas will often be met with laughter. But it becomes charged and ethically relevant in a culture like the current American one. The recognition algorithm knows nothing of this: from its perspective mislabelling chairs is as bad as mislabelling humans. Adding a culturally sensitive loss function to the training is nontrivial. Ad hoc corrections against particular cases – like this one – only help once a scandalous mislabelling has already occurred: we will not know what counts as misbehaviour until we see it.

[ Incidentally, this suggests a way for automatic insult generation: use computer vision to find matching categories, and select the one that is closest but has the lowest social status (perhaps detected using sentiment analysis). It will be hilarious for the five seconds until somebody takes serious offence. ]

It has been suggested that the behavior was due to training data being biased towards white people, making the model subtly biased. If there are few examples of a category it might be suppressed or overused as a response. This can be very hard to fix, since many systems and data sources have a patchy spread in social space. But maybe we need to pay more attention to the issue of whether data is socially diverse enough. It is worth recognizing that since a machine learning system may be used by very many users once it has been trained, it has the power to project its biased view of the world to many: getting things right in a universal system, rather than something used by a few, may be far more important than it looks. We may also have to have enough online learning over time so such systems update their worldview based on how culture evolves.

Moral actors, proxies and patients

Making machines that act in a moral context is even iffier.

My standard example is of course the autonomous car, which may find itself in situations that would count as moral choices for a human. Here the issue is who sets the decision scheme: presumably they would be held accountable insofar as they could predict the consequences of their code or be identified. I have argued that it is good to have the car try to behave as its “driver” would, but it will still be limited by the sensory and cognitive abilities of the vehicle. Moral proxies are doable, even if they are not moral agents.

The manufacture and behavior of killer robots is of course even more contentious. Even if we think they can be acceptable in principle and have a moral system that we think would be the right one to implement, actually implementing it for certain may prove exceedingly hard. Verification of robotics is hard; verification of morally important actions based on real-world data is even worse. And one cannot shirk the responsibility to do so if one deploys the system.

Note that none of this presupposes real intelligence or truly open-ended action abilities. They just make an already hard problem tougher. Machines that can only act within a well-defined set of constraints can be further constrained to not go into parts of state- or action-space we know are bad (but as discussed above, even captioning images is a sufficiently big space that we will find surprise bad actions).

As I mentioned above, the bulk of the talk was my argument that whole brain emulation attempts can produce systems we have good reasons to be careful with: we do not know if they are moral agents, but they are intentionally architecturally and behaviourally close to moral agents.

A new aspect I got the chance to discuss is the problem about non-emulation neural networks. When do we need to consider them? Brian Tomasik has written a paper about whether we should regard reinforcement learning agents as moral patients (see also this supplement). His conclusion is that these programs mimic core motivation/emotion cognitive systems that almost certainly matter for real moral patients’ patient-hood (an organism without a reward system or learning would presumably lose much or all of its patient-hood), and there is a nonzero chance that they are fully or partially sentient.

But things get harder for other architectures. A deep learning network with a purely feedforward architecture is presumably unable to be conscious, since many theories of consciousness presuppose some form of feedback – and that is not possible in such an architecture. But at the conference there have been plenty of recurrent networks with all sorts of feedback. Whether they can have experiential states appears tricky to answer. In some cases we may argue they are too small to matter, but again we do not know whether level of consciousness (or moral considerability) has to follow brain size.
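The architectural distinction at issue can be shown in miniature (toy numpy sketch, not any particular published model):

```python
# Feedforward vs recurrent, in miniature: a feedforward pass has no way to
# feed activity back, while a recurrent step maintains state that does.
import numpy as np

rng = np.random.default_rng(3)
W_in = rng.normal(size=(4, 4))
W_rec = rng.normal(size=(4, 4)) * 0.1     # feedback weights (kept small)

def feedforward(x):
    return np.tanh(x @ W_in)              # output depends on the input only

def recurrent(x, steps=5):
    h = np.zeros(4)
    for _ in range(steps):                # the state feeds back into itself
        h = np.tanh(x @ W_in + h @ W_rec)
    return h

x = rng.normal(size=4)
# The recurrent net's state depends on its own history; the feedforward
# net has no history to depend on.
assert not np.allclose(recurrent(x, steps=1), recurrent(x, steps=5))
```

Theories that tie consciousness to feedback (e.g. recurrent processing or global workspace accounts) draw the line exactly at that loop: the first function cannot satisfy them even in principle, while the second at least has the right shape.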

They also inhabit a potentially alien world where their representations could be utterly unrelated to what we humans understand or can express. One might say, paraphrasing Wittgenstein, that if a neural network could speak we would not understand it. However, there might be ways of making their internal representations less opaque. Methods such as inceptionism, deep visualization, or t-SNE can actually help discern some of what is going on inside. If we were to discover a set of concepts similar to human or animal concepts, we may have reason to tread a bit more carefully – especially if there were concepts linked to some of them in the same way “suffering concepts” may be linked to other concepts. This looks like a very relevant research area, both for debugging our learning systems and for mapping out the structures of animal, human and machine minds.
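As a minimal illustration of the idea, one can project a network's hidden activations to two dimensions and look for structure. Here PCA stands in for t-SNE to keep the sketch dependency-free, and the "network" is just a random projection; everything is invented for illustration:

```python
# Peeking at internal representations: embed hidden activations in 2-d and
# check whether distinct stimulus classes form distinct internal clusters.
import numpy as np

rng = np.random.default_rng(1)
# Fake inputs from two classes of stimuli (50 samples each, 20-d).
inputs = np.vstack([rng.normal(0, 1, (50, 20)), rng.normal(3, 1, (50, 20))])
W = rng.normal(0, 1, (20, 100))
hidden = np.tanh(inputs @ W)              # hidden-layer activations, 100-d

# PCA via SVD: project the activations onto their top two components.
centered = hidden - hidden.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
embedding = centered @ vt[:2].T           # shape (100, 2)

# If the network has formed distinct internal "concepts" for the two
# stimulus classes, their centroids separate in the embedding.
d0 = embedding[:50].mean(axis=0)
d1 = embedding[50:].mean(axis=0)
assert np.linalg.norm(d0 - d1) > 1.0
```

The interesting (and hard) step this sketch skips is the interpretive one: deciding whether a cluster in such an embedding corresponds to anything like a human concept, let alone a suffering-related one.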

In the end, if we want safe and beneficial smart systems, we better start figuring out how to understand them better.

Baby interrupted

Francesca Minerva and I have a new paper out: Cryopreservation of Embryos and Fetuses as a Future Option for Family Planning Purposes (Journal of Evolution and Technology – Vol. 25 Issue 1 – April 2015 – pgs 17-30).

Basically, we analyse the ethics of cryopreserving fetuses, especially as an alternative to abortion. While we do not yet have any technological means to bring a separated (let alone cryopreserved) fetus to term, it is not inconceivable that advances in ectogenesis (artificial wombs) or the biotechnological production of artificial placentas allowing reimplantation could be achieved. And a cryopreserved fetus would have all the time in the world, just like an adult cryonics patient.

It is interesting to see how many of the standard ethical arguments against abortion fare when dealing with cryopreservation. There is no killing, personhood is not affected, there is no loss of value of the future – just a long delay. One might be concerned that fetuses would not be reimplanted but just left in limbo forever, but clearly this is a better state than being irreversibly aborted: cryopreservation can (eventually) be reversed. I think our paper shows that (regardless of what one thinks of cryonics) the irreversibility is the key ethical issue in abortion.

In the end, it will likely take a long time before this is a viable option. But it seems that there are good reasons to consider cryopreservation and reimplantation of fetuses: animal husbandry, space colonisation, various medical treatments (consider “interrupting” an ongoing pregnancy because the mother needs cytostatic treatment), and now this family planning reason.

Crispy embryos

Researchers at Sun Yat-sen University in Guangzhou have edited the germline genome of human embryos (paper). They used the ever more popular CRISPR/Cas9 method to try to modify the gene involved in beta-thalassaemia in non-viable leftover embryos from a fertility clinic.

As usual there is a fair bit of handwringing, especially since there was a recent call for a moratorium on this kind of thing from one set of researchers, and a more liberal (yet cautious) response from another set. As noted by ethicists, many of the ethical concerns are actually somewhat confused.

That germline engineering can have unpredictable consequences for future generations is just as true of normal reproduction. More strongly, somebody making the case that (say) race mixing should be hindered because of unknown future effects would be condemned as a racist: we have overarching reasons to let people live and procreate freely that morally overrule worries about their genetic endowment – even if there actually were genetic issues (as far as I know all branches of the human family are equally interfertile, but this might just be a historical contingency). For a possible future effect to matter morally it needs to be pretty serious, and we need to have some real reason to think it is more likely to happen because of the actions we take now. A vague unease or a mere possibility is not enough.

However, the paper actually gives a pretty good argument for why we should not try this method in humans. They found that the efficiency of the repair was about 50%, but more worryingly that there were off-target mutations and that a similar gene was accidentally modified. These are good reasons not to try it. Not unexpected, but very helpful in that we can actually make informed decisions both about whether to use it (clearly not until the problems have been fixed) and what needs to be investigated (how can it be done well? why did it work worse here than advertised?).

The interesting thing about the paper is that these fairly negative results, which should reduce interest in human germline changes, are nevertheless hailed as unethical. It is hard to make this claim stick unless one buys into the view that germline modification of human embryos is intrinsically bad. The embryos could not have developed into persons and would have been discarded by the fertility clinic, so no possible future person was harmed (if one thinks fertilized but non-viable embryos deserve moral protection, one has other big problems). The main fear seems to be that if the technology is demonstrated many others will follow, but an early negative result would seem to undercut this slippery-slope argument.

I think the real reason people think there is an ethical problem is the association of germline engineering with “designer babies”, and the conditioning that designer babies are wrong. But they cannot be wrong for no reason: there has to be an ethical argument for their badness. There is no shortage of such arguments in the literature, ranging from ideas of the natural order, human dignity, accepting the given, and the importance of an open-ended life, to issues of equality, just to mention a few. But none of these is widely accepted as a slam-dunk argument that conclusively shows designer babies are wrong: each also has vigorous criticisms. One can believe one or more of them to be true, but it would be rather premature to claim that settles the debate. And even then, most of these designer baby arguments are irrelevant for the case at hand.

All in all, it was a useful result that probably will reduce both risky and pointless research and focus on what matters. I think that makes it quite ethical.

Do we want the enhanced military?

Some notes on Practical Ethics inspired by Jonathan D. Moreno’s excellent recent talk.

My basic argument is that enhancing the capabilities of military forces (or any other form of state power) is risky if the probability that they can be misused (or the amount of expected/maximal damage in such cases) does not decrease more strongly. This would likely correspond to some form of moral enhancement, but even a morally enhanced army may act badly because the values guiding it, or the state commanding it, are bad: moral enhancement as we normally think about it is all about coordination, the ability to act according to given values and to reflect on those values. But since moral enhancement itself is agnostic about the right values, those values will be provided by the state or society. So we need to ensure that states/societies have good values, and that they are able to make their forces implement them. A malicious or stupid head commanding a genius army is truly dangerous. So is a tail wagging the dog, or keeping the head unaware (in the name of national security) of what is going on.

In other news: an eclipse in a teacup:

Fair brains?

Yesterday I gave a lecture at the London Futurists, “What is a fair distribution of brains?”:

My slides can be found here (PDF, 4.5 Mb).

My main take-home messages were:

Cognitive enhancement is potentially very valuable to individuals and society, both in pure economic terms but also for living a good life. Intelligence protects against many bad things (from ill health to being a murder victim), increases opportunity, and allows you to create more for yourself and others. Cognitive ability interacts in a virtuous cycle with education, wealth and social capital.

That said, intelligence is not everything. Non-cognitive factors like motivation are also important. And societies that leave out people – due to sexism, racism, class divisions or other factors – will lose out on brain power. Giving these groups education and opportunities is a very cheap way of getting a big cognitive capital boost for society.

I was critiqued for talking about “cognitive enhancement” when I could just have talked about “cognitive change”. Enhancement has a built-in assumption of some kind of improvement. However, a talk about fairness and cognitive change becomes rather anaemic: it just becomes a talk about what opportunities we should give people, not whether these changes affect their relationships in a morally relevant way.

Distributive justice

Theories of distributive justice typically try to answer: what goods are to be distributed, among whom, and what is the proper distribution? In our case it would be cognitive enhancements, and the interested parties are at least existing people but could include future generations (especially if we use genetic means).

Egalitarian theories argue that there has to be some form of equality, either equality of opportunity (everybody gets to enhance if they want) or equality of outcome (everybody ends up equally smart). Meritocratic theories would say the enhancement should be distributed by merit, presumably mainly to those who work hard at improving themselves or have already demonstrated great potential. Conversely, need-based theories and prioritarians argue we should prioritize those who are worst off or need the enhancement the most. Utilitarian justice requires the maximization of total or average welfare across all relevant individuals.

Most of these theories agree with Rawls that impartiality is important: it should not matter who you are. Rawls famously argued for two principles of justice: (1) “Each person is to have an equal right to the most extensive total system of equal basic liberties compatible with a similar system of liberty for all.”, and (2) “Social and economic inequalities are to be arranged so that they are both (a) to the greatest benefit of the least advantaged, consistent with the just savings principle, and (b) attached to offices and positions open to all under conditions of fair equality of opportunity.”

It should be noted that a random distribution is impartial: if we cannot afford to give enhancement to everybody, we could have a lottery (meritocrats, prioritarians and utilitarians might want this lottery to be biased by some merit/need weighting, or restricted to the people relevant for the enhancement, while egalitarians would want everybody in).
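A sketch of what such a lottery could look like; the weighting function is an invented illustration, not a proposal for how merit or need should actually be scored:

```python
# An impartial allocation lottery: uniform for egalitarians, optionally
# biased by a merit/need weight for meritocrats or prioritarians.
import random

def enhancement_lottery(people, slots, weight=None, seed=0):
    """Pick who gets enhanced when there are fewer slots than people.
    weight maps a person to a nonnegative number; None means uniform."""
    rng = random.Random(seed)
    pool = list(people)
    chosen = []
    for _ in range(min(slots, len(pool))):
        weights = [1.0 if weight is None else weight(p) for p in pool]
        pick = rng.choices(range(len(pool)), weights=weights)[0]
        chosen.append(pool.pop(pick))       # draw without replacement
    return chosen

people = [{"name": f"p{i}", "need": i % 5} for i in range(20)]
egalitarian = enhancement_lottery(people, 5)                           # uniform
prioritarian = enhancement_lottery(people, 5, weight=lambda p: 1 + p["need"])
```

Both calls are impartial in Rawls's sense: it does not matter *who* you are, only which publicly stated weighting rule applies to you.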

Why should we even care about distributive justice? One argument is that we all have individual preferences and life goals we seek to achieve; if all relevant resources are in the hands of a few, there will be less preference satisfaction than if everybody had enough. In some cases there might be satiation, where we do not need more than a certain level of stuff to be satisfied and the distribution of the rest becomes irrelevant, but given the unbounded potential ambitions and desires of people it is unlikely to apply generally.

Many unequal situations are not seen as unjust because that is just the way the world is: it is a brute biological fact that males on average live shorter lives than females, and that there is a random distribution of cognitive ability. But if we change the technological conditions, these facts become possible to change: now we can redistribute stuff to affect them. Ironically, transhumanism hopes/aims to change conditions so that some states, which are at present not unjust, will become unjust!

Some enhancements are absolute: they help you or society no matter what others do; others are merely positional. Positional enhancements are a zero-sum game. However, the reversal test demonstrates that cognitive ability has absolute components: a world where everybody got a bit more stupid would not be a better world, despite the unchanged relative rankings. There would be more accidents and mistakes, more risk that some joint threat could not be handled, and many life projects would become harder or impossible to achieve. And the Flynn effect demonstrates that we are unlikely to be at some particular optimum right now.

The Rawlsian principles are OK with enhancing the best-off if that helps the worst-off. This is not unreasonable for cognitive enhancement: the extremely high performers have a disproportionate output (patents, books, lectures) that benefits the rest of society, and the network effects of a generally smarter society might benefit everyone living in it. However, less cognitively able people are also less able to make use of the opportunities created by this: intelligence is fundamentally a limit to equality of opportunity, and the more you have, the better you can select which opportunities and projects to aim for. So a Rawlsian would likely be fairly keen on giving more enhancement to the worst off.

Would a world where everybody had the same intelligence be better than the current one? Intuitively it seems emotionally neutral. The reason is that we have conveniently and falsely talked about intelligence as one thing. As several audience members argued, there are many parts of intelligence. Even if one does not buy Gardner’s multiple intelligence theory, it is clear that there are different styles of problem-solving and problem-posing. This is true even if measurements of the magnitude of mental abilities are fairly correlated. A world where everybody thought in the same way would be a bad place. We might not want bad thinking, but there are many forms of good thinking, and we benefit from a diversity of thinking styles. Different styles of cognition can make the world more unequal but not more unjust.

Inequality over time

As I have argued before, enhancements in the forms of gadgets and pills are likely to come down in price and become easy to distribute, while service-based enhancements are more problematic since they will tend to remain expensive. Modelling the spread of enhancement suggests that enhancements that start out expensive but then become cheaper first leads to a growth of inequality and then a decrease. If there is a levelling off effect where it becomes harder to enhance beyond a certain point this eventually leads to a more cognitively equal society as everybody catches up and ends up close to the efficiency boundary.
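The rise-then-fall dynamic can be reproduced in a toy model; all parameters are invented and only the qualitative shape matters:

```python
# Toy model of the claim: an enhancement that starts expensive and gets
# cheaper first raises, then lowers, cognitive inequality.
import numpy as np

rng = np.random.default_rng(2)
wealth = rng.lognormal(0, 1, 1000)        # skewed wealth distribution
ability = np.zeros(1000)                  # 0 = unenhanced, 1 = enhanced (levelling off)

inequality = []
price = 50.0
for year in range(40):
    ability[wealth > price] = 1.0         # whoever can afford it, enhances
    price *= 0.8                          # the technology gets cheaper
    inequality.append(ability.std())      # spread of ability in the population

peak = int(np.argmax(inequality))
# Inequality rises while only the rich can afford enhancement...
assert inequality[peak] > inequality[0]
# ...and falls again once the price drops below almost everyone's wealth.
assert inequality[-1] < inequality[peak]
```

The levelling-off assumption is doing real work here: because enhanced ability is capped at 1, latecomers catch up fully. Without a cap (compounding returns), the curve need not come back down, which is exactly the compounding worry discussed below.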

When considering inequality across time we should likely accept early inequality if it leads to later equality. After all, we should not treat spatially remote people differently from nearby people, and the same is true across time. As Claudio Tamburrini said, “Do not sacrifice poor of the future for the poor of the present.”

The risk is if there is compounding: enhanced people can make more money, and use that to enhance themselves or their offspring more. I seriously doubt this works for biomedical enhancement, since there are limits to what biological brains can do (and human generation times are long compared to technological change), but it may be risky when it comes to outsourcing cognition to machines. If you can convert capital into cognitive ability by just buying more software, then things could become explosive if the payoffs from being smart in this way are large. However, then we are likely to have an intelligence explosion anyway, and the issue of social justice takes a back seat compared to the risks of a singularity. Another reason to think the effect is not strongly compounding is that geniuses are not all billionaires, and billionaires – while smart – are typically not the very most intelligent people. Piketty’s argument actually suggests that it is better to have a lot of money than a lot of brains, since you can always hire smart consultants.

Francis Fukuyama famously argued that enhancement was bad for society because it risks making people fundamentally unequal. However, liberal democracy is already based on the idea of a common society of unequal individuals – they are different in ability, knowledge and power, yet treated fairly and impartially as “one man, one vote”. There is a difference between moral equality and equality measured in wealth, IQ or anything else. We might be concerned about extreme inequalities in some of the latter factors leading to a shift in moral equality, or more realistically, that those factors allow manipulation of the system to the benefit of the best off. This is why strengthening the “dominant cooperative framework” (to use Allen Buchanan’s term) is important: social systems are resilient, and we can make them more resilient to known or expected future challenges.

Conclusions

My main conclusions were:

  • Enhancing cognition can make society more or less unequal. Whether this is unjust depends on the technology, one’s theory of justice, and what policies are instituted.
  • Some technologies just affect positional goods, and they make everybody worse off. Some are win-win situations, and I think much of intelligence enhancement is in this category.
  • Cognitive enhancement is likely to individually help the worst off, but make the best off compete harder.
  • Controlling mature technologies is hard, since there are both vested interests and social practices around them. We have an opportunity to affect the development of cognitive enhancement now, before it becomes very mainstream and hard to change.
  • Strengthening the “dominant cooperative framework” of society is a good idea in any case.
  • Individual morphological freedom must be safeguarded.
  • Speeding up progress and diffusion is likely to reduce inequality over time – and promote diversity.
  • Different parts of the world are likely to approach cognitive enhancement differently and at different speeds.

As transhumanists, what do we want?

The transhumanist declaration makes wide access a point, not just on fairness or utilitarian grounds, but also for the sake of learning more. We have a limited perspective and cannot know well beforehand where the best paths are, so it is better to let people pursue their own inquiries. There may also be intrinsic value in freedom, autonomy and open-ended life projects: denying many people the chance to pursue these may lose much value.

Existential risk overshadows inequality: achieving equality by dying out is not a good deal. So if some enhancements increase existential risk, we should avoid them. Conversely, if enhancements look like they reduce existential risk (perhaps some moral or cognitive enhancements), they may be worth pursuing even if they worsen (current) inequality.

We will likely end up with a diverse world that will contain different approaches, none universal. Some areas will prohibit enhancement, others allow it. No view is likely to become dominant quickly (without rather nasty means or some very surprising philosophical developments). That strongly speaks for the need to construct a tolerant world system.

If we have morphological freedom, then preventing cognitive enhancement requires pointing to a very clear social harm. If the social harm is less than that of existing practices like schooling, there is no legitimate reason to limit enhancement. There are also costs to restrictions: opportunity costs, international competition, black markets, inequality, losses in redistribution, and public choice problems where regulators become self-serving. Controlling technology is like controlling art: it is an attempt to control human creativity and exploration, and should be done very cautiously.

Born this way

On Practical Ethics I blog about the ethics of attempts to genetically select sexual preferences.

Basically, selection can only tilt probabilities, and people develop preferences in individual and complex ways. I am not convinced selection is inherently bad, but it can embody bad societal norms. However, those norms are better dealt with on a societal/cultural level than by trying to regulate technology. This essay is very much a tie-in with our brave new love paper.

Contraire de l’esprit de l’escalier: enhancement and therapy

Yesterday I participated in a round-table discussion with professor Miguel Benasayag about the therapy vs. enhancement distinction at the TransVision 2014 conference. Unfortunately I could not get a word in edgewise, so it was not much of a discussion. So here are the responses I wanted to make but didn’t get the chance to: in a way, this post is the opposite of l’esprit de l’escalier.

Enhancement: top-down, bottom-up, or sideways?

Do enhancements – whether implanted or not – represent a top-down imposition of order on the biosystem? If one accepts that view, one ends up with a dichotomy between that and bottom-up approaches where biosystems are trained or placed in a smart context that produces the desired outcome: unless one thinks imposing order is a good thing, one becomes committed to some form of naturalistic conservatism.

But this ignores something Benasayag brought up himself: the body and brain are flexible and adaptable. The cerebral cortex can reorganize to become a primary cortex for any sense, depending on which input nerve is wired up to it. My friend Todd’s implanted magnet has likely reorganized a small part of his somatosensory cortex to represent his new sense. This enhancement is neither a top-down imposition of a desired cortical structure, nor a pure bottom-up training of the biosystem.

Real enhancements integrate, they do not impose a given structure. This also addresses concerns of authenticity: if enhancements are entirely externally imposed – whether through implantation or external stimuli – they are less due to the person using them. But if their function is emergent from the person’s biosystem, the device itself, and how it is being used, then it will function in a unique, personal way. It may change the person, but that change is based on the person.

Complex enhancements

Enhancements are often described as simple, individualistic, atomic things. But actual enhancements will be systems. A dramatic example was in my ears: since I am both French-impaired and signing-impaired, I could listen to (and respond to) comments thanks to an enhancing system involving three skilled translators, a set of wireless headphones and microphones. This system was not just complex: it was adaptive (the translators knew how to improvise, and we the users learned how to use it) and social (micro-norms for how to use it emerged organically).

Enhancements need a social infrastructure to function – both a shared, distributed knowledge of how and when to use them (praxis) and possibly a distributed functioning itself. A brain-computer interface is of little use without anybody to talk to. In fact, it is the enhancements that affect communication abilities that are most powerful both in the sense of enhancing cognition (by bringing brains together) and changing how people are socially situated.

Cochlear implants and social enhancement

This aspect of course links to the issues in the adjacent debate about disability. Are we helping children by giving them cochlear implants, or are we undermining a vital deaf cultural community? What is unique about cochlear implants is that they have this social effect and have to be used early in life for best results. In this case there is a tension between the need to integrate the enhancement with the hearing and language systems in an authentic way, a shift in which social community will be readily available, and concerns that this is just being used to normalize away the problem of deafness from the top down. How do we resolve this?

The value of deaf culture is largely its value to its members: there might be some intrinsic value to the culture, but this is true for every culture and subculture. I think it is safe to say there is a fairly broad consensus in western culture today that individuals should not sacrifice their happiness – and especially not be forced to do it – for the sake of the culture. It might be supererogatory: a good thing to do, but not something that can be demanded. Culture is for the members, not the other way around: people are ends, not means.

So the real issue is the social linkages and the normalisation. How do we judge the merits of being able to participate in social networks? One might be small but warm, another vast and mainstream. It seems that the one thing to avoid is not being able to participate in either. But this is not a technical problem as much as a problem of adaptation and culture. Once implants are good enough that learning to use them does not compete with learning signing, the real issue becomes the right social upbringing and the question of personal choices. This goes way beyond implant technology and becomes a question of how we set up social adaptation processes – a thick, rich and messy domain where we need to do much more work.

It is also worth considering the next step. What if somebody offered a communications device that would enable an entirely new form of communication, and hence social connection? In a sense we are already gaining that through new media, but one could also consider something direct, like Egan’s TAP. As that story suggests, there might be rather subtle effects if people integrate new connections – in his case merely epistemic ones, but one could imagine entirely new forms of social links. How do we evaluate them? Especially since having a few pioneers test them tells us less than it would for non-social enhancements. That remains a big question.

Justifying off-label enhancement

A somewhat fierce question I got (and didn’t get to respond to) was how I could justify occasionally taking modafinil, a drug intended for narcoleptics.

There seems to be a deontological or intention-oriented view behind the question: the intentions behind making the drug should be obeyed. But many drugs have been approved for one condition and then had their use expanded to other conditions. Presumably aspirin use for cardiovascular conditions is not unethical. And pharma companies largely intend to make money by making medicines, so the deep intention might be trivial to meet. More generally, the claim that the point of drugs is to help sick people (whom we have an obligation to help) doesn’t work, since drugs are obviously used by non-sick people (sports medicine, for example). So unless many current practices are deeply unethical, this line of argument fails.

What I think was the real source of the question was the concern that my use somehow deprived a sick person of the drug. This is false, since I paid for it myself: the market is flexible enough to produce enough, and it was not a case of splitting a finite healthcare cake. The finiteness case might apply if we were talking about how much care my neighbours and I would get for our respective illnesses, and whether they had a claim on my behaviour through our shared healthcare cake. So unless my interlocutor thought my use was likely to cause health problems she would have to pay for, this line of reasoning fails too.

The deep issue is of course whether there is a normatively significant difference between therapy and enhancement. I deny it. I think the goal of healthcare should not be health but wellbeing. Health is just an enabling instrumental thing. And it is becoming increasingly individual: I do not need more muscles, but I do benefit from a better brain for my life project. Yours might be different. Hence there is no inherent reason to separate treatment and enhancement: both aim at the same thing.

That said, in practice people do make this distinction and use it to decide what care they are willing to pay for on behalf of their fellow citizens. But this will shift as technology and society change, and as I said, I do not think this is a normative issue. A political issue, yes, a messy one, yes, but not a foundational one.

What do transhumanists think?

One of the greatest flaws of the term “transhumanism” is that it suggests that there is something in particular all transhumanists believe. Benasayag made some rather sweeping claims about what transhumanists want (enhancement as embodying body-hatred and a desire for control) that were most definitely not shared by the actual transhumanists in the audience or on stage. It is as problematic as claiming that all French intellectuals believe some particular thing: at best a loose generalisation, most likely utterly misleading. But when you label a group – especially if they themselves are trying to maintain an official label – it becomes easier to claim that all its members believe in something. Outsiders also do not see the sheer diversity inside, assuming everybody agrees with the few samples of writing they have read.

The fault here lies both in the laziness of outside interlocutors and in transhumanists not making their diversity clearer, perhaps by avoiding slapping the term “transhumanism” on every relevant issue: human enhancement is of interest to transhumanists, but we should be able to discuss it even if there were no transhumanists.