Being reasonable

The ever readable Scott Alexander stimulated a post on Practical Ethics about defaults, status quo, and disagreements about sex. The gist of it: our culture sets defaults on who is reasonable or unreasonable when couples disagree, and these become particularly troubling when dealing with biomedical enhancements of love and sex. The defaults combine with status quo bias and our scepticism about biomedical interventions to cause biases that can block or push people towards certain interventions.

The Biosphere Code

Yesterday I contributed to a piece of manifesto writing, producing the Biosphere Code Manifesto. The Guardian has a version on its blog. Not quite as dramatic as Marinetti’s Futurist Manifesto, but perhaps more constructive:

Principle 1. With great algorithmic powers come great responsibilities

Those implementing and using algorithms should consider the impacts of their algorithms.

Principle 2. Algorithms should serve humanity and the biosphere at large

Algorithms should be considerate of human needs and the biosphere, and facilitate transformations towards sustainability by supporting ecologically responsible innovation.

Principle 3. The benefits and risks of algorithms should be distributed fairly

Algorithm developers should consider issues relating to the distribution of risks and opportunities more seriously. Developing algorithms that provide benefits to the few and present risks to the many is both unjust and unfair.

Principle 4. Algorithms should be flexible, adaptive and context-aware

Algorithms should be open, malleable and easy to reprogram if serious repercussions or unexpected results emerge. Algorithms should be aware of their external effects and be able to adapt to unforeseen changes.

Principle 5. Algorithms should help us expect the unexpected

Algorithms should be used in such a way that they enhance our shared capacity to deal with shocks and surprises – including problems caused by errors or misbehaviors in other algorithms.

Principle 6. Algorithmic data collection should be open and meaningful

Data collection should be transparent and respectful of public privacy. In order to avoid hidden biases, the datasets which feed into algorithms should be validated.

Principle 7. Algorithms should be inspiring, playful and beautiful

Algorithms should be used to enhance human creativity and playfulness, and to create new kinds of art. We should encourage algorithms that facilitate human collaboration, interaction and engagement – with each other, with society, and with nature.

The algorithmic world

The basic insight is that the geosphere, ecosphere, anthroposphere and technosphere are getting deeply entwined, and algorithms are becoming a key force in regulating this global system.

Some algorithms enable new activities (multimedia is impossible without FFT and CRC), change how activities are done (data centres happen because virtualization and MapReduce make them scale well), or enable faster algorithmic development (compilers and libraries). Algorithms used for decision support are particularly important. Logistics algorithms (routing, linear programming, scheduling, and optimization) affect the scope and efficiency of the material economy. Financial algorithms affect the scope and efficiency of the economy itself. Intelligence algorithms (data collection, warehousing, mining, network analysis, but also methods for combining human expert judgement), statistics gathering and risk models affect government policy. Recommender systems (“You May Also Enjoy…”) and advertising influence consumer demand.
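As a toy illustration of the sort of optimization sitting inside logistics decision support, here is a minimal transportation problem solved with linear programming. The warehouse/shop setup, the numbers, and the use of SciPy's linprog are my own illustrative choices, not a description of any particular system:

```python
# Ship goods from two warehouses to two shops at minimum cost.
from scipy.optimize import linprog

# Decision variables x = [w1->s1, w1->s2, w2->s1, w2->s2]; cost per unit shipped.
cost = [4, 6, 5, 3]

# Supply: each warehouse holds at most 70 units.
A_ub = [[1, 1, 0, 0],
        [0, 0, 1, 1]]
b_ub = [70, 70]

# Demand: shop 1 needs exactly 50 units, shop 2 needs exactly 60.
A_eq = [[1, 0, 1, 0],
        [0, 1, 0, 1]]
b_eq = [50, 60]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.x, res.fun)  # optimal plan (50 via w1->s1, 60 via w2->s2) and its total cost
```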

Since these algorithms are shared, their properties will affect a multitude of decisions and individuals in the same way, even if those individuals think they are acting independently. There are spillover effects from the groups that use algorithms to other stakeholders affected by the algorithm-driven actions. And algorithms have a multitude of non-trivial failure modes: machine learning can create opaque bias or sudden emergent misbehaviour, human over-reliance on algorithms can cause accidents or large-scale misallocation of resources, some algorithms produce systemic risks, and others embody malicious behaviours. In short, code – whether in computers or as a formal praxis in an organisation – matters morally.

What is the point?

Could a code like the Biosphere Code actually do anything useful? Isn’t this yet another splashy “wouldn’t it be nice if everybody were moral and rational in engineering/politics/international relations?”

I think it is a first step towards something useful.

There are engineering ethics codes, even for software engineers. But algorithms are created in many domains, including by non-engineers. We cannot and should not prevent people from thinking, proposing, and trying new algorithms: that would be like attempting to regulate science, art, and thought. But we can, as societies, create incentives to do constructive things and avoid known destructive things. In order to do so, we should recognize that we need to work on those incentives and start gathering information.

Algorithms and their large-scale results must be studied and measured: we cannot rely on theory, despite its seductive power, since there are profound theoretical limits on our ability to predict the world of algorithms, as well as obvious practical limitations. Algorithms also do not exist in a vacuum: the human or biosphere context is an active part of what is going on. An algorithm can be totally correct and yet be misused in a harmful way because of its framing.

But even in the small, if we can make one programmer think a bit more about what they are doing and choose a better algorithm than they otherwise would have, the world is better off. In fact, a single programmer can have a surprisingly large impact.

I am more optimistic than that. Recognizing algorithms as the key building blocks that they are for our civilization, what peculiarities they have, and learning better ways of designing and using them has transformative power. There are disciplines dealing with parts of this, but the whole requires considering interdisciplinary interactions that are currently rarely explored.

Let’s get started!

Universal principles?

I got challenged on the extropian list, which is a fun reason to write a mini-lecture.

On 2015-10-02 17:12, William Flynn Wallace wrote:
> Anders says above that we have discovered universal timeless principles. I’d like to know what they are and who proposed them, because that’s chutzpah of the highest order. Oh boy – let’s discuss that one.

Here is one: a thing is identical to itself. (1)

Here is another one: “All human beings are born free and equal in dignity and rights.” (2)

Here is a third one: “Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.” (3)

(1) was first explicitly mentioned by Plato (in Theaetetus). I think you also agree with it – things that are not identical to themselves are unlikely to even be called “things”, and without the principle very little thinking makes sense.

I am not sure whether it is chutzpah of the highest order or a very humble observation.

(2) is from the UN Universal Declaration of Human Rights. This sentence needs enormous amounts of unpacking – “free”, “equal”, “dignity”, “rights”… these words can be (and are) used in very different ways. Yet I think it makes sense to say that according to a big chunk of Western philosophy this sentence is a true sentence (in the sense that ethical propositions are true), that it is universal (the truth is not contingent on when and where you are, although the applications may change), and we know historically that we have not known this principle forever. Now *why* it is true quickly branches out into different answers depending on what metaethical positions you hold, not to mention the big topic of what kind of truth moral truth actually is (if anything). The funny thing is that the universal part is way less contentious, because of the widely accepted (and rarely stated) formal ethical principle that if it is moral to do P in situation X, then the location in time and space where X happens does not matter.

Chutzpah of the highest order? Totally. So is the UN.

(3) is Immanuel Kant, and he argued that any rational moral agent could, through pure reason, reach this principle. It is, in many ways like (1), almost a consistency requirement of moral will (not action, since he doesn’t actually care about the consequences – we cannot fully control those, but we can control what we decide to do). There is a fair bit of unpacking of the wording, but unlike the UN case he defines his terms fairly carefully in the preceding text. His principle is, if he is right, the supreme principle of morality.

Chuzpah auf höchstem Niveau? Total!

Note that (1) is more or less an axiom: there is no argument for why it is true, because there is little point in even trying. (3) is intended to be like a theorem in geometry: from some axioms and the laws of logic, we end up with the categorical imperative. It is just as audacious or normal as the Pythagorean theorem. (2) is a kind of compromise between different ethical systems: the Kantians would defend it based on their system, while consequentialists could make a rule utilitarian argument for why it is true, and contractualists would say it is true because the UN agrees on it. They agree on the mid-level meaning, but not on each other’s derivations. It is thick, messy and political, yet it also represents fairly well what most educated people would conclude (of course, they would then show off by disagreeing loudly with each other about details, obscuring the actual agreement).

Philosopher’s views

Do people who think about these things actually believe in universal principles? One fun source is David Bourget and David J. Chalmers’ survey of professional philosophers (data). 56.4% of the respondents were moral realists (holding that there are moral facts and moral values, and that these are objective and independent of our views), and 65.7% were moral cognitivists (holding that ethical sentences can be true or false); the two positions were correlated at 0.562. 25.9% were deontologists, which means that they would hold somewhat Kant-like views that some actions are always or never right (some of the rest of course also believe in principles, but the survey cannot tell us anything more). 71.1% thought there was a priori knowledge (things we know by virtue of being thinking beings rather than through experience).

My views

Do I believe in timeless principles? Kind of. There are statements in physics that are invariant under translations, rotations, Lorentz boosts and other transformations, and of course math remains math. Whether physics and math are “out there” or just in minds is hard to tell (I lean towards physics, at least, being out there in some form), but clearly any mind that knows some subset of correct, invariant physics and math can derive other correct conclusions from it. And other minds with the same information can make the same derivations and reach the same conclusions – no matter when or where. So there are knowable principles in these domains that every sufficiently informed and smart mind would know. Things get iffy with values, since they might be far more linked to the entities experiencing them, but clearly we can do game theory and make statements like “If agent A is trying to optimize X, agent B optimizes Y, and X and Y do not interact, then they can get more of X and Y by cooperating”. So I think we can get pretty close to universal principles in this framework, even if it turns out that they merely reside inside minds knowing about the outside world.
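A toy payoff table makes the cooperation claim concrete (a minimal sketch; the numbers, and the idea that pooled effort yields economies of scale, are illustrative assumptions rather than part of the original claim):

```python
# Two agents with non-interacting goals: A only values X, B only values Y.
# Payoff tuples are (units of X for A, units of Y for B); numbers are made up.
payoffs = {
    ("alone", "alone"):         (3, 3),
    ("cooperate", "alone"):     (3, 3),   # one-sided cooperation changes nothing here
    ("alone", "cooperate"):     (3, 3),
    ("cooperate", "cooperate"): (5, 5),   # pooled effort / shared infrastructure helps both
}

# Mutual cooperation Pareto-dominates: each agent gets more of the thing it cares about.
for actions, (x_for_A, y_for_B) in payoffs.items():
    print(actions, "->", x_for_A, y_for_B)
```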

Living forever

Benjamin Zand has made a neat little documentary about transhumanism, attempts to live forever and the posthuman challenge. I show up, of course, as soon as ethics is mentioned.

Benjamin and I had a much, much longer (and very fun) conversation about ethics than could ever be squeezed into a TV documentary. Everything from personal identity to overpopulation to the meaning of life. Plus the practicalities of cryonics, transhuman compassion and how to test whether brain emulation actually works.

I think the inequality and control issues are interesting to develop further.

Would human enhancement boost inequality?

There is a trivial sense in which just inventing an enhancement produces profound inequality since one person has it, and the rest of mankind lacks it. But this is clearly ethically uninteresting: what we actually care about is whether everybody gets to share something good eventually.

However, the trivial example shows an interesting aspect of inequality: it has a timescale. An enhancement that will eventually benefit everyone but is unequally distributed may be entirely OK if it is spreading fast enough. In fact, by being expensive at the start it might even act as a kind of early adopter/rich tax, since the first versions will pay for the R&D of consumer versions – compare computers and smartphones. While one could argue that it is bad to get temporary inequality, long-term benefits would outweigh this for most enhancements and most value theories: we should not sacrifice the poor of tomorrow for the poor of today by delaying the launch of beneficial technologies (especially since R&D to make them truly cheap is unlikely to happen if technocrats keep the technology in their labs – making tech cheap and useful is actually one area where we know empirically that the free market is really good).

If the spread of some great enhancement could be faster than it is, though, then we may have a problem.

I often encounter people who think that the rich will want to keep enhancements to themselves. I have never encountered any evidence for this being actually true except for status goods or elites in authoritarian societies.

There are enhancements like height that are merely positional: it is good to be taller than others (if male, at least), but if everybody gets taller nobody benefits and everybody loses a bit (more banged heads and heart problems). Other enhancements are absolute: living healthy longer or being smarter is good for nearly all people regardless of how long other people live or how smart they are (yes, there might be some coordination benefits if you live just as long as your spouse or have a society where you can participate intellectually, but these hardly negate the benefit of joint enhancement – in fact, they support it). Most of the interesting enhancements are in this category: while they might be great status goods at first, I doubt they will remain so for long, since there are other reasons than status to get them. In fact, there are likely network effects from some enhancements like intelligence: the more smart people working together in a society, the greater the benefits.

In the video, I point out that limiting enhancement to the elite means the society as a whole will not gain the benefit. Since elites reap rents from their society, it is actually in their interest to have that society grow richer and more powerful (as long as they stay in charge). Societies that keep enhancement exclusive will also lose out in the long run to societies with a broader spread of enhancement. We know that widespread schooling, free access to information and freedom to innovate tend to produce far wealthier and more powerful societies than those where only elites have access to these goods. I have strong faith in the power of diverse societies, despite their messiness.

My real worry is that enhancements may be like services rather than gadgets or pills (which come down exponentially in price). That would keep them harder to reach, and might hold back adoption (especially since we have not been as good at automating services as manufacturing). Still, we do subsidize education at great cost, and if an enhancement is desirable democratic societies are likely to scramble for a way of supplying it widely, even if it is only through an enhancement lottery.

However, even a world with unequal distribution is not necessarily unjust. Besides the standard Nozickian argument that a distribution is just if it was arrived at through just means, there is the Rawlsian argument that an unequal distribution is acceptable if it actually benefits the weakest. This is likely very true for intelligence amplification and maybe brain emulation, since they are likely to cause strong economic growth and innovations that produce spillover effects – especially if there is any form of taxation or even mild redistribution.

Who controls what we become? Nobody, we/ourselves/us

The second issue is who gets a say in this.

As I respond in the interview, in a way nobody gets a say. Things just happen.

People innovate, adopt technologies and change, and attempts to control that mean controlling creativity, business and autonomy – you had better have a very powerful ethical case to argue for limitations on these, and an even better political case to implement any. A moral limitation of life extension needs to explain how it averts consequences worse than 100,000 dead people per day. Even if we all become jaded immortals, that seems less horrible than a daily pile of corpses 12.3 meters high and 68 meters across (assuming an angle of repose of 20 degrees – this was the most gruesome geometry calculation I have done so far). Saying we should control technology is a bit like saying society should control art: it might be more practically useful, but it springs from the same well of creativity, and limiting it is as suffocating as limiting what may be written or painted.
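For anyone who wants to check the geometry, here is a sketch of the calculation. It assumes an effective volume of about 0.15 cubic metres per loosely packed body (my own guess); the 100,000 per day and the 20 degree angle of repose are from the text:

```python
import math

bodies_per_day = 100_000
volume_per_body = 0.15                      # m^3, assumed effective (loosely packed) volume
theta = math.radians(20)                    # angle of repose

total_volume = bodies_per_day * volume_per_body          # ~15,000 m^3 per day

# Cone at the angle of repose: h = r*tan(theta), so V = (1/3)*pi*r^2*h = (1/3)*pi*r^3*tan(theta)
r = (3 * total_volume / (math.pi * math.tan(theta))) ** (1 / 3)
h = r * math.tan(theta)

print(f"diameter ~{2 * r:.0f} m, height ~{h:.1f} m")     # roughly 68 m across and just over 12 m high
```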

Technological determinism is often used as an easy out for transhumanists: the future will arrive no matter what you do, so the choice is just between accepting or resisting it. But this is not the argument I am making. That nobody is in charge doesn’t mean the future is not changeable.

The very creativity, economics and autonomy that creates the future is by its nature something individual and unpredictable. While we can relatively safely assume that if something can be done it will be done, what actually matters is whether it will be done early or late, seldom or often. We can try to hurry beneficial or protective technologies so they arrive before the more problematic ones. We can try to favour beneficial directions over more problematic ones. We can create incentives that make fewer people want to use the bad ones. And so on. The “we” in this paragraph is not so much a collective coordinated “us” as the sum of individuals, companies and institutions, “ourselves”: there is no requirement to get UN permission before you set out to make safe AI or develop life extension. It just helps if a lot of people support your aims.

John Stuart Mill’s harm principle allows society to step in and limit freedom when it causes harm to others, but most enhancements look unlikely to produce easily recognizable harms. This is not a ringing endorsement: as Nick Bostrom has pointed out, there are some bad directions of evolution we might not want to go down, yet it is individually rational for each of us to go slightly in that direction. And existential risk is so dreadful that it actually does provide a valid reason to stop certain human activities if we cannot find alternative solutions. So while I think we should not try to stop people from enhancing themselves, we should want to improve our collective ability to coordinate and restrain ourselves. This is the “us” part. Restraint does not just have to happen in the form of rules: we restrain ourselves already using socialization, reputations, and incentive structures. Moral and cognitive enhancement could add restraints we currently do not have: if you can clearly see the consequences of your actions it becomes much harder to do bad things. The long-term outlook fostered by radical life extension may also make people more risk averse and willing to plan for long-term sustainability.

One could dream of some enlightened despot or technocrat deciding. A world government filled with wise, disinterested and skilled members planning our species future. But this suffers from essentially the economic calculation problem: while a central body might have a unified goal, it will lack information about the preferences and local states among the myriad agents in the world. Worse, the cognitive abilities of the technocrat will be far smaller than the total cognitive abilities of the other agents. This is why rules and laws tend to get gamed – there are many diverse entities thinking about ways around them. But there are also fundamental uncertainties and emergent phenomena that will bubble up from the surrounding agents and mess up the technocratic plans. As Virginia Postrel noted, the typical solution is to try to browbeat society into a simpler form that can be managed more easily… which might be acceptable if the stakes are the very survival of the species, but otherwise just removes what makes a society worth living in. So we better maintain our coordination ourselves, all of us, in our diverse ways.

 

Ethics of brain emulations, New Scientist edition

I have an opinion piece in New Scientist about the ethics of brain emulation. The content is similar to what I was talking about at IJCNN and in my academic paper (and the comic about it). Here are a few things that did not fit the text:

Ethics that got left out

Due to length constraints I had to cut the discussion about why animals might be moral patients. That made the essay look positively Benthamite in its focus on pain. In fact, I am agnostic on whether experience is necessary for being a moral patient. Here is the cut section:

Why should we care about how real animals are treated? Different philosophers have given different answers. Immanuel Kant did not think animals matter in themselves, but our behaviour towards them matters morally: a human who kicks a dog is cruel and should not do it. Jeremy Bentham famously argued that what matters is not thinking but the capacity to suffer: “…the question is not, Can they reason? nor, Can they talk? but, Can they suffer?” Other philosophers have argued that it matters that animals experience being subjects of their own life, with desires and goals that make sense to them. While there is a fair bit of disagreement about what this means for our responsibilities to animals and what we may use them for, there is widespread agreement that they are moral patients, something we ought to treat with some kind of care.

This is of course a super-quick condensation of a debate that fills bookshelves. It also leaves out Christine Korsgaard’s interesting Kantian work on animal rights, which as far as I can tell does not need to rely on particular accounts of consciousness and pain, but rather on interests. Most people would say that without consciousness or experience there is nobody that is harmed, but I am not entirely certain unconscious systems cannot be regarded as moral patients. There are, for example, people working in environmental ethics who ascribe moral patient-hood and partial rights to species or natural environments.

Big simulations: what are they good for?

Another interesting thing that had to be left out is a comparison of different large-scale neural simulations.

(I am a bit uncertain about where the largest model in the Human Brain Project is right now; they are running more realistic models, so they will be smaller in terms of neurons. But they clearly have the ambition to best the others in the long run.)

Of course, one can argue about which approach matters. Spaun is a model of cognition using low-resolution neurons, while the slightly larger (in neurons) simulation from the Lansner lab was just a generic piece of cortex, showing some non-trivial alpha and gamma rhythms, and the even larger ones showed some interesting emergent behaviour despite the lack of biological complexity in the neurons. Conversely, Cotterill’s CyberChild, which I worry about in the opinion piece, had just 21 neurons in each region, but they formed a fairly complex network spanning many brain regions that in a sense is more meaningful as an organism than the near-disembodied problem-solver Spaun. Meanwhile SpiNNaker is running rings around the others in terms of speed, essentially running in real time while the others suffer slowdowns by a factor of a thousand or worse.

The core of the matter is defining what one wants to achieve. Lots of neurons, biological realism, non-trivial emergent behaviour, modelling a real neural system, purposeful (even conscious) behaviour, useful technology, or scientific understanding? Brain emulation aims at getting purposeful, whole-organism behaviour from running a very large, very complete, biologically realistic simulation. Many robotics and AI people are happy without the biological realism and would prefer as small a simulation as possible. Neuroscientists and cognitive scientists care about what they can learn and understand from the simulations, rather than their completeness. They are each pursuing something useful, but what counts as useful differs between the fields. As long as they remember that others are not pursuing the same aim, they can get along.

What I hope: more honest uncertainty

What I hope happens is that computational neuroscientists think a bit about the issue of suffering (or moral patient-hood) in their simulations rather than slip into the comfortable “It is just a simulation, it cannot feel anything” mode of thinking by default.

It is easy to tell oneself that simulations do not matter, because not only do we know how they work when we make them (giving us the illusion that we actually know everything there is to know about the system – obviously not true, since we at least need to run them to see what happens), but institutionally it is easier to regard them as non-problems in terms of workload, conflicts and complexity (let’s not rock the boat at the planning meeting, right?). And once something is in the “does not matter morally” category it becomes painful to move it out of it – many will now be motivated to keep it there.

I would rather have people keep an open mind about these systems. We do not understand experience. We do not understand consciousness. We do not understand brains and organisms as wholes, and there is much we do not understand about the parts either. We do not have agreement on moral patient-hood. Hence the rational thing to do, even when one is pretty committed to a particular view, is to be open to the possibility that it might be wrong. The rational response to this uncertainty is to get more information if possible, to hedge our bets, and to try to avoid actions we might regret in the future.

The limits of the in vitro burger

Stepping on toes everywhere in our circles, Ben Levinstein and I have a post at Practical Ethics about the limitations of in vitro meat for reducing animal suffering.

The basic argument is that while factory farming produces a lot of suffering, a post-industrial world would likely contain very few lives of the species involved. It would be better if they had better lives and larger populations instead. So, at least on some consequentialist views, the ethical good of in vitro meat is reduced from a clear win to possibly even a second best to humane farming.

An analogy can be made with horses, whose population has declined precipitously since the pre-tractor, pre-car days. Current horses live (I guess) nicer lives than the more work-oriented horses of 1900, but there are far fewer of them. So the current 3 million horses in the US might have lives (say) twice as good as those of the 25 million horses in the 1920s: the total value has still declined. However, factory farmed animals may have lives that are not worth living, holding negative value. If we assume the roughly 50 billion chickens in the world all have lives of value -1 each, then replacing them with in vitro meat would make the world 50 billion units better. But this could also be achieved by making their lives one unit better (and why stop there? maybe they could get two units more). Whether it matters how many entities are doing the experiencing depends on your approach, as does whether it is an extra value to have a chicken species around rather than not.
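To make that arithmetic explicit, here is a tiny sketch; every per-life value is an illustrative assumption, not a measurement:

```python
# Horses: fewer, better lives can still mean less total value (toy numbers from above).
horses_1920s = 25_000_000 * 1.0   # 25 million lives at baseline value 1
horses_now   = 3_000_000 * 2.0    # 3 million lives, each (say) twice as good
print(horses_1920s, horses_now)   # 25,000,000 vs 6,000,000: total value has declined

# Chickens: lives assumed not worth living (value -1 each).
chickens = 50_000_000_000
factory_total  = chickens * -1.0  # status quo: -50 billion units
in_vitro_total = 0.0              # those lives simply never exist
humane_total   = chickens * 0.0   # each life made one unit better: -1 + 1 = 0
print(in_vitro_total - factory_total, humane_total - factory_total)  # both gain 50 billion
```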

Now, I am not too troubled by this, since I think in vitro meat is also very good from a health perspective, a climate perspective, and an existential risk reduction perspective (it is good for space colonization and for survival if sunlight is interrupted). But I think most people come to in vitro meat from an ethical angle. And given just that perspective, we should not be too complacent that in the future we will become post-agricultural: it may take time, and it might actually not increase total welfare as much as we expected.

 

Ethics for neural networks

I am currently attending IJCNN 2015 in Killarney. Yesterday I gave an invited talk “Ethics and large-scale neural networks: when do we need to start caring for neural networks, rather than about them?” The bulk of the talk was based on my previous WBE ethics paper, looking at the reasons we cannot be certain whether neural networks have experience or not, leading to my view that we ought to handle them with the same care as the biological originals they mimic. Yup, it is the one T&F made a lovely comic about – which incidentally gave me an awesome poster at the conference.

When I started, I looked a bit at ethics in neural network science/engineering. As I see it, there are three categories of ethical issues specific to the topic rather than being general professional ethics issues:

  • First, the issues surrounding applications such as privacy, big data, surveillance, killer robots etc.
  • Second, the issue that machine learning allows machines to learn the wrong things.
  • Third, machines as moral agents or patients.

The first category is important, but I leave that for others to discuss. It is not necessarily linked to neural networks per se, anyway. It is about responsibility for technology and what one works on.

Learning wrong

The second category is fun. Learning systems are not fully specified by their creators – which is the whole point! This means that their actual performance is open-ended (within the domain of possible responses). And from that it follows that they can learn things we do not want.

One example is inadvertent discrimination, where the network learns something that would be called racism, sexism or something similar if it happened in a human. Consider a credit-rating neural network trained on customer data to estimate the probability of a customer defaulting. It may develop an internal representation that is activated by a customer’s race and linked to a negative rating. There is no deliberate programming of racism, just something that emerges from the data – where the race:economy link may well be due to factors in society that are structurally racist.
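A minimal sketch of the mechanism, using synthetic data and a plain logistic regression from scikit-learn rather than a neural network (the variables, correlations and numbers are all invented for illustration): the protected attribute is never an input, yet the trained model scores the two groups differently through a correlated proxy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                    # protected attribute, NOT given to the model
postcode = (group + (rng.random(n) < 0.2)) % 2   # proxy variable strongly correlated with group
income = rng.normal(50 - 10 * group, 15, n)      # structural inequality baked into the data
default = (rng.random(n) < 1 / (1 + np.exp(0.1 * (income - 40)))).astype(int)

X = np.column_stack([postcode, income])          # the model never sees `group` directly
model = LogisticRegression().fit(X, default)

p = model.predict_proba(X)[:, 1]
print("mean predicted default, group 0:", p[group == 0].mean())
print("mean predicted default, group 1:", p[group == 1].mean())
# The gap between the groups emerges from the data, not from any programmed rule.
```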

A similar, real case is advertising algorithms selecting ads online for users in ways that show some ads to some groups but not others – which, in the case of education, may serve to perpetuate disadvantages or prejudices.

A recent example was the Google Photos captioning system, which captioned a black couple as gorillas. Obvious outrage ensued, and a Google representative tweeted that this was “high on my list of bugs you *never* want to see happen ::shudder::”. The misbehaviour was quickly fixed.

Mislabelling somebody or something else might merely have been amusing: calling some people gorillas will often be met by laughter. But it becomes charged and ethically relevant in a culture like the current American one. This is nothing the recognition algorithm knows about: from its perspective, mislabelling chairs is as bad as mislabelling humans. Adding a culturally sensitive loss function to the training is nontrivial. Ad hoc corrections against particular cases – like this one – only help after a scandalous mislabelling has already occurred: we will not know what counts as misbehaviour until we see it.
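To see what “culturally sensitive” would even have to mean, here is a rough sketch of a confusion-cost matrix: every entry is a cultural judgement about how bad a particular mislabelling is. The labels and numbers below are invented for illustration, which is precisely why specifying such a loss in advance is so hard.

```python
import numpy as np

labels = ["chair", "person", "gorilla"]
# cost[i, j] = cost of predicting label j when the true label is i (illustrative values)
cost = np.array([
    [0.0, 1.0,  1.0],   # mislabelling a chair: mildly embarrassing
    [1.0, 0.0, 50.0],   # person -> gorilla: culturally charged, weighted very heavily
    [1.0, 5.0,  0.0],
])

def expected_cost(true_idx: int, predicted_probs: np.ndarray) -> float:
    """Expected confusion cost of one softmax output under the cost matrix."""
    return float(cost[true_idx] @ predicted_probs)

# A classifier that puts 30% probability on "gorilla" for a person pays dearly:
print(expected_cost(labels.index("person"), np.array([0.1, 0.6, 0.3])))   # 15.1
```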

[ Incidentally, this suggests a way for automatic insult generation: use computer vision to find matching categories, and select the one that is closest but has the lowest social status (perhaps detected using sentiment analysis). It will be hilarious for the five seconds until somebody takes serious offence. ]

It has been suggested that the behaviour was due to training data being biased towards white people, making the model subtly biased. If there are few examples of a category, it might be suppressed or overused as a response. This can be very hard to fix, since many systems and data sources have a patchy spread in social space. But maybe we need to pay more attention to whether data is socially diverse enough. It is worth recognizing that since a machine learning system may be used by very many users once it has been trained, it has the power to project its biased view of the world onto many: getting things right in a universal system, rather than in something used by a few, may be far more important than it looks. We may also need enough online learning over time so that such systems update their worldview as culture evolves.

Moral actors, proxies and patients

Making machines that act in a moral context is even iffier.

My standard example is of course the autonomous car, which may find itself in situations that would count as moral choices for a human. Here the issue is who sets the decision scheme: presumably they would be held accountable insofar as they could predict the consequences of their code and could be identified. I have argued that it is good to have the car try to behave as its “driver” would, but it will still be limited by the sensory and cognitive abilities of the vehicle. Moral proxies are doable, even if they are not moral agents.

The manufacture and behavior of killer robots is of course even more contentious. Even if we think they can be acceptable in principle and have a moral system that we think would be the right one to implement, actually implementing it for certain may prove exceedingly hard. Verification of robotics is hard; verification of morally important actions based on real-world data is even worse. And one cannot shirk the responsibility to do so if one deploys the system.

Note that none of this presupposes real intelligence or truly open-ended action abilities. Those just make an already hard problem tougher. Machines that can only act within a well-defined set of constraints can be further constrained not to go into parts of state- or action-space we know are bad (but as discussed above, even captioning images is a sufficiently big space that we will find surprising bad actions).

As I mentioned above, the bulk of the talk was my argument that whole brain emulation attempts can produce systems we have good reasons to be careful with: we do not know if they are moral agents, but they are intentionally architecturally and behaviourally close to moral agents.

A new aspect I got the chance to discuss is the problem about non-emulation neural networks. When do we need to consider them? Brian Tomasik has written a paper about whether we should regard reinforcement learning agents as moral patients (see also this supplement). His conclusion is that these programs mimic core motivation/emotion cognitive systems that almost certainly matter for real moral patients’ patient-hood (an organism without a reward system or learning would presumably lose much or all of its patient-hood), and there is a nonzero chance that they are fully or partially sentient.

But things get harder for other architectures. A deep learning network with just a feedforward architecture is presumably unable to be conscious, since many theories of consciousness presuppose some forms of feedback – and that is not possible in that architecture. But at the conference there have been plenty of recurrent networks that have all sorts of feedback. Whether they can have experiential states appears tricky to answer. In some cases we may argue they are too small to matter, but again we do not know if level of consciousness (or moral considerability) necessarily has to follow brain size.
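A toy numpy sketch of that architectural distinction (random weights and made-up sizes, purely to show where the feedback enters):

```python
import numpy as np

rng = np.random.default_rng(0)
W_in  = rng.normal(size=(8, 4))   # input -> hidden
W_rec = rng.normal(size=(8, 8))   # hidden -> hidden (the feedback loop)
W_out = rng.normal(size=(2, 8))   # hidden -> output

def feedforward(x):
    # No feedback: hidden activity exists only transiently on the way to the output.
    return W_out @ np.tanh(W_in @ x)

def recurrent(xs):
    # Feedback: the hidden state h is fed back at every step, so earlier activity
    # shapes later processing - the ingredient many consciousness theories require.
    h = np.zeros(8)
    for x in xs:
        h = np.tanh(W_in @ x + W_rec @ h)
    return W_out @ h

print(feedforward(rng.normal(size=4)))
print(recurrent(rng.normal(size=(5, 4))))
```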

They also inhabit a potentially alien world where their representations could be utterly unrelated to what we humans understand or can express. One might say, paraphrasing Wittgenstein, that if a neural network could speak we would not understand it. However, there might be ways of making their internal representations less opaque. Methods such as inceptionism, deep visualization, or t-SNE can actually help discern some of what is going on inside. If we were to discover a set of concepts that were similar to human or animal concepts, we might have reason to tread a bit more carefully – especially if there were concepts linked to some of them in the same way “suffering concepts” may be linked to other concepts. This looks like a very relevant research area, both for debugging our learning systems and for mapping out the structures of animal, human and machine minds.
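As a rough sketch of the t-SNE route (synthetic activations stand in for a real network’s hidden layer; scikit-learn is assumed available):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(1)
# Pretend these are hidden-layer activations for inputs from three different classes.
activations = np.vstack([rng.normal(m, 0.5, size=(200, 64)) for m in (0.0, 2.0, 4.0)])

embedding = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(activations)
print(embedding.shape)  # (600, 2) - plot these points, coloured by input class, to see
                        # whether the network's internal groupings resemble our concepts
```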

In the end, if we want safe and beneficial smart systems, we better start figuring out how to understand them better.

Harming virtual bodies

I was recently interviewed by Anna Denejkina for Vertigo, and references to the article seem to be circulating around. Given the hot-button topic – transhumanism and virtual rape – I thought it might be relevant to bring out what I said in the email interview.

(Slightly modified for clarity, grammar and links)

> How are bioethicists and philosophers coping with the ethical issues which may arise from transhumanist hacking, and what would be an outcome of hacking into the likes of full body haptic suit, a smart sex toy, e-spot implant, i.e.: would this be considered act of kidnapping, or rape, or another crime?

There is some philosophy of virtual reality and augmented reality, and a lot more about the ethics of cyberspace. The classic essay is this 1998 one, dealing with a text-based rape in the mid-90s.

My personal view is that our bodies are the interfaces between our minds and the world. The evil of rape is that it involves violating our ability to interact with the world in a sensual manner: it involves both coercion of bodies and the infliction of a mental violation. So from this perspective it does not matter much if the rape happens to a biological body, or a virtual body connected via a haptic suit, or some brain implant. There might of course be lesser violations where the coercion is limited (you can easily log out) or the harm milder (a hacked sex toy might infringe on privacy and one’s sexual integrity, but it cannot coerce): the key issue is that somebody is violating the body-mind interface system, and we are especially vulnerable when this involves our sexual, emotional and social sides.

Widespread use of virtual sex will no doubt produce many tricky ethical situations. (What about recording the activities and replaying them without the partner’s knowledge? What if the partner is not who I think it is? What about mapping the sexual encounter onto virtual or robot bodies that look like children or animals? What about virtual sexual encounters that break the laws in one country but not another?)

Much of this will sort itself out like with any new technology: we develop norms for it, sometimes after much debate and anguish. I suspect we will become much more tolerant of many things that are currently weird and taboo. The issue ethicists may worry about is whether we would also become blasé about things that should not be accepted. I am optimistic about it: I think that people actually do react to things that are true violations.

> If such a violation was to occur, what can be done to ensure that today’s society is ready to treat this as a real criminal issue?

Criminal law tends to react slowly to new technology, and usually tries to map new crimes onto old ones (if I steal your World of Warcraft equipment I might be committing fraud rather than theft, although different jurisdictions have very different views – some even treat this as gambling debts). This is especially true for common law systems like the US and UK. In civil law systems like most of Europe, laws tend to get passed when enough people convince politicians that There Ought To Be a Law Against It (sometimes unwisely).

So to sum up, look at whether people involuntarily suffer real psychological anguish, loss of reputation, or loss of control over important parts of their exoselves due to the actions of other people. If they do, then at least something immoral has happened. Whether laws, better software security, social norms or something else (virtual self defence? built-in safewords?) is the best remedy may depend on the technology and culture.

I think there is an interesting issue in what role the body plays here. As I said, the body is an interface between our minds and the world around us. It is also a nontrivial thing: it has properties and states of its own, and these affect how we function. Even if one takes a nearly cybergnostic view that we are merely minds interfacing with the world, rather than a richer embodiment view, this plays an important role. If I have a large, small, hard or vulnerable body, it will affect how I can act in the world – and this will undoubtedly affect how I think of myself. Our representations of ourselves are strongly tied to our bodies and the relationship between them and our environment. Our somatosensory cortex maps itself to how touch is distributed across our skin, and our parietal cortex not only represents the body-environment geometry but seems involved in our actual sense of self.

This means that hacking the body is more serious than hacking other kinds of software or possessions. Currently it is our only way of existing in the world. Even in an advanced VR/transhuman society where people can switch bodies simply and freely, infringing on bodies has bigger repercussions than changing other software outside the mind – especially if it is subtle. The violations discussed in the article are crude, overt ones. But subtle changes to ourselves may fly under the radar of outrage, yet do harm.

Most people are no doubt more interested in the titillating combination of sex and tech – there is a 90s cybersex vibe coming off this discussion, isn’t there? The promise of new technology to give us new things to be outraged by or dream about. But the philosophical core is about the relation between the self, the other, and what actually constitutes harm – very abstract, and not truly amenable to headlines.

 

Baby interrupted

Francesca Minerva and I have a new paper out: Cryopreservation of Embryos and Fetuses as a Future Option for Family Planning Purposes (Journal of Evolution and Technology – Vol. 25 Issue 1 – April 2015 – pp. 17-30).

Basically, we analyse the ethics of cryopreserving fetuses, especially as an alternative to abortion. While technologically we do not yet have any means to bring a separated (let alone cryopreserved) fetus to term, it is not inconceivable that advances in ectogenesis (artificial wombs) or the biotechnological production of artificial placentas allowing reimplantation could be achieved. And a cryopreserved fetus would have all the time in the world, just like an adult cryonics patient.

It is interesting to see how the standard ethical arguments against abortion fare when dealing with cryopreservation. There is no killing, personhood is not affected, there is no loss of value of the future – just a long delay. One might be concerned that fetuses will not be reimplanted but just left in limbo forever, but clearly this is a better state than being irreversibly aborted: cryopreservation can (eventually) be reversed. I think our paper shows that (regardless of what one thinks of cryonics) irreversibility is the key ethical issue in abortion.

In the end, it will likely take a long time before this is a viable option. But it seems that there are good reasons to consider cryopreservation and reimplantation of fetuses: animal husbandry, space colonisation, various medical treatments (consider “interrupting” an ongoing pregnancy because the mother needs cytostatic treatment), and now this family planning reason.

Crispy embryos

Researchers at Sun Yat-sen University in Guangzhou have edited the germline genome of human embryos (paper). They used the ever more popular CRISPR/Cas9 method to try to modify the gene involved in beta-thalassaemia in non-viable leftover embryos from a fertility clinic.

As usual there is a fair bit of handwringing, especially since there was a recent call for a moratorium on this kind of thing from one set of researchers, and a more liberal (yet cautious) response from another set. As noted by ethicists, many of the ethical concerns are actually somewhat confused.

That germline engineering can have unpredictable consequences for future generations is just as true of normal reproduction. More strongly, somebody making the case that (say) race mixing should be hindered because of unknown future effects would be condemned as a racist: we have overarching reasons to allow people to live and procreate freely that morally overrule worries about their genetic endowment – even if there actually were genetic issues (as far as I know all branches of the human family are equally interfertile, but this might just be a historical contingency). For a possible future effect to matter morally it needs to be pretty serious, and we need to have some real reason to think it is more likely to happen because of the actions we take now. A vague unease or a mere possibility is not enough.

However, the paper actually gives a pretty good argument for why we should not try this method in humans. They found that the efficiency of the repair was about 50%, but more worryingly that there were off-target mutations and that a similar gene was accidentally modified. These are good reasons not to try it. Not unexpected, but very helpful in that we can actually make informed decisions both about whether to use it (clearly not until the problems have been fixed) and about what needs to be investigated (how can it be done well? why does it work worse here than advertised?).

The interesting thing about the paper is that its fairly negative results, which would reduce interest in human germline changes, are nevertheless decried as unethical. It is hard to make this claim stick, unless one buys into the view that germline changes to human embryos are intrinsically bad. The embryos could not develop into persons and would have been discarded by the fertility clinic, so there was no possible future person being harmed (if one thinks fertilized but non-viable embryos deserve moral protection, one has other big problems). The main fear seems to be that if the technology is demonstrated many others will follow, but an early negative result would seem to weaken this slippery slope argument.

I think the real reason people think there is an ethical problem is the association of germline engineering with “designer babies”, and the conditioning that designer babies are wrong. But they can’t be wrong for no reason: there has to be an ethical argument for their badness. There is no shortage of such arguments in the literature, ranging from ideas of the natural order, human dignity, accepting the given, and the importance of an open-ended life to issues of equality, just to mention a few. But none of these are widely accepted as slam-dunk arguments that conclusively show designer babies are wrong: each of them also faces vigorous criticism. One can believe one or more of them to be true, but it would be rather premature to claim that settles the debate. And even then, most of these designer baby arguments are irrelevant for the case at hand.

All in all, it was a useful result that will probably reduce both risky and pointless research and focus attention on what matters. I think that makes it quite ethical.