A clean well-lighted challenge: those eyes

On Extropy-chat my friend Spike suggested a fun writing challenge:

“So now I have a challenge for you.  Write a Hemmingway-esque story (or a you-esque story if you are better than Papa) which will teach me something, anything.  The Hemmingway story has memorable qualities, but taught me nada.  I am looking for a short story that is memorable and instructive, on any subject that interests you. Since there is so much to learn in this tragically short life, the shorter the story the better, but it should create memorable images like Hemmingway’s Clean, it must teach me something, anything.”

Here is my first attempt. (V 1.1, slightly improved from my list post and with some links). References and comments below.

Those eyes

“Customers!”
“Ah, yes, customers.”
“Cannot live with them, cannot live without them.”
“So, who?”
“The optics guys.”
“Those are the worst.”
“I thought that was the security guys.”
“Maybe. What’s the deal?”
“Antireflective coatings. Dirt repelling.”
“That doesn’t sound too bad.”
“Some of the bots need to have diffraction spread, some should not. Ideally determined just when hatching.”
“Hatching? Self-assembling bots?”
“Yes. Cannot do proper square root index matching in those. No global coordination.”
“Crawly bugbots?”
“Yes. Do not even think about what they want them for.”
“I was thinking of insect eyes.”
“No. The design is not faceted. The optics people have some other kind of sensor.”
“Have you seen reflections from insect eyes?”
“If you shine a flashlight in the garden at night you can see jumping spiders looking back at you.”
“That’s their tapeta, like a cat’s. I was talking about reflections from the surface.”
“I have not looked, to be honest.”
“There aren’t any glints when light glances across fly eyes. And dirt doesn’t stick.”
“They polish them a lot.”
“Sure. Anyway, they have nipples on their eyes.”
“Nipples?”
“Nipple-like nanostructures. A whole field of them on the cornea.”
“Ah, lotus coatings. Superhydrophobic. But now you get diffraction and diffraction glints.”
“Not if they are sufficiently randomly distributed.”
“It needs to be an even density. Some kind of Penrose pattern.”
“That needs global coordination. Think Turing pattern instead.”
“Some kind of tape?”
“That’s a Turing machine. This is his last work from ’52, computational biology.”
“Never heard of it.”
“It uses two diffusing signal substances: one that stimulates production of itself and an inhibitor, and the inhibitor diffuses further.”
“So a blob of the first will be self-supporting, but have a moat where other blobs cannot form.”
“Yep. That is the classic case. It all depends on the parameters: spots, zebra stripes, labyrinths, even moving leopard spots and oscillating modes.”
“All generated by local rules.”
“You see them all over the place.”
“Insect corneas?”
“Yes. Some Russians catalogued the patterns on insect eyes. They got the entire Turing catalogue.”
“Changing the parameters slightly presumably changes the pattern?”
“Indeed. You can shift from hexagonal nipples to disordered nipples to stripes or labyrinths, and even over to dimples.”
“Local interaction, parameters easy to change during development or even after, variable optics effects.”
“Stripes or hexagons would do diffraction spread for the bots.”
“Bingo.”

References and comments

Blagodatski, A., Sergeev, A., Kryuchkov, M., Lopatina, Y., & Katanaev, V. L. (2015). Diverse set of Turing nanopatterns coat corneae across insect lineages. Proceedings of the National Academy of Sciences, 112(34), 10750-10755.

My old notes on models of development for a course, with a section on Turing patterns. There are many far better introductions, of course.

Nanostructured chitin can do amazing optics stuff, like the wings of the Morpho butterfly: P. Vukusic, J.R. Sambles, C.R. Lawrence, and R.J. Wootton (1999). “Quantified interference and diffraction in single Morpho butterfly scales”. Proceedings of the Royal Society B 266 (1427): 1403–11.

Another cool example of insect nano-optics: Land, M. F., Horwood, J., Lim, M. L., & Li, D. (2007). Optics of the ultraviolet reflecting scales of a jumping spider. Proceedings of the Royal Society of London B: Biological Sciences, 274(1618), 1583-1589.

One point Blagodatski et al. make is that the different eye patterns are scattered all over the insect phylogenetic tree: since it is easy to change parameters, one can get whatever surface is needed by just turning a few genetic knobs (as with snake skin patterns or the number of digits in mammals). I found a local paper looking at inferring phylogenies by maximum likelihood from pattern settings. While that paper was pretty optimistic about being able to figure out phylogenies this way, I suspect the Blagodatski paper shows that the patterns can change so quickly that this will only be applicable to closely related species.

It is fun to look at how the Fourier transform changes as the parameters of the pattern change:
[Figures: leopard spot pattern, random spot pattern, zebra stripe pattern, hexagonal dimple pattern, each with its Fourier transform.]

In this case I move the parameter b up from a low value to a higher one. At first I get “leopard spots” that divide and repel each other (very fun to watch), arraying themselves to fit within the boundary. This produces the vertical and horizontal stripes in the Fourier transform. As b increases the spots form a more random array, and there is no particular direction favoured in the transform: there is just an annulus around the center, representing the typical inter-blob distance. As b increases more, the blobs merge into stripes. For these parameters they snake around a bit, producing an annulus of uneven intensity. At higher values they merge into a honeycomb, and now the annulus collapses to six peaks (plus artefacts from the low resolution).
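
For readers who want to play with this themselves, here is a minimal sketch of the kind of activator-inhibitor (Gierer-Meinhardt-style) simulation that produces such patterns, together with the Fourier transform. The model, the parameter named b and all values below are illustrative stand-ins, not the exact system behind the figures above.

```python
import numpy as np

def laplacian(f):
    # Five-point Laplacian with periodic boundaries.
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)

def turing_pattern(n=128, steps=20000, dt=0.01, Da=0.05, Dh=1.0, b=0.5, seed=0):
    """Activator a stimulates itself and the inhibitor h; h diffuses further (Dh >> Da).

    The decay parameter b acts as the pattern-selecting knob: different values push
    the system towards spots, stripes or honeycombs (parameter values illustrative).
    """
    rng = np.random.default_rng(seed)
    a = 1.0 + 0.1 * rng.standard_normal((n, n))
    h = 1.0 + 0.1 * rng.standard_normal((n, n))
    for _ in range(steps):
        production = a * a / ((1.0 + 0.01 * a * a) * (h + 1e-6))  # weak saturation keeps spikes bounded
        a += dt * (Da * laplacian(a) + production - b * a + 0.01)
        h += dt * (Dh * laplacian(h) + a * a - h)
    return a

pattern = turing_pattern(b=0.5)
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(pattern - pattern.mean())))
# A field of irregular blobs gives an annulus in `spectrum` at the typical
# inter-blob wavenumber; a hexagonal packing collapses the annulus into six peaks.
```

Sweeping the decay parameter through a range and recomputing the spectrum gives the same qualitative progression described above, from a smeared annulus towards a small set of discrete peaks.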

Brewing bad policy

The New York Times reports that yeast has been modified to make THC. The paper describing the method uses precursor molecules, so it is not a hugely radical step forward. Still, it dovetails nicely with the recent paper in Science about full biosynthesis of opiates from sugar (still not very efficient compared to plants, though). Already this spring there was a comment piece in Nature about how to regulate the possibly imminent drug microbrewing, which I commented on at Practical Ethics.

Rob Carlson has an excellent commentary on the problems with the regulatory reflex about new technology. He is basically arguing for a “first, do no harm” principle for technology policy.

Policy conversations at all levels regularly make these same mistakes, and the arguments are nearly uniform in structure. “Here is something we don’t know about, or are uncertain about, and it might be bad – really, really bad – so we should most certainly prepare policy options to prevent the hypothetical worst!” Exclamation points are usually just implied throughout, but they are there nonetheless. The policy options almost always involve regulation and restriction of a technology or process that can be construed as threatening, usually with little or no consideration of what that threatening thing might plausibly grow into, nor of how similar regulatory efforts have fared historically.

This is such a common conversation that in many fields like AI even bringing up that there might be a problem makes practitioners think you are planning to invoke regulation. It fits with the hyperbolic tendency of many domains. For the record, if there is one thing we in the AI safety research community agree on, it is that more research is needed before we can give sensible policy recommendations.

Figuring out what policies can work requires understanding what the domain actually is about (including what it can actually do, what it likely will be able to do one day, and what it cannot do), how different policy options have actually worked in the past, and what policy options actually exist in policy-making. This requires a fair bit of interdisciplinary work between researchers and policy professionals. Clearly we need more forums where this can happen.

And yes, even existential risks need to be handled carefully like this. If their importance overshadows everything, then getting policies that actually reduce the risk is a top priority: dramatic, fast policies do not guarantee working risk reduction, and once a policy is in place it is hard to shift. For most low-probability threats we do not gain much survival by rushing policies into place compared to getting better policies.

Why Cherry 2000 should not be banned, Terminator should, and what this has to do with Oscar Wilde

[This is what happens when I blog after two glasses of wine. Trigger warning for possibly stupid cultural criticism and misuse of Oscar Wilde.]

From robots to artificiality

On Practical Ethics I discuss what kind of robots we ought to campaign against. I have signed up against autonomous military robots, but I think sex robots are fine. The dividing line is that the harm done (if any) is indirect and victimless, and best handled through sociocultural means rather than legislation.

I think the campaign against sex robots has a point in that there are some pretty creepy ideas floating around in the world of current sex bots. But I also think it assumes these ideas are the only possible motivations. As I pointed out in my comments on another practical ethics post, there are likely people turned on by pure artificiality – human sexuality can be far queerer than most think.

Going off on a tangent, I am reminded of Oscar Wilde’s epigram

“The first duty in life is to be as artificial as possible. What the second duty is no one has as yet discovered.”

Being artificial is not the same thing as being an object. As noted by Barris, Wilde’s artificiality actually fits in with pluralism and liberalism. Things could be different. Yes, in the artificial world nothing is absolutely given, everything is the result of some design choices. But assuming some eternal Essence/Law/God is necessary for meaning or morality exposes one to a fruitless search for that Thing (or worse, a premature assumption one has found It, typically when looking in the mirror). Indeed, as Dorian Gray muses, “Is insincerity such a terrible thing? I think not. It is merely a method by which we can multiply our personalities.” We are not single personas with unitary identities and well defined destinies, and this is most clearly visible in our social plays.

Sex, power and robots

Continuing on my Wildean binge, I encountered another epigram:

“Everything in the world is about sex except sex. Sex is about power.”

I think this cuts close to the Terminator vs. Cherry 2000 debate. Most modern theorists of gender and sex are of course power-obsessed (let’s blame Foucault). The campaign against sex robots clearly sees the problem as the robots embodying and perpetuating a problematic unequal power structure. I detect a whiff of paternalism there, where women and children – rather than people – seem to be assumed to be the victims and in need of being saved from this new technology (at least it is not going as far as some other campaigns that fully assume they are also suffering from false consciousness and must be saved from themselves, the poor things). But sometimes a cigar is just a cigar… I mean sex is sex: it is important to recognize that one of the reasons for sex robots (and indeed prostitution) is the desire for sex and the sometimes awkward social or biological constraints of experiencing it.

The problem with autonomous weapons is that power really comes out of a gun. (Must resist making a Zardoz reference…) It might be wielded arbitrarily by an autonomous system with unclear or bad orders, or it might be wielded far too efficiently by an automated armed force perfectly obedient to its commanders – removing the constraint that soldiers might turn against their rulers if aimed against their citizenry. Terminator is far more about unequal and dangerous power than sex (although I still have fond memories of seeing a naked Arnie back in 1984). The cultural critic may argue that the power games in the bedroom are more insidious and affect more of our lives than some remote gleaming gun-metal threat, but I think I’d rather have sexism than killing and automated totalitarianism. The uniforms of the killer robots are not even going to look sexy.

It is for your own good

Trying to ban sex robots is about trying to shape society in an appealing way – the goal of the campaign is to support “development of ethical technologies that reflect human principles of dignity, mutuality and freedom” and the right for everybody to have their subjectivity recognized without coercion. But while these are liberal principles when stated like this, I suspect the campaign or groups like it will have a hard time keeping out of our bedrooms. After all, they need to ensure that there is no lack of mutuality or creepy sex robots there. The liberal respect for mutuality can become a very non-liberal worship of Mutuality, embodied in requiring partners to sign consent forms, demanding trigger warnings, and treating everybody who does not respond correctly to its keywords as a suspect of future crimes. The fact that this absolutism comes from a very well-meaning impulse to protect something fine makes it even more vicious, since any criticism is easily mistaken for an attack on the very core Dignity/Mutuality/Autonomy of humanity (and hence any means of defence are OK). And now we have all the ingredients for a nicely self-indulgent power trip.

This is why Wilde’s pluralism is healthy. Superficiality, accepting the contrived and artificial nature of our created relationships, means that we become humble in asserting their truth and value. Yes, absolute relativism is stupid and self-defeating. Yes, we need to treat each other decently, but I think it is better to start from the Lockean liberalism that allows people to have independent projects rather than assume that society and its technology must be designed to embody the Good Values. Replacing “human dignity” with the word “respect” usually makes ethics clearer.

Instead of assuming we can figure out a priori how technology will change us and then select the right technology, we should try things and learn. We can make some predictions with reasonable accuracy, which is why trying to rein in autonomous weapons makes sense (the probability that they lead to a world of stability and peace seems remote). But predicting cultural responses to technology is not something we have any good track record of: most deliberate improvements of our culture have come from social means and institutions, not from banning technology.

“The fact is, that civilisation requires slaves. The Greeks were quite right there. Unless there are slaves to do the ugly, horrible, uninteresting work, culture and contemplation become almost impossible. Human slavery is wrong, insecure, and demoralising. On mechanical slavery, on the slavery of the machine, the future of the world depends.”

Living forever

Benjamin Zand has made a neat little documentary about transhumanism, attempts to live forever and the posthuman challenge. I show up of course as soon as ethics is being mentioned.

Benjamin and I had a much, much longer (and very fun) conversation about ethics than could ever be squeezed into a TV documentary. Everything from personal identity to overpopulation to the meaning of life. Plus the practicalities of cryonics, transhuman compassion and how to test if brain emulation actually works.

I think the inequality and control issues are interesting to develop further.

Would human enhancement boost inequality?

There is a trivial sense in which just inventing an enhancement produces profound inequality since one person has it, and the rest of mankind lacks it. But this is clearly ethically uninteresting: what we actually care about is whether everybody gets to share something good eventually.

However, the trivial example shows an interesting aspect of inequality: it has a timescale. An enhancement that will eventually benefit everyone but is unequally distributed may be entirely OK if it is spreading fast enough. In fact, by being expensive at the start it might even act as a kind of early adopter/rich tax, since the first versions will pay for the R&D of consumer versions – compare computers and smartphones. While one could argue that it is bad to get temporary inequality, long-term benefits would outweigh this for most enhancements and most value theories: we should not sacrifice the poor of tomorrow for the poor of today by delaying the launch of beneficial technologies (especially since R&D to make them truly cheap is unlikely to happen if technocrats keep the technology in their labs – making tech cheap and useful is actually one area where we know empirically that the free market is really good).

If the spread of some great enhancement is slower than it could be, though, then we may have a problem.

I often encounter people who think that the rich will want to keep enhancements to themselves. I have never encountered any evidence for this being actually true except for status goods or elites in authoritarian societies.

There are enhancements like height that are merely positional: it is good to be taller than others (if male, at least), but if everybody gets taller nobody benefits and everybody loses a bit (more banged heads and heart problems). Other enhancements are absolute: living healthy longer or being smarter is good for nearly all people regardless of how long other people live or how smart they are (yes, there might be some coordination benefits if you live just as long as your spouse or have a society where you can participate intellectually, but these hardly negate the benefit of joint enhancement – in fact, they support it). Most of the interesting enhancements are in this category: while they might be great status goods at first, I doubt they will remain that for long since there are other reasons than status to get them. In fact, there are likely network effects from some enhancements like intelligence: the more smart people working together in a society, the greater the benefits.

In the video, I point out that limiting enhancement to the elite means the society as a whole will not gain the benefit. Since elites actually reap rents from their society, this means that from their perspective it is actually in their best interest to have a society growing richer and more powerful (as long as they are in charge). Keeping enhancements exclusive would mean they lose out in the long run to other societies that have broader spreads of enhancement. We know that widespread schooling, free information access and freedom to innovate tend to produce way wealthier and more powerful societies than those where only elites have access to these goods. I have strong faith in the power of diverse societies, despite their messiness.

My real worry is that enhancements may be like services rather than gadgets or pills (which come down exponentially in price). That would keep them harder to reach, and might hold back adoption (especially since we have not been as good at automating services as manufacturing). Still, we do subsidize education at great cost, and if an enhancement is desirable democratic societies are likely to scramble for a way of supplying it widely, even if it is only through an enhancement lottery.

However, even a world with unequal distribution is not necessarily unjust. Besides the standard Nozickian argument that a distribution is just if it was arrived at through just means, there is the Rawlsian argument that if the unequal distribution actually produces benefits for the weakest it is OK. This is likely very true for intelligence amplification and maybe brain emulation, since they are likely to cause strong economic growth and innovations that produce spillover effects – especially if there is any form of taxation or even mild redistribution.

Who controls what we become? Nobody, we/ourselves/us

The second issue is who gets a say in this.

As I respond in the interview, in a way nobody gets a say. Things just happen.

People innovate, adopt technologies and change, and attempts to control that mean controlling creativity, business and autonomy – you better have a very powerful ethical case to argue for limitations in these, and an even better political case to implement any. A moral limitation of life extension needs to explain how it averts consequences worse than 100,000 dead people per day. Even if we all become jaded immortals that seems less horrible than a daily pile of corpses 12.3 meters high and 68 meters across (assuming an angle of repose of 20 degrees – this was the most gruesome geometry calculation I have done so far). Saying we should control technology is a bit like saying society should control art: it might be more practically useful, but it springs from the same well of creativity, and limiting it is as suffocating as limiting what may be written or painted.
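
For the curious, the gruesome geometry is just the volume of a cone with a given angle of repose. The per-body volume below is my own rough assumption (about 0.15 m³ including packing gaps), chosen because it roughly reproduces the figures quoted above.

```python
import math

bodies_per_day = 100_000
volume_per_body = 0.15                 # m^3 per body including gaps (rough assumption)
repose = math.radians(20)              # angle of repose of the pile

total_volume = bodies_per_day * volume_per_body
# Cone with slope set by the angle of repose: V = (1/3) * pi * r^2 * h, with r = h / tan(repose)
height = (3 * total_volume * math.tan(repose) ** 2 / math.pi) ** (1 / 3)
diameter = 2 * height / math.tan(repose)
print(f"{height:.1f} m high, {diameter:.0f} m across")   # about 12.4 m and 68 m
```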

Technological determinism is often used as an easy out for transhumanists: the future will arrive no matter what you do, so the choice is just between accepting or resisting it. But this is not the argument I am making. That nobody is in charge doesn’t mean the future is not changeable.

The very creativity, economics and autonomy that creates the future is by its nature something individual and unpredictable. While we can relatively safely assume that if something can be done it will be done, what actually matters is whether it will be done early or late, and whether it will be done seldom or often. We can try to hurry beneficial or protective technologies so they arrive before the more problematic ones. We can try to favour beneficial directions over more problematic ones. We can create incentives that make fewer want to use the bad ones. And so on. The “we” in this paragraph is not so much a collective coordinated “us” as the sum of individuals, companies and institutions, “ourselves”: there is no requirement to get UN permission before you set out to make safe AI or develop life extension. It just helps if a lot of people support your aims.

John Stuart Mill’s harm principle allows society to step in and limit freedom when it causes harms to others, but most enhancements look unlikely to produce easily recognizable harms. This is not a ringing endorsement: as Nick Bostrom has pointed out, there are some bad directions of evolution we might not want to go down, yet it is individually rational for each of us to go slightly in that direction. And existential risk is so dreadful that it actually does provide a valid reason to stop certain human activities if we cannot find alternative solutions. So while I think we should not try to stop people from enhancing themselves, we should want to improve our collective coordination ability to restrain ourselves. This is the “us” part. Restraint does not just have to happen in the form of rules: we restrain ourselves already using socialization, reputations, and incentive structures. Moral and cognitive enhancement could add restraints we currently do not have: if you can clearly see the consequences of your actions it becomes much harder to do bad things. The long-term outlook fostered by radical life extension may also make people more risk averse and willing to plan for long-term sustainability.

One could dream of some enlightened despot or technocrat deciding. A world government filled with wise, disinterested and skilled members planning our species future. But this suffers from essentially the economic calculation problem: while a central body might have a unified goal, it will lack information about the preferences and local states among the myriad agents in the world. Worse, the cognitive abilities of the technocrat will be far smaller than the total cognitive abilities of the other agents. This is why rules and laws tend to get gamed – there are many diverse entities thinking about ways around them. But there are also fundamental uncertainties and emergent phenomena that will bubble up from the surrounding agents and mess up the technocratic plans. As Virginia Postrel noted, the typical solution is to try to browbeat society into a simpler form that can be managed more easily… which might be acceptable if the stakes are the very survival of the species, but otherwise just removes what makes a society worth living in. So we better maintain our coordination ourselves, all of us, in our diverse ways.

 

ET, phone for you!

I have been in the media recently since I became the accidental spokesperson for UKSRN at the British Science Festival in Bradford:

BBC / The Telegraph / The Guardian / Iol SciTech / The Irish Times / Bt.com

(As well as BBC 5 Live, BBC Newcastle and BBC Berkshire… so my comments also get sent to space as a side effect).

My main message is that we are going to send in something for the Breakthrough Message initiative: a competition to write a good message to be sent to aliens. The total pot is a million dollars (it seems that was misunderstood in some reporting: it is likely not going to be one huge prize, but rather several smaller ones). The message will not actually be sent to the stars: this is an intellectual exercise rather than a practical one.

(I also had some comments about the link between Langsec and SETI messages – computer security is actually a bit of an issue for fun reasons. Watch this space.)

Should we?

One interesting issue is whether there are any good reasons not to signal. Stephen Hawking famously argued against it (but he is a strong advocate of SETI), as does David Brin. A recent declaration argues that we should not signal unless there is widespread agreement about it. Yet others have made the case that we should signal, perhaps a bit cautiously. In fact, an eminent astronomer just told me he could not take concerns about sending a message seriously.

Some of the arguments are (in no particular order):

Pro:
- SETI will not work if nobody speaks.
- ETI is likely to be far more advanced than us and could help us.
- Knowing if there is intelligence out there is important.
- Hard to prevent transmissions.
- Radio transmissions are already out there.
- Maybe they are waiting for us to make the first move.

Con:
- Malign ETI.
- Past meetings between different civilizations have often ended badly.
- Giving away information about ourselves may expose us to accidental or deliberate hacking.
- Waste of resources.
- If the ETI is quiet, it is for a reason.
- We should listen carefully first, then transmit.

It is actually an interesting problem: how do we judge the risks and benefits in a situation like this? Normal decision theory runs into trouble (not that it stops some of my colleagues). The problem here is that the probability and potential gain/loss are badly defined. We may have our own personal views on the likelihood of intelligence within radio reach and its nature, but we should be extremely uncertain given the paucity of evidence.

[Even the silence in the sky is some evidence, but it is somewhat tricky to interpret given that it is compatible with both no intelligence (because of rarity or danger), intelligence not communicating or not looking in the spectra we see, cultural convergence towards quietness (the zoo hypothesis, everybody hiding, everybody becoming Jupiter brains), or even the simulation hypothesis. The first category is at least somewhat concise, while the latter categories have endless room for speculation. One could argue that since the latter categories can fit any kind of evidence they are epistemically weak and we should not trust them much.]

Existential risks also tend to take precedence over almost anything. If we can avoid doing something that could cause existential risk, the maxiPOK principle tells us not to do it: we can avoid sending, and sending might bring down the star wolves on us, so we should avoid it.

There is also a unilateralist curse issue. It is enough that one group somewhere thinks transmitting is a good idea and does it for the consequences to follow, whatever they are. So the more groups that consider transmitting, even if they are all rational, well-meaning and consider the issue at length, the more likely it is that somebody will do it even if it is a stupid thing to do. In situations like this we have argued it behoves us to be more conservative individually than we would otherwise have been – we should simply think twice just because sending messages is in the unilateralist curse category. We also argue in that paper that it is even better to share information and make collectively coordinated decisions.
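
A toy way to see why the number of independent deciders matters: if each group, however well-meaning, wrongly concludes with some small probability p that transmitting is a good idea, the chance that at least one group transmits is 1-(1-p)^N. The value of p below is purely illustrative.

```python
p = 0.05  # illustrative probability that a single group wrongly decides to transmit
for n in (1, 5, 20, 100):
    print(n, round(1 - (1 - p) ** n, 3))
# 1 0.05, 5 0.226, 20 0.642, 100 0.994: the curse bites as the number of groups grows
```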

Note that these arguments strengthen the con side – but largely independently of what the actual anti-message arguments are. They are general arguments that we should be careful, not final arguments.

Conversely, Alan Penny argued that given the high existential risk to humanity we may actually have little to lose: if our risk of extinction per century is 12-40%, then adding a small ETI risk has little effect on the overall risk level, yet a small chance of friendly ETI advice (“By the way, you might want to know about this…”) that decreases existential risk may be an existential hope. Suppose we think it is 50% likely that ETI is friendly, and that there is a 1% chance it is out there. If it is friendly it might give us advice that reduces our existential risk by 50%, otherwise it will eat us with 1% probability. So if we do nothing our risk is (say) 12%. If we signal, then the risk is 0.12*0.99 + 0.01*(0.5*0.12*0.5 + 0.5*(0.12*0.99+0.01)) = 11.9744% – a slight improvement. Like the Drake equation, one can of course plug in different numbers and get different effects.
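
Penny’s back-of-the-envelope calculation can be written out explicitly; these are the same illustrative numbers as in the text, not estimates of the actual probabilities.

```python
p_eti = 0.01          # chance ETI is out there and hears us
p_friendly = 0.5      # chance such ETI is friendly
baseline_risk = 0.12  # assumed existential risk per century
advice_factor = 0.5   # friendly advice halves our risk
p_eaten = 0.01        # hostile ETI destroys us with this probability

risk_if_silent = baseline_risk
risk_if_signal = (1 - p_eti) * baseline_risk + p_eti * (
    p_friendly * baseline_risk * advice_factor
    + (1 - p_friendly) * (p_eaten + (1 - p_eaten) * baseline_risk)
)
print(risk_if_silent, risk_if_signal)  # 0.12 vs 0.119744: a slight improvement
```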

Truth to the stars

Considering the situation over time, sending a message now may also be irrelevant since we could wipe ourselves out before any response arrives. That brings to mind a discussion we had at the press conference yesterday about what the point of sending messages far away would be: wouldn’t humanity be gone by then? Also, we were discussing what to present to ETI: an honest or whitewashed version of ourselves? (My co-panelist Dr Jill Stuart made some great points about the diversity issues in past attempts.)

My own view is that I’d rather have an honest epitaph for our species than a polished but untrue one. This is both relevant to us, since we may want to be truthful beings even if we cannot experience the consequences of the truth, and relevant to ETI, who may find the truth more useful than whatever our culture currently would like to present.

Ethics of brain emulations, New Scientist edition

I have an opinion piece in New Scientist about the ethics of brain emulation. The content is similar to what I was talking about at IJCNN and in my academic paper (and the comic about it). Here are a few things that did not fit the text:

Ethics that got left out

Due to length constraints I had to cut the discussion about why animals might be moral patients. That made the essay look positively Benthamite in its focus on pain. In fact, I am agnostic on whether experience is necessary for being a moral patient. Here is the cut section:

Why should we care about how real animals are treated? Different philosophers have given different answers. Immanuel Kant did not think animals matter in themselves, but our behaviour towards them matters morally: a human who kicks a dog is cruel and should not do it. Jeremy Bentham famously argued that thinking does not matter, but the capacity to suffer: “…the question is not, Can they reason? nor, Can they talk? but, Can they suffer?”. Other philosophers have argued that it matters that animals experience being subjects of their own life, with desires and goals that make sense to them. While there is a fair bit of disagreement about what this means for our responsibilities to animals and what we may use them for, there is widespread agreement that they are moral patients, something we ought to treat with some kind of care.

This is of course a super-quick condensation of a debate that fills bookshelves. It also leaves out Christine Korsgaard’s interesting Kantian work on animal rights, which as far as I can tell does not need to rely on particular accounts of consciousness and pain but rather interests. Most people would say that without consciousness or experience there is nobody that is harmed, but I am not entirely certain unconscious systems cannot be regarded as moral patients. There are for example people working in environmental ethics that ascribe moral patient-hood and partial rights to species or natural environments.

Big simulations: what are they good for?

Another interesting thing that had to be left out is comparisons of different large scale neural simulations.

(I am a bit uncertain about where the largest model in the Human Brain Project is right now; they are running more realistic models, so they will be smaller in terms of neurons. But they clearly have the ambition to best the others in the long run.)

Of course, one can argue about which approach matters. Spaun is a model of cognition using low resolution neurons, while the slightly larger (in neurons) simulation from the Lansner lab was just a generic piece of cortex, showing some non-trivial alpha and gamma rhythms, and the even larger ones showed some interesting emergent behavior despite the lack of biological complexity in the neurons. Conversely, Cotterill’s CyberChild that I worry about in the opinion piece had just 21 neurons in each region, but they formed a fairly complex network with many brain regions that in a sense is more meaningful as an organism than the near-disembodied problem-solver Spaun. Meanwhile SpiNNaker is running rings around the others in terms of speed, essentially running in real time while the others have slowdowns by a factor of a thousand or worse.

The core of the matter is defining what one wants to achieve. Lots of neurons, biological realism, non-trivial emergent behavior, modelling a real neural system, purposeful (even conscious) behavior, useful technology, or scientific understanding? Brain emulation aims at getting purposeful, whole-organism behavior from running a very large, very complete biologically realistic simulation. Many robotics and AI people are happy without the biological realism and would prefer as small a simulation as possible. Neuroscientists and cognitive scientists care about what they can learn and understand based on the simulations, rather than their completeness. They are each pursuing something useful, but it is very different between the fields. As long as they remember that others are not pursuing the same aim they can get along.

What I hope: more honest uncertainty

What I hope happens is that computational neuroscientists think a bit about the issue of suffering (or moral patient-hood) in their simulations rather than slip into the comfortable “It is just a simulation, it cannot feel anything” mode of thinking by default.

It is easy to tell oneself that simulations do not matter because not only do we know how they work when we make them (giving us the illusion that we actually know everything there is to know about the system – obviously not true since we at least need to run them to see what happens), but institutionally it is easier to regard them as non-problems in terms of workload, conflicts and complexity (let’s not rock the boat at the planning meeting, right?) And once something is in the “does not matter morally” category it becomes painful to move it out of it – many will now be motivated to keep it there.

I would rather have people keep an open mind about these systems. We do not understand experience. We do not understand consciousness. We do not understand brains and organisms as wholes, and there is much we do not understand about the parts either. We do not have agreement on moral patient-hood. Hence the rational thing to do, even when one is pretty committed to a particular view, is to be open to the possibility that it might be wrong. The rational response to this uncertainty is to get more information if possible, to hedge our bets, and try to avoid actions we might regret in the future.

The limits of the in vitro burger

Stepping on toes everywhere in our circles, Ben Levinstein and I have a post at Practical Ethics about the limitations of in vitro meat for reducing animal suffering.

The basic argument is that while factory farming produces a lot of suffering, a post-industrial world would likely have very few lives of the involved species. It would be better if they had better lives and larger populations instead. So, at least in some views of consequentialism, the ethical good of in vitro meat is reduced from a clear win to possibly even a second best to humane farming.

An analogy can be made with horses, whose population has declined precipitously from the pre-tractor, pre-car days. Current horses live (I guess) nicer lives than the more work-oriented horses of 1900, but there are far fewer of those lives. So the current 3 million horses in the US might have lives (say) twice as good as the 25 million horses in the 1920s: the total value has still declined. However, factory farmed animals may have lives that are not worth living, holding negative value. If we assume that the roughly 50 billion chickens in the world all have lives of value -1 each, then replacing them with in vitro meat would make the world 50 billion units better. But this could also be achieved by making their lives one unit better (and why stop there? maybe they could get two units more). Whether it matters how many entities are experiencing depends on your approach, as does whether it is an extra value if there is a chicken species around rather than not.
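
The bookkeeping here is just lives multiplied by value per life; a tiny sketch with the illustrative numbers from the text:

```python
# Total welfare = number of lives * value per life (all numbers illustrative, from the text).
horses_1920s = 25e6 * 1.0   # 25 million horses at a baseline life value of 1
horses_now = 3e6 * 2.0      # 3 million horses with lives twice as good
print(horses_1920s, horses_now)                 # 2.5e7 vs 6e6: total value still declined

chickens = 50e9
factory = chickens * -1.0   # lives assumed not worth living
in_vitro = 0.0              # no chicken lives at all
improved = chickens * 0.0   # each life made one unit better (-1 -> 0)
print(in_vitro - factory, improved - factory)   # both changes gain 5e10 units
```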

Now, I am not too troubled by this since I think in vitro meat is also very good from a health perspective, a climate perspective, and an existential risk reduction perspective (it is good for space colonization and survival if sunlight is interrupted). But I think most people come to in vitro meat from an ethical angle. And given just that perspective, we should not be too complacent that in the future we will become postagricultural: it may take time, and it might actually not increase total wellfare as much as we expected.