Thanks for the razor, Bill!

I like the idea of a thanksgiving day, leaving out all the Americana of turkeys, problematic immigrant-native relations and family logistics: just a moment to consider what really matters to you and why life is good. And giving thanks for intellectual achievements and tools makes eminent sense: this Thanksgiving Sean Carroll gave thanks for the Fourier transform.

Inspired by this, I want to give thanks for Occam’s razor.

These days a razor in philosophy denotes a rule of thumb that allows one to eliminate something unnecessary or unlikely. Occam’s was the first: William of Ockham (ca. 1285-1349) stated “Pluralitas non est ponenda sine necessitate” (“plurality should not be posited without necessity”). Today we usually phrase it as “the simplest theory that fits is best”.

Principles of parsimony have been suggested for a long time; Aristotle had one, and so did Maimonides and various other medieval thinkers. But let’s let Bill from Ockham keep the name, in the spirit of Stigler’s law of eponymy.

Of course, it is not always easy to use. Can the many-worlds interpretation of quantum mechanics be shaved away? It posits an infinite number of worlds that we cannot interact with… except that it does so by taking the quantum mechanical formalism seriously (each possible world is assigned a probability) and not adding extra things like wavefunction collapse or pilot waves. In many ways it is conceptually simpler: just because there are a lot of worlds doesn’t mean they are wildly different. Somebody claiming there is a spirit world is doubling the amount of stuff in the universe, but claiming that there are a lot of ordinary worlds is not too different from claiming that there are a lot of planets.

Simplicity is actually quite complicated. One can argue about which theory has the fewest and most concise basic principles, but also about the number of kinds of entities a theory postulates. Not to mention why one should go for parsimony at all.

In my circles, we like to think of the principle in terms of Bayesian statistics and computational complexity. The more complex a theory is, the better it can typically fit known data – but it will also generalize worse to new data, since it overfits the first set of data points. Parsimonious theories have fewer degrees of freedom, so they cannot fit as well as complex theories, but they are less sensitive to noise and generalize better. One can operationalize the optimal balance using various statistical information criteria (AIC estimates the information lost when the model is used to describe the data, BIC approximates the Bayesian evidence for the model; both penalize extra parameters). And Solomonoff gave a version of the razor in theoretical computer science: for computable sequences of bits there exists a prior, unique up to the choice of universal Turing machine, that favours sequences generated by short programs and has awesome powers of inference.
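
As a toy illustration of that trade-off (my own sketch, not something from the original post), the following Python snippet fits polynomials of increasing degree to noisy data drawn from a simple underlying law and compares the training fit, the generalization to fresh data, and the AIC/BIC scores; the data-generating function, noise level and degrees are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a simple underlying law plus noise (all numbers are arbitrary choices).
x = np.linspace(0, 1, 20)
y = 2.0 * x + rng.normal(0, 0.2, size=x.size)              # training data
x_new = np.linspace(0, 1, 200)
y_new = 2.0 * x_new + rng.normal(0, 0.2, size=x_new.size)  # fresh data

for degree in (1, 3, 6):
    coeffs = np.polyfit(x, y, degree)          # fit a polynomial of this degree
    resid = y - np.polyval(coeffs, x)          # training residuals
    n, k = x.size, degree + 1                  # data points, fitted parameters
    sigma2 = np.mean(resid**2)                 # ML estimate of the noise variance
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)  # Gaussian log-likelihood
    aic = 2 * k - 2 * loglik                   # AIC: lower is better
    bic = k * np.log(n) - 2 * loglik           # BIC: lower is better
    test_mse = np.mean((y_new - np.polyval(coeffs, x_new)) ** 2)
    print(f"degree {degree}: train MSE {sigma2:.3f}  test MSE {test_mse:.3f}  "
          f"AIC {aic:.1f}  BIC {bic:.1f}")
```

Typically the higher-degree fits win on training error but do no better (or worse) on the fresh data, and both criteria point back towards the simpler model, which is the statistical core of the razor.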

But in day-to-day life Occam works well, especially combined with a maximum probability principle (you are more likely to see likely things than unlikely ones; if you see hoofprints in the UK, think horses, not zebras). A surprising number of people fall for the salient stories inherent in unlikely scenarios and then choose to ignore Occam (just think of conspiracy theories). If the losses from low-probability risks are great enough one should rationally focus on them, but then one must check one’s priors for such risks. Starting out with a possibilistic view that anything is possible (and hence has roughly equal chance) means that one becomes paranoid or frozen with indecision. Occam tells you to look for the simple, robust ways of reasoning about the world. When they turn out to be wrong, shift gears and come up with the next simplest thing.
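
To make the hoofprints example concrete, here is a minimal Bayesian sketch of my own, with made-up numbers: even when the evidence fits both hypotheses equally well, the prior odds dominate the posterior.

```python
# Bayes' rule with made-up illustrative numbers.
p_horse, p_zebra = 0.999, 0.001   # assumed priors: loose zebras are rare in the UK
p_print_given_horse = 0.9         # hoofprints are roughly equally likely
p_print_given_zebra = 0.9         # under either hypothesis
evidence = p_horse * p_print_given_horse + p_zebra * p_print_given_zebra
p_zebra_given_print = p_zebra * p_print_given_zebra / evidence
print(f"P(zebra | hoofprints) = {p_zebra_given_print:.4f}")  # ~0.001: bet on horses
```

Swap in the “possibilistic” 50/50 prior and the same evidence leaves you at 50/50, which is exactly the state that breeds paranoia or indecision.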

Simplicity might sometimes be elegant, but that is not why we should choose it. To me it is the robustness that matters: given our biased, flawed thought processes and our limited and noisy data, we should not build too elaborate castles on those foundations.

 

TransVision 2014

My talk from TransVision 2014 is now up:

Indeed, all of the talks are there – thanks!

Some talks of note: Gabriel Dorthe’s talk introduced a nice taxonomy/map of transhumanism along the axes argumentation/fantasy and experimentation/speculation. It goes well together with James Hughes’ talk (not up when I write this) where he mapped out the various transhumanisms. David Wood gave a talk where he clearly mapped out concerns about inequality; I am not sure I agree with all parts, but it provided a good overall structure for thinking.

Laurent Alexandre gave a great talk where among other things he pointed out how medical ethics may be dead (in the Nietzschean ‘God is dead’ sense) and is being replaced by code. Francesco Paolo Adorno argued that immortality and politics are opposed; I disagreed rather profoundly, but it is a good talk to start a conversation from. Marina Maestrutti gave a talk about the shift in transhumanism from a happy cyborg to pessimistic virtue-culturing: she has a good point, and I share some of the misgivings about the moral enhancement project, yet I do think the “xrisk is paramount” argument holds water and might force us to be a bit less happy-go-lucky about emerging tech. Vincent Billard gave a talk about why to become posthuman; I think he is short-selling the arguments in the transhumanist literature and overstating how good the anti-enhancement arguments are, but his use of David Benatar’s argument that it may be better never to have been born to argue, through an act of philosophical jiu-jitsu, in favor of posthumanity made me cheer!

Maël le Mée’s demonstration of the comfort organs from the Benway Institute was hilarious.

The conference has come a long way from 20 guys in a hotel cellar in Weesp.

The opposite of l’esprit de l’escalier: enhancement and therapy

Yesterday I participated in a round-table discussion with Professor Miguel Benasayag about the therapy vs. enhancement distinction at the TransVision 2014 conference. Unfortunately I could not get a word in edgewise, so it was not much of a discussion. So here are the responses I wanted to make but did not get the chance to: in a way this post is the opposite of l’esprit de l’escalier.

Enhancement: top-down, bottom-up, or sideways?

Do enhancements – whether implanted or not – represent a top-down imposition of order on the biosystem? If one accepts that view, one ends up with a dichotomy between that and bottom-up approaches where biosystems are trained or placed in a smart context that produces the desired outcome: unless one thinks imposing order is a good thing, one becomes committed to some form of naturalistic conservatism.

But this ignores something Benasayag brought up himself: the body and brain are flexible and adaptable. The cerebral cortex can reorganize to become a primary cortex for any sense, depending on which input nerve is wired up to it. My friend Todd’s implanted magnet has likely reorganized a small part of his somatosensory cortex to represent his new sense. This enhancement is neither a top-down imposition of a desired cortical structure nor a pure bottom-up training of the biosystem.

Real enhancements integrate; they do not impose a given structure. This also addresses concerns about authenticity: if enhancements are entirely externally imposed – whether through implantation or external stimuli – they owe less to the person using them. But if their function is emergent from the person’s biosystem, the device itself, and how it is being used, then the enhancement will function in a unique, personal way. It may change the person, but that change is based on the person.

Complex enhancements

Enhancements are often described as simple, individualistic, atomic things. But actual enhancements will be systems. A dramatic example was in my ears: since I am both French- and signing-impaired, I could listen to (and respond to) comments thanks to an enhancing system involving three skilled translators, a set of wireless headphones and microphones. This system was not just complex, it was adaptive (translators know how to improvise, and we the users learned how to use it) and social (micro-norms for how to use it emerged organically).

Enhancements need a social infrastructure to function – both a shared, distributed knowledge of how and when to use them (praxis) and possibly a distributed functioning itself. A brain-computer interface is of little use without anybody to talk to. In fact, it is the enhancements that affect communication abilities that are most powerful both in the sense of enhancing cognition (by bringing brains together) and changing how people are socially situated.

Cochlear implants and social enhancement

This aspect of course links to the issues in the adjacent debate about disability. Are we helping children by giving them cochlear implants, or are we undermining a vital deaf cultural community? The unique thing about cochlear implants is that they have this social effect and have to be used early in life for best results. In this case there is a tension between the need to integrate the enhancement with the hearing and language systems in an authentic way, a shift in which social community will be readily available, and concerns that this is just used to normalize away the problem of deafness from the top down. How do we resolve this?

The value of deaf culture is largely its value to its members: there might be some intrinsic value to the culture, but this is true for every culture and subculture. I think it is safe to say there is a fairly broad consensus in western culture today that individuals should not sacrifice their happiness – and especially not be forced to do so – for the sake of the culture. It might be supererogatory: a good thing to do, but not something that can be demanded. Culture is for the members, not the other way around: people are ends, not means.

So the real issue is the social linkages and the normalisation. How do we judge the merits of being able to participate in social networks? One might be small but warm, another vast and mainstream. It seems that the one thing to avoid is not being able to participate in either. But this is not a technical problem as much as a problem of adaptation and culture. Once implants are good enough that learning to use them does not compete with learning signing, the real issue becomes the right social upbringing and the question of personal choices. This goes way beyond implant technology and becomes a question of how we set up social adaptation processes – a thick, rich and messy domain where we need to do much more work.

It is also worth considering the next step. What if somebody offered a communications device that would enable an entirely new form of communication, and hence social connection? In a sense we are already gaining that through new media, but one could also consider something more direct, like Egan’s TAP. As that story suggests, there might be rather subtle effects when people integrate new connections – in his case merely epistemic ones, but one could imagine entirely new forms of social links. How do we evaluate them? Especially since having a few pioneers test them tells us less than it would for non-social enhancements. That remains a big question.

Justifying off-label enhancement

A somewhat fierce question I got (and didn’t get to respond to) was how I could justify occasionally taking modafinil, a drug intended for narcoleptics.

There seems to be a deontological or intention-oriented view behind the question: the intentions behind making the drug should be obeyed. But many drugs have been approved for one condition and then had their use expanded to other conditions. Presumably aspirin use for cardiovascular conditions is not unethical. And pharma companies largely intend to make money by making medicines, so the deep intention might be trivial to meet. More generally, claiming that the point of drugs is to help sick people (whom we have an obligation to help) doesn’t work, since there obviously exists drug use by non-sick people (sports medicine, for example). So unless many current practices are deeply unethical, this line of argument doesn’t work.

What I think was the real source of the question was the concern that my use somehow deprived a sick person of the drug. This is false, since I paid for it myself: the market is flexible enough to produce enough, and it was not a case of splitting a finite healthcare cake. The finiteness case might be applicable if we were talking about how much care my neighbours and I would get for our respective illnesses, and whether they had a claim on my behaviour through our shared healthcare cake. So unless my interlocutor thought my use was likely to cause health problems she would have to pay for, it seems that this line of reasoning fails.

The deep issue is of course whether there is a normatively significant difference between therapy and enhancement. I deny it. I think the goal of healthcare should not be health but wellbeing. Health is just an enabling instrumental thing. And it is becoming increasingly individual: I do not need more muscles, but I do benefit from a better brain for my life project. Yours might be different. Hence there is no inherent reason to separate treatment and enhancement: both aim at the same thing.

That said, in practice people make this distinction and use it to judge what care they want to pay for for their fellow citizens. But this will shift as technology and society change, and as I said, I do not think this is a normative issue. A political issue, yes, messy, yes, but not foundational.

What do transhumanists think?

One of the greatest flaws of the term “transhumanism” is that it suggests that there is something in particular all transhumanists believe. Benasayag made some rather sweeping claims about what transhumanists want to do (enhancement as embodying body-hate and a desire for control) that were most definitely not shared by the actual transhumanists in the audience or on stage. It is as problematic as claiming that all French intellectuals believe something: at best a loose generalisation, but most likely utterly misleading. But when you label a group – especially if they themselves are trying to maintain an official label – it becomes easier to claim that all its members believe in something. Outsiders also do not see the sheer diversity inside, assuming everybody agrees with the few samples of writing they have read.

The fault here lies both in the laziness of outside interlocutors and in transhumanists not making their diversity clearer, perhaps by avoiding slapping the term “transhumanism” on every relevant issue: human enhancement is of interest to transhumanists, but we should be able to discuss it even if there were no transhumanists.

Manufacturing love cheaply, slowly and in an evidence-based way

Since I am getting married tomorrow, it is fitting that the Institute of Art and Ideas TV has just put my lecture from Hay-on-Wye this year online: Manufacturing love.

It was a lovely opportunity to sit in a very comfy armchair and feel like a real philosopher. Of course, armchair philosophy is what it is: tomorrow I will do an empirical experiment with N=2 (with ethics approval from mother-in-law). We’ll see how it works out.

While I suspect my theoretical understanding is limited and the biomedical tools I have written about are not available, there is actually some nice empirical research on what makes good wedding vows. My husband and I also went for a cheap, simple wedding for just the closest friends and family, which seems to be a good sign (the rest of my friends come to an informal party the day after). And it is probably a good sign that we got together in a slow and gradual way: we have very compatible personalities.

A fun project.

Back to the future: neurohacking and the microcomputer revolution

My talk at LIFT 2014 Basel, The Man-Machine: Are we living in the 1970s of brain hacking? (http://videos.liftconference.com/video/10528321/anders-sandberg-the-man-machine-are), can now be viewed online (slides (pdf)).

My basic thesis is that the late-70s and early-80s microcomputer revolution might be a useful analogy for what might be happening with neurohacking today: technology changes are enabling amateur and startup experimentation that will in the long run lead to useful, society-changing products that people grow up with and accept. Twenty years down the line we may be living in an enhanced society whose big neuro-players started up in the near future and became the Apple and Microsoft of enhancement.

A bit more detail:

In the 70s the arrival of microprocessors enabled microcomputers, far cheaper than the mainframes and desk-sized minicomputers that had come before. They were simple enough to be sold as kits to amateurs. The low threshold to entry led to the emergence of a hobbyist community, small start-ups and cluster formation (helped by pre-existing tech clusters like Silicon Valley). The result was intense growth, diversity and innovation, and a generation exposed to the technology, accepting it as cool or even ordinary, laying the foundation for later useful social and economic effects – it took 20+ years to actually have an effect on productivity! Some things take time, and integrating a new technology into society may take a generation.

Right now we have various official drivers for neural enhancement like concerns about an ageing society, chronic diseases, stress, lifelong learning and health costs. But the actual drivers of much of the bottom-up neurohacking seem to be exploration, innovation and the hacker ethos of taking something beyond its limits. Neurotechnologies may be leaving the confines of hospitals and becoming part of home or lifestyle technologies. Some, like enhancer drugs, are tightly surrounded by regulations and are likely to be heavily contested. Meanwhile many brain stimulation, neurofeedback and life monitoring devices exist in a regulatory vacuum and can evolve unimpeded. The fact that they are currently not very good is no matter: the 70s home computers were awful too, but they could do enough cool things to motivate users and innovation.

Why is there a neurotech revolution brewing? Part of it is technology advances: we already have platforms like computers, smartphones, wifi and Arduinos available, enabling easy construction of new applications. Powerful signal processing can be done onboard, data mining in the cloud. Meanwhile costs for these technologies have been falling fast, and crowdsourcing is enabling new avenues of funding for niche applications. The emergence of social consumers allows fast feedback and customisation. Information sharing has become orders of magnitude more efficient since the 70s, and technical results in neuroscience can now easily be re-used by amateurs as soon as they are published (the hobbyists are unlikely to get fMRI anytime soon, but results gained through expensive methods may then be re-used on cheap rigs).

It seems likely that at least some of these technologies are going to become more acceptable through exposure, especially since they fit with the move towards individualized health concepts, preventative medicine and self-monitoring. The difference between a FitBit, a putative sleep-enhancing phone app, Google Glass and a brain stimulator might be smaller in practice than one expects.

The technology is also going to cause a fair bit of business disruption, challenging traditional monopolies by enabling new platforms and new routes of distribution and access. The personal genomics battle between diagnostic monopolies and companies selling gene tests (not to mention individuals putting genetic information online) is a hint of what may come. Enhancement demands a certain amount of personalisation and fine-tuning to work, and hints at a future of ‘drugs as services’: drugs may become part of a system of measurement, genetic testing and training rather than standalone chemicals.

Still, hype kills. Neuroenhancement is slowly making its way up the hype curve, and will sooner or later reach a hype peak (moral panics, the cover of Time Magazine) followed by inevitable disappointment when it doesn’t change everything (or make its entrepreneurs rich overnight). Things tend to take time, and there will be a consolidation of the neuromarket after the first bubble.

The long-run effects of a society with neuroenhancement are of course hard to predict. It seems likely that we are going to continue towards a broader health concept: health as an individual choice, linked to individual life projects but also perhaps more of an individual responsibility. Mindstates will be personal choices, with an infrastructure for training, safety and control. Issues of morphological freedom – the right to change oneself, and the right not to have change imposed – will be important social negotiations. Regulation can be expected to lag as always, and to regulate new devices based on older and different systems.

Generally, we can expect that our predictions will be bad. But past examples show that early design choices may stay with us for a long time: the computer mouse has not evolved that far from its original uses at SRI and Xerox PARC. Designs also contain ideology: the personal microcomputer was very much a democratisation of computing. Neurohacking may be a democratisation of control over bodies and minds. We may hence want to work hard at shaping the future by designing well today.