I attended the Human Enhancement Technologies and Human Rights conference at Stanford Law School last weekend. I blogged about it in general terms on CNE Health, but here are some details.
The real strength of this conference was in bringing together otherwise separate communities of people interested in disability rights, human rights, cognitive liberty and transhumanism. There were even some bioconservatives present, although they were in the minority - I actually hope we will have more of them in the future, since it makes the meetings more interesting. And as I blogged, we need to tie these issues to current medical policy issues and not just far-future conjecture.
The initial panel with Ron Bailey (biolibertarian), Erik Davis (postmodern) and William Hurlbut (cautious bioconservative) gave a nice spread, if perhaps no particular surprises. Davis suggested that the psychedelic subculture might actually be a model of a future enhancement subculture, a zone made freer by its illegality than it would have been had the drugs been left in the hands of the psychiatrists. Hurlbut demonstrated that Leon Kass' poetic prose is apparently contagious as he talked about our "fragile flexibility", our tendency to "follow the gradient of our desires" and how we as creatures of the Earth need to appreciate our temporality. Fine sentiments, but stripped of their poetry they just become a caution about knowing what tradeoffs we make. He also suggested that we entertain the thought that we are indeed optimal, the "ultimate expression of physical freedom", but completely failed to grasp an audience question referring to the reversal test.
Patrick Hopkins, as always, sharply analysed the problems of a right to enhance ourselves (since such a right would be based on human needs and desires, making any move beyond the human problematic). He pointed out that autonomy may be too nonspecific, while using interest-based reasons holds more promise, since we can then get a thick debate about what constitutes a life worth living.
Chris Gray pointed out the need to ensure citizenship before the right to enhance: without the first, we cannot build societies enabling the second. The question of what we want is also ill-posed, since we cannot predict participatory evolution - attempts to reduce enhancement to some particular goals will miss the real discoveries that will be made as we explore. This is an interesting counterpoint to Hopkins' analysis.
Nigel Cameron did a reasonable bioconservative overview of caveats for enhancers, most of which are IMHO common sense. An interesting issue he brought up, and one I definitely agree with, is that enhancements are dependent on their context, environment and the surrounding community. Ultra-concentration is useful mostly in a symbol-handling job, not for a salesperson.
Robin Zebrowski demolished the myth of a "standard body" while Anita Silvers argued for the right to be non-normal. They pointed out the need for the enhancement community to view the goal not as normalization or a quest for an ideal body but as an opening up of new kinds of bodies. I completely agree; maybe we should drop the 'enhancement' talk and just speak of biological change.
Robert Schwartz analysed the ethical reasons for regulating whether health care providers ought to be allowed to enhance, and found that most of them were rather irrational or archaic. In the end it ought to be left to the community of health care professionals to decide what services they wish to provide. Laura Colleton, however, pointed out that health care access, at least in the US, is set by insurance firms, and that they will draw the lines between enhancement and treatment even if doctors and philosophers cannot.
George Dvorsky argued for the moral imperative of uplifting animals. I didn't quite get what philosophical basis he grounded this on, and I think that might be a problem even if many of his examples and arguments were fun. Of course, as a neuroscientist I'm also interested in the actual problem of doing it. My guess is that implants and external software brains are an easier way of doing it than genetic or memetic help. In the end the big issue might be how to help uplifts become trans-animals rather than just approximate humans, a fairly tricky problem. But I was reminded of the scene in John Varley's Steel Beach where neural interfaces are used to negotiate with non-intelligent dinosaurs: various options are presented to them as things they understand (predators, starvation, large and small herds etc.) and their reactions can be used as a kind of guide to their preferences.
There were some examples of High Postmodernist talk, as always entertaining but hard to extract any policy ideas from (I especially liked Nikki Sullivan's and Susan Stryker's talk on transsexuality, voluntary amputation and the politics of bodily integrity). There was also at least one talk that can best be described as stoner Gaia hypothesis. While the audience might have been sympathetic to allowing exploration of altered cognitive states and the idea that this might be healthy for human culture, this talk didn't quite support the point. I had the urge to point out the need to commune psychedelically with the military-industrial complex and not just the Amazon - after all, Halliburton is as much a living part of the noosphere as a forest.
At the end Nick Bostrom presented a mid-level list of rights for artificial minds: a right not to be discriminated against on the basis of substrate or ontogeny, a preference for actual rather than potential beings, procreative beneficence, a life worth living and so on. I think most would agree on something similar, but as an avid universe creator I had a problem. Wouldn't accepting such rights make it ethically extremely hard to run simulations where such minds might emerge? Should I try to limit the emergence of mind, try to make non-suffering worlds, or save the minds whenever they appeared and run them in an open-ended afterlife as compensation for the first simulation?
Lots of interesting ideas, but it feels like we have just started down the rights track.