Sweden looked set to abandon the law making sterilization mandatory for transgender people, until a last-minute effort by the Christian Democrats derailed the change.
In today's Svenska Dagbladet one of their ideologues, Lennart Sacrédeus, argues their position: "allowing half a gender reassignment opens the door to a third gender". His argument is that desiring to change one's gender to the other one is OK, but transsexual males who can become pregnant constitute a third gender.
The whole argument is of course based on the assumption that there can only exist two *true* genders. On this view, a female wanting to become a male who can be pregnant is not truly desiring to be a real male, and hence the desire is for something inconsistent or wrong (and societal support for the transition should presumably not be given).
There are several problems here.
Even a cursory look at the psychological and anthropological literature shows that gender out there in reality is much more complex than a simple binary. From a religious perspective it might make sense to argue for a strict binary - it is an essential part of many religious interpretations, after all. While the inside perspective of these interpretations is that deviations are wrong (morally or logically), the outside perspective is of course that the empirical evidence undermines the interpretations' claims to represent reality (and hence undermines any moral force they have beyond encoding the local mores of different cultures at different times).
In a pluralistic society that doesn't buy a particular religious narrative about gender as anything more than a (possibly) respectable point of view among others, decisions about shared rules cannot be based on that narrative alone. Either it has to claim that the decisions affect its believers to a high degree and that we should respect their rights (as in debates about circumcision and halal slaughter), or it has to propose general ethical or pragmatic principles that others can largely agree with. In this case it seems unlikely that pregnant males will cause more distress among conservatively minded people than vanilla transsexuals, and as far as I know nobody has mustered any convincing prudential or ethical argument for why it would be a bad thing if there were more of them - in fact, there is a pretty broad consensus in Swedish society (at least among the people who talk about it, and the political associations) that it is acceptable.
The growing awareness and acceptance of intersex people shows that a third gender might appear regardless of what transsexuals are allowed to do or not.
The assumption that transsexuals can only legitimately desire to become the opposite sex is problematic. What about somebody desiring to become intersex or asexual? Leaving aside the inertia of the legal and medical system (where it will no doubt take a long time for the idea to take root), there doesn't seem to be any moral reason not to accept that desire if we accept the desires of some people to have another gender. The moral motivation for gender reassignment (besides autonomy and morphological freedom) is that it will likely increase well-being by providing a body congruent with the mental gender. If the same is true for desiring to become intersex or a male with a uterus, why not? This might be a very rare state of desire, but that doesn't automatically invalidate it.
By turning the question into a political one the Christian Democrats also inadvertently make gender even more of a socio-political matter and less of a metaphysical one. If an argument against a change in the law is that third genders must be prevented, then that implies that the number of genders is a political question, open for whatever majorities and alliances exist to decide upon.
Public opinion seems to be moving towards accepting more gender diversity and less paternalistic control over reproduction: in the future we are likely to see far more blurring of the gender binary. I think that is fine: let people choose the bodies they want. And let us tolerate those choices, just as we tolerate religious choices (and for the same reasons). Toleration doesn't imply a lack of critique: we should do our best to figure out how to make choices that actually improve well-being and comment on what we find - but that requires the freedom to choose, so we have experience to learn from. Armchair moralism is always trumped by real-world experience.
Another Practical Ethics blog post: Experimenting with oversight with more bite?
I blog about whether there is a need for mandatory international oversight of potentially dangerous biotechnology. It is a tricky issue, and my view is that while such oversight might indeed be needed, we *really* need to figure out how it should function before we implement it.
As I mention in the post, I saw Contagion over the holidays. Typical cheerful Swedish holiday viewing. A very good and understated film, strongly recommended - although it will make you a bit more of a germophobe. It is so nice to know it was based on a real virus, merely given pandemic properties.
(This started as a post on the Extropians list)
On 2012-01-01 12:55, Stefano Vaj wrote:
> I do think that utilitarian ethical systems can be consistent, that is
> that they need not be intrinsically contradictory, but certainly most of
> them are dramatically at odd with actual ethical traditions not to
> mention everyday intuitions of most of us.
Of course, being at odds with tradition and everyday positions doesn't tell us much about the actual validity of a position. Which brings up the interesting question of just how weird a true objective morality might be if it exists - and how hard it would be for us to approximate it.
Our intuitions typically pertain to a small domain of everyday situations, where they have been set by evolution, culture and individual experience. When we go outside this domain our intuitions often fail spectacularly (mathematics, quantum mechanics, other cultures).
A moral system typically maps situations or actions to the set {"right", "wrong"} or some value scale. It can be viewed as a function F(X) -> Y. We can imagine looking for a function F that fits the "data" of our normal intuitions.
(I am ignoring the issue of computability here: there might very well be uncomputable moral problems of various kinds. Let's for the moment assume that F is an oracle that always provides an answer.)
This is a function fitting problem and the usual issues discussed in machine learning or numerics textbooks apply.
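To make the framing concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the encoding of situations as single numbers, the one-parameter family of candidate systems, and the "intuition data".

```python
# A toy version of the fitting problem: candidate moral systems F map a
# situation (encoded, purely for illustration, as a single number) to a
# verdict, and we pick the F that best matches our "intuition data".
from typing import Callable

Verdict = str  # "right" or "wrong"; could also be a value scale
MoralSystem = Callable[[float], Verdict]

def threshold_system(cutoff: float) -> MoralSystem:
    """A one-parameter family of candidate systems."""
    return lambda x: "right" if x >= cutoff else "wrong"

# Invented training data: situations paired with intuitive judgements.
intuitions = [(0.2, "wrong"), (0.4, "wrong"), (0.7, "right"), (0.9, "right")]

def misfit(F: MoralSystem) -> int:
    """How many intuitions does F get wrong?"""
    return sum(F(x) != y for x, y in intuitions)

best = min((threshold_system(c / 10) for c in range(11)), key=misfit)
print(misfit(best))  # 0: this simple family happens to fit these intuitions
```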
We could select F from a very large and flexible set, allowing it to perfectly fit all our intuitive data - but at the price of overfitting: it would very rapidly diverge from anything useful just outside our everyday domain. Even inside it, it would be making all sorts of weird contortions between the cases we have given it ("So drinking tea is OK, drinking coffee is OK, but mixing them is as immoral as killing people?"), since it would be fluctuating wildly in order to correctly categorize all cases. Any noise in our training data, like a mislabelled case, would become part of this mess - fitting morality would require our intuitions to be exactly correct and entered exactly right.
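A sketch of this failure mode, with invented numbers standing in for intuitive moral scores: a polynomial with as many parameters as data points fits every judgement exactly, noise included, and then misbehaves everywhere else.

```python
# Overfitting sketch: six "cases" (x) with made-up intuitive moral
# scores in [0, 1] (y), including one noisy/mislabelled judgement.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.9, 0.8, 0.85, 0.1, 0.8, 0.9])  # 0.1 is the "noise"

# Degree-5 polynomial through 6 points: zero training error, perfect fit.
perfect = np.polyfit(x, y, deg=5)

# But between the cases it contorts wildly...
print("between cases:", np.polyval(perfect, np.arange(0.5, 5.5, 1.0)))

# ...and just outside the everyday domain it diverges.
print("at x=7: ", np.polyval(perfect, 7.0))
print("at x=-2:", np.polyval(perfect, -2.0))
```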
We can also select F from a more restricted set, in which case the fit to our intuitions would not be perfect (the moral system would tell us that some things we normally think are OK are wrong, and vice versa), but it could have various "nice" properties. For example, it might not change wildly from case to case, avoiding the coffee-mixing problem above. This would correspond to using a function with fewer free parameters, like a low-degree polynomial. This embodies an intuition many ethicists seem to have: the true moral system cannot be enormously complex. We might also want to restrict some aspects of F by adding reasonable constraints, like the axioms of formal ethics: prescriptivity ("Practice what you preach"), consistency, and ends-means rationality ("To achieve an end, do the necessary means"); we get the universalizability axiom for free by using a deterministic F. These would be constraints on the shape of F.
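By contrast, a restricted F with only a few parameters (same invented data as above, a quadratic instead of a quintic) no longer matches every intuition - but it varies smoothly between the cases and quietly overrules the noisy judgement:

```python
# Fitting the same made-up "intuition" data with far fewer free
# parameters: a least-squares quadratic (3 parameters for 6 cases).
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.9, 0.8, 0.85, 0.1, 0.8, 0.9])  # 0.1 is the "mislabelled" case

restricted = np.polyfit(x, y, deg=2)

# Non-zero training error: the system disagrees with some intuitions,
# notably refusing to take the 0.1 outlier at face value.
print("errors at the cases:", y - np.polyval(restricted, x))
print("between the cases: ", np.polyval(restricted, np.arange(0.5, 5.5, 1.0)))
```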
The problem is that F will behave strangely outside our everyday domain. The strangeness will partly be due to our lack of intuitions about what it should look like out there, but partly because it is indeed getting weird and extreme - it is extrapolating local intuitions towards infinity. Consider fitting a polynomial to the sequence 1,2,1,2,1,2 - unless it is constant, it will diverge to infinity in at least one direction. So we might also want to prescribe limiting behaviours of F. But now we are prescribing things far outside our own domain of experience, and our intuitions are not going to give us helpful information, just bias.
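The 1,2,1,2,1,2 example can be run directly: a perfect-fit polynomial reproduces the alternation on the data and then blows up immediately outside it.

```python
# Extrapolation failure: fit a polynomial exactly through the
# alternating sequence and evaluate it outside the data.
import numpy as np

x = np.arange(6.0)                     # positions 0..5
y = np.array([1.0, 2.0, 1.0, 2.0, 1.0, 2.0])

p = np.polyfit(x, y, deg=5)            # exact fit through all six points

print(np.polyval(p, 5.0))    # ~2.0, the last data point
print(np.polyval(p, 7.0))    # already far outside the 1-2 band
print(np.polyval(p, 10.0))   # heading off towards infinity
```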
Attempts at extrapolating a moral system that can give answers for any case will hence either lead to overfitting (a wildly complex system contorted around our noisy intuitions) or to weird, extreme behaviour far outside our everyday domain.
Not extrapolating will mean that you cannot make judgements about new situations (what does the Bible say about file sharing?).
Of course, Zen might have a point:
'Master Kyogen said, "It is like a man up a tree who hangs from a branch by his mouth. His hands cannot grasp a bough, his feet cannot touch the tree. Another man comes under the tree and asks him the meaning of Bodhidharma's coming from the West. If he does not answer, he does not meet the questioner's need. If he answers, he will lose his life. At such a time, how should he answer?"'
Sometimes questions have to be un-asked. Sometimes the point of a question is not the answer.
My own take on this exercise is that useful work can be done by looking at what constitutes reasonable restrictions on F: restrictions that are not tied to our moral intuitions but rather linked to physical constraints (F actually has to be computable in the universe; if we agree with Kant's dictum "ought implies can", then ought should also imply "can figure it out"), formal constraints (like the formal ethics axioms), and perhaps other kinds of desiderata - is it reasonable to argue that moral systems have to be infinitely differentiable, for example? Can the intuition that moral systems have to be simple be expressed as a Bayesian Jeffreys-Jaynes type argument that we should give higher prior probability to few-parameter moral systems?
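As a gesture at how the simplicity intuition could be cashed out, here is a sketch using a BIC-style penalty - a standard rough stand-in for the Bayesian Occam factor, not the Jeffreys-Jaynes treatment itself - to score polynomial "moral systems" of increasing degree on the invented data from above:

```python
# Occam-style model comparison: penalize each candidate F by its number
# of parameters. BIC is used as a crude proxy for the marginal
# likelihood; the data are the same invented intuition scores as above.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.9, 0.8, 0.85, 0.1, 0.8, 0.9])
n = len(x)

# Degree 5 (exact interpolation) is excluded: its residual is zero and
# the Gaussian-noise BIC formula breaks down there - which is itself a
# symptom of the overfitting pathology.
for deg in range(5):
    k = deg + 1                                   # free parameters
    fit = np.polyfit(x, y, deg)
    rss = float(np.sum((y - np.polyval(fit, x)) ** 2))
    bic = n * np.log(rss / n) + k * np.log(n)     # lower is better
    print(f"degree {deg}: RSS={rss:.3f}, BIC={bic:.1f}")
```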
This might not tell us enough to determine what kind of function F to use, but it can still rule out a lot of behaviour outside our everyday domain. And it can help us figure out where our everyday intuitions have the most variance against less biased approaches: those are the sore spots where we need to investigate our moral thinking the most.
Another important application is investigating AI safety. If an AI were to extrapolate away from human intuitions, what would we end up with? Are there ways of ensuring that this extrapolation hits what is right - or at least what is survivable?