Catastrophizing for not-so-fun and non-profit

Oren Cass has an article in Foreign Affairs about the problem of climate catastrophizing: it is basically about how catastrophizing is driven by motivated reasoning but also drives further motivated reasoning, in a vicious circle. Regardless of whether he himself is guilty of some motivated reasoning too, I think the text is relevant beyond the climate domain.

Some of FHI's research and reports are mentioned in passing. Their role is mainly to show that there could be very bright futures, or other existential risks, which undercuts the climate catastrophists he is really criticising:

Several factors may help to explain why catastrophists sometimes view extreme climate change as more likely than other worst cases. Catastrophists confuse expected and extreme forecasts and thus view climate catastrophe as something we know will happen. But while the expected scenarios of manageable climate change derive from an accumulation of scientific evidence, the extreme ones do not. Catastrophists likewise interpret the present-day effects of climate change as the onset of their worst fears, but those effects are no more proof of existential catastrophes to come than is the 2015 Ebola epidemic a sign of a future civilization-destroying pandemic, or Siri of a coming Singularity.

I think this is an important point for the existential risk community to be aware of. We are mostly interested in existential risks and global catastrophes that look possible but could be impossible (or avoided), rather than trying to predict risks that are going to happen. We deal in extreme cases that are intrinsically uncertain, and leave the more certain things to others (unless maybe they happen to be very under-researched). Siri gives us some singularity-evidence, but we think it is weak evidence, not proof (a hypothetical AI catastrophist would instead say “so, it begins”).

Confirmation bias is easy to fall for. If you are looking for signs of your favourite disaster emerging you will see them, and presumably loudly point at them in order to forestall the disaster. That suggests there is extra value in checking which candidate risks might not actually be xrisks and should not be emphasised too much.

Catastrophizing is not very effective

The nuclear disarmament movement also used a lot of catastrophizing, with plenty of archetypal cartoons showing Earth blowing up as a result of nuclear war, or claims that it would end humanity. The fact that the likely outcome would merely be mega- or gigadeath and untold suffering was apparently not regarded as rhetorically punchy enough. Ironically, Threads, The Day After or the Charlottesville scenario in Effects of Nuclear War may have been far more effective in driving home the horror and undesirability of nuclear war, largely by giving smaller-scale, more relatable scenarios. Scope insensitivity, psychic numbing, compassion fade and related effects make catastrophizing a weak, perhaps even counterproductive, tool.

Defending bad ideas

Another take-home message: when arguing for the importance of xrisk we should make sure we do not end up in the stupid loop he describes. If something is the most important thing ever, we had better argue for it well, backed up with as much evidence and reason as can possibly be mustered. Turning it all into a game of overcoming cognitive bias through marketing, or attributing psychological explanations to opposing views, is risky.

The catastrophizing problem for very important risks is related to Janet Radcliffe-Richards' analysis of what is wrong with political correctness (in an extended sense). A community argues for some high-minded ideal X using some arguments or facts Y. Someone points out a problem with Y. The rational response would be to drop Y and replace it with better arguments or facts Z (or, if it is really bad, drop X). The typical human response is to assume, implicitly or explicitly, that since Y is used to argue for X, criticising Y is intended to reduce support for X. Since X is good (or at least of central tribal importance) the critic must be evil or at least a tribal enemy – get him! This is how bad arguments or unlikely scenarios get embedded in a discourse.

Standard groupthink where people with doubts figure out that they better keep their heads down if they want to remain in the group strengthens the effect, and makes criticism even less common (and hence more salient and out-groupish when it happens).

Reasons to be cheerful?

An interesting detail about the opening: the GCR/xrisk community seems to be way more optimistic than the climate community as described. I mentioned Warren Ellis' little novel Normal earlier on this blog, which is about a mental asylum for futurists affected by looking into the abyss. I suspect he was modelling them on the moody climate people, adding an overlay of other futurist ideas and tropes for the story.

Assuming climate people really are that moody.

The Annihilation Score as Satirical Sociology

Today I read The Annihilation Score by Charles Stross during a flight. It is the sixth Laundry novel, and in many ways the weakest. But it might be the best intellectually and satirically.

The Laundry novels are a mix of horror, spy story, geekiness, and satire. This is both a reader-winning combination (transitions from one side of the mixture to another can provide intense contrast, and Stross can give readers a bit of everything) and a balancing problem: each story needs to maintain the right mixture, and the readers often have their own favourite ratios. The Annihilation Score goes further in the direction of satire, reducing the horror and geekiness fairly significantly. This no doubt makes many Laundry fans unhappy. Me too, to some extent: there is nothing more delightful than noticing wordplay based on obscure hermetica and computer science, or the distinctly unsettling implications of thinking through some of the metaphysical assumptions of the setting. However, I think Stross hit on something different in this novel: an important argument disguised as satire.

On the surface the novel suffers from bad pacing: the bulk of it is about management. Not intense action, but rather the issue of how to set up an office, from personnel management to furniture to keeping the funding body happy despite contradictory goals. There is plenty of agency-spotting, with numerous acronymical organisations criss-crossing the story with their interleaved agendas. And finally, in the last fifth, a climactic battle. Typically Laundry novels spend a lot of time establishing mood and tension for a relatively brief finale where they get unleashed. The Annihilation Score takes this even further, but I at least did not feel much of a build-up. In fact, despite the pressure on the main character she comes across as almost a Westminster Mary Sue: she persists and succeeds at nearly everything, from turning what ought to be a social nightmare into a cozy core team, to handling unforeseen budgetary constraints.

However, on a deeper level this is not a horror story about inhuman entities from other dimensions threatening to invade our world and their misguided human servants. This is a horror story about the inhuman entity inhabiting Whitehall: government.

Taking jabs at the absurdity, stupidity and inhumanity of bureaucracy has been a staple of the Laundry books. What makes The Annihilation Score stand out is that it actually has a fairly well thought out argument and exposition of why. The basics are familiar from the earlier novels: the iron law of bureaucracy (framed here as the emergent instrumental goal of organisations to preserve themselves), Parkinson's law, the SNAFU principle, empire building, not-invented-here, in-group/out-group dynamics, Something Must Be Done, and so on. The novel does a sociological dive into the internal culture of the subset of bureaucracy dealing with policing. Here there exists a strong ethos about what purpose it actually has, which serves both to recruit and advance people with a compatible mindset and to maintain some actual mission focus. Presumably this is because it would be very noticeable if the police force began to drift too far from its necessary function; compare this with how some branches of academia are kept honest by constant interaction with an unyielding real world, while others diffuse into obscure absurdity since only social forces constrain them. But even when a purpose has an apparently clear meaning it can get subtly (or not so subtly) twisted. This is especially true at the top, where the constraints of external practical reality are weakest.

Stross examines the case where a bureaucracy recognizes it has an out-of-context problem. Something important yet unknown is intruding, and clearly something must be done to handle it. The problem is of course that following the politician's syllogism means that whatever fast and decisive action is taken will not be based on good knowledge. Worse, if the organisation is centred on dealing with something Very Important like national security, it will (1) be extremely motivated to act, and (2) discount signals from organisations or sources it deems unimportant (as judged by its own value system). A not so subtle analogy in The Annihilation Score is government handling of many emerging technologies such as encryption. Internal expertise is lacking not just on the technology itself and its full implications; there is also a lack of expertise in judging the consequences of different actions, and in recognizing this kind of expertise.

This is where I think the novel actually succeeds: it plays out a satirical scenario, but the parts are all too familiar. Well-meaning people work hard to ensure something agreed to be good, and the result is Moloch. The Sleeper in the Pyramid is not half as scary as the Dweller in Whitehall. Because the latter is real.

Why Cherry 2000 should not be banned, Terminator should, and what this has to do with Oscar Wilde

[This is what happens when I blog after two glasses of wine. Trigger warning for possibly stupid cultural criticism and misuse of Oscar Wilde.]

From robots to artificiality

On Practical Ethics I discuss what kinds of robots we ought to campaign against. I have signed up against autonomous military robots, but I think sex robots are fine. The dividing line is that the harm done (if any) is indirect and victimless, and best handled through sociocultural means rather than legislation.

I think the campaign against sex robots has a point in that there are some pretty creepy ideas floating around in the world of current sex bots. But I also think it assumes these ideas are the only possible motivations. As I pointed out in my comments on another practical ethics post, there are likely people turned on by pure artificiality – human sexuality can be far queerer than most think.

Going off on a tangent, I am reminded of Oscar Wilde's epigram:

“The first duty in life is to be as artificial as possible. What the second duty is no one has as yet discovered.”

Being artificial is not the same thing as being an object. As noted by Barris, Wilde's artificiality actually fits in with pluralism and liberalism. Things could be different. Yes, in the artificial world nothing is absolutely given; everything is the result of some design choices. But assuming some eternal Essence/Law/God is necessary for meaning or morality exposes one to a fruitless search for that Thing (or worse, a premature assumption that one has found It, typically when looking in the mirror). Indeed, as Dorian Gray muses, “Is insincerity such a terrible thing? I think not. It is merely a method by which we can multiply our personalities.” We are not single personas with unitary identities and well-defined destinies, and this is most clearly visible in our social plays.

Sex, power and robots

Continuing on my Wildean binge, I encountered another epigram:

“Everything in the world is about sex except sex. Sex is about power.”

I think this cuts close to the Terminator vs. Cherry 2000 debate. Most modern theorists of gender and sex are of course power-obsessed (let's blame Foucault). The campaign against sex robots clearly sees the problem as the robots embodying and perpetuating a problematic, unequal power structure. I detect a whiff of paternalism there, where women and children – rather than people – seem to be assumed to be the victims in need of being saved from this new technology (at least it does not go as far as some other campaigns that assume they also suffer from false consciousness and must be saved from themselves, the poor things). But sometimes a cigar is just a cigar… I mean, sex is just sex: it is important to recognize that one of the reasons for sex robots (and indeed prostitution) is the desire for sex, and the sometimes awkward social or biological constraints on experiencing it.

The problem with autonomous weapons is that power really does come out of the barrel of a gun. (Must resist making a Zardoz reference…) It might be wielded arbitrarily by an autonomous system with unclear or bad orders, or it might be wielded far too efficiently by an automated armed force perfectly obedient to its commanders – removing the constraint that soldiers might turn against their rulers if aimed at their own citizenry. Terminator is far more about unequal and dangerous power than about sex (although I still have fond memories of seeing a naked Arnie back in 1984). The cultural critic may argue that the power games in the bedroom are more insidious and affect more of our lives than some remote gleaming gun-metal threat, but I think I'd rather have sexism than killing and automated totalitarianism. The uniforms of the killer robots are not even going to look sexy.

It is for your own good

Trying to ban sex robots is about trying to shape society in an appealing way – the goal of the campaign is to support “development of ethical technologies that reflect human principles of dignity, mutuality and freedom” and the right of everybody to have their subjectivity recognized without coercion. But while these are liberal principles when stated like this, I suspect the campaign, or groups like it, will have a hard time keeping out of our bedrooms. After all, they need to ensure that there is no lack of mutuality and no creepy sex robots there. The liberal respect for mutuality can become a very non-liberal worship of Mutuality, embodied in requiring partners to sign consent forms, demanding trigger warnings, and treating everybody who does not respond correctly to its keywords as a suspect of future crimes. The fact that this absolutism comes from a very well-meaning impulse to protect something fine makes it even more vicious, since any criticism is easily mistaken for an attack on the very core Dignity/Mutuality/Autonomy of humanity (and hence any means of defence are OK). And now we have all the ingredients for a nicely self-indulgent power trip.

This is why Wilde's pluralism is healthy. Superficiality, accepting the contrived and artificial nature of our created relationships, means that we become humble in asserting their truth and value. Yes, absolute relativism is stupid and self-defeating. Yes, we need to treat each other decently, but I think it is better to start from the Lockean liberalism that allows people to have independent projects than to assume that society and its technology must be designed to embody the Good Values. Replacing “human dignity” with the word “respect” usually makes ethics clearer.

Instead of assuming we can figure out a priori how technology will change us and then select the right technology, we should try things and learn. We can make some predictions with reasonable accuracy, which is why trying to rein in autonomous weapons makes sense (the probability that they lead to a world of stability and peace seems remote). But predicting cultural responses to technology is not something we have any good track record of: most deliberate improvements of our culture have come from social means and institutions, not from banning technology.

“The fact is, that civilisation requires slaves. The Greeks were quite right there. Unless there are slaves to do the ugly, horrible, uninteresting work, culture and contemplation become almost impossible. Human slavery is wrong, insecure, and demoralising. On mechanical slavery, on the slavery of the machine, the future of the world depends.”

Clashing discourses

On Practical Ethics I blogged about Limiting the damage from cultures in collision: how clashing cultures of discourse can make a debate chaotic or even destructive. I took a bit of a risk since the post dealt with things tangential to Gamergate, and I did indeed get some vigorous commenting – some of which was on target. A fair bit was instead a neat illustration of my thesis.

One interesting tip I got was from Adam Hyland, about the paper 4chan and /b/: An Analysis of Anonymity and Ephemerality in a Large Online Community by Bernstein et al. It gives some support to the ideas in the essay that started my post: that forums with high anonymity and ephemerality can produce very different discourse cultures. As some commenters in the Twitter thread point out, however, these forums also have methods of retaining memory – but it is a non-individual, collective memory rather than a strict record of who said what.

We can play around with how anonymous/pseudonymous/true-name, ephemeral/permanent, and quick/medium/long messages are on a forum we build. It seems likely that somewhat predictable consequences for the culture of discourse, and for how identity works, would ensue: it would be a great project to test.
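
As a concrete illustration (my own, not from the original post), the design space is just a small factorial grid; a minimal Matlab sketch with hypothetical axis names:

% Hypothetical design axes for such a forum experiment (names are mine).
identity    = {'anonymous', 'pseudonymous', 'true-name'};
persistence = {'ephemeral', 'permanent'};
msg_length  = {'quick', 'medium', 'long'};

% Enumerate every combination as a candidate experimental condition.
conditions = {};
for id = identity
    for pe = persistence
        for le = msg_length
            conditions(end+1, :) = [id, pe, le]; %#ok<AGROW>
        end
    end
end
fprintf('%d candidate forum designs\n', size(conditions, 1));   % 3*2*3 = 18

Even a handful of these variants, run with comparable user populations, would start to show which axes matter most for the resulting discourse culture.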

Plotting morality

Pew Research has posted their Morality Interactive Topline Results for their spring 2013 and winter 2013-2014 surveys of moral views around the world. These are national samples, so for each moral issue the survey gives how many think it is morally unacceptable, morally acceptable, or not a moral issue, or say it depends on the situation.

Plotting countries by whether issues are morally acceptable, morally unacceptable or morally irrelevant gives the following distributions.

[Triangular plot of the Pew morality survey]
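
For readers who want to reproduce this kind of triangular (ternary) plot, the mapping is just barycentric coordinates. A minimal Matlab sketch with random placeholder data standing in for the Pew shares (the variable names are mine; the actual analysis code is linked at the end of the post):

% One row per country; columns = shares answering
% [unacceptable, acceptable, not a moral issue] for a single issue.
% Random placeholder data; replace with the real topline numbers.
shares = rand(40, 3);
shares = shares ./ sum(shares, 2);               % normalise rows to sum to 1
b = shares(:, 2); c = shares(:, 3);

% Barycentric-to-Cartesian mapping: corners are 100% unacceptable (left),
% 100% acceptable (right) and 100% not-an-issue (top).
x = b + 0.5 * c;
y = (sqrt(3) / 2) * c;

figure; hold on;
plot([0 1 0.5 0], [0 0 sqrt(3)/2 0], 'k-');      % triangle outline
scatter(x, y, 25, 'filled');
text(0, -0.05, 'unacceptable'); text(1, -0.05, 'acceptable');
text(0.5, sqrt(3)/2 + 0.05, 'not an issue');
axis equal off;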

Overall, there are many countries that are morally against everything, and a tail pointing towards some balance between the acceptable and the morally irrelevant.

The situation-dependence scores tended to be low: most people do think there are moral absolutes. The highest situation-dependency scores tended to be in the middle between the morally unacceptable point and the OK side; I suspect there was just a fair bit of confusion going on.

[Correlation matrix of “morally unacceptable” answers across issues]

Looking at the correlations between “morally unacceptable” answers suggested that unmarried sex and homosexuality stand out: views there were firmly correlated with each other but not strongly influenced by views on other things. I regard this as a “sex for fun” factor. However, it should be noted that almost everything is firmly correlated: if a country is against X, it is likely against Y too. Looking at correlations between “acceptable” or “not a moral issue” answers did not show any clear picture.
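
The correlation step itself is simple; a sketch under the same placeholder-data assumption as above (the issue list here is illustrative, not the exact survey wording):

% Countries-by-issues matrix of the share answering "morally unacceptable".
% Random placeholder data; replace with the Pew topline numbers.
issues = {'unmarried sex', 'homosexuality', 'alcohol', 'gambling', ...
          'abortion', 'divorce', 'contraception'};
unacceptable = rand(40, numel(issues));

R = corrcoef(unacceptable);                      % pairwise correlations across countries
imagesc(R, [-1 1]); colorbar;                    % quick look at the correlation structure
set(gca, 'XTick', 1:numel(issues), 'XTickLabel', issues, ...
         'YTick', 1:numel(issues), 'YTickLabel', issues);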

[2D and 3D principal component plots of the Pew data]

The real sledgehammer is of course principal component analysis. Running it on the whole dataset produces a firm conclusion: the key factor is something we could call “moral conservatism”, which explains 73% of the variance. Countries that score high find unmarried sex, homosexuality, alcohol, gambling, abortion and divorce unacceptable.

The second factor, explaining 9%, seems to denote whether things are morally acceptable or simply morally not an issue. However, it has some unexpected interaction with whether unmarried sex is unacceptable. This links to the third factor, explaining 7%, which seems to be linked to views on divorce and contraception. Looking at the 3D plot of the data, it becomes clear that for countries scoring low on the moral conservatism scale (“modern countries”) there is a negative correlation between these two factors, while for conservative countries there is a positive correlation.
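
The PCA can be sketched the same way; again, this is my reconstruction with placeholder data rather than the linked Matlab code (zscore and pca come from the Statistics Toolbox):

% Same assumed countries-by-issues matrix of "unacceptable" shares.
unacceptable = rand(40, 7);                      % placeholder; use the real data
X = zscore(unacceptable);                        % standardise each issue column

[coeff, score, latent] = pca(X);                 % principal component analysis
explained = 100 * latent / sum(latent);          % percent of variance per component
disp(explained(1:3)');                           % the first component dominated the real data
disp(coeff(:, 1)');                              % issue loadings on the first component

% Countries plotted in component space, in 2D and 3D.
figure; scatter(score(:, 1), score(:, 2), 25, 'filled');
xlabel('PC1 (moral conservatism?)'); ylabel('PC2');
figure; scatter3(score(:, 1), score(:, 2), score(:, 3), 25, 'filled');
xlabel('PC1'); ylabel('PC2'); zlabel('PC3');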

Plotting the most conservative (red) and least conservative (blue) countries supports this. The lower blue corner contains the typical Western countries (France, Canada, the US, Australia), while the upper blue corner holds more traditionalist (?) countries (the Czech Republic, Chile, Spain). The lower red corner has Ghana, Uganda, Pakistan and Nigeria, while the upper red corner is clearly Arab: Egypt, the Palestinian territories, Jordan.

In the end, I guess the data doesn't tell us that much that is truly new. A large part of the world holds traditional conservative moral views. Perhaps the most interesting part is that which things people regard as morally salient at all interacts in a complicated manner with local culture. There are also noticeable differences even within the same cultural sphere: Tunisia has very different views from Egypt on divorce.

For those interested, here is my somewhat messy Matlab code and data to generate these pictures.

Truth and laughter

Slate Star Codex has another great post: If the media reported on other dangers like it does AI risk.

The new airborne superplague is said to be 100% fatal, totally untreatable, and able to spread across an entire continent in a matter of days. It is certainly fascinating to think about if your interests tend toward microbiology, and we look forward to continuing academic (and perhaps popular) discussion and debate on the subject.

I have earlier discussed how AI risk suffers from the silliness heuristic.

Of course, one can argue that AI risk is less recognized as a serious issue than superplagues, meteors or economic depressions (although, given what news media have been writing recently about Ebola and 1950 DA, their level of understanding can be debated). There is disagreement on AI risk among people involved in the subject, with rather bold claims of certainty from some, rational reasons to be distrustful of predictions, and plenty of vested interests and motivated thinking. But this internal debate is not the reason the media make a hash of things: it is not as if there is an AI safety denialist movement pushing the message that worrying about AI risk is silly, or planting stupid arguments to discredit safety concerns. Rather, the whole issue is so far out there that not only the presumed reader but the journalist too will not know what to make of it. It is hard to judge credibility, how good the arguments are, and the size of the risks. So logic does not apply very strongly – and anyway, it does not sell.

This is true for climate change and pandemics too. But in those cases there is more of an infrastructure of concern, there are some standards (despite vehement disagreements), and the risks are not entirely unprecedented. There are more ways of dealing with the issue than referring to fiction or to abstract arguments that tend to fly over the heads of most people. The discussion has moved further from the frontiers of the thinkable, not just among experts but also among journalists and the public.

How do discussions move from silly to mainstream? Part of it is mere exposure: if the issue comes up again and again, and other factors do not reinforce it as being beyond the pale, it will become more thinkable. This is how other issues creep up on the agenda too: small stakeholder groups drive their arguments, and if they are compelling they will eventually leak into the mainstream. High-status groups have an advantage (uncorrelated with the correctness of their arguments, except for the very rare groups that gain status from being documented as having been right about a lot of things).

Another is demonstrations. They do not have to be real instances of the issue, but close enough to create an association: a small disease outbreak, an impressive AI demo, claims that the Elbonian education policy really works. They make things concrete, acting as a seed crystal for a conversation. Unfortunately, these demonstrations do not have to be truthful either: they focus attention and update people's probabilities, but they might be deeply flawed. Software passing a Turing test does not tell us much about AI. The safety of existing AI software or biohacking does not tell us much about their future safety. 43% of all impressive-sounding statistics quoted anywhere are wrong.

Truth likely makes argumentation easier (reality is biased in your favour; opponents may have more freedom to make things up, but what they make up is more vulnerable to disproof) and can produce demonstrations. Truth-seeking people are more likely to want to listen to correct argumentation and evidence, and even if they are a minority they might be more stable in their beliefs than people who just view beliefs as clothing to wear (of course, zealots are also very stable in their beliefs, since they isolate themselves from inconvenient ideas and facts).

Truth alone cannot efficiently win the battle of bringing an idea in from the silly fringe of the thinkability frontier to the practical mainstream. But I think humour can act as a lubricant: by showing the actual silliness of some mainstream argumentation, we move those arguments outwards towards the frontier, making a bit more space for other things to move inward. When demonstrations are wrong, joke about their flaws. When ideas are pushed merely because of status, poke fun at the hot-air cushions holding them up.