This fall I have been chairing a programme on existential risk at the Gothenburg Centre for Advanced Studies, thanks to Olle Häggström. Visiting researchers come and participate in seminars and discussions on existential risk, ranging from the very theoretical (how do future people count?) to the very applied (should we put existential risk on the school curriculum? How?). I gave a Petrov Day talk about how to estimate the risk of nuclear war and how observer selection effects might bias those estimates, besides seminars on everything from the Fermi paradox to differential technology development. In short, I have been very busy.
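(To give a flavour of the observer-selection point, here is a minimal sketch with my own toy setup rather than anything from the talk. Suppose each year carries an unknown probability $p$ of a nuclear war severe enough to remove the observers doing the estimating. The naive likelihood of the clean record we see is

$$P(\text{no war in } N \text{ years} \mid p) = (1-p)^N,$$

which pushes estimates of $p$ towards zero. But observers only exist in the surviving histories, so the probability of observing a clean record, conditional on being around to observe anything at all, is close to 1 whatever $p$ is:

$$P(\text{clean record} \mid p, \text{observers exist}) \approx 1.$$

Hence our unblemished history is much weaker evidence for a small $p$ than the naive calculation suggests, and frequency-based risk estimates tend to be biased downwards.)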
To open the programme we had a workshop on existential risk on September 7-8, 2017. The videos of the talks are now up:
- Anders Sandberg: Tipping points, uncertainty and systemic risks: what to do when the whole is worse than its parts? (I was mistaken about my time allotment, so the talk runs at breakneck speed.)
- Karin Kuhlemann: Complexity, creeping normalcy, and conceit: Why certain catastrophic risks are sexier than others
- Phil Torres: Agential risks: Implications for existential risk reduction
- Karim Jebari: Resetting the tape of history
- Due to a technical mishap, we have no video of David Denkenberger's talk on Cost of non-sunlight dependent food for agricultural catastrophes. Try instead watching his talk Feeding everyone no matter what, given at CSER in Cambridge last year, which covers much of the same ground.
- Thore Husfeldt: Plausibility and utility of apocalyptic AI scenarios
- Roman Yampolskiy: Artificial intelligence as an existential risk to humanity
- Stuart Armstrong: Practical methods to make safe AI
- Robin Hanson: Disasters in the Age of Em and after
- Katja Grace: Empirical evidence on the future of AI
- James Miller: Hints from the Fermi paradox for surviving existential risks
- Catherine Rhodes: International governance of existential risk
- Seth Baum: In search of the biggest risk reduction opportunities
A few key realisations and themes so far, in my opinion:
(1) The pronatalist/maximiser assumptions underlying some of the motivations for existential risk reduction were challenged; there is an interesting question of what role “modest futures” rather than “grand futures” play, and whether non-maximising goals still imply existential risk reduction.
(2) The importance of figuring out how “suffering risks”, potential futures containing astronomical amounts of suffering, relate to existential risks. Allocating effort rationally between them touches on some profound problems.
(3) The under-determination problem of inferring human values from observed behaviour (the subject of Stuart’s talk) resonated with the under-determination of AI goals in Olle’s critique of the convergent instrumental goal thesis, and with other discussions. Basically, complex agent-like systems might be harder to describe succinctly than we often think.
(4) The stability of complex adaptive systems – brains, economies, trajectories of human history, AI. Why are some systems reliably resilient, and can we copy that?
(5) The importance of estimating force projection abilities in space and as the limits of physics are approached. I am starting to suspect there is a deep physics answer to the question of attacker advantage, and a trade-off between information and energy in attacks.
We will produce an edited journal issue with papers inspired by our programme; stay tuned. Avancez!
Regarding the risks of nuclear war, the North Korea situation obviously springs to mind.
I’ve just read a long, interesting article that argues against the suggestion that the people who voted for Trump must have been insane (non-rational). The author reasons that they were in fact rational from their own perspective, or at least as rational as the Clinton voters.
So how do you deal with parties to a potential nuclear war who both see themselves as acting rationally?
http://quillette.com/2017/09/28/trump-voters-irrational/
Written by Keith Stanovich, professor emeritus of applied psychology and human development at the University of Toronto. His latest book is The Rationality Quotient.