No terrorist attack in the USA will kill > 100 people: 95%
1 (Orlando: 50)
I will be involved in at least one published/accepted-to-publish research paper by the end of 2016: 95%
Vesuvius will not have a major eruption: 95%
I will remain at my same job through the end of 2016: 90%
MAX IV in Lund delivers X-rays: 90%
Andart II will remain active: 90%
Israel will not get in a large-scale war (ie >100 Israeli deaths) with any Arab state: 90%
US will not get involved in any new major war with death toll of > 100 US soldiers: 90%
New Zealand has not decided to change its current flag by the end of the year: 85%
No multi-country Ebola outbreak: 80%
Assad will remain President of Syria: 80%
ISIS will control less territory than it does right now: 80%
North Korea’s government will survive the year without large civil war/revolt: 80%
The US NSABB will allow gain of function funding: 80%
1 [Their report suggests review before funding, currently it is up to the White House to respond. ]
US presidential election: Democratic win: 75%
A general election will be held in Spain: 75%
Syria’s civil war will not end this year: 75%
There will be no NEO with Torino Scale >0 on 31 Dec 2016: 75%
0 (2016 XP23 showed up on the scale according to JPL, but NEODyS Risk List gives it a zero.)
The Atlantic basin ACE will be below 96.2: 70%
0 (ACE estimate on Jan 1 is 132)
Sweden does not get a seat on the UN Security Council: 70%
Bitcoin will end the year higher than $200: 70%
Another major eurozone crisis: 70%
Brent crude oil will end the year lower than $60 a barrel: 70%
I will actually apply for a UK citizenship: 65%
UK referendum votes to stay in EU: 65%
China will have a GDP growth above 5%: 65%
Evidence for supersymmetry: 60%
UK larger GDP than France: 60%
1 (although it is a close call; estimates put France at 2421.68 and UK at 2848.76 – quite possibly this might change)
France GDP growth rate less than 2%: 60%
I will have made significant progress (4+ chapters) on my book: 55%
Iran nuclear deal holding: 50%
Apple buys Tesla: 50%
The Nikkei index ends up above 20,000: 50%
0 (nearly; the Dec 20 max was 19,494)
Overall, my Brier score is 0.1521. Which doesn’t feel too bad.
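For concreteness, here is how the Brier score arithmetic works, using five of the graded predictions above (1 = the stated event happened, 0 = it did not); the full 0.1521 averages over all questions, this is just a sketch on a subset:

```python
# Brier score: mean squared difference between stated probability and outcome.
# Five of the graded predictions above; outcome is 1 if the event happened.
predictions = [
    (0.95, 1),  # no US terror attack killing >100 people
    (0.75, 0),  # no NEO with Torino scale >0 (scored as a miss above)
    (0.70, 0),  # Atlantic ACE below 96.2
    (0.60, 1),  # UK GDP larger than France's
    (0.50, 0),  # Nikkei ends above 20,000
]
brier = sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)
print(round(brier, 3))  # 0.293 for this subset (lower is better)
```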
Plotting the results (where I bin together things in [0.5,0.55], [0.6,0.65], [0.7,0.75], [0.8,0.85], [0.9,0.99] bins) gives this calibration plot:
Overall, I did great on my “sure bets” and fairly weakly on my less certain bets. I did not have enough questions to make this very statistically solid (coming up with good prediction questions is hard!), but the overall shape suggests that I am a bit overconfident, which is not surprising.
Time to come up with good 2017 prediction questions.
But what if we looked at real functions? If we use a single function the zeros will typically form a curve in the plane. In order to get discrete zeros we typically need two functions whose zero sets intersect. We can think of it as a map F from R^2 to R^2, where the x’s are 2D vectors. In this case Newton’s method turns into solving the linear equation system J(x_n)(x_{n+1} - x_n) = -F(x_n), where J is the Jacobian matrix (J_ij = ∂F_i/∂x_j) and x_n now denotes the n’th iterate.
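The 2D Newton step can be sketched in a few lines. The test system here is a hypothetical one of my choosing (a circle intersected with a line), picked only because its root is easy to verify:

```python
import numpy as np

def newton2d(F, J, x0, steps=50, tol=1e-12):
    """Newton's method for a map F: R^2 -> R^2.

    Iterates x_{n+1} = x_n - J(x_n)^{-1} F(x_n), implemented by solving
    the linear system J(x_n) dx = -F(x_n) rather than inverting J.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        fx = F(x)
        if np.linalg.norm(fx) < tol:
            break
        x = x + np.linalg.solve(J(x), -fx)
    return x

# Hypothetical test system: F(x, y) = (x^2 + y^2 - 4, x - y),
# a circle intersected with a line, with a zero at (sqrt 2, sqrt 2).
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4, v[0] - v[1]])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [1.0, -1.0]])
root = newton2d(F, J, [2.0, 1.0])
print(root)  # -> approximately (1.41421, 1.41421)
```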
The simplest case of nontrivial dynamics of the method is for third degree polynomials, and we want the x- and y-directions to interact a bit, so a first try is F(x, y) = (x^3 - x - y, y^3 - x - y). Below is a plot of the first and second components (red and green), as well as a blue plane for zero values. The zeros of the function are the three points where red, green and blue meet.
We have three zeros: one at (-√2, -√2), one at (0, 0), and one at (√2, √2). The middle one has a region of troublesomely similar function values – the red and green surfaces are tangent there.
The resulting fractal has a decided modernist bent to it, all hyperbolae and swooshes:
The troublesome region shows up, as well as the red and blue regions where iterates run off to large or small values; the basins of the three roots are shown in green shades.
Why is the style modernist?
In complex iterations you typically multiply with complex numbers, and if they have an imaginary component (they better have, to be complex!) that introduces a rotation or twist. Hence baroque filaments are ubiquitous, and you get the typical complex “style”.
But here we are essentially multiplying with a real matrix. For a real 2×2 matrix to act as a rotation it has to have a pair of complex conjugate eigenvalues, and for it to at least twist things around the trace needs to be small enough compared to the determinant so that the eigenvalues are complex: tr(A)^2 < 4 det(A), where tr(A) = a + d and det(A) = ad - bc if the matrix has the usual form [[a, b], [c, d]]. So if the trace and determinant are randomly chosen, we should expect a majority of cases to be non-rotational.
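A quick Monte Carlo check of this claim; taking the entries as independent Gaussians is my choice of "randomly chosen":

```python
import numpy as np

# Fraction of random real 2x2 matrices with a complex (twisting) eigenvalue
# pair; the eigenvalues are complex exactly when tr(A)^2 < 4 det(A).
rng = np.random.default_rng(0)
A = rng.normal(size=(100000, 2, 2))
tr = A[:, 0, 0] + A[:, 1, 1]
det = A[:, 0, 0] * A[:, 1, 1] - A[:, 0, 1] * A[:, 1, 0]
twisty = float(np.mean(tr**2 < 4 * det))
print(twisty)  # a clear minority of cases twist
```

For Gaussian entries this comes out below one third, consistent with most random real matrices being non-rotational.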
Moreover, in this particular case, the Jacobian tends to be diagonally dominant (quadratic terms on the diagonal) and that makes the inverse diagonally dominant too: the trace will be big, and hence the chance of seeing rotation goes down even more. The two “knots” where a lot of basins of attraction come together are the points where the trace does vanish, but since the Jacobian is also symmetric there will not be any rotation anyway. Double guarantee.
Can we make a twisty real Newton fractal? If we start with a vanilla rotation matrix and try to find a function that produces it, the simplest case is F(x, y) = (x cos θ - y sin θ, x sin θ + y cos θ). This is of course just a rotation by the angle θ, and it does not have very interesting zeros.
To get something fractal we need more zeros, and a few zeros in the derivatives too (why? because they cause iterates to be thrown far away from where they were, ensuring a complicated structure of the basin boundaries). The result of one such attempt is fun, but still far from baroque:
The problem might be that the twistiness is not changing. So we can make the rotation angle depend on position to make the dynamics even more complex:
Quite lovely, although still not exactly what I wanted (sounds like a Christmas present).
Back to the classics?
It might be easier just to hide the complex dynamics in an apparently real function like F(x, y) = (x^3 - 3xy^2 - 1, 3x^2 y - y^3) – the real and imaginary parts of (x + iy)^3 - 1 – which produces the archetypal Newton fractal.
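The archetypal Newton fractal comes from z^3 = 1; here is a sketch of computing its basin map purely in real coordinates (the grid size and iteration count are arbitrary choices):

```python
import numpy as np

# Basin map for Newton's method on z^3 = 1, written as a real map
# F(x, y) = (x^3 - 3xy^2 - 1, 3x^2 y - y^3), i.e. Re and Im of (x+iy)^3 - 1.
roots = [(1.0, 0.0), (-0.5, np.sqrt(3) / 2), (-0.5, -np.sqrt(3) / 2)]

def newton_basin(x, y, steps=40):
    """Return the index of the cube root of unity reached, or -1."""
    for _ in range(steps):
        f1 = x**3 - 3 * x * y**2 - 1
        f2 = 3 * x**2 * y - y**3
        # Jacobian has Cauchy-Riemann structure [[a, -b], [b, a]].
        a = 3 * x**2 - 3 * y**2
        b = 6 * x * y
        det = a * a + b * b
        if det < 1e-12:
            return -1  # singular Jacobian (e.g. the origin)
        # Solve [[a, -b], [b, a]] dx = -(f1, f2) in closed form.
        x -= (a * f1 + b * f2) / det
        y -= (a * f2 - b * f1) / det
    for i, (rx, ry) in enumerate(roots):
        if (x - rx)**2 + (y - ry)**2 < 1e-6:
            return i
    return -1

grid = np.linspace(-2, 2, 41)
labels = {newton_basin(x, y) for x in grid for y in grid}
print(labels)  # all three basins appear in the square
```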
It is interesting to see how much perturbing it causes a modernist shift. If we add a perturbation of strength ε to the function, then for small ε we get:
As we make the function more perturbed, it turns more modernist, undergoing a topological crisis for epsilon between 3.5 and 4:
In the end, we can see that the border between classic baroque complex fractals and the modernist swooshy real fractals is fuzzy. Or, indeed, fractal.
Warren Ellis’ Normal is a little story about the problem of being serious about the future.
As I often point out, most people in the futures game are basically in the entertainment industry: telling wonderful or frightening stories that allow us to feel part of a bigger sweep of history, reflect a bit, and then return to the present with the reassurance that we have some foresight. Relatively little of future studies is about finding decision-relevant insights and then acting on them. Such work exists, but it is not what the bulk of future-oriented people do. Taking the future seriously might require colliding with your society as you try to tell it that it is going the wrong way. Worse, the conclusions might tell you that your own values and goals are wrong.
Normal takes place at a sanatorium for mad futurists in the wilds of Oregon. The idea is that if you spend too much time thinking too seriously about the big and horrifying things in the future mental illness sets in. So when futurists have nervous breakdowns they get sent by their sponsors to Normal to recover. They are useful, smart, and dedicated people but since the problems they deal with are so strange their conditions are equally unusual. The protagonist arrives just in time to encounter a bizarre locked room mystery – exactly the worst kind of thing for a place like Normal with many smart and fragile minds – driving him to investigate what is going on.
As somebody working with the future, I think the caricatures of these futurists (or rather their ideas) are spot on. There are the urbanists, the singularitarians, the neoreactionaries, the drone spooks, and the invented professional divisions. Of course, here they are mad in a way that doesn’t allow them to function in society which softballs the views: singletons and Molochs are serious real ideas that should make your stomach lurch.
The real people I know who take the future seriously are overall pretty sane. I remember a documentary filmmaker at a recent existential risk conference mildly complaining that people were so cheerful and well-adapted: doubtless some darkness and despair would have made far more compelling imagery than chummy academics trying to salvage the bioweapons convention. Even the people involved in developing the Mutually Assured Destruction doctrine seem to have been pretty healthy. People who go off the deep end tend to do it not because of The Future but because of more normal psychological fault lines. Maybe we are not taking the future seriously enough, but I suspect it is more a case of an illusion of control: we know we are at least doing something.
This book convinced me that I need to seriously start working on my own book project, the “glass is half full” book. Much of our research at FHI seems to be relentlessly gloomy: existential risk, AI risk, all sorts of unsettling changes to the human condition that might slurp us down into a valueless attractor asymptoting towards the end of time. But that is only part of it: there are potential futures so bright that we do not just need sunshades, but we have problems with even managing the positive magnitude in an intellectually useful way. The reason we work on existential risk is that we (1) think there is enormous positive potential value at stake, and (2) we think actions can meaningfully improve chances. That is no pessimism, quite the opposite. I can imagine Ellis or one of his characters skeptically looking at me across the table at Normal and accusing me of solutionism and/or a manic episode. Fine. I should lay out my case in due time, with enough logos, ethos and pathos to convince them (Muhahaha!).
I think the fundamental horror at the core of Normal – and yes, I regard this more as a horror story than a techno-thriller or satire – is the belief that The Future is (1) pretty horrifying and (2) unstoppable. I think this is a great conceit for a story and a sometimes necessary intellectual tonic to consider. But it is bad advice for how to live a functioning life or actually make a saner future.
Full disclosure: Wohlforth and Hendrix interviewed me while they were writing their book Beyond Earth: Our Path to a New Home in the Planets, which I have not read yet, and I will only be basing the following on the SciAm essay. It is not really about settling Titan either, but something that bothers me with a lot of scenario-making.
A weak case for Titan and against Luna and Mars
Basically the essay outlines reasons why other locations in the solar system are not good: Mercury too hot, Venus way too hot, Mars and Luna have too much radiation. Only Titan remains, with a cold environment but not too much radiation.
A lot of course hinges on the assumptions:
We expect human nature to stay the same. Human beings of the future will have the same drives and needs we have now. Practically speaking, their home must have abundant energy, livable temperatures and protection from the rigors of space, including cosmic radiation, which new research suggests is unavoidably dangerous for biological beings like us.
I am not that confident in that we will remain biological or vulnerable to radiation. But even if we decide to accept the assumptions, the case against the Moon and Mars is odd:
Practically, a Moon or Mars settlement would have to be built underground to be safe from this radiation. Underground shelter is hard to build and not flexible or easy to expand. Settlers would need enormous excavations for room to supply all their needs for food, manufacturing and daily life.
So making underground shelters is supposedly much harder than settling Titan, where buildings need to be insulated against a −179 °C atmosphere and icy ground full of complex and quite likely toxic hydrocarbons. They suggest that there is no point in going to the moon to live in an underground shelter when you can do it on Earth, which is not too unreasonable – but is there a point in going to live inside an insulated environment on Titan either? The actual motivations would likely be less a desire for outdoor activities and more scientific exploration, reducing existential risk, and maybe industrialization.
Also, while making underground shelters in space may be hard, it does not look like an insurmountable problem. The whole concern is a bit like saying submarines are not practical because the cold of the depths of the ocean will give the crew hypothermia – true, unless you add heating.
Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can’t break.
It is not hard to find a major problem with a possible plan that you cannot see a reasonable way around. That doesn’t mean there isn’t one.
Settling for scenarios
Maybe Wohlforth and Hendrix spent a lot of time thinking about lunar excavation issues and consistent motivations for settlements to reach a really solid conclusion, but I suspect that they came to the conclusion relatively lightly. It produces an interesting scenario: Titan is not the standard target when we discuss where humanity ought to go, and it is an awesome environment.
Similarly the “humans will be humans” scenario assumptions were presumably chosen not after a careful analysis of the relative likelihood of biological and postbiological futures, but because they resemble the past and make an interesting scenario. Plus human readers like reading about humans rather than robots. Altogether it makes for a good book.
Clearly I have different priors compared to them on the ease and rationality of Lunar/Martian excavation and postbiology. Or even giving us D. radiodurans genes.
In The Age of Em Robin Hanson argues that if we get the brain emulation scenario, space settlement will be delayed until things get really weird: while postbiological astronauts are very adaptable, much of the mainstream of civilization will be turning inward towards a few dense centers (for economic and communications reasons). Eventually resource demand, curiosity or just whatever comes after the Age of Em may lead to settling the solar system. But that process will be pretty different even if it is done by mentally human-like beings that do need energy and protection. Their ideal environments would be energy-gradient rich, with short communications lags: Mercury, slowly getting disassembled into a hot Dyson shell, might be ideal. So here the story will be no settlement, and then wildly exotic settlement that doesn’t care much about the scenery.
But even with biological humans we can imagine radically different space settlement scenarios, such as the Gerard K. O’Neill scenario where planetary surfaces are largely sidestepped for asteroids and space habitats. This is Jeff Bezos’ vision rather than Elon Musk’s and Wohlforth/Hendrix’s. It also doesn’t tell the same kind of story: here our new home is not in the planets but between them.
My gripe is not against settling Titan, or even thinking it is the best target because of some reasons. It is against settling too easily for nice scenarios.
Beyond the good story
Sometimes we settle for scenarios because they tell a good story. Sometimes because they are amenable to study among other, much less analyzable possibilities. But ideally we should aim at scenarios that inform us in a useful way about options and pathways we have.
That includes making assumptions wide enough to cover relevant options, even the less glamorous or tractable ones.
That requires assuming future people will be just as capable (or more) at solving problems: just because I can’t see a solution to X doesn’t mean it is not trivially solved in the future.
In standard scenario literature there are often admonitions not to select just a “best case scenario”, a “worst case scenario” and a “business as usual scenario” – scenario planning comes into its own when you see nontrivial, mixed-value possibilities. In particular, we want decision-relevant scenarios that make us change what we will do when we hear about them (rather than good stories, which entertain but do not change our actions). But scenarios on their own do not tell us how to make these decisions: they need to be built from our rationality and decision theory applied to their contents. Easy scenarios make it trivial to choose (cake or death?), but those choices would have been obvious even without the scenarios: no forethought needed except to bring up the question. Complex scenarios force us to think in new ways about relevant trade-offs.
As the judge noted, the verdict was not a statement on the validity of cryonics itself, but about how to make decisions about prospective orders. In many ways the case would presumably have gone the same way if there had been a disagreement about whether the daughter could have Catholic last rites. However, cryonics makes things fresh and exciting (I have been in the media all day thanks to this).
What is the ethics of parents disagreeing about the cryosuspension of their child?
One obvious principle is that parents ought to act in the best interest of their children.
If the child is morally mature and with informed consent, then they can clearly have a valid interest in taking a chance on cryonics: they might not be legally adult, but as in normal medical ethics their stated interests have strong weight. Conversely, one could imagine a case where a child would not want to be preserved, in which case I think most people would agree their preferences should dominate.
In this case the issue was that the parents were disagreeing and the child was not legally old enough.
If one thinks cryonics is reasonable, then one should clearly cryosuspend the child: it is in their best interest. But if one thinks cryonics is not reasonable, is it harming the interest of the child? This seems to require some theory of how cryonics is bad for the interests of the child.
As an analogy, imagine a case where one parent is a Jehovah’s Witness and wants to refuse a treatment involving blood transfusion: the child will die without the treatment, and it will be a close call even with it. Here the objecting parent may claim that undergoing the transfusion harms the child in an important spiritual way and refuse consent. The other parent disagrees. In practice the law would come down on the side of the pro-transfusion parent.
On this account and if we agree the cases are similar, we might say that parents have a legal duty to consent to cryonics.
Weak and strong reasons
In practice the controversial status of cryonics may speak against this: many people disagree about cryonics being good for one’s welfare. However, most such arguments seem to be based on various far-fetched scenarios about how the future could be a bad place to end up in. Others bring up loss of social connections, or that personal identity would be disrupted. A more rational argument is that it is an unproven treatment of dubious efficacy, which would make it irrational to undertake if there were an alternative; however, since there isn’t any alternative, this argument has little power. The same goes for the risk of losing social connections or identity: had there been an alternative to death (which definitely severs connections and dissolves identity) that might have been preferable. If one seriously thinks that the future will be so dark that it is better not to get there, one should probably not have children.
In practice it is likely that the status of cryonics as a nonstandard treatment would make the law hesitate to overrule parents. We know blood transfusions work, and while spiritual badness might be respectable as a private view, we as a society do not accept it as a sufficient reason to have somebody die. But in the case of cryonics the unprovenness of the treatment means that hope for revival is on nearly the same epistemic level as spiritual badness: a respectable private view, but not strong enough to be a valid public reason. Cryonicists are doing their best to produce scientific evidence – tissue scans, memory experiments, protocols – that move the reasons to believe in cryonics from the personal faith level to the public evidence level. They already have some relevant evidence. As soon as lab mice are revived, or people become convinced the process saves the connectome, the reasons would be strengthened and cryonics would become more akin to blood transfusion.
The key difference is that weak private reasons are enough to allow an experimental treatment when there is no alternative but death, but they are generally not enough to choose an experimental treatment when some better treatment exists. Conversely, weak reasons may suffice to disallow an unproven or uncertain treatment, but not a proven one. And disallowing a treatment with no alternative is equivalent to selecting death.
When two parents disagree about cryonics (and the child does not have a voice) it hence seems that they both have weak reasons, but the asymmetry between having a chance and dying tilts in favor of cryonics. If it was purely a matter of aesthetics or value (for example, arguing about the right kind of last rites) there would be no societal or ethical constraint. But here there is some public evidence, making it at least possible that the interests of the child might be served by cryonics. Better safe than sorry.
When the child also has a voice and can express its desires, then it becomes obvious which way to go.
King Solomon might have solved the question by cryosuspending the child straight away, promising the dissenting parent not to allow revival until they either changed their mind or there was enough public evidence to convince anybody that it would be in the child’s interest to be revived. The nicest thing about cryonics is that it buys you time to think things through.
There is actually a shocking amount of confusion about what the distinction between morals and ethics is. Diffen.com says ethics is about rules of conduct produced by an external source while morals are an individual’s own principles of right and wrong. Grammarist.com says morals are the principles on which one’s own judgement of right and wrong is based (abstract, subjective and personal), while ethics are the principles of right conduct (practical, social and objective). Ian Welsh gives a soundbite: “morals are how you treat people you know. Ethics are how you treat people you don’t know.” Paul Walker and Terry Lovat say ethics leans towards decisions based on individual character and subjective understanding of right and wrong, while morals is about widely shared communal or societal norms – here ethics is individual assessment of something being good or bad, while morality is inter-subjective community assessment.
Wikipedia distinguishes between ethics as a research field, the common human ability to think critically about moral values and direct actions appropriately, and a particular person’s principles of values. Morality is the differentiation between things that are proper and improper, as well as a body of standards and principles derived from a code of conduct in some philosophy, religion or culture – or from a standard a person believes to be universal.
Dictionary.com regards ethics as a system of moral principles, the rules of conduct recognized in some human environment, an individual’s moral principles (and the branch of philosophy). Morality is about conforming to the rules of right conduct, having moral quality or character, a doctrine or system of morals and a few other meanings. The Cambridge dictionary thinks ethics is the study of what is right or wrong, or the set of beliefs about it, while morality is a set of personal or social standards for good/bad behavior and character.
And so on.
I think most people try to include the distinction between shared systems of conduct and individual codes, and the distinction between things that are subjective, socially agreed on, and maybe objective. Plus we all agree that ethics is a philosophical research field.
My take on it
I like to think of it as an AI issue. We have a policy function that maps state–action pairs to a probability of acting that way; this is set using a value function where various states are assigned values. Morality in my sense is just the policy function, and maybe the value function: they have been learned through interacting with the world in various ways.
Ethics in my sense is ways of selecting policies and values. We are able to not only change how we act but also how we evaluate things, and the information that does this change is not just reward signals that update value function directly, but also knowledge about the world, discoveries about ourselves, and interactions with others – in particular ideas that directly change the policy and value functions.
When I realize that lying rarely produces good outcomes (too much work) and hence reduce my lying, then I am doing ethics (similarly, I might be convinced about this by hearing others explain that lying is morally worse than I thought or convincing me about Kantian ethics). I might even learn that short-term pleasure is less valuable than other forms of pleasure, changing how I view sensory rewards.
Academic ethics is all about the kinds of reasons and patterns we should use to update our policies and values, trying to systematize them. It shades over into metaethics, which tries to understand what ethics is really about (and what metaethics is about: metaethics is its own meta-discipline, unlike metaphysics, which has metametaphysics – though that one, I think, is its own meta-discipline).
I do not think I will resolve any confusion, but at least this is how I tend to use the terminology. Morals is how I act and evaluate, ethics is how I update how I act and evaluate, metaethics is how I try to think about my ethics.
Robin Hanson mentions that some people take him to task for working on one scenario (WBE) that might not be the most likely future scenario (“standard AI”); he responds by noting that there are perhaps 100 times more people working on standard AI than WBE scenarios, yet the probability of AI is likely not a hundred times higher than WBE. He also notes that there is a tendency for thinkers to clump onto a few popular scenarios or issues. However:
In addition, due to diminishing returns, intellectual attention to future scenarios should probably be spread out more evenly than are probabilities. The first efforts to study each scenario can pick the low hanging fruit to make faster progress. In contrast, after many have worked on a scenario for a while there is less value to be gained from the next marginal effort on that scenario.
This is very similar to my own thinking about research effort. Should we focus on things that are likely to pan out, or explore a lot of possibilities just in case one of the less obvious cases happens? Given that early progress is quick and easy, we can often get a noticeable fraction of whatever utility the topic has by just a quick dip. The effective altruist heuristic of looking at neglected fields also is based on this intuition.
But under what conditions does this actually work? Here is a simple model:
There are N possible scenarios, one of which (number i*) will come about. They have probability p_i. We allocate a unit budget of effort h_i to the scenarios: Σ_i h_i = 1. For the scenario that comes about, we get utility √(h_i*) (diminishing returns).
Here is what happens if we allocate effort proportional to a power of the scenario probabilities, h_i ∝ p_i^α. α = 0 corresponds to even allocation, α = 1 to allocation proportional to the likelihood, and α > 1 to favoring the most likely scenarios. In the following I will run Monte Carlo simulations where the probabilities are randomly generated in each instantiation. The outer bluish envelope represents 95% of the outcomes, the inner ranges from the lower to the upper quartile of the utility gained, and the red line is the expected utility.
This is the N = 2 case: we have two possible scenarios with probability p and 1 − p (where p is uniformly distributed in [0,1]). Just allocating evenly gives us √(1/2) ≈ 0.71 utility on average, but if we put in more effort on the more likely case we will get up to 0.8 utility. As we focus more and more on the likely case there is a corresponding increase in variance, since we may guess wrong and lose out. But 75% of the time we will do better than if we just allocated evenly. Still, allocating nearly everything to the most likely case means that one does lose out on a bit of hedging, so the expected utility declines slowly for large α.
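A minimal simulation of the N = 2 case under this power-law allocation (sample count is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)

def expected_utility(alpha, samples=20000):
    # N=2 scenarios with probabilities (p, 1-p), p ~ U[0,1]; effort
    # h_i proportional to p_i^alpha; utility is sqrt(h) of the scenario
    # that actually happens.
    p = rng.uniform(size=samples)
    probs = np.stack([p, 1 - p], axis=1)
    w = probs**alpha
    h = w / w.sum(axis=1, keepdims=True)
    happened = (rng.uniform(size=samples) > p).astype(int)  # 0 w.p. p
    return np.sqrt(h[np.arange(samples), happened]).mean()

print(round(expected_utility(0.0), 3))  # even split: sqrt(1/2) ~ 0.707
print(round(expected_utility(2.0), 3))  # focusing on the likely scenario
```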
The case with larger N (where the probabilities are drawn from a flat Dirichlet distribution) behaves similarly, but the expected utility is smaller since it is less likely that we will hit the right scenario.
What is going on?
This doesn’t seem to fit Robin’s or my intuitions at all! The best we can say about uniform allocation is that it doesn’t produce much regret: whatever happens, we will have made some allocation to the possibility. For large N this actually works out better than the directed allocation for a sizable fraction of realizations, but on average we get less utility than betting on the likely choices.
The problem with the model is of course that we actually know the probabilities before making the allocation. In reality, we do not know the likelihood of AI, WBE or alien invasions. We have some information, and we do have priors (like Robin’s view that the probability of AI is not a hundred times that of WBE), but we are not able to allocate perfectly. A more plausible model would give us probability estimates instead of the actual probabilities.
We know nothing
Let us start by looking at the worst possible case: we do not know what the true probabilities are at all. We can draw estimates from the same distribution – it is just that they are uncorrelated with the true situation, so they are just noise.
In this case uniform distribution of effort is optimal. Not only does it avoid regret, it has a higher expected utility than trying to focus on a few scenarios (α > 0). The larger N is, the less likely it is that we focus on the right scenario, since we know nothing. The rationality of ignoring irrelevant information is pretty obvious.
Note that if we have to allocate a minimum effort to each investigated scenario we will be forced to effectively increase our α above 0. The above result gives the somewhat optimistic conclusion that the loss of utility compared to an even spread is rather mild: in the uniform case we have a pretty low amount of effort allocated to the winning scenario, so the low chance of being right in the nonuniform case is balanced by having a slightly higher effort allocation on the selected scenarios. For high α there is a tail of rare big “wins” when we hit the right scenario that drags the expected utility upwards, even though in most realizations we bet on the wrong case. This is very much the hedgehog predictor story: occasionally they have analysed the scenario that comes about in great detail and get intensely lauded, despite looking at the wrong things most of the time.
We know a bit
We can imagine that knowing more should allow us to gradually interpolate between the different results: the more you know, the more you should focus on the likely scenarios.
If we take the mean of the true probabilities and some randomly drawn probabilities (the “half random” case) the curve looks quite similar to the case where we actually know the probabilities: we get a maximum at an intermediate α. In fact, we can mix in just a bit of the true probability (a small β) and get a fairly good guess of where to allocate effort (i.e. we allocate effort as h_i ∝ (β p_i + (1 − β) q_i)^α, where the q_i are uncorrelated noise probabilities). The optimal α grows roughly linearly with β in this case.
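A sketch of this noisy-estimate model, where β interpolates between pure noise (β = 0) and full knowledge (β = 1); N and the sample sizes are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)

def utility_with_estimates(alpha, beta, N=10, samples=5000):
    # True probabilities p from a flat Dirichlet; estimates mix in noise:
    # effort is allocated proportional to (beta*p + (1-beta)*q)**alpha,
    # where q is an uncorrelated draw from the same distribution.
    total = 0.0
    for _ in range(samples):
        p = rng.dirichlet(np.ones(N))
        q = rng.dirichlet(np.ones(N))
        w = (beta * p + (1 - beta) * q) ** alpha
        h = w / w.sum()
        happened = rng.choice(N, p=p)   # realized scenario follows the truth
        total += np.sqrt(h[happened])
    return total / samples

print(utility_with_estimates(2.0, 0.0))  # focusing on pure noise: hurts
print(utility_with_estimates(0.0, 0.0))  # uniform baseline: sqrt(1/N)
print(utility_with_estimates(2.0, 1.0))  # focusing with full knowledge
```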
Adding a bit of realism, we can consider a learning process: after allocating some fraction γ of our effort to exploring the different scenarios we get better information about the probabilities, and can now reallocate. A simple model may be that the standard deviation of the noise behaves as C/√(e_i), where e_i is the effort placed in exploring the probability of scenario i and C is some constant denoting how tough it is to get information. So if we begin by spreading the exploration budget uniformly we will have noise at reallocation of the order of C√(N/γ). Putting this together with the above results we can estimate the expected utility as a function of γ. After this exploration, we use the remaining 1 − γ of the effort to work on the actual scenarios.
This is surprisingly inefficient. The reason is that the expected utility declines as √(1−δ), while the gain is just the utility difference between the uniform case and the optimally concentrated case, which we know is pretty small. If C is small (i.e. a small amount of effort is enough to figure out the scenario probabilities) there is an optimal nonzero δ; this optimum decreases as C becomes smaller. If C is large, then the best approach is just to spread efforts evenly.
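A sketch of the two-stage explore-then-allocate model, under the same assumed toy setup (exploration budget δ buys an estimate with noise standard deviation C√(N/δ); the remaining 1−δ is concentrated on the noisy estimate as if it were true):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 20
p = rng.dirichlet(np.ones(N))         # true probabilities, unknown to the planner

def eu_after_learning(delta, C, trials=2000):
    # spend delta on exploration: the probability estimate then carries
    # Gaussian noise with standard deviation C*sqrt(N/delta); the remaining
    # 1-delta is concentrated (e_i proportional to estimate_i^2) on it
    sigma = C * np.sqrt(N / delta)
    total = 0.0
    for _ in range(trials):
        est = np.clip(p + sigma * rng.normal(size=N), 1e-9, None)
        effort = (1 - delta) * est**2 / np.sum(est**2)
        total += np.sum(p * np.sqrt(effort))
    return total / trials

for delta in (0.05, 0.2, 0.5):
    print(delta, round(eu_after_learning(delta, C=0.01), 3))
```

Sweeping δ for different values of C shows the pattern described above: cheap information makes a small exploration budget worthwhile, while expensive information (large C) leaves the noise so large that exploration only eats into the √(1−δ) factor.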
So, how should we focus? These results suggest that the key issue is knowing how little we know compared to what can be known, and how much effort it would take to know significantly more.
If there is little more that can be discovered about which scenarios are likely (because our state of knowledge is already pretty good, the world is very random, or improving our knowledge of what will happen would be costly), then we should roll with it and either distribute effort among the likely scenarios (when we know them) or spread it widely (when we are in ignorance).
If we can acquire significant information about the probabilities of the scenarios, then we should do it, but not overdo it. If information is very easy to get, we should expend just some modest effort on it and then use the rest to flesh out our scenarios. If it is doable but costly, we may spend a fair bit of our budget on it. But if it is hard, it is better to go directly to the object-level scenario analysis as above. In no case should we expect the improvement to be enormous.
Here I have used a square-root diminishing returns model. That drives some of the flatness of the optima: had I used a logarithmic function things would have been even flatter, while if the returns diminished more mildly the gains from optimal effort allocation would have been more noticeable. Clearly, understanding the diminishing returns, the number of alternatives, and the cost of learning the probabilities better matters for setting your strategy.
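The effect of the returns curve can be illustrated with power-law returns u(e) = e^β, a family I am using here for illustration: Lagrange multipliers give the optimal allocation e_i ∝ p_i^(1/(1−β)), and milder diminishing returns (β closer to 1) make concentrating on likely scenarios pay off much more.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 20
p = rng.dirichlet(np.ones(N))         # assumed scenario probabilities

def gain(beta):
    # returns-to-effort curve u(e) = e**beta; the optimal allocation is
    # e_i proportional to p_i**(1/(1-beta)), from the first-order conditions
    opt = p**(1.0 / (1.0 - beta))
    opt /= opt.sum()
    eu_opt = np.sum(p * opt**beta)
    eu_uniform = np.sum(p * (1.0 / N)**beta)
    return eu_opt / eu_uniform        # ratio of optimal to uniform utility

print(gain(0.5))   # square root: modest advantage from concentrating
print(gain(0.9))   # milder diminishing returns: larger advantage
```

At β = 0.5 this reduces to the square-root model above, where the advantage of optimal concentration over an even spread is small; pushing β toward 1 concentrates the optimum sharply on the most likely scenario.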
In the case of future studies we know that the number of scenarios is very large. We know that the returns to forecasting efforts are strongly diminishing for most kinds of forecasts. We know that extra effort in reducing uncertainty about scenario probabilities in e.g. climate models also has strongly diminishing returns. Together this suggests that Robin is right: it is rational to stop clustering too hard on favorite scenarios. Insofar as we learn something useful from considering scenarios, we should explore as many as feasible.
The problem is not that it is absurd to care about existential risks or the far future (which was the Economist‘s unfortunate claim), nor that it is morally wrong to have a separate colony, but that there might be better risk reduction strategies with more bang for the buck.
One interesting aspect is that making space more accessible makes space refuges a better option. Even if space refuges are not the best choice right now, at some point in the future they may well become it. There are of course other reasons to pursue this too (science, business, even technological art).
So while existential risk mitigation right now might rationally aim at putting out the current brushfires and trying to set the long-term strategy right, doing the groundwork for eventual space colonisation seems to be rational.
My view is largely that moral action is strongly driven and motivated by emotions rather than reason, but outside the world of the blindingly obvious or everyday human activity our intuitions and feelings are not great guides. We do not function well morally when the numbers get too big or the cognitive biases become maladaptive. Morality may be about the heart, but ethics is in the brain.