I recently nerded out about high-energy proton interaction with matter, enjoying reading up on the Bethe equation at the Particle Data Group review and elsewhere. That got me to look around at the PDG website, which is full of awesome stuff – everything from math and physics reviews to data for the most obscure “particles” ever, plus tests of how conserved the conservation laws are.
Historical graph of physical constant estimates from K.A. Olive et al. (Particle Data Group), Chin. Phys. C, 38, 090001 (2014) and 2015 update.
The first thing that strikes the viewer is that the estimates have moved a fair bit, often ending up far outside the original error bars: six of them have escaped their first bars entirely. That doesn’t look very good for science!
Fortunately, it turns out that these error bars are not 95% confidence intervals (the most common form in many branches of science) but 68.3% confidence intervals (one standard deviation, if things are normal). That means having half of them out of range is entirely reasonable! On the other hand, most researchers don’t understand error bars (original paper), and we should be able to do much better.
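As a quick sanity check, one can compute how surprising it would be to see, say, six estimates land outside their one-sigma bars. A minimal Matlab sketch (the count of a dozen plotted constants is made up for illustration; `binocdf` is in the Statistics Toolbox):

```matlab
% Chance of at least 6 of 12 independent estimates landing outside their
% 68.3% error bars, if each falls outside with probability 1 - 0.683.
% (The count of 12 constants is a made-up illustration.)
p_out = 1 - 0.683;
P = 1 - binocdf(5, 12, p_out);
fprintf('P(>=6 of 12 outside) = %.2f\n', P);
```

The probability is sizeable, so a plot like this is not by itself evidence of sloppy science.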
Sometimes large changes occur. These usually reflect the introduction of significant new data or the discarding of older data. Older data are discarded in favor of newer data when it is felt that the newer data have smaller systematic errors, or have more checks on systematic errors, or have made corrections unknown at the time of the older experiments, or simply have much smaller errors. Sometimes, the scale factor becomes large near the time at which a large jump takes place, reflecting the uncertainty introduced by the new and inconsistent data. By and large, however, a full scan of our history plots shows a dull progression toward greater precision at central values quite consistent with the first data points shown.
Overall, kudos to PDG for showing the history and making it clearer what is going on! But I do not agree it is a dull progression.
Plot of light-speed measurements 1875-1958. Error bars are standard error. From Max Henrion and Baruch Fischhoff. Assessing uncertainty in physical constants. American Journal of Physics 54, 791 (1986); doi: 10.1119/1.14447.
Note that the shifts were far larger than the estimated error bars. The dip in the 1930s and 40s even made some physicists propose that c could be changing over time. Overall, Henrion and Fischhoff find that physicists have been rather overconfident in the tight error bounds on their measurements. The approach towards the current estimates is anything but dull, and hides many amusing historical anecdotes.
Stories like this might have been helpful; it is notable that the PDG histories on the right, for newer constants, seem to stay closer to the present value than the longer ones to the left. Maybe this is just because they have not had the time to veer off yet, but one can be hopeful.
Still, even if people are improving, this might not mean the conclusions stay stable or approach truth monotonically. A related issue is “negative learning”, where more data and improved models make the consensus view of a topic move in the wrong direction: Oppenheimer, M., O’Neill, B. C., & Webster, M. (2008). Negative learning. Climatic Change, 89(1-2), 155-172. Here the problem is not just that people are overconfident in how certain they can be about their conclusions, but also that there is a bit of group-think, plus the fact that the models change in structure and are affected in different ways by the same data. They point out how estimates of ozone depletion oscillated, and how the consensus on the stability of climate has shifted from oscillatory (before 1968) to instability (1968–82), to stability (1982–96), and now to instability again (1996–2006). These problems are not due to mere irrationality, but to the fact that as we learn more and build better models, these incomplete but better models may still deviate strongly from the ground truth because they miss some key component.
Noli fumare
This is related to what Nick Bostrom calls the “data fumes” problem. Early data will be fragmentary and explanations uncertain – but the data points and their patterns are very salient, as are the early models, since there is nothing else. So we begin to anchor on them. Then new data arrives and the models improve… and the old patterns are revealed as statistical noise, or bugs in the simulation or plotting routine. But since we anchored on them, we are unlikely to update as strongly towards the new most likely estimates. Worse, accommodating a new model takes mental work; our status quo bias will be pushing against the update. Even if we do accommodate the new state, things will likely change more – we may well end up either with a view anchored on early noise, or assume that the final state is far more uncertain than it actually is (since we weigh the early jumps strongly because of their saliency).
This is of course why most people prefer to believe a charismatic diet cult leader “expert” rather than trying to dig through 70 years of messy, conflicting dietary epidemiology.
Here is a simple example where an agent is trying to do a maximum likelihood estimation of a Gaussian distribution with mean 1 and variance 1, but is hamstrung by giving double weight to the first 9 data points:
Simple data fume model, showing the slow and biased convergence when the first 9 data points are over-weighted. The blue area is a 95% confidence interval for the mean of the generating Gaussian distribution.
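A minimal sketch of this kind of model (my reconstruction, not the code behind the figure): draw data from $N(1,1)$, give the first 9 points double weight, and track the weighted maximum likelihood estimate of the mean:

```matlab
% Data fume toy model: running ML estimate of a Gaussian mean, with the
% first 9 data points given double weight.
rng(1);
T = 200;
x = 1 + randn(T, 1);                  % data from N(1,1)
w = ones(T, 1); w(1:9) = 2;           % over-weight the early "fumes"
est = cumsum(w .* x) ./ cumsum(w);    % weighted running mean estimate
ci = 1.96 ./ sqrt((1:T)');            % 95% CI half-width for an unbiased estimator
plot(1:T, est, 'b', 1:T, 1 + ci, 'k--', 1:T, 1 - ci, 'k--');
xlabel('number of data points'); ylabel('estimated mean');
```

The early noise gets baked in, biasing the estimate for a long while compared to the shrinking confidence band.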
It is not hard to complicate the model with anchoring/recency/status quo bias (estimates get biased towards previous estimates), or with early data points being more polluted by differently distributed noise. Asymmetric error checking (you will look for bugs if results deviate from expectation, and hence often find such bugs, but not look for bugs that make your results closer to expectation) is another obvious way for data fumes to get integrated into models.
The problem with data fumes is that it is not easy to tell when you have stabilized enough to start trusting the data. It is even messier when the inputs are results generated by your own models or code. I like to approach it by using multiple models to guesstimate model error: for example, one mathematical model on paper and one Monte Carlo simulation – if they don’t agree, then I should disregard either answer and keep on improving.
Even when everything seems to be fine there may be a big crucial consideration one has missed. The Turing-Good estimator gives another way of estimating the risk of that: if you have acquired $N$ data points and seen $k$ big surprises (remember that the first data point counts as one!), then the probability of a new surprise for your next data point is $\approx k/N$. So if you expect $T$ data points in total, you can start to trust the estimates when $k/N$ falls well below $1/T$, since the expected number of remaining surprises is then less than one… assuming surprises are uncorrelated etc. Which you will not be certain about. The progression towards greater precision may be anything but dull.
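For example (made-up numbers): with $k=2$ surprises in the first $N=100$ data points, the chance that the next point brings a new surprise is about $2/100 = 2\%$; if the experiment will eventually gather $T=1000$ points, roughly $0.02 \times 900 = 18$ further surprises should be expected, so the current estimates deserve little trust yet.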
In Scott Alexander’s kabbalistic sf story Unsong, the archangel Uriel works on a problem while other things are going on in heaven:
All the angels listened in rapt attention except Uriel, who was sort of half-paying attention while trying to balance several twelve-dimensional shapes on top of each other.
…
There was utter silence throughout the halls of Heaven, except a brief curse as Uriel’s hyperdimensional tower collapsed on itself and he picked up the pieces to try to rebuild it.
…
A great clamor arose from all the heavenly hosts, save Uriel, who took advantage of the brief lapse to conjure a parchment and pen and start working on a proof about the optimal configuration of twelve-dimensional shapes.
This got me thinking about the stability of stacking polytopes. That seemed complicated (I am no archangel) so I started toying with the stability of polytopes on a flat surface.
(Terminology note: I will consistently use “face” to denote the D-1 dimensional elements that bound the polytope, although “facet” is in some use.)
A face of a 3D polyhedron is stable if the polyhedron can rest on it without tipping over. This means that the projection of the center of mass onto the plane containing the face is inside the polygon. The platonic polyhedra are stable on all faces, but it is not hard to make a few faces unstable by moving a vertex far away from the center. A polyhedron has at least one stable face (if it did not, it would be a perpetual motion device: every tip moves the center of mass downwards, but there is a bound on how low it can go). A uni-stable or monostatic polyhedron has just one stable face. It is an unsolved problem what the simplest uni-stable 3D polyhedron is, with the current record being 14 faces. Also, it seems unclear whether there are monostatic simplices in dimension 9 (they exist in 10 or more dimensions, but not in 8 or fewer).
So, how many faces of a polytope will typically be unstable?
I wrote a Matlab script to generate random convex polytopes by selecting N points randomly on the surface of a D-dimensional sphere and calculating their convex hull. Using a Delaunay decomposition I can split the polytope into simplices, which allows me to calculate the center of mass. The center of mass of a simplex is just the average of its corners, $c_i = \frac{1}{D+1} \sum_j x_j$, and the center of mass of the polytope is the sum of the simplex centers of mass weighted by their volumes: $c = \sum_i V_i c_i / \sum_i V_i$. The volume of a simplex is $V = |\det(M)|/D!$, where $M$ is the matrix made by sticking together the edge vectors $x_2 - x_1, \ldots, x_{D+1} - x_1$ of the simplex. Once we know this we can project the center of mass onto the hyperplane of a face by using the nullspace vector of its edge matrix (the higher-dimensional counterpart to a normal). Finally, to check whether the projection is inside the face, we can look at the matrix $A$ where each column is the coordinates of one of the face vertices minus the projection point, with a final row of just ones, and solve $Ax=b$ where $b$ is zero except for a one in the last row (I found this neat algorithm due to elisbben on Stack Overflow): $x$ is then the barycentric coordinates of the projection point. If the answer vector is all positive, the point is inside the face. Repeat for all the faces.
Whew. This math is of course really simple to do in Matlab.
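Something like this minimal sketch (a reconstruction under my assumptions, not the original script; `convhulln` and `delaunayn` call Qhull under the hood):

```matlab
% Stability of the faces of a random convex polytope.
D = 3; N = 20;
X = randn(N, D); X = X ./ vecnorm(X, 2, 2);  % N random points on the unit sphere
F = convhulln(X);                            % hull faces: D vertices per row
T = delaunayn(X);                            % split into (D+1)-vertex simplices

% Center of mass: volume-weighted average of simplex centroids.
c = zeros(1, D); Vtot = 0;
for i = 1:size(T, 1)
    P = X(T(i,:), :);                                  % simplex corners
    v = abs(det(P(2:end,:) - P(1,:))) / factorial(D);  % |det M| / D!
    c = c + v * mean(P, 1);  Vtot = Vtot + v;
end
c = c / Vtot;

% For each face: project c onto the face plane, test via barycentric coords.
stable = false(size(F, 1), 1);
for i = 1:size(F, 1)
    V = X(F(i,:), :)';                       % D x D, columns are face vertices
    n = null((V(:,2:end) - V(:,1))');        % nullspace vector = face normal
    q = c' - n * (n' * (c' - V(:,1)));       % orthogonal projection of c
    A = [V - q; ones(1, D)];                 % elisbben's trick
    lam = A \ [zeros(D, 1); 1];              % barycentric coordinates of q
    stable(i) = all(lam > -1e-9);            % all positive: inside the face
end
fprintf('%d of %d faces stable\n', nnz(stable), size(F, 1));
```

Looping this over N, D and many random hulls gives the statistics below.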
Stable (yellow) and unstable (blue) faces of a random polyhedron.
Stability of a random polyhedron. The center of mass is marked by a circle. It is projected along the dotted lines into the plane of each face, marked with a square (if inside the face and hence a stable face) or a cross (if outside the face, which is hence unstable). A dotted line connects the projection points to the center of their face.
The 12 dimensional case is a bit messier:
Projection of a 12D polytope with 20 vertices. Each of the 2777 faces is an 11-dimensional simplex.
So, what is the average fraction of stable faces on a 3D polyhedron?
Fraction stable faces on 3D convex hulls of N points on a sphere.
It tends to converge to 50%. Doing this in higher dimensions shows the same kind of convergence, although to lower fractions.
Fraction stable faces on 4D convex hulls of N points on a sphere.
Fraction stable faces of N=100 convex hulls in different dimensions. Red line: exponential fit.
It looks like the fraction of stable faces declines exponentially with dimensionality.
Does this mean that for a sufficiently high dimension it is likely that a random polytope is unistable? The answer is no: the number of faces increases exponentially, but the number of stable faces also increases exponentially with D (as $2^{0.9273 D}$ in the fit).
Combined plot of number of faces (points with red line) and stable faces (points with green line) as a function of dimension for N=100.
This was based on runs with N=100. Obviously things go much faster if you select a lower N, such as 30. However, as you approach N=D the polytopes become more and more simplex-like, and simplices tend to both have fewer faces and be less stable in high dimensions, so the exponential growth stops. This actually happens while D is still far below N; for N=30 the effect is felt already in 11 dimensions. The face growth rates were also lower, with exponential coefficients 1.1621 and 0.4730 for the total and stable face counts respectively.
Number of faces and stable faces for N=30 random convex hulls in different dimensions.
(There are some asymptotic formulas known for the growth of the number of faces for random convex hulls; they grow linearly with N but at an accelerating rate with D.)
Stuart Armstrong gave me a very heuristic argument for why there would be so many unstable faces. Consider building up the polytope vertex by vertex, essentially just adding together the simplices from the Delaunay decomposition. If you start from a stable state, eventually you will likely end up with an unstable face. Adding the next vertex will add a simplex to the polyhedron, and the center of mass will move in the direction of the new simplex. To have the face become stable again the shift in center of mass needs to be large enough along the directions parallel to the face to bring the projection back inside the face. But in high dimensional spaces there are many directions you can move in: the probability of a random vector being nearly parallel to another vector is very low. Hence, the next step and the following are likely to preserve the instability. So high dimensional polytopes are likely to have many unstable faces even if they are nicely inscribed in spheres.
The number of steps the polytope rolls over until finding a stable face is also limited: the “drainage basin” of a stable face is a tree, with a branching degree set by $D-1$ (if the faces are simplices). So the number of steps will scale as $\log(F)/\log(D-1)$, where $F$ is the number of faces. Even high-dimensional polytopes will stop flipping quickly in general. (A unistable polytope, on the other hand, can run through at least half of its faces, so there are some very slow ones too.)
The expected minimum distance between two points on this kind of random polytope scales as $N^{-2/(D-1)}$ (if they were optimally distributed it would be $N^{-1/(D-1)}$). At the same time, if N is relatively small compared to D (the polytope is simplex-like), the average diameter (the longest edge) of each face seems to approach $\sqrt{2}$. Why? I think this is because the longest edge is the maximum of many nearly equal chords, giving a flipped k=2 Weibull distribution that shows up because of extreme value theory. Meanwhile the average and median chord length between random points on hyperspheres tends towards $\sqrt{2}$. Faces hence tend to be fairly wide unless N is large compared to D, but there will typically always be a few very narrow ones that are tricky to balance on.
Stacking no-slip polytopes
What about stacking polytopes?
If you put a polytope on top of another one (assuming no slipping), at first it seems you need to use a stable face of the top polytope – but this turns out to be neither sufficient nor necessary.
Since the underlying face is likely tilted from the horizontal, the vertical projection of the center of mass has to be within the top face. The upper polytope can be rotated, moving the projection point. The tilt angle $\theta$ (or rather, tilt angles – we are doing this in higher dimensions, remember?) generates a hypersphere of radius $d \tan(\theta)$ around the normal projection point (which is at distance $d$ from the center of mass) where the vertical projection can intersect the face. Only the parts of the hypersphere surface that are inside the face represent orientations that are stable. Even an unstable face can (sometimes) be stabilized if you turn it so that the tilted projection is inside, but for sufficiently high angles the hypersphere will be bigger than the face and it cannot be stable.
Stability of a polyhedron on a tilted surface. The line of gravity from the center of mass intersects the plane inside the bottom face, so the polyhedron is resting stably. Turning the polyhedron will move the line to some point on the circle, but since all points on the circle are inside the face, all orientations are stable.
Stability of a polyhedron on a tilted surface. The line of gravity from the center of mass intersects the plane outside the bottom face, so the polyhedron is unstable and will flip over. Turning the polyhedron will move the line to some other point on the circle: since some points on the circle are inside the face, there are some orientations that are stable.
Having the top polytope stay in place is the first requirement. The second is that the bottom polytope should not become unstable. The new center of mass is moved to a point somewhere along the connecting line between the individual centers of mass of the polytopes, with exact position dependent on their volume ratio (note that turning the top polytope can move the center of mass too). This moves the projection point along the plane of the bottom face, and if it gets outside that face the assembly will tip over.
One can imagine this as adding random (D-1)-dimensional vectors of length $1/n$ (the $n$th polytope only shifts the combined center of mass by about that fraction) until the projection reaches the edge of the face. I am a bit uncertain about the properties of such random walks (all the work on decreasing-step-size walks I have seen has been in 1D). The harmonic random walk in 1D apparently converges with probability 1, so I think the (D-1)-dimensional one does too, since the distance from the origin to the walker will be smaller than if the walker just kept to a 1D line. Since the expected distance traversed in 1D is $E[|X|] \approx 1.0761$, this is actually not a very extreme shift. Given the surprisingly large diameters of the faces, the first condition might be tougher to meet than the second, but this is just a guess.
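A quick Monte Carlo check of that 1D constant (a sketch; I truncate the infinite series at $10^4$ terms):

```matlab
% Estimate E[|X|] for the 1D harmonic random walk X = sum_n (+-1)/n.
rng(1);
M = 1e5;                  % number of sample walks
X = zeros(M, 1);
for n = 1:1e4             % truncated series; the tail adds little
    X = X + (2*(rand(M,1) > 0.5) - 1) / n;
end
fprintf('E[|X|] estimate: %.4f\n', mean(abs(X)));   % should land near 1.0761
```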
The no slipping constraint is important. If the polytopes are frictionless, then any transverse force will move them. Hence only polytopes that have some parallel top and bottom stable faces can be stacked, and the problem becomes simpler. There are still surprises there, though: even stacks of rectangular blocks can do surprising things. The block stacking problem also demonstrates that one can have 1/N overhangs (counting downwards), enabling arbitrarily large total overhangs without tipping over. With polytopes with shapes that act as counterweights the overhangs can be even larger.
Uriel’s stacking problems
This leads to what we might call “Uriel’s stacking problem”: given a collection of no-slip convex D-dimensional polytopes, what is the tallest tower that can be constructed from them?
I suspect that this problem is NP-hard. It sounds very much like a knapsack problem, but when you add a new polytope there is a dependency on the previous steps that seems to make it harder. It would probably not be too difficult to fool a greedy algorithm that just tries to put the next polytope on the topmost face into adding one that makes subsequent steps too unstable, forcing backtracking.
Another related problem: if the polytopes are random convex hulls of N points, what is the distribution of maximum tower heights? What if we just try random stacking?
And finally, what is the maximum overhang that can be done by stacking polytopes from a given set?
Were robots thinking, moral beings, liability would be easy: they would presumably be legal subjects and handled like humans and corporations. But for now they have an uneasy position as legal objects that are nevertheless endowed with the ability to perform complex actions on behalf of others, or with emergent behaviors nobody can predict. The challenge may be to design not just the robots or the laws, but robots and laws that fit each other (and real social practices): social robotics.
But it is early days. It is actually hard to tell where robotics will truly shine or matter legally, and premature laws can stifle innovation. We also do not really know what principles we ought to use to underpin the social robotics – more research is needed. And if you thought AI safety was hard, now consider getting machines to fit into the even less well defined human social landscape.
Robin Hanson’s The Age of Em is bound to be a classic.
It might seem odd, given that it is both awkward to define what kind of book it is – economics textbook, future studies, speculative sociology, science fiction without any characters? – and that most readers will disagree with large parts of it. Indeed, one of the main reasons it will become a classic is that there is so much to disagree with, and those disagreements are bound to stimulate a crop of blogs, essays, papers and maybe other books.
This is a very rich synthesis of many ideas with a high density of fascinating arguments and claims per page, just begging for deeper analysis and response. It is in many ways like an author’s first science fiction novel (Zindell’s Neverness, Rajaniemi’s The Quantum Thief, and Egan’s Quarantine come to mind) – lots of concepts and throwaway realizations have been built up in the background of the author’s mind and are now out to play. Later novels are often better written, but first novels have the freshest ideas.
The second reason it will become a classic is that even if mainstream economics or futurism pass it by, it is going to profoundly change how science fiction treats the concept of mind uploading. Sure, the concept has been around for ages, but this is the first thorough treatment of what it means for a society. Any science fiction treatment henceforth will partially define itself by how it relates to the Age of Em scenario.
Plausibility
The Age of Em is all about the implications of a particular kind of software intelligence, one based on scanning human brains to produce intelligent software entities. I suspect much of the debate about the book will be more about the feasibility of brain emulations. To many people the whole concept sparks incredulity and outright rejection. The arguments against brain emulation range from pure arguments of incredulity (“don’t these people know how complex the brain is?”) over various more or less well-considered philosophical positions (“don’t these people read Heidegger?”, or questioning the inherent functionalist reductionism of the concept) to arguments about technological feasibility. Given that the notion is one many people will find hard to swallow, I think Robin spent too little effort bolstering the plausibility, making the book look a bit too much like what Nordmann called if-then ethics: assume some outrageous assumption, then work out the conclusion (which Nordmann finds a waste of intellectual resources). I think one can make fairly strong arguments for the plausibility, but Robin is more interested in working out the consequences. I have a feeling there is a need now for a good defense of the plausibility (this and this might be a weak start, but much more needs to be done).
Scenarios
In this book, I will set defining assumptions, collect many plausible arguments about the correlations we should expect from these assumptions, and then try to combine these many correlation clues into a self-consistent scenario describing relevant variables.
What I find more interesting is Robin’s approach to future studies. He is describing a self-consistent scenario. The aim is not to describe the most likely future of all, nor to push some particular trend the furthest it can go. Rather, he is trying to describe what, given some assumptions, is likely to occur based on our best knowledge and fits with the other parts of the scenario into an organic whole.
The baseline scenario I generate in this book is detailed and self-consistent, as scenarios should be. It is also often a likely baseline, in the sense that I pick the most likely option when such an option stands out clearly. However, when several options seem similarly likely, or when it is hard to say which is more likely, I tend instead to choose a “simple” option that seems easier to analyze.
This baseline scenario is a starting point for considering variations such as intervening events, policy options or alternatives, intended as the center of a cluster of similar scenarios. It typically is based on the status quo and consensus model: unless there is a compelling reason elsewhere in the scenario, things will behave like they have done historically or according to the mainstream models.
As he notes, this is different from normal scenario planning where scenarios are generated to cover much of the outcome space and tell stories of possible unfoldings of events that may give the reader a useful understanding of the process leading to the futures. He notes that the self-consistent scenario seems to be taboo among futurists.
Part of that, I think, is the realization that making one forecast will usually just ensure one is wrong. Scenario analysis aims at understanding the space of possibilities better: hence the several scenarios. But as critics of scenario analysis have stated, there is a risk of the conjunction fallacy coming into play: the more details you add to the story of a scenario, the more compelling the story becomes, but the less likely the individual scenario. The scenario analyst responds by claiming individual scenarios should not be taken as separate things: they only make real sense as part of the bigger structure. The details are there to draw the reader into the space of possibility, not to convince them that a particular scenario is the true one.
Robin’s maximal consistent scenario is not directly trying to map out an entire possibility space but rather to create a vivid prototype residing somewhere in the middle of it. But if it is not a forecast, and not a scenario planning exercise, what is it? Robin suggests it is a tool for thinking about useful action:
The chance that the exact particular scenario I describe in this book will actually happen just as I describe it is much less than one in a thousand. But scenarios that are similar to true scenarios, even if not exactly the same, can still be a relevant guide to action and inference. I expect my analysis to be relevant for a large cloud of different but similar scenarios. In particular, conditional on my key assumptions, I expect at least 30% of future situations to be usefully informed by my analysis. Unconditionally, I expect at least 10%.
To some degree this is all a rejection of how we usually think of the future in “far mode” as a neat utopia or dystopia with little detail. Forcing the reader into “near mode” changes the way we consider the realities of the scenario (compare to construal level theory). It makes responding to the scenario far more urgent than responding to a mere possibility. The closest example I know is Eric Drexler’s worked example of nanotechnology in Nanosystems and Engines of Creation.
Again, I expect much criticism quibbling about whether the status quo and mainstream positions actually fit Robin’s assumptions. I have a feeling there is much room for disagreement, and many elements presented as uncontroversial will be highly controversial – sometimes to people outside the relevant field, but quite often to people inside the field too (I am wondering about the generalizations about farmers and foragers). Much of this just means that the baseline scenario can be expanded or modified to include the altered positions, which could provide useful perturbation analysis.
It may be more useful to start from the baseline scenario and ask what the smallest changes to the assumptions are that radically change the outcome (what would it take to make lives precious? What would it take to make change slow?). However, a good approach is likely to start by investigating robustness vis-à-vis plausible “parameter changes” and use that experience to get a sense of the overall robustness properties of the baseline scenario.
Beyond the Age of Em
But is this important? We could work out endlessly detailed scenarios of other possible futures: why this one? As Robin argued in his original paper, while it is hard to say anything about a world with de novo engineered artificial intelligence, the constraints of neuroscience and economics make this scenario somewhat more predictable: it is a gap in the mist clouds covering the future, even if it is a small one. But more importantly, the economic arguments seem fairly robust regardless of sociological details: copyable human/machine capital is economic plutonium (c.f. this and this paper). Since capital can almost directly be converted into labor, the economy will likely grow radically. This seems to be true regardless of whether we talk about ems or AI, and is clearly a big deal if we think things like the industrial revolution matter – especially a future disruption of our current order.
In fact, people have already criticized Robin for not going far enough. The age described may not last long in real-time before it evolves into something far more radical. As Scott Alexander pointed out in his review and subsequent post, an “ascended economy” where automation and on-demand software labor function together can be a far more powerful and terrifying force than merely a posthuman Malthusian world. It could produce some of the dystopian posthuman scenarios envisioned in Nick Bostrom’s “The future of human evolution“, essentially axiological existential threats where what gives humanity value disappears.
We do not yet have good tools for analyzing this kind of scenario. Mainstream economics is busy analyzing the economy we have, not future models. Given that the expertise needed to reason about the future of a domain is often fundamentally different from the expertise needed in the domain, we should not even assume economists or other social scientists to be able to say much useful about this, except insofar as they have found reliable universal rules that can be applied. As Robin likes to point out, there are far more results of that kind in the “soft sciences” than outsiders believe. But they might still not be enough to constrain the possibilities.
Yet it would be remiss not to try. The future is important: that is where we will spend the rest of our lives.
If the future matters more than the past, because we can influence it, why do we have far more historians than futurists? Many say that this is because we just can’t know the future. While we can project social trends, disruptive technologies will change those trends, and no one can say where that will take us. In this book, I’ve tried to prove that conventional wisdom wrong.
I have recently begun to work on the problem of information hazards: cases where spreading true information causes danger. Since we normally regard information as a good thing this is a bit unusual and understudied, and in the case of existential risk it is important to get things right on the first try.
However, concealing information can also produce risk. This book is an excellent series of case studies of major disasters, showing how the practice of hiding information contributed to making them possible, making them worse, and hindering rescue and recovery.
Chernov and Sornette focus mainly on technological disasters such as the Vajont Dam, Three Mile Island, Bhopal, Chernobyl, the Ufa train disaster, Fukushima and so on, but they also cover financial disasters, military disasters, production industry failures and concealment of product risk. In all of these cases there was plentiful concealment going on at multiple levels, from workers blocking alarms to reports being classified or deliberately mislaid to active misinformation campaigns.
Chernov and Sornette’s model of the factors causing or contributing to risk concealment.
When summed up, many patterns of information concealment recur again and again. They sketch out a model of the causes of concealment, with about 20 causes grouped into five major clusters: the external environment enticing concealment, risk communication channels blocked, an internal ecology stimulating concealment or ignorance, faulty risk assessment and knowledge management, and people having personal incentives to conceal.
The problem is very much systemic: having just one or two of the causative problems can be counteracted by good risk management, but when several causes start to act together they become much harder to deal with – especially since many corrode the risk management ability of the entire organisation. Once risks are hidden, it becomes harder to manage them (management, after all, is done through information). Conversely, they list examples of successful risk information management: risk concealment may be something that naturally tends to emerge, but it can be counteracted.
Chernov and Sornette also apply their model to some technologies they think show signs of risk concealment: shale energy, GMOs, real debt and liabilities of the US and China, and the global cyber arms race. They are not arguing that a disaster is imminent, but the patterns of concealment are a reason for concern: if they persist, they have potential to make things worse the day something breaks.
Is information concealment the cause of all major disasters? Definitely not: some disasters are just due to exogenous shocks or surprise failures of technology. But as Fukushima shows, risk concealment can make preparation brittle and handling the aftermath inefficient. There is also likely plentiful risk concealment in situations that will never come to attention because there is no disaster necessitating and enabling a thorough investigation. There is little to suggest that the examined disasters were all uniquely bad from a concealment perspective.
From an information hazard perspective, this book is an important rejoinder: yes, some information is risky. But lack of information can be dangerous too. Many of the reasons for concealment like national security secrecy, fear of panic, prevention of whistle-blowing, and personnel being worried about personally being held accountable for a serious fault are maladaptive information hazard management strategies. The worker not reporting a mistake is handling a personal information hazard, at the expense of the safety of the entire organisation. Institutional secrecy is explicitly intended to contain information hazards, but tends to compartmentalize and block relevant information flows.
A proper information hazard management strategy needs to take the concealment risk into account too: there is a risk cost of not sharing information. How these two risks should be rationally traded against each other is an important question to investigate.
The quick of it is that it will mess with our definitions of who happens to be dead, but that is mostly a matter of sorting out practice and definitions, and that it is somewhat questionable who is benefiting: the original patient is unlikely to recover, but we might get a moral patient we need to care for even if they are not a person, or even a different person (or most likely, just generally useful medical data but no surviving patient at all). The problem is that partial success might be worse than no success. But the only way of knowing is to try.
I have been working on the Fermi paradox for a while, and in particular the mathematical structure of the Drake equation. While it looks innocent, it has some surprising issues.
One area I have not seen much addressed is the independence of terms. To a first approximation they were made up to be independent: the fraction of life-bearing Earth-like planets is presumably determined by a very different process than the fraction of planets that are Earth-like, and these factors should have little to do with the longevity of civilizations. But as Häggström and Verendel showed, even a bit of correlation can cause trouble.
If different factors in the Drake equation vary spatially or temporally, we should expect potential clustering of civilizations: the average density may be low, but in areas where the parameters have larger values there would be a higher density of civilizations. A low average number of civilizations may not be the whole story. Hence figuring out the typical size of patches (i.e. the autocorrelation distance) may tell us something relevant.
Astrophysical correlations
There is a sometimes overlooked spatial correlation in the first terms. In the orthodox formulation we are talking about earth-like planets orbiting stars with planets, which form at some rate in the Milky Way. This means that civilizations must be located in places where there are stars (galaxies), and not anywhere else. The rare earth crowd also argues that there is a spatial structure that makes earth-like worlds exist within a ring-shaped region in the galaxy. This implies an autocorrelation on the order of (tens of) kiloparsecs.
A tangent: different kinds of matter plausibly have different likelihood of originating life. Note that this has an interesting implication: if the probability of life emerging in something like the intergalactic plasma is non-zero, it has to be more than a hundred thousand times smaller than the probability per unit mass of planets, or the universe would be dominated by gas-creatures (and we would be unlikely observers, unless gas-life was unlikely to generate intelligence). Similarly life must be more than 2,000 times more likely on planets than stars (per unit of mass), or we should expect ourselves to be star-dwellers. Our planetary existence does give us some reason to think life or intelligence in the more common substrates (plasma, degenerate matter, neutronium) is significantly less likely than molecular matter.
Biological correlations
One way of inducing correlations in the $f_l$ factor is panspermia. If life originates at some low rate per unit volume of space (we will now assume a spatially homogeneous universe in terms of places life can originate) and then diffuses from a nucleation site, then intelligence will show up in spatially correlated locations.
It is not clear how much panspermia could be going on, or if all kinds of life do it. A simple model is that panspermias emerge at a density $\rho$ and grow to radius $r$. The rate of intelligence emergence outside panspermias is set to 1 per unit volume (this sets a space scale), and inside a panspermia (since there is more life) it will be $A$ per unit volume. The probability that a given point will be outside a panspermia is
$P_{\text{outside}} = \exp(-(4\pi/3)\rho r^3)$.
The fraction of civilizations finding themselves outside panspermias will be
$F_{\text{outside}} = P_{\text{outside}} / \left[ P_{\text{outside}} + A(1 - P_{\text{outside}}) \right]$.
As $A$ increases, vastly more observers will be in panspermias. If we think $A$ is large, we should expect to be in a panspermia unless we think the panspermia efficiency (and hence $r$) is very small. Loosely, the transition from 1% to 99% probability takes one order of magnitude of change in $r$, three orders of magnitude in $\rho$, and four in $A$: given that these parameters can a priori range over many, many orders of magnitude, we should not expect to be in the mixed region where there are comparable numbers of observers inside panspermias and outside. It is more likely all or nothing.
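To see this all-or-nothing behaviour, one can simply evaluate the formula over parameter ranges (a sketch with an assumed unit panspermia radius and a handful of made-up densities):

```matlab
% Fraction of civilizations outside panspermias as A and rho vary (r = 1).
A = logspace(0, 8, 200);             % intelligence boost inside panspermias
rho = logspace(-3, 1, 5);            % assumed panspermia densities
for i = 1:numel(rho)
    P = exp(-(4*pi/3) * rho(i));     % probability a point is outside (r = 1)
    F = P ./ (P + A .* (1 - P));     % fraction of civilizations outside
    semilogx(A, F); hold on;
end
xlabel('A'); ylabel('fraction outside panspermias');
```

The transition region is narrow compared to the ranges over which these parameters could plausibly vary.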
There is another relevant distance besides $r$: the expected distance to the next civilization. This is $d \approx 0.55 \sigma^{-1/3}$, where $\sigma$ is the density of civilizations. For the outside-panspermia case this is $d_{\text{out}} \approx 0.55$, while inside it is $d_{\text{in}} \approx 0.55 A^{-1/3}$. Note that these distances are not dependent on the panspermia sizes, since they come from an independent process (emergence of intelligence given a life-bearing planet rather than how well life spreads from system to system).
If $r < d_{\text{in}}$ then there will be no panspermia-induced correlation between civilization locations, since there is less than one civilization per panspermia. For $r > d_{\text{in}}$ there will be clustering with a typical autocorrelation distance corresponding to the panspermia size. For even larger panspermias, with sizes comparable to the typical panspermia spacing $\rho^{-1/3}$, they tend to dominate space (if $\rho$ is not very small) and there is no spatial structure any more.
So if panspermias have sizes in a certain range, roughly $d_{\text{in}} < r < \rho^{-1/3}$, the actual distance to the nearest neighbour will be smaller than what one would have predicted from the average values of the parameters of the Drake equation.
Nearest neighbour distance for civilizations in a model with spherical panspermias and corresponding randomly re-sampled distribution.
Running a Monte Carlo simulation shows this effect. Here I use 10,000 possible life sites in a cubical volume, with on average one panspermia in the volume (the number of panspermias is Poisson(1) distributed). The background rate of civilizations appearing is 1/10,000, but in panspermias it is 1/100. As I make panspermias larger, civilizations become more common and the median distance from a civilization to the next closest civilization falls (blue stars). If I re-sample so that the number of civilizations is the same but their locations are uncorrelated, I get the red crosses: the distances decline, but they can be more than a factor of 2 larger.
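In sketch form the simulation looks something like this (assumptions beyond those stated above: a unit cube, one fixed panspermia radius, a single run shown; `poissrnd` and `pdist` are Statistics Toolbox functions):

```matlab
% Nearest-neighbour distances with and without panspermia clustering.
rng(1);
M = 10000; R = 0.2;                        % life sites; assumed panspermia radius
sites = rand(M, 3);                        % sites in a unit cube
inside = false(M, 1);
for k = 1:poissrnd(1)                      % Poisson(1) number of panspermias
    inside = inside | vecnorm(sites - rand(1,3), 2, 2) < R;
end
civ = rand(M,1) < (1/10000)*~inside + (1/100)*inside;
C = sites(civ, :);
if nnz(civ) >= 2
    D1 = squareform(pdist(C));   D1(1:size(D1,1)+1:end) = inf;
    C2 = sites(randperm(M, nnz(civ)), :);  % same count, uncorrelated locations
    D2 = squareform(pdist(C2));  D2(1:size(D2,1)+1:end) = inf;
    fprintf('median NN distance: %.3f correlated, %.3f resampled\n', ...
            median(min(D1,[],2)), median(min(D2,[],2)));
end
```

In practice one averages many runs per panspermia size R to get the curves in the figure.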
Technological correlations
The technological terms $f_c$ and $L$ can also show spatial patterns, if civilizations spread out from their origin.
The basic colonization argument by Hart and Tipler assumes a civilization will quickly spread out to fill the galaxy; at this point $N$ becomes the number of stars in the galaxy if we count inhabited systems. If we include intergalactic colonization, then in due time it becomes everything out to a radius of reachability on the order of 4 gigaparsec (for near-c probes) or 1.24 gigaparsec (for 50% c probes). Within this domain it is plausible that the civilization could maintain whatever spatio-temporal correlations it wishes, from perfect homogeneity over the zoo hypothesis to arbitrary complexity. However, the reachability limit is due to physics and does impose a pretty powerful limit: any correlation in the Drake equation due to a cause at some point in space-time will be smaller than the reachability horizon (as measured in comoving coordinates) for that point.
Total colonization is still compatible with an empty galaxy if $L$ is short enough. Galaxies could be dominated by a sequence of “empires” that disappear after some time, and if the product of the empire emergence rate and $L$ is small enough, most eras will be empty.
A related model is Brin’s resource exhaustion model, where civilizations spread at some velocity $v$ but also deplete their environment at some (random) rate $\lambda$. The result is a spreading shell with an empty interior. This has some similarities to Hanson’s “burning the cosmic commons” scenario, although Brin is mostly thinking in terms of planetary ecology and Hanson in terms of any available resources: the Hanson scenario may be a single-shot situation. In Brin’s model “nursery worlds” eventually recover and may produce another wave. The width of the wave is proportional to $v/\lambda$; if there is a recovery parameter $T_{\text{recovery}}$ corresponding to the time before new waves can emerge, we should hence expect a spatial correlation length of order $v T_{\text{recovery}}$. For light-speed expansion and a megayear recovery (typical ecology and fast evolutionary timescale) we would get a length of a million light-years.
Another approach is the percolation-theory-inspired models first originated by Landis. Here civilizations spread short distances, and “barren” offshoots that do not colonize form a random “bark” around the network of colonization (or civilizations are limited to flights shorter than some distance). If the percolation parameter $p$ is low, civilizations will only spread to a small nearby region. When it increases, larger and larger networks are colonized (forming a fractal structure), until a critical value $p_c$ where the network explodes and reaches nearly anywhere. However, even above this transition there are voids of uncolonized worlds. The correlation length famously scales as $\xi \propto |p - p_c|^{-\nu}$, where $\nu \approx 0.88$ for this (three-dimensional) case. The probability of a random site belonging to the infinite cluster for $p > p_c$ scales as $(p - p_c)^{\beta}$ ($\beta \approx 0.42$) and the mean cluster size (excluding the infinite cluster) scales as $|p - p_c|^{-\gamma}$ ($\gamma \approx 1.8$).
So in this group of models, if the probability of a site producing a civilization is $q$, the probability of encountering another civilization in one’s cluster is
$\approx 1 - (1-q)^{S(p)}$ for $p < p_c$, where $S(p)$ is the mean cluster size. Above the threshold it is essentially 1; there is a small probability of being inside a small cluster, but it tends to be minuscule. Given the silence in the sky, were a percolation model the situation, we should conclude either an extremely low $q$ or a low $p$.
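A tiny simulation shows the subcritical cluster structure (a sketch; `bwconncomp` is in the Image Processing Toolbox, and $p_c \approx 0.3116$ for 3D site percolation on a cubic lattice):

```matlab
% Cluster sizes in 3D site percolation below the threshold.
rng(1);
Lbox = 60; p = 0.25;                      % below p_c ~ 0.3116
occ = rand(Lbox, Lbox, Lbox) < p;         % occupied sites
cc = bwconncomp(occ, 6);                  % 6-connected clusters
sz = cellfun(@numel, cc.PixelIdxList);
S = sum(sz.^2) / sum(sz);                 % size-weighted mean cluster size S(p)
fprintf('S(p) = %.1f, largest cluster = %d\n', S, max(sz));
```

Moving p towards 0.3116 makes S(p) blow up roughly as $|p - p_c|^{-\gamma}$.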
Temporal correlations
Another way the Drake equation can become misleading is if the parameters are time-varying. Most obviously, the star formation rate has changed over time. The metallicity of stars has changed, and we should expect any galactic life zones to shift due to this.
In my opinion the most important temporal issue is inherent in the Drake equation itself. It assumes a steady state! At the left we get new stars arriving at a rate $R_*$, and at the right the rate gets multiplied by the longevity term for civilizations $L$, producing a (dimensionless) number of civilizations. Technically we can plug in a trillion years for the longevity term and get something that looks like a real estimate of a teeming galaxy, but this actually breaks the model assumptions. If civilizations survived for trillions of years, the number of civilizations would currently be increasing linearly (from zero at the time of the formation of the galaxy) – none would have gone extinct yet. Hence we know that in order to use the unmodified Drake equation, $L$ has to be less than about $10^{10}$ years, the age of the galaxy.
Making a temporal Drake equation is not impossible. A simple variant would be something like

$\frac{dN(t)}{dt} = R_*(t) f_p n_e f_l f_i f_c - \frac{N(t)}{L}$,

where the first term is just the factors of the vanilla equation regarded as time-varying functions and the second term a decay corresponding to civilizations dropping out at a rate of $1/L$ (this assumes exponentially distributed survival, a potentially doubtful assumption). The steady state corresponds to the standard Drake level, and is approached with a time constant of $L$. One nice thing with this equation is that given a particular civilization birth rate $B(t)$ corresponding to the first term, we get an expression for the current state:
$N(t) = \int_0^t B(s) e^{-(t-s)/L} \, ds$.
Note how any spike in $B(t)$ gets smoothed by the exponential, which sets the temporal correlation length to about $L$.
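A minimal numerical illustration (all parameters invented): integrate the equation with a burst-shaped birth rate and watch the spike get smeared out over a timescale $L$:

```matlab
% Forward Euler integration of dN/dt = B(t) - N/L with a made-up birth burst.
L = 1;                                   % civilization longevity (arbitrary units)
t = linspace(0, 10, 2000); dt = t(2) - t(1);
B = exp(-(t - 3).^2 / 0.1);              % assumed burst of births around t = 3
N = zeros(size(t));
for i = 2:numel(t)
    N(i) = N(i-1) + dt * (B(i-1) - N(i-1)/L);
end
plot(t, N, t, B, '--');                  % N lags B and decays on timescale L
legend('N(t)', 'B(t)'); xlabel('t');
```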
If we want to do things even more carefully, we can have several coupled equations corresponding to star formation, planet formation, life formation, biosphere survival, and intelligence emergence. However, at this point we will likely want to make a proper “demographic” model that assumes stars, biospheres and civilizations have particular lifetimes rather than random disappearance. At this point it also becomes possible to include civilizations with different $L$, like Sagan’s proposal that the majority of civilizations have a short $L$ but some have very long futures.
The overall effect is still a set of correlation timescales set by astrophysics (star and planet formation rates), biology (life emergence and evolution timescales, possibly the appearance of panspermias), and civilization timescales (emergence, spread and decay), dominated by the slowest timescale (presumably star formation or very long-lasting civilizations).
Conclusions
Overall, the independence of the terms of the Drake equation is likely fairly strong. However, there are relevant size scales to consider.
Over multiple gigaparsec scales there cannot be any correlations, not even artificially induced ones, because of limitations due to the expansion of the universe (unless there are super-early or FTL civilizations).
Over hundreds of megaparsec scales the universe is fairly uniform, so any natural influences will be randomized beyond this scale.
Colonization waves in Brin’s model could have scales on the galactic cluster scale, but this is somewhat parameter dependent.
The nearest civilization can be expected around $d \approx 0.55 (V/N)^{1/3}$, where $V$ is the galactic volume and $N$ the number of civilizations in it. If we are considering parameters such that the number of civilizations per galaxy is low, $V$ needs to be increased and the density will go down significantly (by a factor of about 100), leading to a modest jump in expected distance.
Panspermias, if they exist, will have an upper extent limited by escape from galaxies – they will tend to have galactic scales or smaller. The same is true for galactic habitable zones if they exist. Percolation colonization models are limited to galaxies (or even dense parts of galaxies) and would hence have scales in the kiloparsec range.
“Scars” due to gamma ray bursts and other energetic events are below kiloparsecs.
The lower limit of panspermia scales comes from the typical distance between civilizations inside them ($d_{\text{in}}$ above) having to be smaller than the panspermia, presumably at least in the parsec range. This is also the scale of close clusters of stars in percolation models.
Time-wise, the temporal correlation length is likely on the gigayear timescale, dominated by stellar processes or advanced civilization survival. The exception may be colonization waves modifying conditions radically.
In the end, none of these factors appear to cause massive correlations in the Drake equation. Personally, I would guess the most likely cause of an observed strong correlation between different terms would be artificial: a space-faring civilization changing the universe in some way (seeding life, wiping out competitors, converting it to something better…)
That trolling is a shameful thing, and that no one of sense would accept to be called ‘troll’, all are agreed; but what trolling is, and how many its species are, and whether there is an excellence of the troll, is unclear. And indeed trolling is said in many ways; for some call ‘troll’ anyone who is abusive on the internet, but this is only the disagreeable person, or in newspaper comments the angry old man. And the one who disagrees loudly on the blog on each occasion is a lover of controversy, or an attention-seeker. And none of these is the troll, or perhaps some are of a mixed type; for there is no art in what they do. (Whether it is possible to troll one’s own blog is unclear; for the one who poses divisive questions seems only to seek controversy, and to do so openly; and this is not trolling but rather a kind of clickbait.)
Aristotle’s definition is quite useful:
The troll in the proper sense is one who speaks to a community and as being part of the community; only he is not part of it, but opposed. And the community has some good in common, and this the troll must know, and what things promote and destroy it: for he seeks to destroy.
He then goes on analysing the knowledge requirements of trolling, the techniques, the types or motivations of trolls, the difference between a gadfly like Socrates and a troll, and what communities are vulnerable to trolls. All in a mere two pages.
(If only the medieval copyists had saved his other writings on the Athenian Internet! But the crash and split of Alexander the Great’s social media empire destroyed many of them before that era.)
The text reminds me of another must-read classic, Harry Frankfurt’s “On Bullshit”. There Frankfurt analyses the nature of bullshitting. His point is that normal deception cares about the truth: it aims to keep someone from learning it. But somebody producing bullshit does not care about the truth or falsity of the statements made, merely that they fit some manipulative, social or even time-filling aim.
It is just this lack of connection to a concern with truth – this indifference to how things really are – that I regard as of the essence of bullshit.
It is pernicious, since it fills our social and epistemic arena with dodgy statements whose value is uncorrelated to reality, and the bullshitters gain from the discourse being more about the quality (or the sincerity) of bullshitting than any actual content.
Both of these essays are worth reading in this era of the Trump candidacy and Dugin’s Eurasianism. Know your epistemic enemies.
By Anders Sandberg, Future of Humanity Institute, Oxford Martin School, University of Oxford
Thinking of the future is often done as entertainment. A surprising number of serious-sounding predictions, claims and prophecies are made with apparently little interest in taking them seriously, as evidenced by how little they actually change behaviour or how rarely originators are held responsible for bad predictions. Rather, they are stories about our present moods and interests projected onto the screen of the future. Yet the future matters immensely: it is where we are going to spend the rest of our lives. As well as where all future generations will live – unless something goes badly wrong.
Olle Häggström’s book is very much a plea for taking the future seriously, and especially for taking exploring the future seriously. As he notes, there are good reasons to believe that many technologies under development will have enormous positive effects… and also good reasons to suspect that some of them will be tremendously risky. It makes sense to think about how we ought to go about avoiding the risks while still reaching the promise.
Current research policy is often directed mostly towards high quality research rather than research likely to make a great difference in the long run. Short term impact may be rewarded, but often naively: when UK research funding agencies introduced impact evaluation a few years back, their representatives visiting Oxford did not respond to the question on whether impact had to be positive. Yet, as Häggström argues, obviously the positive or negative impact of research must matter! A high quality investigation into improved doomsday weapons should not be pursued. Investigating the positive or negative implications of future research and technology has high value, even if it is difficult and uncertain.
Inspired by James Martin’s The Meaning of the 21st Century, this book is an attempt to make a broad sketch map of the parts of the future that matter, especially the uncertain corners where we have reason to think dangerous dragons lurk. It aims more at scope than at detail in the many covered topics, making it an excellent introduction and pointer towards the primary research.
One obvious area is climate change, not just in terms of its direct (and widely recognized) risks but also the new challenges posed by geoengineering. Geoengineering may both be tempting to some nations and possible to perform unilaterally, yet there is a host of ethical, political, environmental and technical risks linked to it. It also touches on how far outside the box we should search for solutions: to many, geoengineering is already too far, but other proposals such as human engineering (making us more eco-friendly) go much further. When dealing with important challenges, how do we allocate our intellectual resources?
Other areas Häggström reviews include human enhancement, artificial intelligence, and nanotechnology. In each of these areas tremendously promising possibilities – that would merit a strong research push towards them – are intermixed with different kinds of serious risks. But the real challenge may be that we do not yet have the epistemic tools to analyse these risks well. Many debates in these areas contain otherwise very intelligent and knowledgeable people making overconfident and demonstrably erroneous claims. One can also argue that it is not possible to scientifically investigate future technology. Häggström disagrees with this: one can analyse it based on currently known facts and using careful probabilistic reasoning to handle the uncertainty. That results are uncertain does not mean they are useless for making decisions.
He demonstrates this by analysing existential risks, scenarios for the long-term future of humanity, and what the “Fermi paradox” may tell us about our chances. There is an interesting interplay between uncertainty and existential risk. Since our species can end only once, traditional frequentist approaches run into trouble that Bayesian methods do not. Yet reasoning about unprecedented events also makes our arguments terribly sensitive to prior assumptions, and many forms of argument are more fragile than they first look. Intellectual humility is necessary for thinking about audacious things.
In the end, this book is as much a map of relevant areas of philosophy and mathematics containing tools for exploring the future, as it is a direct map of future technologies. One can read it purely as an attempt to sketch where there may be dragons in the future landscape, but also as an attempt at explaining how to go about sketching the landscape. If more people were to attempt that, I am confident that we would fence in the dragons better and direct our policies towards more dragon-free regions. That is a possibility worth taking very seriously.
[Conflict of interest: several of my papers are discussed in the book, both critically and positively.]