# Telescoping

Wednesday August 10 1960

Robert lit his pipe while William meticulously set the coordinates from the computer printout. “Want to bet?”

William did not look up from fine-tuning the dials and re-checking the flickering oscilloscope screen. “Five dollars that we get something.”

“‘Something’ is not going to be enough to make Edward or the General happy. They want the future on film.”

“If we get more delays we can always just step out with a camera. We will be in the future already.”

“Yeah, and fired.”

“I doubt it. This is just a blue-sky project Ed had to try because John and Richard’s hare-brained one-electron idea caught the eye of the General. It will be like the nuclear mothball again. There, done. You can start it.”

Robert put the pipe in the ashtray and walked over to the Contraption controls. He noted down the time and settings in the log, then pressed the button. “Here we go.” The Contraption hummed for a second, the cameras clicked. “OK, you bet we got something. You develop the film.”

“We got something!” William was exuberant enough to have forgotten the five dollars. He put down the still moist prints on Robert’s desk. Four black squares. He thrust a magnifying glass into Robert’s hands and pointed at a corner. “Recognize it?”

It took Robert a few seconds to figure out what he was looking at. First he thought there was nothing there but noise, then eight barely visible dots became a familiar shape: Orion. He was seeing a night sky. In a photo taken inside a basement lab. During the day.

“Well… that is really something.”

Tuesday August 16 1960

The next attempt was far more meticulous. William had copied the settings from the previous attempt, changed them slightly in the hope of a different angle, and had Raymond re-check it all on the computer despite the cost. This time they developed the film together. As the seal of the United States of America began to develop on the film they both simultaneously turned to each other.

“Am I losing my mind?”

“That would make two of us. Look, there is text there. Some kind of plaque…”

The letters gradually filled in. “THIS PLAQUE COMMEMORATES THE FIRST SUCCESSFUL TRANSCHRONOLOGICAL OBSERVATION August 16 1960 to July 12 2054.” Below was more blurry text.

“Darn, the date is way off…”

“What do you mean? That is today’s date.”

“The other one. Theory said it should be a month in the future.”

“Idiot! We just got a message from the goddamn future! They put a plaque. In space. For us.”

Wednesday 14 December 1960

The General was beaming. “Gentlemen, you have done your country a great service. The geographic coordinates on Plaque #2 contained excellent intel. I am not at liberty to say what we found over there in Russia, but this project has already paid off far beyond expectation. You are going to get medals for this.” He paused and added in a lower voice: “I am relieved. I can now go to my grave knowing that the United States is still kicking communist butt 90 years in the future.”

One of the General’s aides later asked Robert: “There is something I do not understand, sir. How did the people in the future know where to put the plaques?”

Robert smiled. “That bothered us too for a while. Then we realized that it was the paperwork that told them. You guys have forced us to document everything. Just saying, it is a huge bother. But that also meant that every setting is written down and archived. Future people with the right clearances can just look up where we looked.”

“And then go into space and place a plaque?”

“Yes. Future America is clearly spacefaring. The most recent plaques also contain coordinate settings for the next one, in addition to the intel.”

He did not mention the mishap. When they entered the coordinates for Plaque #4 given on Plaque #3, William had made a mistake – understandable, since the photo was blurry – and they photographed the wrong spacetime spot. Except that Plaque #4 was there. It took them a while to realize that what mattered was what settings they entered into the archive, not what the plaque said.

“They knew where we would look,” Robert had said with wonder.

“Why did they put in different coordinates on #3 then? We could just set random coordinates and they will put a plaque there.”

“Have a heart. I assume that would force them to run around the entire solar system putting plaques in place. Theory says the given coordinates are roughly in Earth’s vicinity – more convenient for our hard-working future astronauts.”

“You know, we should try putting the wrong settings into the archive.”

“You do that if the next plaque is a dud.”

Friday 20 January 1961

Still, something about the pattern bothered Robert. The plaques contained useful information, including how to make a better camera and electronics. The General was delighted, obviously thinking of spy satellites not dependent on film canisters. But there was not much information about the world: if he had been sending information back to 1866, wouldn’t he have included some modern news clippings, maybe a warning about stopping that Marx guy?

Suppose things did not go well in the future. The successors of that shoe-banging Khrushchev somehow won and instituted their global dictatorship. They would pore over the remaining research of the formerly free world, having their minions squeeze every creative idea out of the archives. Including the coordinates for the project. Then they could easily fake messages from a future America to fool his present, maybe even laying the groundwork for their success…

William was surprisingly tricky to convince. Robert had assumed he would be willing to help with the scheme just because it was against the rules, but he had been at least partially taken in by the breath-taking glory of the project and the prospect of his own future career. Still, William was William and could not resist a technical challenge. Setting up an illicit calculation on the computer disguised as an abortive run with a faulty set of punch cards was just his kind of thing. He had always loved cloak-and-dagger stuff. Robert made use of the planned switch to the new cameras to make the disappearance of one roll of film easy to overlook. The security guards knew both of them worked at ungodly hours.

“Bet what? That we will see a hammer and sickle across the globe?”

“Something simpler: that there will be a plaque saying ‘I see you peeping!’.”

Robert shivered. “No thanks. I just want to be reassured.”

“It is a shame we can’t get better numerical resolution; if we are lucky we will just see Earth. Imagine if we could get enough decimal places to put the viewport above Washington DC.”

The photo was beautiful. Black space, and slightly off-centre there was a blue and white marble. Robert realized that they were the first people ever to see the entire planet from this distance. Maybe in a decade or so, a man on the moon would actually see it like that. But the planet looked fine. Were there maybe glints of something in orbit?

“Glad I did not make the bet with you. No plaque.”

“The operational security of the future leaves a bit to be desired.”

“…that is actually a good answer.”

“What?”

“Imagine you are running Future America, and have a way of telling the past about important things. Like whatever was in Russia, or whatever is in those encrypted sequences on Plaque #9. Great. But Past America can peek at you, and they don’t have all the counterintelligence gadgets and tricks you do. So if they peek at something sensitive – say the future plans for a super H-bomb – then the Past Commies might steal it from you.”

“So the plaques are only giving us what we need, or is safe if there are spies in the project.”

“Future America might even do a mole-hunt this way… But more importantly, you would not want Past America to watch you too freely since that might leak information to not just our adversaries or the adversaries of Future America, but maybe mid-future adversaries too.”

“You are reading too many spy novels.”

“Maybe. But I think we should not try peeking too much. Even if we know we are trustworthy, I have no doubt there are some sticklers in the project – now or in the future – who are paranoid.”

“More paranoid than us? Impossible. But yes.”

With regret Robert burned the photo later that night.

February 1962

As the project continued, its momentum snowballed and it became ever harder to oversee. Manpower was added. Other useful information was sent back – theory, technology, economics, forecasts. All benign. More and more was encrypted. Robert surmised that somebody simply put the encryption keys in the archive and let the future send things back securely to the right recipients.

His own job was increasingly to run the work on building a more efficient “Conduit”. The Contraption would retire in favour of an all-electronic version, all integrated circuits and rapid information flow. It would remove the need for future astronauts to precisely place plaques around the solar system: the future could send information as easily as using ComLogNet teletype terminals.

William was enthusiastically helping the engineers implement the new devices. He seemed almost giddy with energy as new tricks arrived weekly and wonders emerged from the workshop. A better camera? Hah, the new computers were lightyears ahead of anything anybody else had.

So why did Robert feel like he was being fooled?

Wednesday 28 February 1962

In a way this was a farewell to the Contraption around which his life had revolved for the past few years: tomorrow the Conduit would take over upstairs.

Robert quietly entered the coordinates into the controls. This time he had done most of the work himself: he could run jobs on the new mainframe and the improved algorithms Theory had worked out made a huge difference.

It was also perhaps his last chance to actually do things himself. He had found himself increasingly insulated as a manager – encapsulated by subordinates, regulations, and schedules. The last time he had held a soldering iron was months ago. He relished the muggy red warmth of the darkroom as he developed the photos.

The angles were tilted, but the photos were more unexpected than he had anticipated. One showed what he thought was the DC region, but the whole area was an empty swampland dotted with overgrown ruins. New York was shrouded in a thunderstorm, but he could make out glowing skyscrapers miles high shedding von Kármán vortices in the hurricane-strength winds. One photo showed a gigantic structure near the horizon that must have been a hundred kilometres tall, surmounted by an aurora. This was not a communist utopia. Nor was it the United States in any shape or form. It was not a radioactive wasteland – he was pretty sure at least one photo showed some kind of working rail line. This was an alien world.

When William put his hand on his shoulder Robert nearly screamed.

“Anything interesting?”

Wordlessly he showed the photos to William, who nodded. “Thought so.”

“What do you mean?”

“When do you think this project will end?”

Robert gave it some thought. “I assume it will run as long as it is useful.”

“And then what? It is not like we would pack up the Conduit and put it all in archival storage.”

“Of course not. It is tremendously useful.”

“Future America still has the project. They are no doubt getting intel from further down the line. From Future Future America.”

Robert saw it. A telescoping series of Conduits shipping choice information from further into the future to the present. Presents. Some of which would be sending it further back. And at the futuremost end of the series…

“I read a book where they discussed progress, and the author suggested that all of history is speeding up towards some super-future. The Contraption and Conduit allow the super-future to come here.”

“It does not look like there are any people in the super-future.”

“We have been evolving for millions of years, slowly. What if we could just cut to the chase?”

“Humanity ending up like that?” He gestured towards Thunder New York.

“I think that is all computers. Maybe electronic brains are the inhabitants of the future.”

“We must stop it! This is worse than commies. Russians are at least human. We must prevent the Conduit…”

William smiled broadly. “That won’t happen. If you blew up the Conduit, don’t you think there would be a report? A report archived for the future? And if you were Future America, you would probably send back an encrypted message addressed to the right person saying ‘Talk Robert out of doing something stupid tonight’? Even better, a world where someone gets your head screwed on straight, reports accurately about it, and the future sends back a warning to the person is totally consistent.”

Robert stepped away from William in horror. The red gloom of the darkroom made him look monstrous. “You are working for them!”

“Call it freelancing. I get useful tips, I do my part, things turn out as they should. I expect a nice life. But there is more to it than that, Robert. I believe in moral progress. I think those things in your photos probably know better than we do – yes, they are probably more alien than little green men from Mars, but they have literally eons of science, philosophy and whatever comes after that.”

“Mice.”

“Mice?”

“MICE: Money, Ideology, Coercion, Ego. The formula for recruiting intelligence assets. They got you with all but the coercion part.”

“They did not have to. History, or rather physical determinism, coerces us. Or, ‘you can’t fight fate’.”

“I’m doing this to protect free will! Against the Nazis. The commies! Your philosophers!”

“Funny way you are protecting it. You join this organisation, you allow yourself to become a cog in the machine, feel terribly guilty about your little experiments. No, Robert, you are protecting your way of life. You are protecting normality. You could just as well have been in Moscow right now working to protect socialism.”

“Enough! I am going to stop the Conduit!”

William held up a five dollar bill. “Want to bet?”

# And the pedestrians are off! Oh no, that lady is jaywalking!

In 1983 Swedish Television began an all-evening entertainment program named Razzel. It was centred around the state lottery draw, with music, sketch comedy, and television series interspersed between the blocks. Yes, this was back in the day when there were two TV channels to choose from and more or less everybody watched. The ice age had just about ended.

One returning feature consisted of camera footage of a pedestrian crossing in Stockholm. A sports commentator well known for his coverage of horse racing narrated the performance of the unknowing pedestrians as if they were competing in a race. In some cases I think he even showed up to deliver flowers to the “winner”. But you would get disqualified if you had a false start or went outside the stripes!

I suspect this feature noticeably improved traffic safety for a generation.

I was reminded of this childhood memory earlier today when discussing the use of face recognition in China to detect jaywalkers and display them on a billboard to shame them. The typical response in a western audience is fear of what looks like a totalitarian social engineering program. The glee with which many responded to the news that the system had been confused by a bus ad, putting a celebrity on the board of shame, is telling.

# Is there a difference?

But compare the Chinese system to the TV program. In the Chinese case the jaywalker may be publicly shamed from the billboard… but in the cheerful 80s TV program they were shamed in front of much of the nation.

There is a greater degree of personal targeting in the Chinese case since it also displays their name, but no doubt friends and neighbours would recognize you if they saw you on TV (remember, this was back when we only had two television channels and a fair fraction of people watched TV on Friday evening). There may also be SMS messages involved in some versions of the system. These act differently: now it is you who gets told off when you misbehave.

A fundamental difference may be the valence of the framing. The TV show did this as happy entertainment, more of a parody of sport television than an attempt at influencing people. The Chinese system explicitly aims at discouraging misbehaviour. The TV show encouraged positive behaviour (if only accidentally).

So the dimensions here may be the extent of the social effect (local or nationwide), the degree to which the feedback is directly personal or public, and whether the feedback is positive or negative. There is also a dimension of enforcement: is this something that happens every time you transgress the rules, or just randomly?

In terms of actually changing behaviour making the social effect broad rather than close and personal might not have much effect: we mostly care about our standing relative to our peers, so having the entire nation laugh at you is certainly worse than your friends laughing, but still not orders of magnitude more mortifying. The personal message on the other hand sends a signal that you were observed; together with an expectation of effective enforcement this likely has a fairly clear deterrence effect (it is often not the size of the punishment that deters people from crime, but their expectation of getting caught). The negative stick of acting wrong and being punished is likely stronger than the positive carrot of a hypothetical bouquet of flowers.

# Where is the rub?

From an ethical standpoint, is there a problem here? We are subject to norm enforcement from friends and strangers all the time. What is new is the application of media and automation. They scale up the stakes and add the possibility of automated enforcement. Shaming people for jaywalking is fairly minor, but some people have lost jobs or friends, or been physically assaulted, when their social transgressions went viral on social media. Automated enforcement makes the panopticon effect far stronger: instead of suspecting a possibility of being observed it is a near certainty. So the net effect is stronger, more pervasive norm enforcement…

…of norms that can be observed and accurately assessed. Jaywalking is transparent in a way being rude or selfish often isn’t. We may end up in a situation where we carefully obey some norms, not because they are the most important but because they can be monitored. I do not think there is anything in principle impossible about a rudeness detection neural network, but I suspect the error rates and lack of context sensitivity would make it worse than useless in preventing actual rudeness. Goodhart’s law may even make it backfire.

So, in the end, the problem is that automated systems encode a formalization of a social norm rather than the actual fluid social norm. Having a TV commentator narrate your actions is filtered through the actual norms of how to behave, while the face recognition algorithm looks for a pattern associated with transgression rather than actual transgression. The problem is that strong feedback may then lock in obedience to the hard-to-change formalization rather than actual good behaviour.

# Thinking long-term, vast and slow

This spring Richard Fisher at BBC Future has commissioned a series of essays about long-termism: Deep Civilisation. I really like this effort (and not just because I get the last word):

“Deep history” is fascinating because it gives us a feeling of the vastness of our roots – not just the last few millennia, but a connection to our forgotten stone-age ancestors, their hominin ancestors, the biosphere evolving over hundreds of millions and billions of years, the planet, and the universe. We are standing on top of a massive sedimentary cliff of past, stretching down to an origin unimaginably deep below.

Yet the sky above, the future, is even more vast and deep. Looking down the 1,857 m into the Grand Canyon is vertiginous. Yet above us the troposphere stretches more than five times further up, followed by an even vaster stratosphere and mesosphere, in turn dwarfed by the thermosphere… and beyond the exosphere fades into the endlessness of deep space. The deep future is in many ways far more disturbing since it is moving and indefinite.

That also means there is a fair bit of freedom in shaping it. Not that shaping it is easy. But if we want to be more than just some fossils buried inside the rocks, we had better do it.

# Stuff I have been up to

I have been busy everywhere except on this blog. Here are a few highlights, mostly my public outreach:

Long term survival

On BBC Future I have an essay concluding their amazing season on long term thinking where I go really long-term: The greatest long term threats facing humanity.

The approach I take there is to look at the question “if we have survived X years into the future, what problems must we have overcome before that?” It is not so much the threats (or frankly, problems – threat seems to imply a bit more active maliciousness than the universe normally brings about) that are interesting as just how radically we need to change or grow in power to meet them.

The central paradox of survival is that it requires change, and long-term that means that what survives may be very alien. Not so much a problem for me, but I think many disagree. A solid state civilization powered by black holes in a starless universe close to absolute zero, planning billions of years ahead may sound like a great continuation of us, or something too alien to matter.

Debunking doom

Climate doom is in the air, and I am frankly disturbed by how many think that we are facing an existential threat to our survival in the next decade or so – both because it is based on a misunderstanding of the science (which is not entirely easy to read), and because of how it breeds fatalism. As a response to a youngster’s question I wrote this piece on the Conversation: Will climate change cause humans to go extinct?

Robots in space

In Quartz, I have an essay about the next 50 years of space exploration and whether we should send robots instead: We should stop sending humans into space to do a robot’s job.

As so often, the title (not chosen by me; I prefer “A small step for machinekind”) makes it seem I am arguing for something different than what I am actually arguing. As I see it, sending machines to space makes much more sense than sending humans… but given the very human desire to be the ones exploring, we will send humans in any case. Long-term we should also become multiplanetary, if only to reduce extinction risks, but that might require sending robots ahead – except that in order to do that we need a lot of cheap, low-threshold experimentation and testing.

Good versus evil, Moloch versus Maria

Last year I participated in the Nexus Instituut “intellectual opera” in Amsterdam, enjoying myself immensely. I ended up writing an essay AI, Good and Evil… and Moloch (Official version Sandberg, A. (2019) Kunstmatige intelligentie en Moloch. Tijdschrift Nexus 81: De strijd tussen goed en kwaad. Nexus Instituut, Amsterdam (Tr. Laura Weeda)).

My main point is that evil is usually seen as active maliciousness, neglect of others, suffering itself, or meaninglessness and the removal of meaning. Bad AI is unlikely to be actively malicious, and making machines that can experience suffering is likely tricky, but automation that performs bad actions without caring is all too simple. The big risk is getting systems that implement pointless goals too efficiently, destroying value (human or other) for no gain, not even to themselves. A further problem is that these systems are systems, not individuals. We tend to think of AI as robots, “the AI” and other individual entities, when it can just as well be an ambient functionality of the wider techno-social world – impossible to pull the plug on, with everybody complicit. We need better ways of debugging adaptive technological systems.

Life extension

On Humanity 2.0 I discussed/debated digital afterlives with Steve Fuller, Sr. Mary Christa Nutt, James Madden and Matthew Harvey Sanders. Got a bit intense at some points, but there is an interesting issue in untangling exactly what we want from an extended life. Not all forms of continuity count for all people: a continuity of consciousness is very different from a continuity of memory, a continuity of social interactions or functions, or leaving the right life projects in order.

Other stuff

Hacking the Brain: Dimensions of Cognitive Enhancement. A paper on cognitive enhancement, the final fruit of the “comparing apples to oranges” Volkswagen foundation project I participated in.

The GoCAS existential risk project final outputs arrived in the journal Foresight. I have two pieces, There is plenty of time at the bottom: The economics, risk and ethics of time compression and the group-written Long-term trajectories of human civilization.

I have also helped a bit with an Oxford project on sensitive intervention points for a post-carbon society. Not all tipping points are bad, and sometimes cartoon heroes may help.

Grand futures

Behind the scenes, my book is expanding… whether that is progress remains to be seen.

I have given various talks about some contents, but there is so much more. I think I have to do a proper lecture series this fall.

# Newtonmass fractals 2018

It is Newtonmass, and that means doing some fun math. I try to invent a new fractal or something similar every year: this is the result for 2018.

The Newton fractal is an old classic. Newton’s method for root finding iterates an initial guess $z_0$ to find a better approximation $z_{n+1}=z_{n}-f(z_{n})/f'(z_{n})$. This will work as long as $f'(z)\neq 0$, but which root one converges to can be sensitively dependent on initial conditions. Plotting which root a given initial value ends up with gives the Newton fractal.
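As a minimal sketch of the construction (assuming the classic example $f(z)=z^3-1$, whose roots are the three cube roots of unity), colour each starting point by the root it converges to:

```python
import numpy as np

# The three cube roots of unity: the attractors of Newton's method on z^3 - 1.
ROOTS = [np.exp(2j * np.pi * k / 3) for k in range(3)]

def newton_basin(z, max_iter=50, tol=1e-6):
    """Return the index of the root of z^3 - 1 that the initial guess z
    converges to under Newton's method, or -1 if it fails to converge."""
    for _ in range(max_iter):
        fprime = 3 * z**2
        if fprime == 0:
            return -1  # hit the critical point, iteration undefined
        z = z - (z**3 - 1) / fprime
    for i, root in enumerate(ROOTS):
        if abs(z - root) < tol:
            return i
    return -1

# Colouring a grid of starting points by basin index gives the Newton fractal.
grid = [[newton_basin(complex(x, y))
         for x in np.linspace(-2, 2, 40)]
        for y in np.linspace(-2, 2, 40)]
```

Rendering `grid` with three colours (and black for non-convergence) reproduces the familiar three-lobed fractal boundary.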

The Gauss-Newton method is a method for minimizing the total squared residuals $S(\beta)=\sum_{i=1}^m r_i^2(\beta)$ when some function dependent on $n$-dimensional parameters $\beta$ is fitted to $m$ data points, $r_i(\beta)=f(x_i;\beta)-y_i$. Just like the root-finding method it iterates towards a minimum of $S(\beta)$: $\beta_{n+1} = \beta_n - (J^t J)^{-1}J^t r(\beta_n)$ where $J$ is the Jacobian $J_{ij}=\frac{\partial r_i}{\partial \beta_j}$. This is essentially Newton’s method in multi-dimensional form.

So we can make fractals by trying to fit (say) the sum of two Gaussians with different means (but fixed variances) to a random set of points. For example, set $f(x;\beta_1,\beta_2)=(1/\sqrt{2\pi})[e^{-(x-\beta_1)^2/2}+(1/4)e^{-(x-\beta_2)^2/8}]$ (one Gaussian with variance 1 and one with variance 4, weighted by 1/4 – the asymmetry is there to keep the diagram from being boringly symmetric, as in the equal-variance case). Plotting the location of the final $\beta(50)$ (by stereographically mapping it onto a unit sphere in (r,g,b) space) gives a nice fractal.
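A minimal sketch of the iteration in code (not the original implementation; to keep it testable the data points here are generated from the two-Gaussian model itself, with assumed true means 1 and -1, so the iteration has a known fixed point – replacing `ys` with random values and scanning a grid of starting guesses is what yields the fractal):

```python
import numpy as np

C = 1.0 / np.sqrt(2 * np.pi)

def model(x, b1, b2):
    """Sum of two Gaussians with means b1, b2 and fixed variances 1 and 4."""
    return C * (np.exp(-(x - b1)**2 / 2) + 0.25 * np.exp(-(x - b2)**2 / 8))

xs = np.linspace(-3, 3, 9)
ys = model(xs, 1.0, -1.0)  # synthetic data with known true parameters

def residuals(b):
    return model(xs, b[0], b[1]) - ys

def jacobian(b):
    """J_ij = d r_i / d beta_j, computed analytically."""
    b1, b2 = b
    d1 = C * (xs - b1) * np.exp(-(xs - b1)**2 / 2)
    d2 = C * 0.25 * ((xs - b2) / 4) * np.exp(-(xs - b2)**2 / 8)
    return np.column_stack([d1, d2])

def gauss_newton(b0, n_iter=50):
    """Iterate beta <- beta - (J^T J)^{-1} J^T r(beta)."""
    b = np.asarray(b0, dtype=float)
    for _ in range(n_iter):
        # lstsq solves the normal equations (J^T J) step = J^T r stably.
        step, *_ = np.linalg.lstsq(jacobian(b), residuals(b), rcond=None)
        b = b - step
    return b

end = gauss_newton([0.8, -0.8])  # should land on the true means (1, -1)
```

Colouring each point of a plane of starting guesses `[b1, b2]` by where `gauss_newton` sends it (via the stereographic map mentioned above) produces the fractal basin picture.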

It is a bit modernistic-looking. As I argued in 2016, this is because the generic local Jacobian of the dynamics doesn’t have much rotation.

As more and more points are added the attractor landscape becomes simpler, since it is hard for the Gaussians to “stick” to some particular clump of points and the gradients become steeper.

This fractal can obviously be generalized to more dimensions by using more parameters for the Gaussians, or more Gaussians etc.

The fractality is guaranteed by a generic property of systems with several attractors: points on the border between two basins of attraction tend to find their way to attractors other than the two equally balanced neighbours. Hence a cut transversally across the border will find a Cantor set of true basin boundary points (corresponding to points that eventually get mapped to a singular Jacobian in the iteration formula, just as the boundary of the Newton fractal is marked by points mapped to $f'(z_n)=0$ for some $n$), with different basins alternating.

Merry Newtonmass!

# A bit of existential hope for Christmas (and beyond)

Existential hope is in the air. The term was coined by my colleagues Toby and Owen to denote the opposite of an existential catastrophe: the chance that things could turn out much better than expected.

Recently I had the chance to attend a visioning weekend with the Foresight Institute where we discussed ways of turning dystopias into utopias. It had a clear existential hope message, largely because it was organised by Allison Duettman, who is writing a book on the topic. I must admit that I got a bit nervous when I found out, since I am also writing my own grand futures book, but I am glad to say we are dealing with largely separate domains and reasons for hope.

Now I am extra glad to add a podcast to the list of hopeful messages: the Future of Life Institute had me on the podcast Existential Hope in 2019 and beyond. It includes not just me and Allison, but also Max Tegmark, Anthony Aguirre, Gaia Dempsey, and Josh Clark (who also interviewed me for his podcast series End of the World).

I also participated in the Nexus Instituut event “The Battle between Good and Evil”. I assume the good guys won. I certainly had fun. I ended up arguing that good is only weak compared to evil the way water is weak compared to a solid object – in small amounts it will deform and splash. In larger amounts it is like the tide or a tsunami: you better get out of the way. In retrospect that analogy might have been particularly powerful in the Netherlands. They know their water and how many hands (and windmills) can reshape a country.

# Do we really have grounds for existential hope?

A useful analysis of the concept of hope can be found in Jayne M. Waterworth’s A Philosophical Analysis of Hope. Waterworth defines hoping for something as requiring (1) a conception of an uncertain possibility, (2) a desire for an objective, (3) a desire that one’s desire be satisfied, and (4) that one takes an anticipatory stance towards the objective.

One can hope for things that have a certain or uncertain probability, but also for things that are merely possible. Waterworth calls the first category “hope because of reality” or probability hope, while the second category is “hope in spite of reality” or possibility hope. I might have probability hope in fixing climate change, but possibility hope in humanity one day resurrecting the dead – in the first case we have some ideas of how it might happen and what might be involved, in the second case we have no idea even where to begin.

Outcomes can also be of different importance: hoping for a nice Christmas present is what Waterworth calls an ordinary hope, while hoping for a solution to climate change or death is an extraordinary hope.

We may speak of existential hope in the sense that “existential eucatastrophes” can occur, or that our actions can make them happen. This would represent the most extraordinary kind of hope possible.

But note that this kind of hope is potentially “hope because of reality” rather than “hope in spite of reality”. We can affect the future to some extent (there is an interesting issue of how much). There doesn’t seem to be any law of nature dooming us to early existential risk or a necessary collapse of civilization. We have in the past changed the rules for our species in very positive ways, and may do so again. We may discover facts about the world that greatly expand the size and value of our future – we have already done so in the past. These are good reasons to hope.

Hope is a mental state. The reason hope is a virtue in Christian theology is that it is the antidote to despair.

Hope is different from optimism, the view that good things are likely to happen. First, optimism is a general disposition rather than directed at particular hoped-for occurrences. Second, hope can be a very small and unspecific thing: rather than being optimistic about everything going the right way, a hopeful person can see the overwhelming problems and risks and yet hope that something will happen to get us through. Even a small grain of hope might be enough to fend off despair.

Still, there may be a psychological disposition towards being hopeful. As defined by Snyder in regard to motivation towards goals, this involves a sense of agency (chosen goals can be achieved) and pathways (successful plans and strategies for those goals can be generated). This trait predicts academic achievement in students beyond intelligence, personality, and past achievement. Indeed, in law students hope but not optimism was predictive of achievement (though both contributed to life satisfaction). This trait may be more about being motivated to seek out good future states than actually being hopeful about many things, but the more possibilities are seen, the more likely something worth hoping for will show up.

If there is something I wish for everybody in 2019 and beyond it is having this kind of disposition relative to existential hope. Yes, there are monumental problems ahead. But we can figure out ways around/through/over them. There are opportunities to be grabbed. There are new values to be forged.

The winter solstice has just passed and the days will become brighter and longer for the next months. Cheers!

# Throwing balls on torus-earth

A question came up on Physics Stack Exchange: what do thrown-object trajectories look like on a toroidal planet?

Locally we should expect them to be like on Earth: there is constant gravitational acceleration orthogonal to the ground, so they will just look like parabolas.

But if the trajectory is longer the rapid rotation ought to twist it, since there is a fair Coriolis effect. So the differential equation will be $\mathbf{x}''=\mathbf{g}+2\mathbf{x}'\times\mathbf{\Omega}.$ If we just look at the velocity vector we get $\mathbf{v}'=\mathbf{g}+2\mathbf{v}\times\mathbf{\Omega}.$

That is, the forcefield will twist the velocity around if it is large and orthogonal to the angular velocity vector. If the velocity is parallel it will just be affected by gravity. For a trajectory near the pole it will become twisted and tilted:

For a starting point on the equator the twisting gets a bit more complex:

One can also recognise the analogy to an electron in an electromagnetic field: $\mathbf{v}' = (q/m)(\mathbf{E}+\mathbf{v}\times \mathbf{B})$. Without gravity we should hence expect thrown balls to just follow helices around the $\mathbf{\Omega}$ direction, just like charged particles follow magnetic field-lines. One can eliminate the electric field from the equation by using a different velocity coordinate $\mathbf{v}_2=\mathbf{v}-\mathbf{E}\times\mathbf{B}/B^2$. Hence we can treat ball trajectories as helices plus a drift velocity in the $\mathbf{g}\times\mathbf{\Omega}$ direction. The helix radius will be $v/2\Omega$.

How large is the Coriolis effect? On Earth $\Omega=2\pi/86400\approx 0.0000727$ rad/s. On Donut it is 0.000614 and on Hoop 0.000494 rad/s, several times higher. Still, the correction is not going to be enormous: for a ball moving 10 meters per second the helix radius will be 69 km on Earth (at the pole), 8.1 km on Donut, and 10 km on Hoop. We hence need to throw the ball a suborbital distance before the twists become really visible. At these distances the curvature of the planet and the non-linearity of the gravitational field also begin to bite.
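These numbers are easy to check numerically. Here is a minimal sketch (standard library only; step size and launch parameters are illustrative choices, not from the text) that integrates $\mathbf{x}''=\mathbf{g}+2\mathbf{x}'\times\mathbf{\Omega}$ with RK4 and prints the $v/2\Omega$ helix radii:

```python
import math

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def deriv(s, g, om):
    """s = [x, y, z, vx, vy, vz]; returns ds/dt for x'' = g + 2 x' x Omega."""
    v = s[3:]
    c = cross(v, om)
    return v + [g[i] + 2*c[i] for i in range(3)]

def rk4_step(s, g, om, dt):
    k1 = deriv(s, g, om)
    k2 = deriv([s[i] + 0.5*dt*k1[i] for i in range(6)], g, om)
    k3 = deriv([s[i] + 0.5*dt*k2[i] for i in range(6)], g, om)
    k4 = deriv([s[i] + dt*k3[i] for i in range(6)], g, om)
    return [s[i] + dt*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i])/6 for i in range(6)]

# Helix radius v/(2*Omega) for a 10 m/s throw on each world:
for name, om in [("Earth", 2*math.pi/86400), ("Donut", 0.000614), ("Hoop", 0.000494)]:
    print(name, 10/(2*om), "m")
```

With $\mathbf{\Omega}=0$ the integrator reproduces the plain parabola, which is a convenient sanity check.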

I have not simulated such trajectories since I need a proper mass distribution model of the worlds, and it is messy. However, for an infinitely thin ring one can solve orbits numerically relatively easily (you “just” have to integrate elliptic integrals):

Beside the “normal” equatorial orbits and torus-like orbits winding themselves around the ring, there are internal halo-orbits and chaotic tangles.
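The elliptic integrals involved are not too scary in practice. The potential of a thin ring of radius $a$ at cylindrical coordinates $(\rho,z)$ is the standard result $\Phi(\rho,z)=-2GM\,K(m)/\pi\sqrt{(a+\rho)^2+z^2}$ with $m=4a\rho/((a+\rho)^2+z^2)$. A small sketch (standard library only; $GM=a=1$ are illustrative units) computing $K$ via the arithmetic-geometric mean:

```python
import math

def ellipk(m):
    """Complete elliptic integral K(m), parameter m = k^2, via the AGM."""
    a, b = 1.0, math.sqrt(1.0 - m)
    while abs(a - b) > 1e-15:
        a, b = 0.5*(a + b), math.sqrt(a*b)
    return math.pi / (2.0*a)

def ring_potential(rho, z, GM=1.0, a=1.0):
    """Gravitational potential of a thin ring of radius a at cylindrical (rho, z)."""
    d2 = (a + rho)**2 + z**2
    m = 4.0*a*rho / d2
    return -2.0*GM*ellipk(m) / (math.pi*math.sqrt(d2))

print(ring_potential(100.0, 0.0))  # far away it looks like a point mass: ~ -1/100
```

Feeding the gradient of this potential into an orbit integrator gives the equatorial, winding, halo and chaotic orbits described above.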

# Blueberry Earth

[Update: I have a paper version of this essay on arXiv:1807.10553, extending and correcting some of the results.]

Supposing that the entire Earth was instantaneously replaced with an equal volume of closely packed, but uncompressed blueberries, what would happen from the perspective of a person on the surface?

Unfortunately the site tends to frown on fun questions like this, so it was in my opinion prematurely closed while I was working out the answer. So here it is, with some extra extensions:

The density of blueberries has been estimated at 625.56 kg/m3; WillO on Stackexchange estimated it at 13% of Earth’s density (5510*0.13=716.3 kg/m3), so assuming it to be around $\rho_{berries}=700$ kg/m3 appears reasonable. Blueberry pulp has a density similar to water, 980 to 1050 kg/m3, although this is temperature dependent and depends on how much solids there are. The difference from the whole berries is due to the air between the berries. Note that these are likely the big, thick-skinned “American” blueberries rather than the small wild thin-skinned blueberries (bilberries) I grew up with; the latter would have higher density due to their smaller size and would break far more easily.

So instantaneously turning Earth into blueberries will reduce its mass to 0.1274 of what it was. Gravity will become correspondingly weaker, $g_{BE}=0.1274 g$.

However, blueberries are not particularly sturdy. While there is a literature on blueberry mechanics (of course!), I did not manage to find a great source on their compressive strength. A rough estimate is possible: stacking a sugar cube (1 g) on a berry will not break it, while a milk carton (1 kg) will; 100 g has a decent but not certain chance. So if we assume the blueberry area to be one square centimetre the breaking pressure is on the order of $P_{break}=0.1\cdot g / 10^{-4} \approx 10,000$ N/m2. This allows us to estimate at what depth the berries will start to break: $z=P_{break}/(g_{BE}\rho_{berries}) = 11.4188$ m. So while the surface will be free blueberries, they will start pulping within roughly ten meters of the surface.

This pulping has an important effect: the pulp separates from the air, coalescing into a smaller sphere. If we assume pulp to be an incompressible fluid, then a sphere of pulp with the same mass as the initial berries will satisfy $\rho_{pulp} r_{pulp}^3 = \rho_{berries}r_{earth}^3$, or $r_{pulp} = (\rho_{berries}/ \rho_{pulp} )^{1/3}r_{earth}$. In this case we end up with a planet with 0.8879 times the original radius (5,657 km), surrounded by a vast atmosphere.
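These back-of-envelope numbers can be checked in a few lines. The densities and the berry “breaking pressure” are the rough values assumed above, so the quoted figures differ in the last digits depending on rounding choices:

```python
rho_earth, rho_berries, rho_pulp = 5510.0, 700.0, 1000.0
g, r_earth = 9.81, 6371e3

mass_fraction = rho_berries / rho_earth       # ~0.127 of Earth's mass remains
g_BE = mass_fraction * g                      # surface gravity of blueberry earth
P_break = 0.1 * g / 1e-4                      # ~10^4 N/m^2 berry breaking pressure
z_break = P_break / (g_BE * rho_berries)      # depth where berries start to pulp
r_pulp = (rho_berries / rho_pulp)**(1/3) * r_earth  # radius of the pulp sphere

print(mass_fraction, g_BE, z_break, r_pulp / 1e3)
```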

The freefall timescale for the planet is initially 41 minutes, but relatively soon the pulping interactions, the air convection and so on will slow things down in a complicated way. I expect that the actual coalescence will take hours, with some bubbles from the deep interior erupting fairly late.

The gravity on the pulp surface is just 1.5833 m/s2, 16% of normal gravity – almost exactly lunar gravity. This weakens convection currents and the speed with which bubbles move up. The scale height of the atmosphere, assuming the same composition and temperature as on Earth, will be 6.2 times higher. This means that pressure will decline much less with altitude, allowing far thicker clouds and weather systems. As we will see, the atmosphere will puff up more.

The separation has big consequences. Enormous amounts of air will be pushing out from the pulp as bubbles and jets, producing spectacular geysers (especially since the gravity is low). Even more dramatic is the heating: a lot of gravitational energy is released as the mass is compacted. The total gravitational energy of a constant density sphere of radius R is

$\int_0^R G [4\pi r^2 \rho] [4 \pi r^3 \rho/3] / r dr$ $= (16\pi^2 G\rho^2/3) \int_0^R r^4 dr$
$=(16\pi^2 G/15)\rho^2 R^5$

(the first factor in the integral is the mass of a spherical shell of radius r, the second the mass of the stuff inside, and the third the 1/r gravitational potential). If we ignore the mass of the air since it is small and we just want an order of magnitude estimate,  the compression of the berry mass gives energy

$E=(16\pi^2 G/15)(\rho_{pulp}^2 R_{pulp}^5 - \rho_{berries}^2 r_{earth}^5)$ $\approx 4.3594\times 10^{29}$ J.

This is the energy output of the sun over half an hour, nothing to sneeze at: blueberry earth will become hot. There is about 573,000 J per kg, enough to heat the blueberries from freezing to boiling.
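A quick numeric check of the released energy (the binding energy of the compact pulp sphere minus that of the original berry sphere; constants as above, so only the order of magnitude should be trusted):

```python
import math

G = 6.674e-11
rho_b, rho_p = 700.0, 1000.0     # berry and pulp densities, kg/m^3
r_e, r_p = 6371e3, 5657e3        # initial and final radii, m

# (16 pi^2 G / 15) * rho^2 * R^5 is the binding energy of a uniform sphere
E = (16*math.pi**2*G/15) * (rho_p**2*r_p**5 - rho_b**2*r_e**5)
M = rho_b * (4.0/3.0)*math.pi*r_e**3
print(E, E / M)  # total in J, and J per kg of berries
```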

The result is that blueberry earth will turn into a roaring ocean of boiling jam, with the geysers of released air and steam likely ejecting at least a few berries into orbit (escape velocity is just 4.234 km/s, and berries at the initial surface will be even higher up in the potential). As the planet evolves a thick atmosphere of released steam will add to the already considerable air from the berries. It is not inconceivable that the planet may heat up further due to a water vapour greenhouse effect, turning into a very odd Venusian world.

Meanwhile the jam ocean is very deep, and the pressure at depth will be enough to cause the formation of high pressure ice even if it is warm. If the formation process is slow there will be some separation of water into ice and a concentration of other chemicals in the jam ocean, but I suspect the rapid collapse will instead make some kind of composite pulp ice. Ice VII forms above 3 GPa, so if we just use constant gravity this happens at a depth $z_{ice}=P_{VII}/(g_{BE}\rho_{pulp})\approx 1,909$ km, leaving an ice core of about two-thirds of the radius. This would make up most of the interior. However, gravity is a bit weaker in the interior, so we need to take that into account. The pressure from all the matter above radius r is $P(r) =(3GM^2/8\pi R^4)(1-(r/R)^2)$, and the ice core will have radius $r_{ice}=R\sqrt{1-P_{VII}/P(0)}$ $\approx$ 3,258 km. This is smaller, about 57% of the radius, and just 20% of the total volume.
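A sketch of the corrected core estimate: the central pressure of a constant-density sphere is $P(0)=3GM^2/8\pi R^4$, and the ice core radius follows from setting $P(r)=P_{VII}$. An onset pressure of about 3 GPa is the value consistent with the depths quoted above (the exact transition pressure depends on temperature):

```python
import math

G = 6.674e-11
rho_p, R = 1000.0, 5657e3
M = rho_p * (4.0/3.0)*math.pi*R**3

P0 = 3*G*M**2 / (8*math.pi*R**4)   # central pressure of a uniform sphere
P_VII = 3e9                        # assumed ice VII onset, Pa
r_ice = R * math.sqrt(1.0 - P_VII/P0)
print(P0/1e9, r_ice/1e3)           # GPa, km
```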

The coalescence will also speed up rotation. The original blueberry earth would of course make one rotation every 24 hours, but the smaller result would have a smaller moment of inertia. The angular momentum conservation gives $(2/5)MR_1^2(2\pi/T_1) = (2/5)MR_2^2(2\pi/T_2)$, or $T_2 = (R_2/R_1)^2 T_1$, in this case 18.9210 hours. This in turn will increase the oblateness a bit, to approximately 0.038 – an 8.8 times increase over Earth.
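The spin-up follows directly from angular momentum conservation for a uniform sphere; a two-line check:

```python
R1, R2, T1 = 6371.0, 5657.0, 24.0   # initial radius (km), final radius (km), hours
T2 = (R2/R1)**2 * T1                # conserving (2/5) M R^2 (2 pi / T)
print(T2)                           # new rotation period in hours
```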

Another effect is on the orbit of the Moon. Now the two bodies have about equal mass. Is the Moon still bound to blueberry earth? A kilogram of lunar material has potential energy $GM_{BE}/r_{moon} \approx 1.6925\times 10^{5}$ J, while its kinetic energy is $2.6442\times 10^5$ J – more than enough to escape. Had it remained bound, the jam ocean would have made an excellent tidal dissipation mechanism that would have slowed down rotation and moved blueberry earth towards tidal lock with the Moon much earlier than the 50 billion years it would otherwise have taken.

So, to sum up, to a person standing on the surface of the Earth when it turns into blueberries, the first effect would be a drastic reduction of gravity. Standing on the blueberries might be possible in theory, except that almost immediately they begin to compress rapidly and air starts erupting everywhere. The effect is basically the worst earthquake ever, and it keeps on going until everything has fallen 714 km. While this is going on everything heats up drastically until the entire environment is boiling jam and steam. The end result is a world that has a steam atmosphere covering an ocean of jam on top of warm blueberry granita.

# What kinds of grand futures are there?

I have been working for about a year on a book on “Grand Futures” – the future of humanity, starting to sketch a picture of what we could eventually achieve were we to survive, get our act together, and reach our full potential. Part of this is an attempt to outline what we know is and isn’t physically possible to achieve, part of it is an exploration of what makes a future good.

Here are some things that appear to be physically possible (not necessarily easy, but doable):

• Societies of very high standards of sustainable material wealth. At least as rich (and likely far above) current rich nation level in terms of what objects, services, entertainment and other lifestyle ordinary people can access.
• Human enhancement allowing far greater health, longevity, well-being and mental capacity, again at least up to current optimal levels and likely far, far beyond evolved limits.
• Sustainable existence on Earth with a relatively unchanged biosphere indefinitely.
• Expansion into space:
• Settling habitats in the solar system, enabling populations of at least 10 trillion (and likely many orders of magnitude more).
• Settling other stars in the Milky Way, enabling populations of at least $10^{29}$ people.
• Settling over intergalactic distances, enabling populations of at least $10^{38}$ people.
• Survival of human civilisation and the species for a long time.
• As long as other mammalian species – on the order of a million years.
• As long as Earth’s biosphere remains – on the order of a billion years.
• Settling the solar system – on the order of 5 billion years.
• Settling the Milky Way or elsewhere – on the order of trillions of years if dependent on sunlight.
• Using artificial energy sources – up to proton decay, somewhere beyond $10^{32}$ years.
• Constructing Dyson spheres around stars, gaining energy resources corresponding to the entire stellar output, habitable space millions of times Earth’s surface, telescope, signalling and energy projection abilities that can reach over intergalactic distances.
• Moving matter and objects up to galactic size, using their material resources for meaningful projects.
• Performing more than a googol ($10^{100}$) computations, likely far more thanks to reversible and quantum computing.

While this might read as a fairly overwhelming list, it is worth noticing that it does not include gaining access to an infinite amount of matter, energy, or computation, nor indefinite survival. I also think faster than light travel is unlikely to become possible. If we do not try to settle remote galaxies within 100 billion years, accelerating expansion will move them beyond our reach. This is a finite but very large possible future.

What kinds of really good futures may be possible? Here are some (not mutually exclusive):

• Survival: humanity survives as long as it can, in some form.
• “Modest futures”: humanity survives for as long as is appropriate without doing anything really weird. People have idyllic lives with meaningful social relations. This may include achieving close to perfect justice, sustainability, or other social goals.
• Gardening: humanity maintains the biosphere of Earth (and possibly other planets), preventing them from crashing or going extinct. This might include artificially protecting them from a brightening sun and astrophysical disasters, as well as spreading life across the universe.
• Happiness: humanity finds ways of achieving extreme states of bliss or other positive emotions. This might include local enjoyment, or actively spreading minds enjoying happiness far and wide.
• Abolishing suffering: humanity finds ways of curing negative emotions and suffering without precluding good states. This might include merely saving humanity, or actively helping all suffering beings in the universe.
• Posthumanity: humanity deliberately evolves or upgrades itself into forms that are better, more diverse or otherwise useful, gaining access to modes of existence currently not possible to humans but equally or more valuable.
• Deep thought: humanity develops cognitive abilities or artificial intelligence able to pursue intellectual pursuits far beyond what we can conceive of in science, philosophy, culture, spirituality and similar but as yet uninvented domains.
• Creativity: humanity plays creatively with the universe, making new things and changing the world for its own sake.

I have no doubt I have missed many plausible good futures.

Note that there might be moral trades, where stay-at-homes agree with expansionists to keep Earth an idyllic world for modest futures and gardening while the others go off to do other things, or long-term oriented groups agreeing to give short-term oriented groups the universe during the stelliferous era in exchange for getting it during the cold degenerate era trillions of years in the future. Real civilisations may also have mixtures of motivations and sub-groups.

Note that the goals and the physical possibilities play out very differently: modest futures do not reach very far, while gardener civilisations may seek to engage in megascale engineering to support the biosphere but not settle space. Meanwhile the happiness-maximizers may want to race to convert as much matter as possible to hedonium, while the deep thought-maximizers may want to move galaxies together to create permanent hyperclusters filled with computation to pursue their cultural goals.

I don’t know which goals are right, but we can examine what they entail. If we see a remote civilization doing certain things we can make some inferences about what is compatible with that behaviour. And we can examine what we need to do today to have the best chance of getting onto a trajectory towards some of these goals: avoiding extinction, improving our coordination ability, and figuring out what long-run global coordination, if any, we need to agree on before spreading to the stars.

The Universe Today wrote an article about a paper by me, Toby and Eric about the Fermi Paradox. The preprint can be found on arXiv (see also our supplements: 1, 2, 3 and 4). Here is a quick popular overview/FAQ.

# TL;DR

• The Fermi question is not a paradox: it just looks like one if one is overconfident in how well we know the Drake equation parameters.
• Our distribution model shows that there is a large probability of little-to-no alien life, even if we use the optimistic estimates of the existing literature (and even more if we use more defensible estimates).
• The Fermi observation makes the most uncertain priors move strongly, reinforcing the rare life guess and an early great filter.
• Getting even a little bit more information can update our belief state a lot!

# So, do you claim we are alone in the universe?

No. We claim we could be alone, and the probability is non-negligible given what we know… even if we are very optimistic about alien intelligence.

# What is the paper about?

The Fermi Paradox – or rather the Fermi Question – is “where are the aliens?” The universe is immense and old, and intelligent life ought to be able to spread or signal over vast distances, so if it arises with some modest probability we ought to see some signs of it. Yet we do not. What is going on? The reason it is called a paradox is that there is a tension between one plausible theory ([lots of sites]x[some probability]=[aliens]) and an observation ([no aliens]).

## Dissolving the Fermi paradox: there is not much tension

We argue that people have accidentally been misled into feeling there is a problem by being overconfident about the probabilities.

$N=R_*\cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L$

The problem lies in how we estimate probabilities from a product of uncertain parameters (as the Drake equation above). The typical way people informally do this with the equation is to admit that some guesses are very uncertain, give a “representative value” and end up with some estimated number of alien civilisations in the galaxy – which is admitted to be uncertain, yet there is a single number.

Obviously, some authors have argued for very low probabilities, typically concluding that there is just one civilisation per galaxy (“the $N\approx 1$ school”). This may actually still be too much, since that means we should expect signs of activity from nearly any galaxy. Others give slightly higher guesstimates and end up with many civilisations, typically as many as one expects civilisations to last (“the $N\approx L$ school”). But the proper thing to do is to give a range of estimates, based on how uncertain we actually are, and get an output that shows the implied probability distribution of the number of alien civilisations.

If one combines either published estimates or ranges compatible with current scientific uncertainty we get a distribution that makes observing an empty sky unsurprising – yet is also compatible with us not being alone.

The reason is that even if one takes a pretty optimistic view (the published estimates are after all biased towards SETI optimism since the sceptics do not write as many papers on the topic) it is impossible to rule out a very sparsely inhabited universe, yet the mean value may be a pretty full galaxy. And current scientific uncertainties of the rates of life and intelligence emergence are more than enough to create a long tail of uncertainty that puts a fair credence on extremely low probability – probabilities much smaller than what one normally likes to state in papers. We get a model where there is 30% chance we are alone in the visible universe, 53% chance in the Milky Way… and yet the mean number is 27 million and the median about 1! (see figure below)

This is a statement about knowledge and priors, not a measurement: armchair astrobiology.

## The Great Filter: lack of obvious aliens is not strong evidence for our doom

After this result, we look at the Great Filter. We have reason to think at least one term in the Drake equation is small – either one of the early ones indicating how much life or intelligence emerges, or the last one, which indicates how long technological civilisations survive. The small term is “the Filter”. If the Filter is early, that means we are rare or unique but have a potentially unbounded future. If it is a late term, in our future, we are doomed – just like all the other civilisations whose remains would litter the universe. This is worrying. Nick Bostrom argued that we should hope we do not find any alien life.

Our paper gets a somewhat surprising result: updating our uncertainties in the light of no visible aliens reduces our estimate of the rate of life and intelligence emergence (the early filters) much more than the longevity factor (the future filter).

The reason is that if we exclude the cases where our galaxy is crammed with alien civilisations – something like the Star Wars galaxy where every planet has its own aliens – then that leads to an update of the parameters of the Drake equation. All of them become smaller, since we will have an emptier universe. But the early filter ones – life and intelligence emergence – shift downwards much more than the expected lifespan of civilisations, since they are much more uncertain (at least 100 orders of magnitude!) than the merely uncertain future lifespan (just 7 orders of magnitude!).

So this is good news: the stars are not foretelling our doom!

Note that a past great filter does not imply our safety.

The conclusion can be changed if we reduce the uncertainty of the past terms to less than 7 orders of magnitude, or if the involved probability distributions have weird shapes. (The mathematical proof is in supplement IV, which applies to uniform and normal distributions. It is possible to add tails and other features that break this effect – yet believing such distributions of uncertainty requires believing rather strange things.)

# Isn’t this armchair astrobiology?

Yes. We are after all from the philosophy department.

The point of the paper is how to handle uncertainties, especially when you multiply them together or combine them in different ways. It is also about how to take lack of knowledge into account. Our point is that we need to make knowledge claims explicit – if you claim you know a parameter to have the value 0.1 you better show a confidence interval or an argument about why it must have exactly that value (and in the latter case, better take your own fallibility into account). Combining overconfident knowledge claims can produce biased results since they do not include the full uncertainty range: multiplying point estimates together produces a very different result than when looking at the full distribution.

All of this is epistemology and statistics rather than astrobiology or SETI proper. But SETI makes a great example since it is a field where people have been learning more and more about (some of) the factors.

The same approach as we used in this paper can be used in other fields. For example, when estimating risk chains in systems (like the risk of a pathogen escaping a biosafety lab), taking uncertainties in knowledge into account will sometimes produce important heavy tails that are irreducible even when you think the likely risk is acceptable. This is one reason risk estimates tend to be overconfident.

# Probability?

What kind of distributions are we talking about here? Surely we cannot speak of the probability of alien intelligence given the lack of data?

There is a classic debate in probability between frequentists, claiming probability is the frequency of events that we converge to when an experiment is repeated indefinitely often, and Bayesians, claiming probability represents states of knowledge that get updated when we get evidence. We are pretty Bayesian.

The distributions we are talking about are distributions of “credences”: how much you believe certain things. We start out with a prior credence based on current uncertainty, and then discuss how this gets updated if new evidence arrives. While the original prior beliefs may come from shaky guesses they have to be updated rigorously according to evidence, and typically this washes out the guesswork pretty quickly when there is actual data. However, even before getting data we can analyse how conclusions must look if different kinds of information arrives and updates our uncertainty; see supplement II for a bunch of scenarios like “what if we find alien ruins?”, “what if we find a dark biosphere on Earth?” or “what if we actually see aliens at some distance?”

# Correlations?

Our use of the Drake equation assumes the terms are independent of each other. This of course is a result of how Drake sliced things into naturally independent factors. But there could be correlations between them. Häggström and Verendel showed that in worlds where the priors are strongly correlated updates about the Great Filter can get non-intuitive.

We deal with this in supplement II, and see also this blog post. Basically, it doesn’t look like correlations are likely showstoppers.

# You can’t resample guesses from the literature!

Sure can. As long as we agree that this is not so much a statement about what is actually true out there, but rather the range of opinions among people who have studied the question a bit. If people give answers to a question in the range from ten to a hundred, that tells you something about their beliefs, at least.

What the resampling does is break up the possibly unconscious correlation between answers (“the $N\approx 1$ school” and “the $N\approx L$ school” come to mind). We use the ranges of answers as a crude approximation to what people of good will think are reasonable numbers.

You may say “yeah, but nobody is really an expert on these things anyway”. We think that is wrong. People have improved their estimates as new data arrives, there are reasons for the estimates and sometimes vigorous debate about them. We warmly recommend Vakoch, D. A., Dowd, M. F., & Drake, F. (2015). The Drake Equation. Cambridge, UK: Cambridge University Press for a historical overview. But at the same time these estimates are wildly uncertain, and this is what we really care about. Good experts qualify the certainty of their predictions.

## But doesn’t resampling from admittedly overconfident literature constitute “garbage in, garbage out”?

Were we trying to get the true uncertainties (or even more hubristically, the true values) this would not work: we have after all good reasons to suspect these ranges are both biased and overconfidently narrow. But our point is not that the literature is right; rather, even if one uses the overly narrow and likely overly optimistic estimates as estimates of the actual uncertainty, the resulting distribution is broad enough to lead to our conclusions. Using the literature is the most conservative case.

Note that we do not base our later estimates on the literature estimate but our own estimates of scientific uncertainty. If they are GIGO it is at least our own garbage, not recycled garbage. (This reading mistake seems to have been made on Starts With a Bang).

# What did the literature resampling show?

An overview can be found in Supplement III. The most important point is that even estimates of super-uncertain things like the probability of life lie in a surprisingly narrow range of values, far narrower than is scientifically defensible. For example, $f_l$ has five estimates ranging from $10^{-30}$ to $10^{-5}$, and all the rest are in the range $10^{-3}$ to 1. $f_i$ is even worse, with one microscopic estimate and nearly all the rest between one in a thousand and one.

It also shows that estimates that are likely biased towards optimism (because of publication bias) can be used to get a credence distribution that dissolves the paradox once they are interpreted as ranges. See the above figure, where we get about 30% chance of being alone in the Milky Way and 8% chance of being alone in the visible universe… but a mean corresponding to 27 million civilisations in the galaxy and a median of about a hundred.

There are interesting patterns in the data. When plotting the expected number of civilisations in the Milky Way based on estimates from different eras the number goes down with time: the community has clearly gradually become more pessimistic. There are some very pessimistic estimates, but even removing them doesn’t change the overall structure.

# What are our assumed uncertainties?

A key point in the paper is trying to quantify our uncertainties somewhat rigorously. Here is a quick overview of where I think we are, with the values we used in our synthetic model:

• $R_*$: the star formation rate in the Milky Way per year is fairly well constrained. The actual current uncertainty is likely less than 1 order of magnitude (it can vary over 5 orders of magnitude in other galaxies). In our synthetic model we made this parameter log-uniform from 1 to 100.
• $f_p$: the fraction of systems with planets is increasingly clearly $\approx 1$. We used log-uniform from 0.1 to 1.
• $n_e$: number of Earth-like planets in systems with planets.
• This ranges from rare earth arguments ($<10^{-12}$) to >1. We used log-uniform from 0.1 to 1 since recent arguments have shifted away from rare Earths, but we checked that including the wider range did not change the conclusions much.
• $f_l$: Fraction of Earthlike planets with life.
• This is very uncertain; see below for our arguments that the uncertainty ranges over perhaps 100 orders of magnitude.
• There is an absolute lower limit due to ergodic repetition: $f_l >10^{-10^{115}}$ – in an infinite universe there will eventually be randomly generated copies of Earth and even the entire galaxy (at huge distances from each other). Observer selection effects make using the earliness of life on Earth problematic.
• We used a log-normal rate of abiogenesis that was transformed to a fraction distribution.
• $f_i$: Fraction of lifebearing planets with intelligence/complex life.
• This is very uncertain; see below for our arguments that the uncertainty ranges over perhaps 100 orders of magnitude.
• One could argue there have been 5 billion species so far and only 1 intelligent one, so we know $f_i>2\cdot 10^{-10}$. But one could argue that we should count assemblages of 10 million species, which gives a fraction 1/500 per assemblage. Observer selection effects may be distorting this kind of argument.
• We could have used a log-normal rate of complex life emergence that was transformed to a fraction distribution or a broad log-linear distribution. Since this would have made many graphs hard to interpret we used log-uniform from 0.001 to 1, not because we think this likely but just as a simple illustration (the effect of the full uncertainty is shown in Supplement II).
• $f_c$: Fraction of time when it is communicating.
• Very uncertain; humanity’s fraction so far is 0.000615. We used log-uniform from 0.01 to 1.
• $L$: Average lifespan of a civilisation.
• Fairly uncertain; perhaps $50$–$10^{10}$ years (the upper limit comes from the applicability of the Drake equation itself: it assumes the galaxy is in a steady state, and if civilisations are long-lived enough they will still be accumulating, since the universe is too young).
• We used log-uniform from 100 to 10,000,000,000.

Note that this is to some degree a caricature of current knowledge, rather than an attempt to represent it perfectly. Fortunately our argument and conclusions are pretty insensitive to the details – it is the vast ranges of uncertainty that are doing the heavy lifting.
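The synthetic model above can be sketched as a simple Monte Carlo simulation. The ranges are the ones listed, except for $f_l$, where a log-uniform stand-in ($10^{-30}$ to 1) replaces the paper's transformed log-normal rate purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000  # Monte Carlo samples

def log_uniform(lo, hi, size):
    """Sample log-uniformly between lo and hi."""
    return 10.0 ** rng.uniform(np.log10(lo), np.log10(hi), size)

N_star = log_uniform(1, 100, n)     # star formation rate per year
f_p    = log_uniform(0.1, 1, n)     # fraction of systems with planets
n_e    = log_uniform(0.1, 1, n)     # Earth-like planets per planetary system
f_l    = log_uniform(1e-30, 1, n)   # ILLUSTRATIVE stand-in for the transformed log-normal rate
f_i    = log_uniform(0.001, 1, n)   # fraction evolving intelligence (simple illustration)
f_c    = log_uniform(0.01, 1, n)    # fraction of time spent communicating
L      = log_uniform(100, 1e10, n)  # civilisation lifespan in years

# Drake-style product: expected number of detectable civilisations
N = N_star * f_p * n_e * f_l * f_i * f_c * L

print(f"median N = {np.median(N):.3g}")
print(f"P(N < 1) = {(N < 1).mean():.2f}")
```

The interesting output is not any single point estimate but the spread: the resulting distribution of $N$ stretches over dozens of orders of magnitude, so appreciable probability mass ends up both at "empty galaxy" and "crowded galaxy".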

## Abiogenesis

Why do we think the fraction-of-planets-with-life parameter could have such a huge range?

First, instead of thinking in terms of the fraction of planets having life, consider a rate of life formation in suitable environments: what probability distribution does it induce on the fraction? The emergence of life is a physical/chemical transition in some kind of primordial soup, and transition events occur in this medium at some rate per unit volume and time, so the expected number of events is $\lambda V t$, where $\lambda$ is the rate, $V$ is the available volume and $t$ is the available time; for small expected numbers $f_l\approx \lambda V t$. High rates would imply that almost all suitable planets originate life, while low rates would imply that almost none do.

The uncertainty regarding the length of time when it is possible is at least 3 orders of magnitude ($10^7-10^{10}$ years).

The uncertainty regarding volumes spans 20+ orders of magnitude – from entire oceans to brine pockets on ice floes.

Uncertainty regarding transition rates can span 100+ orders of magnitude! The reason is that the transition might involve combinatoric flukes (you need to get a fairly long sequence of parts into the right order to get the right kind of replicator), or it might be like the protein folding problem, where Levinthal’s paradox shows that even entire oceans of copies of a protein would take literally astronomical time to randomly find the correctly folded state (actual biological proteins “cheat” by being evolved to fold neatly and fast). Even ordinary chemical reaction rates span 100 orders of magnitude. On the other hand, spontaneous generation could conceivably be common and fast! So we should conclude that $\lambda$ has an uncertainty range of at least 100 orders of magnitude.
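The effect of such rate uncertainty on the fraction can be illustrated with the transform $f_l = 1 - e^{-\lambda V t}$, the probability of at least one origination event under a Poisson model. The 100-order-of-magnitude spread below is a hypothetical log-uniform range for illustration, not the paper's log-normal:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Hypothetical illustration: spread log10(lambda*V*t) uniformly over 100 orders
# of magnitude (the paper itself uses a log-normal rate instead).
log10_events = rng.uniform(-50, 50, n)     # log10 of expected origination events
f_l = 1.0 - np.exp(-10.0 ** log10_events)  # P(at least one event), Poisson model

# The induced fraction distribution piles up at the extremes:
print(f"P(f_l < 1e-10) = {(f_l < 1e-10).mean():.2f}")  # nearly sterile worlds
print(f"P(f_l > 0.99)  = {(f_l > 0.99).mean():.2f}")   # nearly certain life
```

Almost all the probability mass lands near 0 or near 1: huge rate uncertainty turns into a "life is everywhere or almost nowhere" fraction distribution, with little mass on intermediate values.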

Actual abiogenesis will involve several steps. Some are easy, like generating simple organic compounds (plentiful in asteroids, comets and Miller-Urey experiments). Some are likely tough. People often overlook that even getting proteins and nucleic acids to form in a watery environment is something of a mystery, since these chains tend to hydrolyze; the standard explanation is to look for environments with a wet-dry cycle that allows complexity to grow. But this means $V$ is much smaller than an ocean.

That we have tremendous uncertainty about abiogenesis does not mean we do not know anything. We know a lot. But at present we have no good scientific reasons to believe we know the rate of life formation per liter-second. That will hopefully change.

## Don’t creationists argue stuff like this?

There are a fair number of examples of creationists arguing that the origin of life must be super-unlikely and hence that we must believe in their particular god.

The problem with this kind of argument is that it presupposes there is only one planet, on which we somehow beat one-in-a-zillion odds. That would indeed be pretty unlikely. But in reality there are a zillion planets, so even if each has only a one-in-a-zillion chance we should expect to see life somewhere… especially since being a living observer is a precondition for “seeing life”! Observer selection effects really matter.

We are also not arguing that life has to be super-unlikely. In the paper, our distribution of the life emergence rate actually makes life nearly universal 50% of the time – it includes the possibility that life will spontaneously emerge in any primordial soup puddle left alone for a few minutes. That is a possibility I doubt anybody believes in, but it could be that would-be new life is emerging right under our noses all the time, only to be outcompeted by the advanced life that already exists.

Creationists make the strong claim that they know $f_l \ll 1$; this is not really supported by what we know. But $f_l \ll 1$ is entirely within the realm of possibility.

## Complex life

Even if you have life, it might not be particularly good at evolving. The reason is that it needs a genetic encoding system that is both rigid enough to function efficiently and fluid enough to allow evolutionary exploration.

All life on Earth shares almost exactly the same genetic systems, showing that only rare and minor changes have occurred in $\approx 10^{40}$ cell divisions. That is tremendously stable as a system. Nonetheless, it is fairly commonly believed that other genetic systems preceded the modern form. The transition to the modern form required major changes (think of upgrading an old computer from DOS to Windows… or worse, from CP/M to DOS!). It would be unsurprising if the rate was < 1 per $10^{100}$ cell divisions given the stability of our current genetic system – but of course, the previous system might have been super-easy to upgrade.

Modern genetics required more than 1/5 of the age of the universe to evolve intelligence. A genetic system like the one that preceded ours might both be stable over a googol cell divisions and evolve more slowly by a factor of 10, and so run out the clock. Hence some genetic systems may be incapable of ever evolving intelligence.

This is related to a point made by Brandon Carter much earlier, where he pointed out that the timescales of getting life, evolving intelligence and how long biospheres last are independent and could be tremendously different – that life emerged early on Earth may have been a fluke due to the extreme difficulty of also getting intelligence within this narrow window (on all the more likely worlds there are no observers to notice). If there are more difficult transitions, you get an even stronger observer selection effect.

Evolution goes down branches without looking ahead, and we can imagine that it could have an easier time finding inflexible coding systems (“B life”) than our own nice one (“A life”). If the rate of discovering B-life is $\lambda_B$ and the rate of discovering evolution-capable A-life is $\lambda_A$, then the fraction of A-life in the universe is just $\lambda_A/(\lambda_A+\lambda_B)$ – and the rates can differ by many orders of magnitude, producing a life-rich but evolution/intelligence-poor universe. Multi-step models raise the rates to integer exponents, which multiplies the order-of-magnitude differences.
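A quick simulation of this race between coding systems, using hypothetical rates (note that for rare A-life, $\lambda_A/(\lambda_A+\lambda_B)\approx\lambda_A/\lambda_B$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Hypothetical rates: inflexible B-life is stumbled on 1000x more often than A-life
lam_A, lam_B = 1.0, 1000.0

# Time until each coding system is first discovered on a world; the earlier one wins
t_A = rng.exponential(1.0 / lam_A, n)
t_B = rng.exponential(1.0 / lam_B, n)
frac_A = (t_A < t_B).mean()

print(f"simulated fraction of A-life: {frac_A:.5f}")
print(f"analytic lam_A/(lam_A+lam_B): {lam_A / (lam_A + lam_B):.5f}")
```

This is the standard competing-exponentials result: whichever process fires first claims the world, so even a modest ratio between rates fixes the mix of outcomes, and order-of-magnitude rate differences translate directly into order-of-magnitude rarity of A-life.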

So we have good reasons to think there could be a hundred orders of magnitude uncertainty on the intelligence parameter, even without trying to say something about evolution of nervous systems.

# How much can we rule out aliens?

Humanity has not scanned that many stars, so obviously we have checked only a tiny part of the galaxy – and could have missed them even if we had looked at the right spot. Still, we can model how this weak data updates our beliefs (see Supplement II).

The strongest argument against aliens is the Tipler-Hart argument that settling the Milky Way, even while expanding at low speed, takes only a fraction of its age. And once a civilisation is everywhere it is hard for it to go extinct everywhere – it will tend to persist even if local pieces crash. Since we do not seem to be in a galaxy paved over by an alien supercivilisation, we have a very strong argument for a low rate of intelligence emergence. Yes, even if 99% of civilisations stay home, or we are in an alien zoo, you still get a massive update against a really settled galaxy. In our model the probability of less than one civilisation per galaxy went from 52% to 99.6% once the basic settlement argument was included.

The G-hat survey of galaxies, looking for signs of K3 civilisations, did not find any. Again, maybe we missed something, or most civilisations don’t want to re-engineer galaxies; but if we assume about half of them want to and have a 1% chance of succeeding, we get an update from a 52% chance of less than one civilisation per galaxy to 66%.

Using models where we have looked at about 1,000 stars, or where we are confident there is no civilisation within 18 pc, gives a milder update, from 52% to 53% and 57% respectively. These just rule out super-densely inhabited scenarios.
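The mechanics of these updates can be sketched by importance-reweighting prior samples by the probability of our observation "no visibly settled galaxy". The broad prior and the settlement probability `q` below are hypothetical illustrations, not the paper's actual model or numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Hypothetical broad prior over civilisations per galaxy (NOT the paper's model)
N = 10.0 ** rng.uniform(-30, 10, n)

# Observation: the galaxy is not visibly settled. Assume (hypothetically) each
# civilisation independently paves over the galaxy with probability q; then the
# likelihood of seeing nothing is the Poisson zero-count probability.
q = 0.01
likelihood = np.exp(-q * N)

prior_p = (N < 1).mean()                        # P(< 1 civ per galaxy), prior
post_p = np.average(N < 1, weights=likelihood)  # same probability after the update
print(f"P(N < 1 per galaxy): prior {prior_p:.2f} -> posterior {post_p:.2f}")
```

The reweighting crushes the high-$N$ tail (scenarios where someone would almost surely have settled everything) while barely touching the low-$N$ samples, which is why the empty sky pushes the posterior so strongly toward sparsely inhabited galaxies.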

# So what? What is the use of this?

People like to invent explanations for the Fermi paradox that would all have huge implications for humanity if they were true – maybe we are in a cosmic zoo, maybe there are interstellar killing machines out there, maybe singularity is inevitable, maybe we are the first civilisation ever, maybe intelligence is a passing stage, maybe the aliens are sleeping… But if you are serious about thinking about the future of humanity you want to be rigorous about this. This paper shows that the current uncertainties actually force us to be very humble about these possible explanations – we can’t draw strong conclusions from the empty sky yet.

But uncertainty can be reduced! We can learn more, and that will change our knowledge.

From a SETI perspective, this doesn’t say that SETI is unimportant or doomed to failure, but rather that if we ever see even the slightest hint of intelligence out there many parameters will move strongly. Including the all-important $L$.

From an astrobiology perspective, we hope we have pointed at some annoyingly uncertain factors, and that this paper can get more people to work on reducing the uncertainty. Most astrobiologists we have talked with are aware of the uncertainty but do not see the weird knock-on effects it has. Especially promising seems to be figuring out how we got our fairly good coding system and what the competing options are.

Even while we remain unsure, we can update our plans in light of this. For example, in my tech report about settling the universe fast I pointed out that if one is uncertain about how much competition there might be for the universe, one can use one’s probability estimates to decide on the range to aim for.

## Uncertainty matters

Perhaps the most useful insight is that uncertainty matters and we should learn to embrace it carefully rather than assume that apparently specific numbers are better.

> Perhaps never in the history of science has an equation been devised yielding values differing by eight orders of magnitude. … each scientist seems to bring his own prejudices and assumptions to the problem.
>
> — *History of Astronomy: An Encyclopedia*, ed. John Lankford, s.v. “SETI,” by Steven J. Dick, p. 458.

When Dick complained about the wide range of results from the Drake equation, he likely felt it was too uncertain to give any useful result. But in this case an 8-order-of-magnitude spread is just a sign of downplaying our uncertainty and overestimating our knowledge! Things get much better when we look honestly at what we know and don’t know, and figure out the implications of both.

Jill Tarter said the Drake equation was “a wonderful way to organize our ignorance”, which we think is closer to the truth than demanding a single number as an answer.

# Ah, but I already knew this!

We have encountered claims that “nobody” is really naive about using the Drake equation – or at least not any “real” SETI and astrobiology people. Strangely enough, people never seem to make this common knowledge visible, and a fair number of papers make very confident statements about “minimum” values for life probabilities that we think are far, far above the actual scientific support.

Sometimes we need to point out the obvious explicitly.

[Edit 2018-06-30: added the GIGO section]