A small step for machinekind?

The moon, seen from the ground.

(Originally published at https://qz.com/1666726/we-should-stop-sending-humans-into-space/ with a title that doesn’t quite fit my claim)

Fifty years ago humans left their footprints on the moon. We have been leaving footprints on Earth for millions of years, but the moon is so far the only other body with human footprints.

Yet there are track marks on Mars. There is a deliberately made crater on the comet 9P/Tempel. There are landers on the Moon, Venus, Mars, Titan, the asteroid Eros, and the comet Churyumov–Gerasimenko. Not to mention a number of probes of varying levels of function across and outside the solar system.

As people say, Mars is the only planet in the solar system solely inhabited by robots. In 50 years, will there be a human on Mars… or just even more robots?

What is it about space?

There are of course entirely normal reasons to go to space – communication satellites, GPS, espionage, ICBMs – and massive scientific reasons. But were they the only reasons to explore space, it would be about as glorious as marine exploration: worth spending taxpayer and private money on, but hardly to the extent we have done it.

Space is inconceivably harsher than any terrestrial environment, but also fundamentally different. It is vast beyond imagination. It contains things that have no counterpart on Earth. In many ways it has replaced our supernatural realms and gods with a futuristic realm of exotic planets and – maybe – extra-terrestrial life and intelligence. It is fundamentally The Future.

Again, there are good objective reasons for this focus. In the long run we are not going to survive as a species unless we are distributed across different biospheres or able to leave this one when the sun turns into a red giant.

Is space a suitable place for a squishy species?

Humans are adapted to a narrow range of conditions. A bit too much or too little pressure, oxygen, water, temperature, radiation or acceleration and we die. In fact, most of the Earth’s surface is largely uninhabitable unless we surround ourselves with protective clothing and technology. In going to space we not only need to bring a controlled environment with us, hermit-crab style, but we also need to function in conditions we have not evolved for at all. All our ancestors lived with gravity. All our ancestors had reflexes and intuitions that were adequate for Earth’s environment. But this means that our reflexes and intuitions are likely to be wrong in deadly ways in space without extensive retraining.

Meanwhile, robots can be designed not to require life support, to have reactions suited to the space environment, and to avoid the whole mortality thing. Current robotic explorers are rare and hence extremely expensive, motivating endless pre-mission modelling and careful actions. But robotics is becoming cheaper and more adaptable, and if space access becomes cheaper we should expect a more ruthless use of robots. Machine learning allows robots to learn from their experiences, and if a body breaks down or is lost, another copy of the latest robot software can be downloaded.

Our relationship with robots and artificial intelligence is complicated. Since time immemorial we have imagined making artificial servants or artificial minds, yet such ideas invariably become mirrors for ourselves. When we consider the possibility we begin to think about humanity’s place in the world (if man was made in God’s image, whose image is the robot?), our own failings (endless stories about unwise creators and rebellious machines), and mysteries about what we are (what is intelligence, consciousness, emotion, dignity…?). In trying to build them we have learned that tasks that are simple for a 5-year-old can be very hard for machines while tasks that stump PhDs can be done easily, and that our concepts of ethics may be in for a very practical stress test in the near future.

In space robots have so far not been seen as very threatening. Few astronauts have worried about their job security. Instead people seem to adopt their favourite space probes and rovers, becoming sentimental about their fate.

(Full disclosure: I did not weep for the end of Opportunity, but I did shed a tear for Cassini)

What kind of exploration do we wish for?

So, should we leave space to tele-operated or autonomous robots reporting back their findings for our thrills and education while patiently building useful installations for our benefit?

My thesis is: we want to explore space. Space is unsuitable for humans. Robots and telepresence may be better for exploration. Yet what we want is not just exploration in the thin sense of knowing stuff. We want exploration in the thick sense of being there.

There is a reason MarsOne got volunteers despite planning a one-way trip to Mars. There is a reason we keep astronauts at fabulous expense on the ISS doing experiments (besides the fact that their medical state is, in a sense, the most valuable experiment): getting glimpses of our planet from above and touching the fringe of the Overview Effect is actually healthy for our culture.

Were we only interested in the utilitarian and scientific use of space we would be happy to automate it. The value of having people present is deeper: it is aspirational, not just in the sense that maybe one day we or our grandchildren could go there, but in the sense that at least some humans are present in the higher spheres. It represents the “giant leap for mankind” Neil Armstrong spoke of.

A sceptic may wonder if it is worth it. But humanity seldom performs grand projects based on a practical utility calculation. Maybe it should. But the benefits of building giant telescopes, particle accelerators, the early Internet, or cathedrals were never objective and clear. A saner species might not perform these projects and would also abstain from countless vanity projects, white elephants and overinvestments, saving many resources for other useful things… yet this species would likely never have discovered much astronomy or physics, or learned the peculiarities of masonry and of managing internetworks. It might well have far slower technological advancement, becoming poorer in the long run despite the reasonableness of its actions.

This is why so many are unenthusiastic about robotic exploration. We merely send tools when we want to send heroes.

Maybe future telepresence will be so excellent that we can feel and smell the Martian environment through our robots, but as evidenced by the queues in front of the Mona Lisa or towards the top of Mount Everest we put a premium on authenticity. Not just because it is rare and expensive but because we often think it is worthwhile.

As artificial intelligence advances those tools may become more like us, but it will always be a hard sell to argue that they represent us in the same way a human would. I can imagine a future AI having an awareness of its environment just as vivid as ours, or better, and in a sense being a better explorer. But to many people this would not be a human exploring space, just another (human-made) species exploring space: it is not us. I think this might be a mistake if the AI actually is a proper continuation of our species in terms of culture, perception, and values, but I have no doubt it will be a hard sell.

What kind of settlement do we wish for?

We may also want to go to space to settle it. If we could get it prepared by automation, that is great.

While exploration is about establishing a human presence, relating to an environment from the peculiar human perspective of the world and maybe having the perspective changed, settlement is about making a home. By its nature it involves changing the environment into a human environment.

A common fear in science fiction and environmental literature is that humans would transform everything into more of the same: a suburbia among the stars. Against this another vision is contrasted: to adapt and allow the alien to change us to a suitable extent. Utopian visions of living in space not only deal with the instrumental freedom of a post-scarcity environment but the hope that new forms of culture can thrive in the radically different environment.

Some fear/hope we may have to become cyborgs to do it. Again, there is the issue of who “we” are. Are we talking about us personally, humanity-as-we-know-it, transhumanity, or the extension of humanity in the form of our robotic mind children? We might have some profound disagreements about this. But to adapt to space we will likely have to adapt more than ever before as a species, and that will include technological changes to our lifestyle, bodies and minds that will call into question who we are on an even more intimate level than the mirror of robotics.

A small step

If a time traveller told me that in 50 years’ time only robots had visited the moon, I would be disappointed. It might be the rational thing to do, but it would show a lack of drive on the part of our species that would be frankly worrying – we need to get out of our planetary cradle.

If the time traveller told me that in 50 years’ time humans but no robots had visited the moon, I would also be disappointed. That would imply either that we fail to develop automation into something useful – a vast loss of technological potential – or that we make space all about showing off rather than a place we are serious about learning from, visiting and inhabiting.

Obligatory Covid-19 blogging

SARS-CoV-2 spike ectodomain structure (open state) https://3dprint.nih.gov/discover/3DPX-013160
Over at Practical Ethics I have blogged a bit:

The Unilateralist Curse and Covid-19, or Why You Should Stay Home: why we are in a unilateralist curse situation in regards to staying home, making it rational to stay home even when it seems irrational.

Taleb and Norman had a short letter, Ethics of Precaution: Individual and Systemic Risk, making a similar point, noting that recognizing the situation type and taking contagion dynamics into account is a reason to be more cautious. It differs from our version in the effect of individual action: we had a single actor causing the full consequences, while the letter has an exponential scale-up. There are also far more actors – everyone rather than just epistemic peers – and incentives that are not aligned, since actors do not bear the full costs of their actions. The problem is finding strategies robust to stupid, selfish actors. Institutional channeling of collective rationality and coordination is likely the only way to achieve robustness here.

Never again – will we make Covid-19 a warning shot or a dud? deals with the fact that we are surprisingly good at forgetting harsh lessons (1918, 1962, Y2K, 2006…), giving us a moral duty to try to ensure appropriate collective memory of what needs to be recalled.

This is why Our World In Data, the Oxford COVID-19 Government Response Tracker and IMF’s policy responses to Covid-19 are so important. The disjointed international responses act as natural experiments that will tell us important things about best practices and the strengths/weaknesses of various strategies.

When the inverse square stops working

In physics inverse square forces are among the most reliable things. You can trust that electric and gravitational fields from monopole charges decay like 1/r^2. Sure, dipoles and multipoles may add higher order terms, and extended conductors like wires and planes produce other behaviour. But most of us think we can trust the 1/r^2 behaviour for spherical objects.

I was recently surprised to learn that this is not at all true, when a question at the Astronomy Stack Exchange asked whether gravity changes near the surface of dense objects.

Electromagnetism does not quite obey the inverse square law

The cause was this paper by John Lekner, which showed that there can be attraction between conducting spheres even when they have the same charge! (Popular summary in Nature by Philip Ball.) The “trick” here is that when the charged spheres approach each other, the charges on the surfaces redistribute themselves, which leads to a polarization. Near the other sphere like charges are pushed away, and if one sphere has a different radius from the other the induced “image charge” can be of opposite sign, leading to a net attraction.

Charge distribution on two spherical conductors with the same net charge.

The formulas in the paper are fairly non-intuitive, so I decided to make an approximate numeric model. I put 500 charges on two spheres (radii 1.0 and 2.0), calculated the mutual electrostatic repulsion/attraction, and moved the charges along the surfaces accordingly. I iterated until the configuration stabilized, then calculated the overall force on one of the spheres.

Force between two equally charged (blue) and two oppositely charged (red) spheres, and force times squared distance.

The result is indeed that the 1/r^2 law fails as the spheres approach each other. The force times squared distance is constant until they get within a few radii; then the equally charged spheres begin to experience less repulsion, and the oppositely charged spheres more attraction, than expected. My numerical method is too sloppy to do a really good job of modelling the near-touching behaviour, but it is enough to show that the inverse square law fails for conductors that are close enough.
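For readers who want to experiment, here is a minimal sketch of this kind of relaxation model – not my original script, and the charge count, radii, separation and step size are arbitrary illustrative choices:

% Minimal relaxation sketch: N point charges constrained to each of two
% conducting spheres are nudged along the tangential component of the total
% electric field until the configuration settles; then the net force between
% the spheres is summed. All parameters are illustrative only.
N = 100;               % charges per sphere (fewer than in the post, for speed)
R = [1.0 2.0];         % sphere radii
q = [1 1];             % total charge per sphere (set q(2)=-1 for the opposite case)
D = 4.0;               % centre-to-centre distance
C = [0 0 0; D 0 0];    % sphere centres

P = cell(2,1);                     % random initial positions on each surface
for s = 1:2
    v = randn(N,3); v = v./vecnorm(v,2,2);
    P{s} = C(s,:) + R(s)*v;
end
Q = [repmat(q(1)/N,N,1); repmat(q(2)/N,N,1)];   % charge per particle

for iter = 1:1000
    X = [P{1}; P{2}];
    E = zeros(size(X));            % field at each charge from all the others
    for i = 1:size(X,1)
        d = X(i,:) - X; r = vecnorm(d,2,2); r(i) = inf;
        E(i,:) = sum(Q.*d./r.^3, 1);
    end
    for s = 1:2
        idx = (1:N) + (s-1)*N;
        nrm = (P{s} - C(s,:))/R(s);                  % outward unit normals
        Et = E(idx,:) - sum(E(idx,:).*nrm,2).*nrm;   % tangential field component
        P{s} = P{s} + 0.5*(q(s)/N)*Et;               % small pseudo-time step; tune for convergence
        u = P{s} - C(s,:);
        P{s} = C(s,:) + R(s)*u./vecnorm(u,2,2);      % re-project onto the sphere
    end
end

F = [0 0 0];                       % net force on sphere 1 from sphere 2's charges
for i = 1:N
    d = P{1}(i,:) - P{2}; r = vecnorm(d,2,2);
    F = F + sum((q(1)/N)*(q(2)/N)*d./r.^3, 1);
end
disp(F)

Sweeping the centre distance D and recording the x-component of F should reproduce the qualitative deviation from 1/D^2 described above.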

Gravity doesn’t obey the inverse square law either

My answer to the question was along the same lines: if two spherical bodies get close to each other they are going to deform, and this will also affect the force between them. In this case it is not charges moving on the surface, but rather gravitational and tidal distortion turning them into ellipsoids. Strictly speaking, they will turn into more general teardrop shapes, but we can use the ellipsoid as an approximation. If they have fixed centres of mass they will be prolate ellipsoids, while if they are orbiting each other they would be general three-axis ellipsoids.

Calculating the gravitational field of an ellipsoid has been done and has a somewhat elegant answer that unfortunately needs to be expressed in terms of special functions. The gravitational potential in the system is just the sum of the potentials from both ellipsoids. The equilibrium shapes would correspond to ellipsoids with the same potential along their entire surface; maybe there is an explicit solution, but it looks likely to be an algebraic mess of special functions.

I did a numeric exploration instead. To find the shape I started with spheres and adjusted the semi-major axes (while preserving volume) so that the potential at the poles came closer to the potential on the rest of the surface. After a few iterations this gives a self-consistent shape. Then I calculated the force (the derivative of the potential) due to this shape on the other mass.

Semi-major axis, force, and force times distance squared for two self-gravitating unit volume ellipsoids at different center-of-mass distances.

The result is indeed that the force increases faster than 1/r^2 as the bodies approach each other, since they elongate and eventually merge (a bit before this they will deviate from my ellipsoidal assumption).

This was the Newtonian case. General relativity is even messier. In intense gravitational fields space-time is curved and expanded, making even the meaning of the distance in the inverse square law problematic. For black holes the Paczyński–Wiita potential U_{PW}(r)=-GM/(r-R_S) (where R_S is the Schwarzschild radius) is a useful approximation that deviates from the Newtonian U(r)=-GM/r. It makes the force increase faster than in the classical case as we approach r=R_S.
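(A quick check: differentiating the two potentials gives the radial forces F(r)=-dU/dr=-GM/r^2 versus F_{PW}(r)=-GM/(r-R_S)^2, so their ratio r^2/(r-R_S)^2 grows without bound as r \rightarrow R_S.)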

Normally we assume that charges and masses stay where they are supposed to be, just as we prefer to reason as if objects are perfectly rigid or well described by point masses. In many situations this stops being true and then the effective forces can shift as the objects and their charges shift around.

What is the smallest positive integer that will never be used?

A great question from twitter:

This is a bit like the “what is the smallest uninteresting number?” paradox, but not paradoxical: we do not have to name it (and hence make it thought about/interesting) to reason about it.

I will first give a somewhat rough probabilistic bound, and then a much easier argument for the scale of this number. TL;DR: the number is likely smaller than 10^{173}.

Probabilistic bound

If humanity uses k numbers in total, with each number x used with frequency N(x), then the normalized frequencies approach some probability distribution p(x). To simplify things we assume p(x) is a decreasing function of x; this is not strictly true (see below) but likely good enough.

If we denote the cumulative distribution function P(x)=\Pr[X<x] we can use the k:th order statistic to calculate the distribution of the maximum of the k numbers: F_{(k)}(x) = [P(x)]^{k}. We are interested in the point where it becomes likely that the number x has not come up despite the k trials, which is somewhere above the median of the maximum: F_{(k)}(x^*)=1/2.

What shape does p(x) have? (Dorogovtsev, Mendes, Oliveira 2005) investigated numbers online and found a complex, non-monotonic shape. Obviously dates close to the present are overrepresented, as are prices (ending in .99 or .95), postal codes and other patterns. Numbers in exponential notation stretch very far up. But mentions of explicit numbers generally tend to follow N(x)\sim 1/\sqrt{x}, a very flat power-law.

So if we have k uses we should expect roughly x<k^2 since much larger x are unlikely to occur even once in the sample. We can hence normalize to get p(x)=\frac{1}{2(k-1)}\frac{1}{\sqrt{x}}. This gives us P(x)=(\sqrt{x}-1)/(k-1), and hence F_{(k)}(x)=[(\sqrt{x}-1)/(k-1)]^k. The median of the maximum becomes x^* = ((k-1)2^{-1/k}+1)^2 \approx k^2 - 2k \ln(2). We are hence not entirely bumping into the k^2 ceiling, but we are close – a more careful argument is needed to take care of this.

So, how large is k today? Dorogovtsev et al. had on the order of k=10^{12}, but that was just searchable WWW pages back in 2005. Even those pages contain many numbers that no human ever considered, since they are auto-generated. Still, guessing x^* \approx 10^{24} is likely not too crazy. By this argument, there are likely 24-digit numbers that nobody has ever considered.

Consider a number…

Another approach is to assume each human considers a number about once a minute throughout their lifetime, which we happily take to be about a century (clearly an overestimate given childhood, sleep, innumeracy etc., but we are mostly interested in orders of magnitude and an upper bound anyway). This gives a personal k across a life of about 10^{8}. There have been about 100 billion people, so humanity has considered at most about 10^{19} numbers. Using the formula above, this gives an estimate of x^* \approx 10^{38}.

But that assumes “random” numbers, and is a very loose upper bound, merely describing a “typical” small unconsidered number. Were we to systematically think through the numbers from 1 and onward we would have the much lower x^* \approx 10^{19}. Just 19 digits!

One can refine this a bit: if we have time T and generate new numbers at a rate r per second, then k=rT and we will at most get k numbers. Hence the smallest number never considered has to be at most k+1.

Seth Lloyd estimated that the observable universe cannot have performed more than 10^{120} operations on 10^{90} bits. If each of those operations was a consideration of a number we get a bound on the first unconsidered number as <10^{120}.

This can be used to consider the future too. Computation of our kind can continue until proton decay in \sim 10^{36} years or so, giving a bound of 10^{173} if we use Lloyd’s formula. That one uses the entire observable universe; if we instead consider our own future light cone the number is going to be much smaller.
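(One way to see where the 10^{173} comes from: Lloyd’s bound is often summarized as roughly (t/t_P)^2 elementary operations, where t_P \approx 5\times 10^{-44} s is the Planck time. At the current age of the universe this gives roughly the 10^{120} above; at t \approx 10^{36} years \approx 3\times 10^{43} s it gives about (6\times 10^{86})^2 \approx 10^{173}.)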

But the conclusion is clear: if you write a 173 digit number with no particular pattern of digits (a bit more than two standard lines of typing), it is very likely that this number would never have shown up across the history of the entire universe except for your action. And there is a smaller number that nobody – no human, no alien, no posthuman galaxy brain in the far future – will ever consider.

Newtonmas fractals: rose of gravity

Continuing my intermittent Newtonmas fractal tradition (2014, 2016, 2018), today I play around with a very suitable fractal based on gravity.

The problem

On Physics StackExchange NiveaNutella asked a simple yet tricky to answer question:

If we have two unmoving equal point masses in the plane (let’s say at (\pm 1,0)) and release particles from different locations they will swing around the masses in some trajectory. If we colour each point by the mass it approaches closest (or even collides with) we get a basin of attraction for each mass. Can one prove the boundary is a straight line?

User Kasper showed that one can reframe the problem in terms of elliptic coordinates and that this implies a straight boundary, while User Lineage showed it more simply using the second constant of motion. I have the feeling that there ought to be an even simpler argument. Still, Kasper’s solution shows that the generic trajectory will quasiperiodically fill a region and tend to come arbitrarily close to one of the masses.

The fractal

In any case, here is a plot of the basins of attraction shaded by the time until getting within a small radius r_{trap} around the masses. Dark regions take a long time to approach any of the masses; white regions do not converge within the given cut-off time.

Gravity fractal for N=2.

The boundary is a straight line, and surrounding the simple regions where orbits fall nearly straight into the nearest mass are the wilder regions where orbits first rock back and forth across the x-axis before settling into ellipses around the masses.

The case for 5 evenly spaced masses for r_{trap}=0.1 and 0.01 (assuming unit masses at unit distance from origin and G=1) is somewhat prettier.

Gravity fractal for N=5, trap radius = 0.1.
Gravity fractal for N=5, trap radius = 0.01.

As r_{trap}\rightarrow 0 the basins approach ellipses around their central mass, corresponding to orbits that loop around them in elliptic orbits that eventually get close enough to count as a hit. The onion-like shading is due to different numbers of orbits before this happens. Each basin also has a tail or stem, corresponding to plunging orbits coming in from afar and hitting the mass head-on. As the trap condition is made stricter they become thinner and thinner, yet form an ever more intricate chaotic web outside the central region. Due to computational limitations (read: only a laptop available) these pictures are of relatively modest integration times.

I cannot claim credit for this fractal, as NiveaNutella already plotted it. But it still fascinates me.

Wada basins and mille-feuille collision manifolds

These patterns are somewhat reminiscent of the classic Newton’s-method root-finding fractals: several basins of attraction with a fractal border, where the boundary between any pair of basins also contains interleaved tiny pieces of basins that are not members of the pair.

However, this dynamics is continuous rather than discrete. The plane is a 2D section through a 4D phase space, where starting points at zero velocity accelerate so that they bob up and down/ana and kata along the velocity axes. This also leads to a neat property of the basins of attraction: each is an arc-connected set, since any two member points start trajectories that end up in the same small ball around the attractor mass, where they can be joined; one can hence construct a continuous path in (x,y,\dot{x},\dot{y})-space connecting them. There are hence just N basins of attraction, plus a set of unstable separatrix points that never approach the masses. Some of these border points are just invariant (like the origin in the case of the evenly distributed masses), others correspond to unstable orbits.

Each mass is surrounded by a set of trajectories hitting it exactly, which we can parametrize by the angle they make and the speed they have inwards when they pass some circle around the mass point. They hence form a 3D manifold \theta \times v \times t where t\in (0,\infty) counts the time until collision (i.e. backwards). These collision manifolds must extend through the basin of attraction, approaching the border in ever more convoluted ways as t approaches \infty. Each border point has a neighbourhood where there are infinitely many trajectories directly hitting one of the masses. These 3D sheets get stacked like an infinitely dense mille-feuille cake in the 4D phase space. And typically these sheets are interleaved with the sheets of the other attractors. The end result is very much like the Lakes of Wada. Proving the boundary actually has the Wada property is tricky, although new methods look promising.

The magnetic pendulum

This fractal is similar to one I made back in 1990 inspired by the dynamics of the magnetic decision-making desk toy, a pendulum oscillating above a number of magnets. Eventually it settles over one. The basic dynamics is fairly similar (see Zhampres’ beautiful images or this great treatment). The difference is that the gravity fractal has no dissipation: in principle orbits can continue forever (but I end the integration when they get close to the masses or after a timeout), and in the magnetic fractal the force was bounded, a K/(r^2 + c) force rather than G/r^2.

That simulation was part of my epic third year project in the gymnasium. The topic was “Chaos and self-organisation”, and I spent a lot of time reading the dynamical systems literature, running computer simulations, struggling with WordPerfect’s equation editor and producing a manuscript of about 150 pages that required careful photocopying by hand to get the pasted diagrams on separate pieces of paper to show up right. My teacher eventually sat down with me and went through my introduction and had me explain Poincaré sections. Then he promptly passed me. That was likely for the best for both of us.

Appendix: Matlab code

showPlot=0; % plot individual trajectories
randMass = 0; % place masses randomly rather than in circle

RTRAP=0.0001; % size of trap region
tmax=60; % max timesteps to run
S=1000; % resolution

x=linspace(-2,2,S);
y=linspace(-2,2,S);
[X,Y]=meshgrid(x,y);

N=5;
theta=(0:(N-1))*pi*2/N;
PX=cos(theta); PY=sin(theta);
if (randMass==1)
    s = rng(3);
    PX=randn(N,1); PY=randn(N,1);
end

clf

hit=X*0; 
hitN = X*0; % attractor basin
hitT = X*0; % time until hit
closest = X*0+100; 
closestN=closest; % closest mass to trajectory

tic; % measure time
for a=1:size(X,1)
    disp(a)
    for b=1:size(X,2)
        [t,u,te,ye,ie]=ode45(@(t,y) forceLaw(t,y,N,PX,PY), [0 tmax], [X(a,b) 0 Y(a,b) 0],odeset('Events',@(t,y) finishFun(t,y,N,PX,PY,RTRAP^2)));

        if (showPlot==1)
            plot(u(:,1),u(:,3),'-b')
            hold on
        end

        if (~isempty(te))
            hit(a,b)=1;
            hitT(a,b)=te;

            mind2=100^2;
            for k=1:N
                dx=ye(1)-PX(k);
                dy=ye(3)-PY(k);
                d2=(dx.^2+dy.^2);
                if (d2<mind2) mind2=d2; hitN(a,b)=k; end
            end
        end

        for k=1:N
            dx=u(:,1)-PX(k);
            dy=u(:,3)-PY(k);
            d2=min(dx.^2+dy.^2);
            closest(a,b)=min(closest(a,b),sqrt(d2));
            if (closest(a,b)==sqrt(d2)) closestN(a,b)=k; end
        end
    end

    if (showPlot==1)
        drawnow
        pause
    end
end
elapsedTime = toc

if (showPlot==0)
    % Make colorful plot
    co=hsv(N);
    mag=sqrt(hitT);
    mag=1-(mag-min(mag(:)))/(max(mag(:))-min(mag(:)));
    im=zeros(S,S,3);
    im(:,:,1)=interp1(1:N,co(:,1),closestN).*mag;
    im(:,:,2)=interp1(1:N,co(:,2),closestN).*mag;
    im(:,:,3)=interp1(1:N,co(:,3),closestN).*mag;
    image(im)
end

% Gravity 
function dudt = forceLaw(t,u,N,PX,PY)
    %dudt = zeros(4,1);
    dudt=u;
    dudt(1) = u(2);
    dudt(2) = 0;
    dudt(3) = u(4);
    dudt(4) = 0;

    dx=u(1)-PX;
    dy=u(3)-PY;
    d=(dx.^2+dy.^2).^-1.5;
    dudt(2)=dudt(2)-sum(dx.*d);
    dudt(4)=dudt(4)-sum(dy.*d);

    % for k=1:N
    %     dx=u(1)-PX(k);
    %     dy=u(3)-PY(k);
    %     d=(dx.^2+dy.^2).^-1.5;
    %     dudt(2)=dudt(2)-dx.*d;
    %     dudt(4)=dudt(4)-dy.*d;
    % end
end

% Are we close enough to one of the masses?
function [value,isterminal,direction] = finishFun(t,u,N,PX,PY,r2)
    value=1000;
    for k=1:N
        dx=u(1)-PX(k);
        dy=u(3)-PY(k);
        d2=(dx.^2+dy.^2);
        value=min(value, d2-r2);
    end
    isterminal=1;
    direction=0;
end

Telescoping

Wednesday August 10 1960

Robert lit his pipe while William meticulously set the coordinates from the computer printout. “Want to bet?”

William did not look up from fine-tuning the dials and re-checking the flickering oscilloscope screen. “Five dollars that we get something.”

“’Something’ is not going to be enough to make Edward or the General happy. They want the future on film.”

“If we get more delays we can always just step out with a camera. We will be in the future already.”

“Yeah, and fired.”

“I doubt it. This is just a blue-sky project Ed had to try because John and Richard’s hare-brained one-electron idea caught the eye of the General. It will be like the nuclear mothball again. There, done. You can start it.”

Robert put the pipe in the ashtray and walked over to the Contraption controls. He noted down the time and settings in the log, then pressed the button. “Here we go.” The Contraption hummed for a second, the cameras clicked. “OK, you bet we got something. You develop the film.”

 

“We got something!” William was exuberant enough to have forgotten the five dollars. He put down the still moist prints on Robert’s desk. Four black squares. He thrust a magnifying glass into Robert’s hands and pointed at a corner. “Recognize it?”

It took Robert a few seconds to figure out what he was looking at. First he thought there was nothing there but noise, then eight barely visible dots became a familiar shape: Orion. He was seeing a night sky. In a photo taken inside a basement lab. During the day.

“Well… that is really something.”

 

Tuesday August 16 1960

The next attempt was far more meticulous. William had copied the settings from the previous attempt, changed them slightly in the hope of a different angle, and had Raymond re-check it all on the computer despite the cost. This time they developed the film together. As the seal of the United States of America began to develop on the film they both simultaneously turned to each other.

“Am I losing my mind?”

“That would make two of us. Look, there is text there. Some kind of plaque…”

The letters gradually filled in. “THIS PLAQUE COMMEMORATES THE FIRST SUCCESSFUL TRANSCHRONOLOGICAL OBSERVATION August 16 1960 to July 12 2054.” Below was more blurry text.

“Darn, the date is way off…”

“What do you mean? That is today’s date.”

“The other one. Theory said it should be a month in the future.”

“Idiot! We just got a message from the goddamn future! They put a plaque. In space. For us.”

 

Wednesday 14 December 1960

The General was beaming. “Gentlemen, you have done your country a great service. The geographic coordinates on Plaque #2 contained excellent intel. I am not at liberty to say what we found over there in Russia, but this project has already paid off far beyond expectation. You are going to get medals for this.” He paused and added in a lower voice: “I am relieved. I can now go to my grave knowing that the United States is still kicking communist butt 90 years in the future.”

One of the general’s aides later asked Robert: “There is something I do not understand, sir. How did the people in the future know where to put the plaques?”

Robert smiled. “That bothered us too for a while. Then we realized that it was the paperwork that told them. You guys have forced us to document everything. Just saying, it is a huge bother. But that also meant that every setting is written down and archived. Future people with the right clearances can just look up where we looked.”

“And then go into space and place a plaque?”

“Yes. Future America is clearly spacefaring. The most recent plaques also contain coordinate settings for the next one, in addition to the intel.”

He did not mention the mishap. When they entered the coordinates for Plaque #4 given on Plaque #3, William had made a mistake – understandable, since the photo was blurry – and they photographed the wrong spacetime spot. Except that Plaque #4 was there. It took them a while to realize that what mattered was what settings they entered into the archive, not what the plaque said.

“They knew where we would look.” Robert had said with wonder.

“Why did they put in different coordinates on #3 then? We could just set random coordinates and they will put a plaque there.”

“Have a heart. I assume that would force them to run around the entire solar system putting plaques in place. Theory says the given coordinates are roughly in Earth’s vicinity – more convenient for our hard-working future astronauts.”

“You know, we should try putting the wrong settings into the archive.”

“You do that if the next plaque is a dud.”

 

Friday 20 January 1961

Still, something about the pattern bothered Robert. The plaques contained useful information, including how to make a better camera and electronics. The General was delighted, obviously thinking of spy satellites not dependent on film canisters. But there was not much information about the world: if he had been sending information back to 1866, wouldn’t he have included some modern news clippings, maybe a warning about stopping that Marx guy?

Suppose things did not go well in the future. The successors of that shoe-banging Khrushchev somehow won and instituted their global dictatorship. They would pore over the remaining research of the formerly free world, having their minions squeeze every creative idea out of the archives. Including the coordinates for the project. Then they could easily fake messages from a future America to fool his present, maybe even laying the groundwork for their success…

William was surprisingly tricky to convince. Robert had assumed he would be willing to help with the scheme just because it was against the rules, but he had been at least partially taken in by the breath-taking glory of the project and the prospect of his own future career. Still, William was William and could not resist a technical challenge. Setting up an illicit calculation on the computer disguised as an abortive run with a faulty set of punch cards was just his kind of thing. He had always loved cloak-and-dagger stuff. Robert made use of the planned switch to the new cameras to make the disappearance of one roll of film easy to overlook. The security guards knew both of them worked at ungodly hours.

“Want to bet?” William asked.

“Bet what? That we will see a hammer and sickle across the globe?”

“Something simpler: that there will be a plaque saying ‘I see you peeping!’.”

Robert shivered. “No thanks. I just want to be reassured.”

“It is a shame we can’t get better numerical resolution; if we are lucky we will just see Earth. Imagine if we could get enough decimal places to put the viewport above Washington DC.”

The photo was beautiful. Black space, and slightly off-centre there was a blue and white marble. Robert realized that they were the first people ever to see the entire planet from this distance. Maybe in a decade or so, a man on the moon would actually see it like that. But the planet looked fine. Were there maybe glints of something in orbit?

“Glad I did not make the bet with you. No plaque.”

“The operational security of the future leaves a bit to be desired.”

“…that is actually a good answer.”

“What?”

“Imagine you are running Future America, and have a way of telling the past about important things. Like whatever was in Russia, or whatever is in those encrypted sequences on Plaque #9. Great. But Past America can peek at you, and they don’t have all the counterintelligence gadgets and tricks you do. So if they peek at something sensitive – say the future plans for a super H-bomb – then the Past Commies might steal it from you.”

“So the plaques are only giving us what we need, or is safe if there are spies in the project.”

“Future America might even do a mole-hunt this way… But more importantly, you would not want Past America to watch you too freely since that might leak information to not just our adversaries or the adversaries of Future America, but maybe mid-future adversaries too.”

“You are reading too many spy novels.”

“Maybe. But I think we should not try peeking too much. Even if we know we are trustworthy, I have no doubt there are some sticklers in the project – now or in the future – who are paranoid.”

“More paranoid than us? Impossible. But yes.”

With regret Robert burned the photo later that night.

 

February 1962

As the project continued its momentum snowballed and it became ever harder to survey. Manpower was added. Other useful information was sent back – theory, technology, economics, forecasts. All benign. More and more was encrypted. Robert surmised that somebody simply put the encryption keys in the archive and let the future send things back securely to the right recipients.

His own job was increasingly to run the work on building a more efficient “Conduit”. The Contraption would retire in favour of an all-electronic version, all integrated circuits and rapid information flow. It would remove the need for future astronauts to precisely place plaques around the solar system: the future could send information as easily as using ComLogNet teletype terminals.

William was enthusiastically helping the engineers implement the new devices. He seemed almost giddy with energy as new tricks arrived weekly and wonders emerged from the workshop. A better camera? Hah, the new computers were lightyears ahead of anything anybody else had.

So why did Robert feel like he was being fooled?

 

Wednesday 28 February 1962

In a way this was a farewell to the Contraption around which his life had circulated the past few years: tomorrow the Conduit would take over upstairs.

Robert quietly entered the coordinates into the controls. This time he had done most of the work himself: he could run jobs on the new mainframe and the improved algorithms Theory had worked out made a huge difference.

It was also perhaps his last chance to actually do things himself. He had found himself increasingly insulated as a manager – encapsulated by subordinates, regulations, and schedules. The last time he had held a soldering iron was months ago. He relished the muggy red warmth of the darkroom as he developed the photos.

The angles were tilted, but the photos were more unexpected than he had anticipated. One showed what he thought was in the DC region but the whole area was an empty swampland dotted with overgrown ruins. New York was shrouded in a thunderstorm, but he could make out glowing skyscrapers miles high shedding von Kármán vortices in the hurricane strength winds. One photo showed a gigantic structure near the horizon that must have been a hundred kilometres tall, surmounted by an aurora. This was not a communist utopia. Nor was it the United States in any shape or form. It was not a radioactive wasteland – he was pretty sure at least one photo showed some kind of working rail line. This was an alien world.

When William put his hand on his shoulder Robert nearly screamed.

“Anything interesting?”

Wordlessly he showed the photos to William, who nodded. “Thought so.”

“What do you mean?”

“When do you think this project will end?”

Robert gave it some thought. “I assume it will run as long as it is useful.”

“And then what? It is not like we would pack up the Conduit and put it all in archival storage.”

“Of course not. It is tremendously useful.”

“Future America still has the project. They are no doubt getting intel from further down the line. From Future Future America.”

Robert saw it. A telescoping series of Conduits shipping choice information from further into the future to the present. Presents. Some of which would be sending it further back. And at the futuremost end of the series…

“I read a book where they discussed progress, and the author suggested that all of history is speeding up towards some super-future. The Contraption and Conduit allows the super-future to come here.”

“It does not look like there are any people in the super-future.”

“We have been evolving for millions of years, slowly. What if we could just cut to the chase?”

“Humanity ending up like that?” He gestured towards Thunder New York.

“I think that is all computers. Maybe electronic brains are the inhabitants of the future.”

“We must stop it! This is worse than commies. Russians are at least human. We must prevent the Conduit…”

William smiled broadly. “That won’t happen. If you blew up the Conduit, don’t you think there would be a report? A report archived for the future? And if you were Future America, you would probably send back an encrypted message addressed to the right person saying ‘Talk Robert out of doing something stupid tonight’? Even better, a world where someone gets your head screwed on straight, reports accurately about it, and the future sends back a warning to the person is totally consistent.”

Robert stepped away from William in horror. The red gloom of the darkroom made him look monstrous. “You are working for them!”

“Call it freelancing. I get useful tips, I do my part, things turn out as they should. I expect a nice life. But there is more to it than that, Robert. I believe in moral progress. I think those things in your photos probably know better than we do – yes, they are probably more alien than little green men from Mars, but they have literally eons of science, philosophy and whatever comes after that.”

“Mice.”

“Mice?”

“MICE: Money, Ideology, Coercion, Ego. The formula for recruiting intelligence assets. They got you with all but the coercion part.”

“They did not have to. History, or rather physical determinism, coerces us. Or, ‘you can’t fight fate’.”

“I’m doing this to protect free will! Against the Nazis. The commies! Your philosophers!”

“Funny way you are protecting it. You join this organisation, you allow yourself to become a cog in the machine, feel terribly guilty about your little experiments. No, Robert, you are protecting your way of life. You are protecting normality. You could just as well have been in Moscow right now working to protect socialism.”

“Enough! I am going to stop the Conduit!”

William held up a five dollar bill. “Want to bet?”

 

And the pedestrians are off! Oh no, that lady is jaywalking!

In 1983 Swedish Television began an all-evening entertainment program named Razzel. It was centred around the state lottery draw, with music, sketch comedy, and television series interspersed between the blocks. Yes, this was back in the day when there were two TV channels to choose from and more or less everybody watched. The ice age had just about ended.

One returning feature consisted of camera footage of a pedestrian crossing in Stockholm. A sports commentator well known for his coverage of horse racing narrated the performance of the unknowing pedestrians as if they were competing in a race. In some cases I think he even showed up to deliver flowers to the “winner”. But you would get disqualified if you had a false start or went outside the stripes!

I suspect this feature noticeably improved traffic safety for a generation.

I was reminded of this childhood memory earlier today when discussing the use of face recognition in China to detect jaywalkers and display them on a billboard to shame them. The typical response in a western audience is fear of what looks like a totalitarian social engineering program. The glee with which many responded to the news that the system had been confused by a bus ad, putting a celebrity on the board of shame, is telling.

Is there a difference?

But compare the Chinese system to the TV program. In the Chinese case the jaywalker may be publicly shamed on a billboard… but in the cheerful 80s TV program they were shamed in front of much of the nation.

There is a degree of added personal exposure in the Chinese case since it also displays the jaywalker’s name, but no doubt friends and neighbours would recognize you if they saw you on TV (remember, this was back when we only had two television channels and a fair fraction of people watched TV on Friday evenings). There may also be SMS messages involved in some versions of the system. These act differently: now it is you who gets told off when you misbehave.

A fundamental difference may be the valence of the framing. The TV show did this as happy entertainment, more of a parody of sport television than an attempt at influencing people. The Chinese system explicitly aims at discouraging misbehaviour. The TV show encouraged positive behaviour (if only accidentally).

So the dimensions here may be the extent of the social effect (locally, or nationwide), the degree the feedback is directly personal or public, and whether it is a positive or negative feedback. There is also a dimension of enforcement: is this something that happens every time you transgress the rules, or just randomly?

In terms of actually changing behaviour, making the social effect broad rather than close and personal might not have much effect: we mostly care about our standing relative to our peers, so having the entire nation laugh at you is certainly worse than your friends laughing, but still not orders of magnitude more mortifying. The personal message, on the other hand, sends a signal that you were observed; together with an expectation of effective enforcement this likely has a fairly clear deterrence effect (it is often not the size of the punishment that deters people from crime, but their expectation of getting caught). The negative stick of acting wrong and being punished is likely stronger than the positive carrot of a hypothetical bouquet of flowers.

Where is the rub?

From an ethical standpoint, is there a problem here? We are subject to norm enforcement from friends and strangers all the time. What is new is the application of media and automation. They scale up the stakes and add the possibility of automated enforcement. Shaming people for jaywalking is fairly minor, but some people have lost jobs or friends, or been physically assaulted, when their social transgressions went viral on social media. Automated enforcement makes the panopticon effect far stronger: instead of suspecting a mere possibility of being observed, observation becomes a near certainty. So the net effect is stronger, more pervasive norm enforcement…

…of norms that can be observed and accurately assessed. Jaywalking is transparent in a way being rude or selfish often isn’t. We may end up in a situation where we carefully obey some norms, not because they are the most important but because they can be monitored. I do not think there is anything in principle impossible about a rudeness detection neural network, but I suspect the error rates and lack of context sensitivity would make it worse than useless in preventing actual rudeness. Goodhart’s law may even make it backfire.

So, in the end, the problem is that automated systems encode a formalization of a social norm rather than the actual fluid social norm. Having a TV commenter narrate your actions is filtered through the actual norms of how to behave, while the face recognition algorithm looks for a pattern associated with transgression rather than actual transgression. The problem is that strong feedback may then lock in obedience to the hard to change formalization rather than actual good behaviour.

Thinking long-term, vast and slow

John Fowler “Long Way Down” https://www.flickr.com/photos/snowpeak/10935459325

This spring Richard Fisher at BBC Future has commissioned a series of essays about long-termism: Deep Civilisation. I really like this effort (and not just because I get the last word):

“Deep history” is fascinating because it gives us a feeling of the vastness of our roots – not just the last few millennia, but a connection to our forgotten stone-age ancestors, their hominin ancestors, the biosphere evolving over hundreds of millions and billions of years, the planet, and the universe. We are standing on top of a massive sedimentary cliff of past, stretching down to an origin unimaginably deep below.

Yet the sky above, the future, is even more vast and deep. Looking down the 1,857 m into the Grand Canyon is vertiginous. Yet above us the troposphere stretches more than five times further up, followed by an even vaster stratosphere and mesosphere, in turn dwarfed by the thermosphere… and beyond, the exosphere fades into the endlessness of deep space. The deep future is in many ways far more disturbing since it is moving and indefinite.

That also means there is a fair bit of freedom in shaping it. It is not very easy to shape. But if we want to be more than just some fossils buried inside the rocks we had better do it.

Stuff I have been up to

I have been busy everywhere except on this blog. Here are a few highlights, mostly my public outreach:

Long term survival

On BBC Future I have an essay concluding their amazing season on long term thinking where I go really long-term: The greatest long term threats facing humanity.

The approach I take there is to look at the question “if we have survived X years into the future, what problems must we have overcome before that?” It is not so much the threats (or frankly, problems – threat seems to imply a bit more active maliciousness than the universe normally brings about) that are interesting as just how radically we need to change or grow in power to meet them.

The central paradox of survival is that it requires change, and long-term that means that what survives may be very alien. Not so much a problem for me, but I think many disagree. A solid state civilization powered by black holes in a starless universe close to absolute zero, planning billions of years ahead, may sound like a great continuation of us, or like something too alien to matter.

Debunking doom

Climate doom is in the air, and I am frankly disturbed by how many think that we are facing an existential threat to our survival in the next decade or so – both because this is based on a misunderstanding of the science (which is admittedly not entirely easy to read) and because it breeds fatalism. As a response to a youngster’s question I wrote this piece for The Conversation: Will climate change cause humans to go extinct?

Robots in space

In Quartz, I have an essay about the next 50 years of space exploration and whether we should send robots instead: We should stop sending humans into space to do a robot’s job.

As so often, the title (not chosen by me; I prefer “A small step for machinekind”) makes it seem like I am arguing for something other than what I actually argue. As I see it, sending machines to space makes much more sense than sending humans… but given the very human desire to be the ones exploring, we will send humans in any case. Long-term we should also become multiplanetary, if only to reduce extinction risks, but that might require sending robots ahead – except that in order to do that we need a lot of cheap, low-threshold experimentation and testing.

See also my chat with John Ellis and Kierann Shah about space at the How The Light Gets In Festival.

Good versus evil, Moloch versus Maria

Last year I participated in the Nexus Instituut “intellectual opera” in Amsterdam, enjoying myself immensely. I ended up writing an essay AI, Good and Evil… and Moloch (Official version Sandberg, A. (2019) Kunstmatige intelligentie en Moloch. Tijdschrift Nexus 81: De strijd tussen goed en kwaad. Nexus Instituut, Amsterdam (Tr. Laura Weeda)).

My main point is that evil is usually seen as active maliciousness, neglect of others, suffering itself, or meaninglessness and the removal of meaning. Bad AI is unlikely to be actively malicious, and making machines that can experience suffering is likely tricky, but automation that performs bad actions without caring is all too simple. The big risk is getting systems that implement pointless goals too efficiently, destroying value (human or other) for no gain, not even to themselves. A further problem is that these systems are systems, not individuals. We tend to think of AI as robots, “the AI” and other individual entities, when it can just as well be an ambient functionality of the wider techno-social world – impossible to pull the plug on, with everybody complicit. We need better ways of debugging adaptive technological systems.

Life extension

On Humanity 2.0 I discussed/debated digital afterlives with Steve Fuller, Sr. Mary Christa Nutt, James Madden and Matthew Harvey Sanders. It got a bit intense at some points, but there is an interesting issue in untangling exactly what we want from an extended life. Not all forms of continuity count for all people: a continuity of consciousness is very different from a continuity of memory, a continuity of social interactions or functions, or leaving the right life projects in order.

Other stuff

Polish translation of my chapter on limits of morphological freedom.

Hacking the Brain: Dimensions of Cognitive Enhancement. A paper on cognitive enhancement, the final fruit of the “comparing apples to oranges” Volkswagen foundation project I participated in.

The GoCAS existential risk project final outputs arrived in the journal Foresight. I have two pieces, There is plenty of time at the bottom: The economics, risk and ethics of time compression and the group-written Long-term trajectories of human civilization.

I have also helped a bit with an Oxford project on sensitive intervention points for a post-carbon society. Not all tipping points are bad, and sometimes cartoon heroes may help.

Grand futures

Behind the scenes, my book is expanding… whether that is progress remains to be seen.

I have given various talks about some contents, but there is so much more. I think I have to do a proper lecture series this fall.

Newtonmass fractals 2018

It is Newtonmass, and that means doing some fun math. I try to invent a new fractal or something similar every year: this is the result for 2018.

The Newton fractal is an old classic. Newton’s method for root finding iterates an initial guess z_0 to find a better approximation z_{n+1}=z_{n}-f(z_{n})/f'(z_{n}). This will work as long as f'(z)\neq 0, but which root one converges to can be sensitively dependent on initial conditions. Plotting which root a given initial value ends up with gives the Newton fractal.
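For concreteness, here is a minimal sketch of the classic version (not the code behind any of the images in this post), colouring each starting point by which cube root of unity Newton’s method for f(z)=z^3-1 converges to:

% Classic Newton fractal sketch for f(z) = z^3 - 1.
S = 500;                                       % resolution
[x,y] = meshgrid(linspace(-2,2,S));
z = x + 1i*y;                                  % grid of initial guesses z_0
for n = 1:40
    z = z - (z.^3 - 1)./(3*z.^2);              % Newton step z - f(z)/f'(z)
end
w = exp(2i*pi*(0:2)/3);                        % the three cube roots of unity
[~, basin] = min(abs(z(:) - w), [], 2);        % index of the nearest root
imagesc(reshape(basin, S, S)); axis equal; axis off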

The Gauss-Newton method is a method for minimizing the total squared residuals S(\beta)=\sum_{i=1}^m r_i^2(\beta) when some function depending on n-dimensional parameters \beta is fitted to m data points, with residuals r_i(\beta)=f(x_i;\beta)-y_i. Just like the root-finding method it iterates towards a minimum of S(\beta): \beta_{n+1} = \beta_n - (J^t J)^{-1}J^t r(\beta_n), where J is the Jacobian J_{ij}=\frac{\partial r_i}{\partial \beta_j}. This is essentially Newton’s method in a multi-dimensional, least-squares form.

So we can make fractals by trying to fit (say) a function consisting of the sum of two Gaussians with different means (but fixed variances) to a random set of points. For example, set f(x;\beta_1,\beta_2)=(1/\sqrt{2\pi})[e^{-(x-\beta_1)^2/2}+(1/4)e^{-(x-\beta_2)^2/8}] (one Gaussian with variance 1 and a second, broader one weighted by 1/4 – the different widths keep the diagram from being boringly symmetric, as it would be in the equal-variance case). Plotting the location of the final \beta after 50 iterations (by stereographically mapping it onto a unit sphere in (r,g,b) space) gives a nice fractal:

Basins of attraction of different attractor states of Gauss-Newton’s method. Two Gaussian functions fitted to five random points.
Basins of attraction of different attractor states of Gauss-Newton’s method. Two Gaussian functions fitted to 10 random points.
Basins of attraction of different attractor states of Gauss-Newton’s method. Two Gaussian functions fitted to 50 random points.

It is a bit modernistic-looking. As I argued in 2016, this is because the generic local Jacobian of the dynamics doesn’t have much rotation.

As more and more points are added the attractor landscape becomes simpler, since it is hard for the Gaussians to “stick” to some particular clump of points and the gradients become steeper.

This fractal can obviously be generalized to more dimensions by using more parameters for the Gaussians, or more Gaussians etc.
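For the curious, here is a minimal sketch of the iteration behind these pictures – not the script used for the images, and the data set, grid, iteration count and colouring are arbitrary illustrative choices:

% Gauss-Newton basin sketch: fit the two-Gaussian model above to one fixed
% random data set from a grid of starting guesses (b1,b2) and record where
% the iteration ends up. Divergent starts trigger singular-matrix warnings.
rng(1);
m  = 5;                                  % number of data points
xd = 4*randn(m,1);                       % random x locations
yd = 0.2*randn(m,1);                     % random y values to fit
f  = @(x,b) (1/sqrt(2*pi))*(exp(-(x-b(1)).^2/2) + 0.25*exp(-(x-b(2)).^2/8));

S = 200;                                 % resolution of the starting-guess grid
[B1,B2] = meshgrid(linspace(-6,6,S));
bfin1 = zeros(S); bfin2 = zeros(S);      % final parameter values

warning('off','all');                    % silence warnings from divergent starts
for i = 1:numel(B1)
    b = [B1(i); B2(i)];
    for n = 1:50
        r = f(xd,b) - yd;                                    % residuals
        J = (1/sqrt(2*pi))*[ (xd-b(1)).*exp(-(xd-b(1)).^2/2), ...
             (1/16)*(xd-b(2)).*exp(-(xd-b(2)).^2/8) ];       % Jacobian dr/db
        b = b - (J'*J)\(J'*r);                               % Gauss-Newton step
    end
    bfin1(i) = b(1); bfin2(i) = b(2);
end
imagesc(atan(bfin1)); axis equal; axis off    % crude colouring by the final beta_1

The images above use a fancier colouring (stereographically mapping the final \beta onto a sphere in (r,g,b) space), but the basin structure comes out the same way.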

The fractality is guaranteed by a generic property of systems with several attractors: points at the border of two basins of attraction will tend to find their way to attractors other than the two equally balanced neighbours. Hence a transversal cut across the border will find a Cantor set of true basin boundary points (corresponding to points that eventually get mapped to a singular Jacobian in the iteration formula, just as the boundary of the Newton fractal is marked by points mapped to f'(z_n)=0 for some n), with different basins alternating in between.

Merry Newtonmass!