Messages on plaques and disks

On Sky News I mildly disagree with Christopher Riley about whether we ought to add a short update message to the Voyager probes.

Representing ourselves

If we wanted to represent humanity most honestly to aliens, we would give them constantly updated, complete documentation of our cultures and knowledge. But that is not possible.

So in METI we may consider sending “a copy of the internet” as a massive snapshot of what we currently are, or, as the Voyager record did, sending a sample of what we are. In both cases it is a snapshot at a particular time: had we sent the message at some other time, the contents would have been different. The selection is also a powerful shaper: what is chosen as representative tells a particular story.

That we send a snapshot is not just a necessity, it may be a virtue. The full representation of what humanity is, is not so much a message as a gift with potentially tricky moral implications: imagine if we were given the record of an alien species, clearly sent with the intention that we ought to handle it according to some – to us unknowable – preferences. If we want to do some simple communication, essentially sending a postcard-like “here we are! This is what we think we are!” is the best we can do. A thick and complex message would obscure the actual meaning:

The spacecraft will be encountered and the record played only if there are advanced space-faring civilizations in interstellar space. But the launching of this ‘bottle’ into the cosmic ‘ocean’ says something very hopeful about life on this planet.
– Carl Sagan

It is a time capsule we send because we hope to survive and matter. If it becomes an epitaph of our species it is a decent epitaph. Anybody receiving it is a bonus.

Temporal preferences

How should we relate to this already made and launched message?

Clearly we want the message to persist, maybe be detected, and ideally understood. We do not want the message to be distorted by random chance (if it can be avoided) or by independent actors.

This is why I am not too keen on sending an addendum. One can change the meaning of a message with a small addition: “Haha, just kidding!” or “We were such tools in the 1970s!”

Note that while we have a present desire for a message (possibly the original) to reach the stars, the people who launched the probes in 1977 clearly wanted their message to reach the stars: their preferences are embodied in what they selected. I think we have a moral duty to respect past preferences for information. I have expressed it elsewhere as a temporal golden rule: “treat the past as you want the future to treat you”. We would not want our message or amendments changed, so we had better be careful about past messages.

Additive additions

Adding a careful footnote is not necessarily wrong, but it needs to be in the spirit of the past message, adding to it rather than altering it.

So what kind of update would be useful?

We might want to add something we have learned since the launch that aliens ought to know – for example, an important discovery. But this needs to be something that advanced aliens are unlikely to already know, which is tricky: they likely already know about dark matter, about how geopolitical orders can suddenly shift, and about proofs of the Poincaré conjecture.

Useful additions have to be contingent, unique to humanity, and ideally universally significant. Few things are. Maybe that leaves us with adding the notes for some new catchy melody (“Gangnam Style” or “Macarena”?) or a really neat mathematical insight (the PCP theorem? Oops, it looks like Andrew Wiles’ proof of Fermat’s last theorem is too large for the probe).

In the end, a simple “Still here, 38 years later” may be the best addition: contingent, human, and giving some data on the survival of intelligence in the universe.

ET, phone for you!

I have been in the media recently, since I became the accidental spokesperson for UKSRN at the British Science Festival in Bradford:

BBC / The Telegraph / The Guardian / Iol SciTech / The Irish Times / Bt.com

(As well as BBC 5 Live, BBC Newcastle and BBC Berkshire… so my comments also get sent to space as a side effect).

My main message is that we are going to send in something for the Breakthrough Message initiative: a competition to write a good message to be sent to aliens. The total pot is a million dollars (it seems that was misunderstood in some reporting: it is likely not going to be one huge prize, but rather several smaller ones). The message will not actually be sent to the stars: this is an intellectual exercise rather than a practical one.

(I also had some comments about the link between Langsec and SETI messages – computer security is actually a bit of an issue for fun reasons. Watch this space.)

Should we?

One interesting issue is whether there are any good reasons not to signal. Stephen Hawking famously argued against it (although he is a strong advocate of SETI), as does David Brin. A recent declaration argues that we should not signal unless there is widespread agreement about it. Yet others have made the case that we should signal, perhaps a bit cautiously. In fact, an eminent astronomer just told me he could not take concerns about sending a message seriously.

Some of the arguments are (in no particular order):

Pro:
- SETI will not work if nobody speaks.
- ETI is likely to be far more advanced than us and could help us.
- Knowing if there is intelligence out there is important.
- Hard to prevent transmissions.
- Radio transmissions are already out there.
- Maybe they are waiting for us to make the first move.

Con:
- Malign ETI.
- Past meetings between different civilizations have often ended badly.
- Giving away information about ourselves may expose us to accidental or deliberate hacking.
- Waste of resources.
- If the ETI is quiet, it is for a reason.
- We should listen carefully first, then transmit.

It is actually an interesting problem: how do we judge the risks and benefits in a situation like this? Normal decision theory runs into trouble (not that it stops some of my colleagues). The problem here is that the probability and potential gain/loss are badly defined. We may have our own personal views on the likelihood of intelligence within radio reach and its nature, but we should be extremely uncertain given the paucity of evidence.

[Even the silence in the sky is some evidence, but it is tricky to interpret, since it is compatible with no intelligence (because of rarity or danger), intelligence that is not communicating or not communicating in the spectra we observe, cultural convergence towards quietness (the zoo hypothesis, everybody hiding, everybody becoming Jupiter brains), or even the simulation hypothesis. The first category is at least somewhat concise, while the later categories have endless room for speculation. One could argue that since the later categories can fit any kind of evidence, they are epistemically weak and we should not trust them much.]
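As a minimal toy illustration of that last point (my own sketch, with entirely made-up numbers): a hypothesis class flexible enough to fit any observation has to spread its prior over many variants, so the particular variant that matches the observed silence ends up with little weight compared to a hypothesis that predicted silence outright.

```python
# Toy Bayesian comparison (illustrative numbers only, not from the post):
# H_A: "no detectable intelligence" - predicts the observed silence directly.
# H_B: a flexible class (zoo hypothesis, hiding, simulation...) modelled as
#      100 equally weighted variants, each predicting a different observation,
#      of which only one happens to match the silence we actually see.

def posterior_a(prior_a, like_a, prior_b, like_b):
    """Posterior probability of H_A after seeing the evidence."""
    num = prior_a * like_a
    return num / (num + prior_b * like_b)

like_silence_a = 0.9          # H_A predicts silence with high probability
like_silence_b = 1.0 / 100    # class-averaged likelihood over the 100 variants

print(posterior_a(0.5, like_silence_a, 0.5, like_silence_b))  # ~0.989
```

With equal priors, the concise hypothesis ends up strongly favoured; the flexibility of the other class is automatically penalised.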

Existential risk also tends to take precedence over almost anything else. If we can avoid doing something that could cause existential risk, the maxipok principle (maximise the probability of an OK outcome) tells us not to do it: we can avoid sending, and sending might bring down the star wolves on us, so we should avoid it.

There is also a unilateralist’s curse issue. It is enough that one group somewhere thinks transmitting is a good idea and acts on it for the consequences, whatever they are, to follow. So the more groups that consider transmitting – even if they are all rational, well-meaning, and consider the issue at length – the more likely it is that somebody will do it, even if it is a stupid thing to do. In situations like this we have argued that it behoves us to be more conservative individually than we would otherwise be: we should think twice simply because sending messages is in the unilateralist curse category. We also argue in that paper that it is even better to share information and make collectively coordinated decisions.
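A minimal sketch of why this happens (my own toy model, assuming independent decisions with a fixed per-group error probability): even if each group is individually unlikely to misjudge, the chance that at least one of many groups transmits grows quickly with the number of groups.

```python
# Toy unilateralist's curse model: N independent groups, each wrongly
# concluding that transmitting is a good idea with probability p_error.

def p_someone_transmits(n_groups: int, p_error: float) -> float:
    """Probability that at least one group decides to transmit."""
    return 1.0 - (1.0 - p_error) ** n_groups

for n in (1, 5, 10, 50):
    print(n, round(p_someone_transmits(n, 0.05), 3))
# 1 0.05
# 5 0.226
# 10 0.401
# 50 0.923
```

This is why individual restraint, information sharing, and coordinated decisions all reduce the risk of a premature transmission.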

Note that these arguments strengthen the con side largely independently of what the actual anti-message arguments are: they are general arguments for caution, not final arguments.

Conversely, Alan Penny has argued that given the high existential risk to humanity we may actually have little to lose: if our risk of extinction per century is 12–40%, then adding a small ETI risk has little effect on the overall risk level, while even a small chance of friendly ETI advice (“By the way, you might want to know about this…”) that decreases existential risk may be an existential hope. Suppose we think it is 50% likely that ETI is friendly, and that there is a 1% chance it is out there. If it is friendly it might give us advice that halves our existential risk; if it is unfriendly it will eat us with 1% probability. If we do nothing, our risk is (say) 12%. If we signal, the risk is 0.12*0.99 + 0.01*(0.5*0.12*0.5 + 0.5*(0.12*0.99+0.01)) = 11.9744% – a slight improvement. Like the Drake equation, one can of course plug in different numbers and get different results.
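To make the arithmetic above easy to vary, here is the same calculation as a small function (the values 0.12, 0.01, 0.5, 0.5 and 0.01 come from the paragraph; the parameterisation is my own):

```python
# Penny-style expected-risk calculation for signalling, as in the text above.

def risk_if_we_signal(baseline_risk: float, p_eti: float, p_friendly: float,
                      risk_reduction: float, p_eaten: float) -> float:
    """Total existential risk if we transmit a message."""
    no_eti = (1 - p_eti) * baseline_risk                 # nobody within reach
    friendly = p_eti * p_friendly * baseline_risk * (1 - risk_reduction)
    unfriendly = p_eti * (1 - p_friendly) * (p_eaten + (1 - p_eaten) * baseline_risk)
    return no_eti + friendly + unfriendly

print(risk_if_we_signal(0.12, 0.01, 0.5, 0.5, 0.01))  # 0.119744, vs 0.12 if silent
```

Plugging in more pessimistic values for p_friendly or p_eaten flips the sign of the effect, which is exactly the Drake-equation-like sensitivity noted above.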

Truth to the stars

Considering the situation over time, sending a message now may also be irrelevant, since we could wipe ourselves out before any response arrives. That brings to mind a discussion we had at the press conference yesterday about the point of sending messages far away: wouldn’t humanity be gone by the time a reply comes? We also discussed what to present to ETI: an honest or a whitewashed version of ourselves? (My co-panelist Dr Jill Stuart made some great points about the diversity issues in past attempts.)

My own view is that I’d rather have an honest epitaph for our species than a polished but untrue one. This is both relevant to us, since we may want to be truthful beings even if we cannot experience the consequences of the truth, and relevant to ETI, who may find the truth more useful than whatever our culture currently would like to present.