ET, phone for you!

I have been in the media recently since I became the accidental spokesperson for UKSRN at the British Science Festival in Bradford:

BBC / The Telegraph / The Guardian / Iol SciTech / The Irish Times / Bt.com

(As well as BBC 5 Live, BBC Newcastle and BBC Berkshire… so my comments also get sent to space as a side effect).

My main message is that we are going to send in something for the Breakthrough Message initiative: a competition to write a good message to be sent to aliens. The total pot is a million dollars (it seems that was misunderstood in some reporting: it is likely not going to be one huge prize, but rather several smaller ones). The message will not actually be sent to the stars: this is an intellectual exercise rather than a practical one.

(I also had some comments about the link between Langsec and SETI messages – computer security is actually a bit of an issue for fun reasons. Watch this space.)

Should we?

One interesting issue is whether there are any good reasons not to signal. Stephen Hawking famously argued against it (but he is a strong advocate of SETI), as does David Brin. A recent declaration argues that we should not signal unless there is widespread agreement about it. Yet others have made the case that we should signal, perhaps a bit cautiously. In fact, an eminent astronomer just told me he could not take concerns about sending a message seriously.

Some of the arguments are (in no particular order):

Pro:
- SETI will not work if nobody speaks.
- ETI is likely to be far more advanced than us and could help us.
- Knowing if there is intelligence out there is important.
- Hard to prevent transmissions.
- Radio transmissions are already out there.
- Maybe they are waiting for us to make the first move.

Con:
- Malign ETI.
- Past meetings between different civilizations have often ended badly.
- Giving away information about ourselves may expose us to accidental or deliberate hacking.
- Waste of resources.
- If the ETI is quiet, it is for a reason.
- We should listen carefully first, then transmit.

It is actually an interesting problem: how do we judge the risks and benefits in a situation like this? Normal decision theory runs into trouble (not that it stops some of my colleagues). The problem here is that the probability and potential gain/loss are badly defined. We may have our own personal views on the likelihood of intelligence within radio reach and its nature, but we should be extremely uncertain given the paucity of evidence.

[Even the silence in the sky is some evidence, but it is somewhat tricky to interpret given that it is compatible with no intelligence (because of rarity or danger), intelligence that is not communicating or not using the spectra we observe, cultural convergence towards quietness (the zoo hypothesis, everybody hiding, everybody becoming Jupiter brains), or even the simulation hypothesis. The first category is at least somewhat concise, while the latter categories have endless room for speculation. One could argue that since the latter categories can fit any kind of evidence they are epistemically weak and we should not trust them much.]

Existential risk also tends to take precedence over almost anything else. If we can avoid doing something that could cause existential risk, the maxipok principle tells us not to do it: we can avoid sending, and sending might bring down the star wolves on us, so we should avoid it.

There is also a unilateralist curse issue. It is enough that one group somewhere thinks transmitting is a good idea and hence does it to get the consequences, whatever they are. So the more groups that consider transmitting, even if they are all rational, well-meaning and consider the issue at length, the more likely it is that somebody will do it, even if it is a stupid thing to do. In situations like this we have argued it behoves us to be more conservative individually than we would otherwise have been: we should think twice simply because sending messages is in the unilateralist curse category. We also argue in that paper that it is even better to share information and make collectively coordinated decisions.
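To make the curse concrete, here is a minimal sketch (my own toy illustration, not from the paper) of how the chance that at least one group transmits grows with the number of independent deciders, assuming each group misjudges the decision with some small probability:

```python
# Toy model of the unilateralist's curse: n groups each decide independently
# whether to transmit. Even if each group only errs towards transmitting with
# a small probability, the chance that *somebody* transmits rises quickly.

def p_somebody_transmits(n_groups: int, p_each: float = 0.05) -> float:
    """Probability that at least one of n independent groups transmits,
    given each concludes transmitting is a good idea with probability p_each."""
    return 1 - (1 - p_each) ** n_groups

if __name__ == "__main__":
    for n in (1, 5, 20, 100):
        print(f"{n:3d} groups: {p_somebody_transmits(n):.3f}")
    # 1 group: 0.050, 5 groups: 0.226, 20 groups: 0.642, 100 groups: 0.994
```

The 5% error rate is of course only an assumption for illustration; the qualitative point is that coordination matters more than any single group's judgement.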

Note that these arguments strengthen the con side, but largely independently of what the actual anti-message arguments are. They are general arguments that we should be careful, not final arguments.

Conversely, Alan Penny argued that given the high existential risk to humanity we may actually have little to lose: if our risk of extinction per century is 12-40%, then adding a small ETI risk has little effect on the overall risk level, yet a small chance of friendly ETI advice (“By the way, you might want to know about this…”) that decreases existential risk may be an existential hope. Suppose we think it is 50% likely that ETI is friendly, and there is a 1% chance it is out there. If it is friendly it might give us advice that halves our existential risk; if it is unfriendly it will eat us with 1% probability. So if we do nothing our risk is (say) 12%. If we signal, then the risk is 0.12*0.99 + 0.01*(0.5*0.12*0.5 + 0.5*(0.12*0.99+0.01)) = 11.9744%, a slight improvement. Like the Drake equation, one can of course plug in different numbers and get different effects.
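Here is the same back-of-the-envelope calculation as a small sketch, so the reader can plug in their own numbers (the function and parameter names are mine, chosen for illustration):

```python
# Penny-style expected-risk comparison: signalling versus staying silent.

def risk_if_silent(baseline: float) -> float:
    """Existential risk if we do not signal: just the baseline risk."""
    return baseline

def risk_if_signalling(baseline: float, p_eti: float = 0.01,
                       p_friendly: float = 0.5, risk_reduction: float = 0.5,
                       p_eaten: float = 0.01) -> float:
    """Existential risk if we signal.

    With probability 1 - p_eti nobody is listening and nothing changes.
    If ETI exists and is friendly, its advice cuts the baseline risk by
    risk_reduction; if it is hostile, it eats us with probability p_eaten,
    and otherwise the baseline risk still applies.
    """
    no_eti = (1 - p_eti) * baseline
    friendly = p_friendly * baseline * (1 - risk_reduction)
    hostile = (1 - p_friendly) * (p_eaten + (1 - p_eaten) * baseline)
    return no_eti + p_eti * (friendly + hostile)

if __name__ == "__main__":
    print(risk_if_silent(0.12))       # 0.12
    print(risk_if_signalling(0.12))   # 0.119744, i.e. 11.9744%
```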

Truth to the stars

Considering the situation over time, sending a message now may also be irrelevant, since we could wipe ourselves out before any response arrives. That brings to mind a discussion we had at the press conference yesterday about what the point of sending messages far away would be: wouldn’t humanity be gone by then? We also discussed what to present to ETI: an honest or whitewashed version of ourselves? (My co-panelist Dr Jill Stuart made some great points about the diversity issues in past attempts.)

My own view is that I’d rather have an honest epitaph for our species than a polished but untrue one. This is both relevant to us, since we may want to be truthful beings even if we cannot experience the consequences of the truth, and relevant to ETI, who may find the truth more useful than whatever our culture currently would like to present.

4 thoughts on “ET, phone for you!”

  1. Looking at human history, the main reason humans try to contact strangers is to find out if they have any stuff that would be useful to us, e.g. gold, slaves, useful techniques or weaponry, oil, etc.

    We might pretend to be altruistic seekers after knowledge, but that is just a facade to get access to their resources.

    If the strangers are weaker than us then we just take what we want, with varying degrees of politeness and killing, as required.

    If the strangers are about the same strength as us, then negotiations ensue, followed by trade and mutual benefit. With occasional arguments and skirmishes, just to check that we really are about equal strength.

    Contact with more powerful humans could lead to them following our explorers back home and taking our stuff. So human explorers tend to be armed and dangerous as they venture into the unknown.

    So why should we expect aliens, also evolved through the brutality of natural selection, to be any different?

    1. There is very likely a significant time gulf between our species: even a million years of head start makes a huge difference, and there have likely been life-bearing planets (if any) for billions of years. Similarly we should expect a long-lived species to have spent more of its time interacting with and modifying itself than it spent being selected – the original selective forces may have shaped the original version, but the post-alien creatures a million years later are going to be more shaped by what works within their cultures.

      This gives a reason to think that the red in tooth and claw behaviour you describe is not guaranteed. In fact, as we can see in our own species, over history we have become significantly more peaceful and cooperative (standard hand-wave towards Pinker’s book, but also comparisons of trust rates within markets), likely mostly through social innovations. When societies solve game theory dilemmas with win-win situations they grow wealthy and capable, outcompeting the low-trust societies that cannot coordinate. Add a bit of self-design to the equation and we get truly radical possibilities.

      So this means we shouldn’t be too confident in aliens being competitive *or* altruistic. They may have undergone very radical changes from what they evolved as, and the new cultural strategies may be totally unexpected.

      1. “likely significant time gulf between our species”

        So why aren’t they here already?
        Answer: They’ve got all they need and are not interested in us.

        When (If!) humans advance millions of years we’ll ignore the screaming kids as well.

        1. True. In fact, I would expect that as a civilization gets more and more advanced, the resources it needs will first become more diverse, but eventually start narrowing down. With sufficiently good atomically precise manufacturing you just need the elements themselves (with a handful dominating), and they can be recycled endlessly. In the end it only cares about time, energy, heat sinks, and matter. And matter and energy can be interconverted.
