I have been busy everywhere except on this blog. Here are a few highlights, mostly from my public outreach:
Long-term survival
On BBC Future I have an essay concluding their amazing season on long-term thinking, where I go really long-term: The greatest long term threats facing humanity.
The approach I take there is to ask: “if we have survived X years into the future, what problems must we have overcome before then?” It is not so much the threats that are interesting (or frankly, problems: “threat” implies a bit more active malice than the universe normally brings to bear) as just how radically we need to change or grow in power to meet them.
The central paradox of survival is that it requires change, and in the long run that means that what survives may be very alien. That is not much of a problem for me, but I think many would disagree. A solid-state civilization powered by black holes in a starless universe close to absolute zero, planning billions of years ahead, may sound like a great continuation of us, or like something too alien to matter.
Debunking doom
Climate doom is in the air, and I am frankly disturbed by how many people think we are facing an existential threat to our survival in the next decade or so: both because that belief is based on a misunderstanding of the science (which, admittedly, is not easy to read), and because it breeds fatalism. In response to a question from a youngster I wrote this piece for The Conversation: Will climate change cause humans to go extinct?
Robots in space
In Quartz, I have an essay about the next 50 years of space exploration and whether we should send robots instead: We should stop sending humans into space to do a robot’s job.
As so often, the title (not chosen by me; I preferred “A small step for machinekind”) makes it seem I am arguing for something different from what I actually argue. As I see it, sending machines to space makes much more sense than sending humans… but given the very human desire to be the ones doing the exploring, we will send humans in any case. Long-term, we should also become multiplanetary, if only to reduce extinction risks, but that may require sending robots ahead, and doing that requires a lot of cheap, low-threshold experimentation and testing.
See also my chat with John Ellis and Kierann Shah about space at the How The Light Gets In Festival.
Good versus evil, Moloch versus Maria
Last year I participated in the Nexus Instituut “intellectual opera” in Amsterdam, enjoying myself immensely. I ended up writing an essay, AI, Good and Evil… and Moloch (official version: Sandberg, A. (2019) Kunstmatige intelligentie en Moloch [Artificial intelligence and Moloch]. Tijdschrift Nexus 81: De strijd tussen goed en kwaad. Nexus Instituut, Amsterdam (Tr. Laura Weeda)).
My main point is that evil is usually seen as active malice or neglect of others, as suffering itself, or as meaninglessness and the removal of meaning. Bad AI is unlikely to be actively malicious, and making machines that can experience suffering is likely tricky, but automation that performs bad actions without caring is all too simple. The big risk is systems that implement pointless goals too efficiently, destroying value (human or other) for no gain, not even to themselves. A further problem is that these systems are systems, not individuals. We tend to think of AI as robots, “the AI”, and other individual entities, when it can just as well be an ambient functionality of the wider techno-social world: impossible to pull the plug on, with everybody complicit. We need better ways of debugging adaptive technological systems.
Life extension
On Humanity 2.0 I discussed/debated digital afterlives with Steve Fuller, Sr. Mary Christa Nutt, James Madden and Matthew Harvey Sanders. It got a bit intense at some points, but there is an interesting issue in untangling exactly what we want from an extended life. Not all forms of continuity count for all people: continuity of consciousness is very different from continuity of memory, continuity of social interactions or functions, or leaving the right life projects in order.
Other stuff
A Polish translation of my chapter on the limits of morphological freedom.
Hacking the Brain: Dimensions of Cognitive Enhancement, a paper on cognitive enhancement and the final fruit of the “comparing apples to oranges” Volkswagen Foundation project I participated in.
The final outputs of the GoCAS existential risk project have arrived in the journal Foresight. I have two pieces: There is plenty of time at the bottom: The economics, risk and ethics of time compression and the group-written Long-term trajectories of human civilization.
I have also helped a bit with an Oxford project on sensitive intervention points for a post-carbon society. Not all tipping points are bad, and sometimes cartoon heroes may help.
Grand futures
Behind the scenes, my book is expanding… whether that is progress remains to be seen.
I have given various talks about some of its contents, but there is so much more. I think I have to do a proper lecture series this fall.