I have a piece in The Conversation: The five biggest threats to human existence.
My current list is
Nanotechnology is in a sense a stand-in for all technologies that make it easier to realize whatever people wish, fast and cheaply.
Next time I will write a list of the threats you have never heard of.
When is it acceptable to do research that threatens to cause the disaster it seeks to limit?
My conclusion is that risky experiments like gain-of-function virology or geoengineering trials can be justified if they look likely to reduce overall risk (especially extreme tail risk), if their benefits accrue to everyone who is also subjected to the risk, and if they can be adequately monitored and held proportionally accountable. Nothing too unusual.
The real problem is of course how to judge risks and risk expertise when we do not have all the data. Given the stakes, the value of better information here may be much higher than normally assumed.
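To make the "value of better information" point concrete, here is a toy decision-theory sketch in Python. It is mine, not from the paper, and the numbers are entirely made up: it just shows how a (hypothetically perfect) experiment that tells us which of two risk hypotheses is true can cut expected losses by an order of magnitude.

```python
# Toy value-of-information calculation (illustrative numbers only).
# Two hypotheses about an extreme risk, two policies we could adopt.
# The "experiment" is assumed to reveal which hypothesis is true.

p_high = 0.1                    # prior probability the risk is high (assumed)
loss = {                        # expected loss (arbitrary units) per (hypothesis, policy)
    ("high", "act"): 10,        # costly mitigation, but it works
    ("high", "ignore"): 1000,   # catastrophe if the risk is real and ignored
    ("low", "act"): 10,         # mitigation cost paid unnecessarily
    ("low", "ignore"): 0,
}

def expected_loss(policy, p):
    return p * loss[("high", policy)] + (1 - p) * loss[("low", policy)]

# Best we can do acting on the prior alone:
prior_loss = min(expected_loss(policy, p_high) for policy in ("act", "ignore"))

# If the experiment told us which hypothesis is true,
# we could pick the best policy in each world:
posterior_loss = (p_high * min(loss[("high", policy)] for policy in ("act", "ignore"))
                  + (1 - p_high) * min(loss[("low", policy)] for policy in ("act", "ignore")))

print("expected loss without the experiment:", prior_loss)      # 10.0
print("expected loss with the experiment:   ", posterior_loss)  # 1.0
print("value of (perfect) information:      ", prior_loss - posterior_loss)  # 9.0
```

Real cases are messier (imperfect experiments, experiments that themselves add risk), but the asymmetry between cheap information and expensive tail outcomes is the point.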
I have now finished my book chapter (PDF) based on my Anticipating 2025 talk about smarter policymaking.
As I see it, the big wins may be better ways of incorporating de-biasing into networked media and group decision-making, methods of opening up government data for public inspection, automating accountability, tools for combining preferences (a minimal sketch below), tools supporting negotiation, and frameworks helping non-experts formulate viable policy proposals. There is going to have to be a lot of trial and error here: we are not smart enough to design emergent social interactions ad hoc. But by 2025 we are going to be a fair bit better than we are now, and have a decade of hindsight.
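For the preference-combination item, here is a minimal sketch of the kind of tool I mean. It is not from the chapter, and the options and rankings are invented; a plain Borda count is about the simplest preference-aggregation rule there is, and a real tool would have to deal with ties, strategic voting and far richer inputs.

```python
# Minimal Borda count: one simple way of combining ranked preferences.
# (Illustrative sketch only; options and ballots are made up.)
from collections import defaultdict

ballots = [  # each ballot ranks options from most to least preferred
    ["carbon tax", "cap and trade", "subsidies"],
    ["subsidies", "carbon tax", "cap and trade"],
    ["carbon tax", "subsidies", "cap and trade"],
]

scores = defaultdict(int)
for ballot in ballots:
    n = len(ballot)
    for rank, option in enumerate(ballot):
        scores[option] += n - 1 - rank   # top choice gets n-1 points, last gets 0

for option, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(option, score)
# carbon tax 5, subsidies 3, cap and trade 1
```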
Brian blogs about our "when is diminishment enhancement?" paper.
Our point is simple: enhancement is about helping to achieve the good life, not necessarily about having "more" of some ability. Most of the time more is good, but not always: an excellent memory is nice, but being overwhelmed by detail is not.
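As a toy illustration of the non-monotonicity (mine, not from the paper): if well-being is an inverted-U function of some ability, then diminishing the ability counts as enhancement whenever you start out past the peak.

```python
# Toy illustration (not from the paper): well-being as an inverted-U
# function of some ability. Past the peak, reducing the ability
# improves well-being, i.e. diminishment is enhancement.
def wellbeing(ability):            # arbitrary inverted-U, peaks at ability = 5
    return -(ability - 5) ** 2 + 25

print(wellbeing(9))   # 9  -> overloaded: excellent recall, drowning in detail
print(wellbeing(6))   # 24 -> dialling the ability down from 9 to 6 helps
```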
Andrew and I were on a Reddit AMA today, discussing existential risk and a lot of other fun topics - Carrington events, human enhancement, Eclipse Phase.
Some highlights in The Conversation: From human extinction to super intelligence, two futurists explain.
I just rediscovered a little discussion about why extrapolating the future is tricky in a forgotten folder: A Sigmoid Dialogue.
This issue has annoyed me for quite some time. Still, I do make use of this kind of extrapolation: it is just that I use it to build scenarios rather than to claim I know where the trend is actually going.
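To show what the dialogue is getting at, here is a small sketch (mine, with arbitrary parameters): early on, a logistic (sigmoid) growth curve is nearly indistinguishable from a pure exponential, so the data cannot tell you whether the trend will keep compounding or saturate.

```python
# Sketch: for small t, a logistic curve and a matched exponential almost
# coincide, so extrapolating the "trend" is risky. Parameters are arbitrary.
import math

K, r, t0 = 1000.0, 0.5, 20.0          # carrying capacity, growth rate, midpoint

def logistic(t):
    return K / (1 + math.exp(-r * (t - t0)))

def exponential(t):
    # matched to the logistic's early behaviour: K * exp(-r*t0) * exp(r*t)
    return K * math.exp(-r * t0) * math.exp(r * t)

for t in (0, 5, 10, 25, 40):
    print(f"t={t:2d}  logistic={logistic(t):8.1f}  exponential={exponential(t):12.1f}")
# Up to t=10 the two curves nearly coincide; by t=40 they differ by orders of magnitude.
```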
The Journal of Artificial General Intelligence, Volume 4, Issue 3 (Dec 2013), Brain Emulation and Connectomics: a Convergence of Neuroscience and Artificial General Intelligence (editors: Randal Koene and Diana Deca), is now out as open access.
Yes, I have a paper in it with Peter Eckersley, "Is Brain Emulation Dangerous?". We argue that the geopolitical risks of a WBE arms race could be bad, that there is plenty of potential for mistreatment of software people, and that there are some big computer security issues we better start working on.