If you've done nothing wrong, you've got nothing to fear: Wikileaks and RIPA (Practical Ethics) - I blog about whether the "if you have done nothing wrong, you have nothing to fear" maxim applies to governments railing against Wikileaks. My overall conclusion is roughly that since the maxim is wrong, the answer is of course no. But there are interesting similarities between what Wikileaks has done and what data retention projects are doing.
I am becoming more and more convinced that we need to figure out how to achieve a socially sustainable form of transparent society before it is too late. Governments are historically dangerous, surveillance is getting easier, and right now people do not seem to regard holding their governments accountable as a high priority. It is better to be stuck in the panopticon if there are no one-way mirrors, but right now people are more interested in their own mirror images.
Here is the one-page handout I made for a seminar with the Oxford transhumanist rationality group on existential risk:
Examples: War, Genocide/democide, Totalitarianism, Famine, Social collapse, Pandemics, Supervolcanoes, Meteor impacts, Supernovae/gamma ray bursts, Climate changes, Resource crunches, Global computer failure, Nuclear conflicts, Bioweapons, Nanoweapons, Physical experiments, Bad superintelligence, Loss of human potential, Evolutionary risks, Unknowns
Types of risk: exogenous/endogenous/systemic, accidental/deliberate. Many are ongoing.

What is the disvalue of human extinction? Parfit's discussion of whether having 100% of humanity wiped out is much worse than having 99.999% of humanity killed. How to discount the future?

Predictability: past predictions have been bad. Known knowns / known unknowns / unknown unknowns / unknown knowns. Many risks affect each other's likelihood, either through correlation, causation (war increases epidemic risk) or shielding (a high probability of an early xrisk occurring will reduce the probability of a later risk wiping us out).

Can we even do probability estimates? Xrisks cannot be observed in our past (bad for frequentists), but near misses might give us information. Partial information: power-law-distributed disasters, models (nuclear war near misses), experts? Anthropic bias: our existence can bias our observations; see Cirkovic, Sandberg and Bostrom, "Anthropic shadow: observation selection effects and human extinction risks". However, independent observations or facts can constrain these biases: Tegmark and Bostrom, "How unlikely is a doomsday catastrophe?" Cognitive biases are a big problem here: low probability, dramatic effects, high uncertainty. Yudkowsky, "Cognitive biases potentially affecting judgment of global risks". Risk of arguments being wrong: Ord, Hillerbrand and Sandberg, "Probing the Improbable".

How to act rationally: xrisk is likely underestimated. Must be proactive rather than reactive? Reducing risk might be worth more in terms of future lives than expanding fast (Bostrom, "Astronomical Waste: The Opportunity Cost of Delayed Technological Development"). The maxipok principle: maximize the probability of an okay outcome. Spend most on the biggest threat, the earliest threat, the easiest threat, the least known threat, or the most political threat? Also worth considering: when do we want true information to spread? Bostrom, "Information hazards".
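The anthropic bias point can be made concrete with a toy Monte Carlo sketch. All the numbers below (per-period catastrophe rate, lethality) are made-up illustration values, not estimates: the point is only that observers who exist to count catastrophes necessarily come from histories with fewer of them, so the observed frequency is biased downward.

```python
import random

random.seed(42)

N_PERIODS = 10    # toy timeline of 10 periods
P_EVENT = 0.3     # assumed per-period catastrophe probability (illustrative)
P_LETHAL = 0.5    # assumed chance a catastrophe removes all observers (illustrative)
TRIALS = 100_000

all_counts = []        # catastrophe counts over all simulated histories
surviving_counts = []  # counts over histories that still contain observers

for _ in range(TRIALS):
    # how many catastrophes occur over the whole timeline
    events = sum(random.random() < P_EVENT for _ in range(N_PERIODS))
    # observers survive only if no catastrophe happened to be lethal
    alive = all(random.random() >= P_LETHAL for _ in range(events))
    all_counts.append(events)
    if alive:
        surviving_counts.append(events)

mean_all = sum(all_counts) / len(all_counts)
mean_observed = sum(surviving_counts) / len(surviving_counts)
print(f"true mean catastrophes per history: {mean_all:.2f}")
print(f"mean seen by surviving observers:  {mean_observed:.2f}")
```

The surviving observers systematically undercount the catastrophes, even though they record their own past perfectly: the "anthropic shadow" of the lethal events.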
Människa+ - the Swedish transhumanist group Människa+ has set up their website (in Swedish, of course). They have various transhumanism-related essays, links and videos. Ah, my talk on collective intelligence from Lift 10 is near the top - I feel honored.
On Practical Ethics I blog about Unintentional contraception: does the Pope's new stance on condom use to prevent AIDS, together with the doctrine of double effect, imply that it is OK to use condoms as contraceptives, as long as you are sufficiently worried about getting an STD?
Things like the doctrine of double effect are the reason I am a consequentialist.
Retaining privacy: the EU commission and the right to be forgotten (Practical Ethics) - I blog about the EU Commission draft on data protection, and how this links with data retention and actual privacy norms.
The Swedish blog inslag.se has an excellent little post about normality. It shows that being different from the norm is quite normal. The argument has been made before, but it is worth repeating.
If we assume the following rather modest simplifications:
- there are about 200 independent dimensions along which a person can deviate from the norm, and
- on each dimension, about 2% of people deviate (so 98% are "normal" on it).
One can quibble a bit with each of these assumptions, but mild modifications do not change the conclusion much.
Given this, the probability of being normal (in the sense of not being different along any of these 200 dimensions) is 0.98^200 ≈ 1.8%. So normals are rarer than the other minorities!
True normality is hence so rare that it makes you a member of a very small and odd minority.
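The arithmetic is easy to check with a one-liner, using the numbers from the post above:

```python
# 200 independent dimensions; on each, 98% of people match the norm.
p_normal = 0.98 ** 200
print(f"{p_normal:.1%}")  # probability of matching the norm on every dimension → 1.8%
```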