The frightening infinite spaces: apeirophobia

Bobby Azarian writes in The Atlantic about Apeirophobia: The Fear of Eternity. This is the existential vertigo experienced by some when considering everlasting life (typically in a religious context), or just the infinite. Pascal’s Pensées famously touches on the same feeling: “The eternal silence of these infinite spaces frightens me.” For some this is upsetting enough that it actually counts as a specific phobia, although in most cases it seems to be more a general unease.

Fearing immortality


I found the concept relevant since yesterday I had a conversation with a philosopher arguing against life extension. Many of her arguments were familiar: they come up again and again if you express a positive view of longevity. It is interesting to notice that many other views do not elicit the same critical response. Suggest a future in space and some think it might be wasteful or impossible, but rarely with the same tenacity as with life extension. As soon as one rational argument is disproven another one takes its place.

In the past I have usually attributed this to ego defence and maybe terror management. We learn about our mortality when we are young and have to come up with a way of handling it: ignoring it, denying it by assuming an eternal hereafter, believing that we can live on through works or children, various philosophical solutions, concepts of the appropriate shape of our lives, etc. When life extension comes up, this terror management or self-image is threatened and people try to defend it – their emotional equilibrium is risked by challenges to the coping strategy (and yes, this is also true for transhumanists who resolve mortality by hoping for radical life extension: there is a lot of motivated thinking going on in defending the imminent breakthroughs against death, too). While “longevity is disturbing to me” is not a good argument, it is the motivator for finding arguments that can work in the social context. This is also why no amount of knocking down these arguments actually leads anywhere: the source is a coping strategy, not a rationally consistent position.

However, the apeirophobia essay suggests a different reason some people argue against life extension. They are actually unsettled by indefinite or infinite lives. I do not think everybody who argues against life extension has apeirophobia; it is probably a minority fear (and might even be a different take on the fear of death). But it is a somewhat more respectable origin than ego defence.

When I encounter arguments for the greatness of finite and perhaps short spans of life, I often rhetorically ask – especially if the interlocutor is from a religious worldview – if they think people will die in Heaven. It is basically Sappho’s argument (“to die is an evil; for the gods have thus decided. For otherwise they would be dying.”) Of course, this rarely succeeds in convincing anybody but it tends to throw a spanner in the works. However, the apeirophobia essay actually shows that some religious people may have a somewhat consistent fear that eternal life in Heaven isn’t a good thing. I respect that. Of course, I might still ask why God in their worldview insists on being eternal, but even I can see a few easy ways out of that issue (e.g. it is a non-human being not affected by eternity in the same way).


As I often have to point out, I do not believe immortality is a thing. We are finite beings in a random universe, and sooner or later our luck runs out. What to aim for is indefinitely long lives, lives that go on (with high probability) until we no longer find them meaningful. But even this tends to trigger apeirophobia. Maybe one reason is the indeterminacy: there is nothing pre-set at all.

Pascal’s worry seems to be not just the infinity of the spaces but also their arbitrariness and how insignificant we are relative to them. The full section of the Pensées:

205: When I consider the short duration of my life, swallowed up in the eternity before and after, the little space which I fill, and even can see, engulfed in the infinite immensity of spaces of which I am ignorant, and which know me not, I am frightened, and am astonished at being here rather than there; for there is no reason why here rather than there, why now rather than then. Who has put me here? By whose order and direction have this place and time been allotted to me? Memoria hospitis unius diei prætereuntis.

206: The eternal silence of these infinite spaces frightens me.

207: How many kingdoms know us not!

208: Why is my knowledge limited? Why my stature? Why my life to one hundred years rather than to a thousand? What reason has nature had for giving me such, and for choosing this number rather than another in the infinity of those from which there is no more reason to choose one than another, trying nothing else?

Pascal is clearly unsettled by infinity and eternity, but in the Pensées he tries to resolve this psychologically: since he trusts God, then eternity must be a good thing even if it is hard to bear. This is a very different position from my interlocutor yesterday, who insisted that it was the warm finitude of a human life that gave life meaning (a view somewhat echoed in Mark O’Connell’s To Be a Machine). To Pascal apeirophobia was just another challenge to become a good Christian; to the mortalist it is actually a correct, value-tracking intuition.

Apeirophobia as a moral intuition

I have always been sceptical of psychologizing why people hold views. It is sometimes useful for empathising with them and for recognising the futility of knocking down arguments that are actually secondary to a core worldview (which it may or may not be appropriate to challenge). But it is easy to make mistaken guesses. Plus, one often ends up in the “sociological fallacy”: thinking that since one can see non-rational reasons people hold a belief, that belief must be unjustified or even untrue. As Yudkowsky pointed out, forecasting empirical facts by psychoanalyzing people never works. I also think this applies to values, insofar as they are not only about internal mental states: that people with certain characteristics are more likely to think something has a certain value than people without the characteristic only gives us information about the value if that characteristic somehow correlates with being right about that kind of value.

Feeling apeirophobia does not tell us that infinity is bad, just as feeling xenophobia does not tell us that foreigners are bad. Feeling suffering on the other hand does give us direct knowledge that it is intrinsically aversive (it takes a lot of philosophical footwork to construct an argument that suffering is actually OK). Moral or emotional intuitions certainly can motivate us to investigate a topic with better intellectual tools than the vague unease, conservatism or blind hope that started the process. The validity of the results should not depend on the trigger, since there is no necessary relation between the feeling and the ethical state of the thing triggering it: much of the debate about “the wisdom of repugnance” is about clarifying when we should expect the intuition to overwhelm the actual thinking and when it is actually reliable. I always get very sceptical when somebody claims their intuition comes from an innate sense of what the good is – at least when it differs from mine.

Would people with apeirophobia have a better understanding of the value of infinity than somebody else? I suspect apeirophobes are on average smarter and/or have a higher need for cognition, but this does not imply that they get things right, just that they think more and more deeply about concepts many people are happy to gloss over. There are many smart nonapeirophobes too.

A strong reason to be sceptical of apeirophobic intuitions is that intuitions tend to work well when we have plenty of experience to build them from, either evolutionarily or individually. Human practical physics intuitions are great for everyday objects and speeds, and progressively worsen as we reach relativistic or quantum scales. We do not encounter eternal life at all, and hence we should be very suspicious about the validity of apeirophobia as a truth-tracking innate signal. Rather, it is triggered when we become overwhelmed by the lack of references to infinity in our lived experience, or we discover the arbitrarily extreme nature of “infinite issues” (who has not experienced vertigo when they first understood uncountable sets?). It is a correct signal that our minds are swimming above an abyss we do not know, but it does not tell us what is in this abyss. Maybe it is nice down there? Given our human tendency to look more strongly for downsides and losses than positives, we will tend to respond to this uncertainty by imagining diffuse worst-case scenario monsters anyway.

Bad eternities

I do not think I have apeirophobia, but I can still see how chilling belief in eternal lives can be. Unsong’s disutility-maximizing Hell is very nasty, but I do not think it exists. I am not worried about Eternal Returns: if you chronologically live forever but actually just experience a finite-length loop of experiences again and again, then it makes sense to say that your life is just that long.

My real worry is quantum immortality: from a subjective point of view one should expect to survive whatever happens in a multiverse situation, since one cannot be aware in those branches where one died. The problem is that the set of nice states to be in is far smaller than the set of possible states, so over time we should expect to end up horribly crippled and damaged yet unable to die. But here the main problem is the suffering and reduction of circumstances, not the endlessness.

There is a problem with endlessness here though: since random events play a decisive role in our experienced life paths it seems that we have little control over where we end up and that whatever we experience in the long run is going to be wholly determined by chance (after all, beyond 10^100 years or more we will all have to be a succession of Boltzmann brains). But the problem seems to be more the pointlessness that emerges from this chance than that it goes on forever: a finite randomised life seems to hold little value, and as Tolstoy put it, maybe we need infinite subjective lives where past acts matter to actually have meaning. I wonder what apeirophobes make of Tolstoy?

Embracing the abyss

My recommendation to apeirophobes is not to take Azarian’s advice and put eternity out of mind, but instead to embrace it in a controllable way. Learn set theory and the paradoxes of infinity. And then look at the time interval [0, ∞) and realise it can be mapped onto the interval [0, 1) (e.g. by f(t) = t/(t+1)). From the infinite perspective any finite length of life is equal. But infinite spans can be manipulated too: in a sense they are also all the same. The infinities hide within what we normally think of as finite.
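The mapping is easy to check numerically. A minimal sketch (the function names squash and stretch are my own illustration):

```python
def squash(t):
    """Map a time t in [0, infinity) into [0, 1) via t/(t+1)."""
    return t / (t + 1)

def stretch(x):
    """The inverse: send x in [0, 1) back out to [0, infinity)."""
    return x / (1 - x)

# Every moment of an arbitrarily long life lands strictly inside [0, 1),
# and the map can be undone: the intervals share "the same" points.
for t in [0, 1, 100, 10**6]:
    x = squash(t)
    assert 0 <= x < 1
    assert abs(stretch(x) - t) <= 1e-9 * max(t, 1)
```

This is the standard order-preserving bijection between the half-line and the unit interval; nothing about it is specific to time.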

I suspect Pascal would have been delighted if he knew this math. However, to him the essential part was how we turn intellectual meditation into emotional or existential equilibrium:

Let us therefore not look for certainty and stability. Our reason is always deceived by fickle shadows; nothing can fix the finite between the two Infinites, which both enclose and fly from it.

If this be well understood, I think that we shall remain at rest, each in the state wherein nature has placed him. As this sphere which has fallen to us as our lot is always distant from either extreme, what matters it that man should have a little more knowledge of the universe? If he has it, he but gets a little higher. Is he not always infinitely removed from the end, and is not the duration of our life equally removed from eternity, even if it lasts ten years longer?

In comparison with these Infinites all finites are equal, and I see no reason for fixing our imagination on one more than on another. The only comparison which we make of ourselves to the finite is painful to us.

In the end it is we who make the infinite frightening or the finite painful. We can train ourselves to stop it. We may need very long lives in order to grow to do it well, though.

Review of the cyborg bill of rights 1.0

The Cyborg Bill of Rights 1.0 is out. Rich MacKinnon suggests the following rights:

A person shall enjoy the sanctity of bodily integrity and be free from unnecessary search, seizure, suspension or interruption of function, detachment, dismantling, or disassembly without due process.

A person shall be free (speech clause) to express themselves through temporary or permanent adaptions, alterations, modifications, or augmentations to the shape or form of their bodies. Similarly, a person shall be free from coerced or otherwise involuntary morphological changes.

A person shall be free from exploitive or injurious 3rd party ownerships of vital and supporting bodily systems. A person is entitled to the reasonable accrual of ownership interest in 3rd party properties affixed, attached, embedded, implanted, injected, infused, or otherwise permanently integrated with a person’s body for a long-term purpose.

A person is entitled to dominion over intelligences and agents, and their activities, whether they are acting as permanent residents, visitors, registered aliens, trespassers, insurgents, or invaders within the person’s body and its domain.

A legally recognized mutant shall enjoy all the rights, benefits, and responsibilities extended to natural persons.

As a sometime philosopher with a bit of history of talking about rights regarding bodily modification, I of course feel compelled to comment.

What are rights?

First, what is a right? Clearly anybody can state that we have a right to X, but only some agents and X-rights make sense or have staying power.

One kind of rights are legal rights of various kinds. This can be international law, national law, or even informal national codes (for example the Swedish allemansrätten, which is not a moral/human right and is actually fairly recent). Here the agent has to be some legitimate law- or rule-maker. The US Bill of Rights is an example: the result of a political process that produced legal rights, with relatively little if any moral content. Legal rights need to be enforceable somehow.

Then there are normative moral principles such as fundamental rights (applicable to a person since they are a person), natural rights (applicable because of facts of the world) or divine rights (imposed by God). These are universal and egalitarian: applicable everywhere, everywhen, and the same for everybody. Bentham famously dismissed the idea of natural rights as “nonsense on stilts” and there is a general skepticism today about rights being fundamental norms. But insofar as they do exist, anybody can discover and state them. Moral rights need to be doable.

While there may be doubts about the metaphysical nature of rights, if a society agrees on a right it will shape action, rules and thinking in an important way. It is like money: it only gets value by the implicit agreement that it has value and can be exchanged for goods. Socially constructed rights can be proposed by anybody, but they only become real if enough people buy into the construction. They might be unenforceable and impossible to perform (which may over time doom them).

What about the cyborg rights? There is no clear reference to moral principles, and only the last one refers to law. In fact, the preamble states:

Our process begins with a draft of proposed rights that are discussed thoroughly, adopted by convention, and then published to serve as model language for adoption and incorporation by NGOs, governments, and rights organizations.

That is, these rights are at present a proposal for social construction (quite literally) that hopefully will be turned into a convention (a weak international treaty) that eventually may become national law. This also fits with the proposal coming from MacKinnon rather than the General Secretary of the UN – we can all propose social constructions and urge the creation of conventions, treaties and laws.

But a key challenge is to come up with something that can become enforceable at some point. Cyborg bodies might be more finely divisible and transparent than human bodies, so that it becomes hard to regulate these rights. How do you enforce sovereignty against spyware?


Why is a right a right? There has to be a reason for a right (typically hinted at in preambles full of “whereas…”).

I have mostly been interested in moral rights. Patrick D. Hopkins wrote an excellent overview “Is enhancement worthy of being a right?” in 2008 where he looks at how you could motivate morphological freedom. He argues that there are three main strategies to show that a right is fundamental or natural:

  1. That the right conforms to human nature. This requires showing that it fits a natural end. That is, there are certain things humans should aim for, and rights help us live such lives. This is also the approach of natural law accounts.
  2. That the right is grounded in interests. Rights help us get the kinds of experiences or states of the world that we (rightly) care about. That is, there are certain things that are good for us (e.g. “the preservation of life, health, bodily integrity, play, friendship, classic autonomy, religion, aesthetics, and the pursuit of knowledge”) and the right helps us achieve this. Why those things are good for us is another matter of justification, but if we agree on the laundry list then the right follows if it helps achieve them.
  3. That the right is grounded in our autonomy. The key thing is not what we choose but that we get to choose: without freedom of choice we are not moral agents. Much of rights by this account will be about preventing others from restricting our choices and not interfering with their choices. If something can be chosen freely and does not harm others, it has a good chance to be a right. However, this is a pretty shallow approach to autonomy; there are more rigorous and demanding ideas of autonomy in ethics (see SEP and IEP for more). This is typically how many fundamental rights get argued (I have a right to my body since if somebody can interfere with my body, they can essentially control me and prevent my autonomy).

One can do this in many ways. For example, David Miller, writing on grounding human rights, suggests that one approach is to allow people from different cultures to live together as equals, another is to base rights on human needs (very similar to interest accounts), and a third is their instrumental use to safeguard other (need-based) rights. Many like to include human dignity, another tricky concept.

Social constructions can have a lot of reasons. Somebody wanted something, and this was recognized by others for some reason. Certain reasons are cultural universals, and that makes it more likely that society will recognize a right. For example, property seems to be universal, and hence a right to one’s property is easier to argue than a right to paid holidays (but what property is, and what rules surround it, can be very different).

Legal rights are easier. They exist because there is a law or treaty, and the reasons for that are typically a political agreement on something.

It should be noted that many declarations of rights do not give any reasons. Often because we would disagree on the reasons, even if we agree on the rights. The UN declaration of human rights gives no hint of where these rights come from (compare to the US declaration of independence, where it is “self-evident” that the creator has provided certain rights to all men). Still, this is somewhat unsatisfactory and leaves many questions unanswered.

So, how do we justify cyborg rights?

In the liberal rights framework I used for morphological freedom we could derive things rather straightforwardly: we have a fundamental right to life, and from this follows freedom from disassembly. We have a fundamental right to liberty, and together with the right to life this leads to a right to our own bodies, bodily sovereignty, freedom of morphology and the first half of the right to organic naturalization. We have a right to our property (typically derived from fundamental rights to seek our happiness and have liberty), and from this the second half of the organic naturalization right follows (we are literally mixing ourselves rather than our work with the value produced by the implants). Equality for mutants follows from having the same fundamental rights as humans (note that the bill talks about “persons”, and most ethical arguments try to be valid for whatever entities count as persons – this tends to be more than general enough to cover cyborg bodies). We still need some justification of the fundamental rights of life, liberty and happiness, but that is outside the scope of this exercise. Just use your favorite justifications.

The human nature approach would say that cyborg nature is such that these rights fit with it. This might be tricky to use as long as we do not have many cyborgs to study the nature of. In fact, since cyborgs are imagined as self-creating (or at least self-modifying) beings it might be hard to find any shared nature… except maybe the self-creation part. As I often like to argue, this is close to Mirandola’s idea of human dignity deriving from our ability to change ourselves.

The interest approach would ask how the cyborg interests are furthered by these rights. That seems pretty straightforward for most reasonably human-like interests. In fact, the above liberal rights framework is to a large extent an interest-based account.

The autonomy account is also pretty straightforward. All cyborg rights except the last are about autonomy.

Could we skip the ethics and these possibly empty constructions? Perhaps: we could see the cyborg bill of rights as a way of making a cyborg-human society possible to live in. We need to tolerate each other and set boundaries on allowed messing around with each other’s bodies. Universals of property lead to the naturalization right, territoriality to the sovereignty right, and the universal that actions under self-control are distinguished from those not under control might be taken as the root for autonomy-like motivations that then support the rest.

Which one is best? That depends. The liberal rights/interest system produces nice modular rules, although there will be many arguments about what has precedence. The human nature approach might be deep and poetic, but potentially easy to disagree on. Autonomy is very straightforward (except when the cyborg starts messing with their brain). Social constructivism allows us to bring in issues of what actually works in a real society, not just what perfect isolated cyborgs (on a frictionless infinite plane) should do.

Parts of rights

One of the cool properties of rights is that they have parts – “the Hohfeldian incidents”, after Wesley Hohfeld (1879–1918) who discovered them. He was thinking of legal rights, but this applies to moral rights too. His system is descriptive – this is how rights work – rather than explaining why they came about or whether this is a good thing. The four parts are:

Privileges (alias liberties): I have a right to eat what I want. Someone with a driver’s licence has the privilege to drive. If you have a duty not to do something, then you have no privilege about it.

Claims: I have a claim on my employer to pay my salary. Children have a claim vis-a-vis every adult not to be abused. My employer is morally and legally dutybound to pay, since they agreed to do so. We are dutybound to refrain from abusing children since it is wrong and illegal.

These two are what most talk about rights deals with. In the bill, the freedom from disassembly and freedom of morphology are about privileges and claims. The next two are a bit meta, dealing with rights over the first two:

Powers: My boss has the power to order me to research a certain topic, and then I have a duty to do it. I can invite somebody to my home, and then they have the privilege of being there as long as I give it to them. Powers allow us to change privileges and claims, and sometimes powers (an admiral can relieve a captain of the power to command a ship).

Immunities: My boss cannot order me to eat meat. The US government cannot impose religious duties on citizens. These are immunities: certain people or institutions cannot change other incidents.

These parts are then combined into full rights. For example, my property rights to this computer involve the privilege to use the computer, a claim against others to not use the computer, the power to allow others to use it or to sell it to them (giving them the entire rights bundle), and an immunity against others altering these rights. Sure, in practice the software inside is of doubtful loyalty and there are law-enforcement and emergency situation exceptions, but the basic system is pretty clear. Licence agreements typically give you a far smaller bundle: mostly a bare privilege of use.
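As a toy illustration of how the four incidents combine into a full right, the computer-ownership bundle could be sketched like this (the class and all the strings are my own illustration; only the four part names come from Hohfeld):

```python
from dataclasses import dataclass, field

@dataclass
class RightsBundle:
    """A toy model of a Hohfeldian rights bundle over one object."""
    holder: str
    privileges: set = field(default_factory=set)  # what the holder may do
    claims: set = field(default_factory=set)      # duties others owe the holder
    powers: set = field(default_factory=set)      # incidents the holder can change
    immunities: set = field(default_factory=set)  # incidents others cannot change

# My property right in the computer, as described above:
computer = RightsBundle(
    holder="me",
    privileges={"use the computer"},
    claims={"others must not use the computer"},
    powers={"permit others to use it", "sell it"},
    immunities={"others cannot alter these incidents"},
)

def transfer(bundle, new_holder):
    """Exercise the power of sale: the whole bundle moves to the buyer."""
    assert "sell it" in bundle.powers
    bundle.holder = new_holder
    return bundle

sold = transfer(computer, "you")
assert sold.holder == "you"
```

The point of the model is that a "right" is not one thing but a structured bundle, and selling, licensing or lending differ only in which incidents get transferred.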

Sometimes we speak about positive and negative rights: if I have a negative right I am entitled to non-interference from others, while a positive right entitles me to some help or goods. My right to my body is a negative right in the sense that others may not prevent me from using or changing my body as I wish, but I do not have a positive right to demand that they help me with some weird bodymorphing. However, in practice there is a lot of blending going on: public healthcare systems give us positive rights to some (but not all) treatment, policing gives us a positive right of protection (whether we want it or not). If you are a libertarian you will tend to emphasize the negative rights as being the most important, while social democrats tend to emphasize state-supported positive rights.

The cyborg bill of rights starts by talking about privileges and claims. Freedom of morphology clearly expresses an immunity to forced bodily change. The naturalization right is about immunity from unwilling change of the rights of parts, and an expression of a kind of power over parts being integrated into the body. Sovereignty is all about power over entities getting into the body.

The right of bodily sovereignty seems to imply odd things about consensual sex – once there is penetration, there is dominion. And what about entities that are partially inside the body? I think this is because it is trying to reinvent some of the above incidents. The aim is presumably to cover pregnancy/abortion, what doctors may do, and other interventions at the same time. The doctor case is easy, since it is roughly what we agree on today: we have the power to allow doctors to work on our bodies, but we can also withdraw this whenever we want.

Some other thoughts

The recent case where the police subpoenaed the pacemaker data of a suspected arsonist brings some of these rights into relief. The subpoena occurred with due process, so it was allowed by the freedom from disassembly. In fact, since it is only information, and it is copied, one can argue that there was no real “disassembly”. There have been cases where police wanted bullets lodged in people in order to do ballistics on them, but US courts have generally found that bodily integrity trumps the need for evidence. Maybe one could argue for a derived right to bodily privacy, but social needs can presumably trump this just as they trump normal privacy. Right now views on bodily integrity and privacy are still based on the assumption that bodies are integral and opaque. In a cyborg world this is no longer true, and the law may well move in a more invasive direction.

“Legally recognized mutant”? What about mutants denied legal recognition? Legal recognition makes sense for things that the law must differentiate between, not for things the law is blind to. Legally recognized mutants (whatever they are) would be a group that needs to be treated in some special way. If they are just like natural humans they do not need special recognition. We may have laws making it illegal to discriminate against mutants, but this is a law about a certain kind of behavior rather than the recipient. If I racially discriminate against somebody but happen to be wrong about their race, I am still guilty. So the legal recognition part does not do any work in this right.

And why just mutants? Presumably the aim here is to cover cyborgs, transhumans and other prefix-humans so they are recognized as legal and moral agents with the same standing. The issue is whether this is achieved by arguing that they were human and “mutated”, or are descended from humans, and hence should have the same standing, or whether this is due to them having the right kind of mental states to be persons. The first approach is really problematic: anencephalic infants are mutants but hardly persons, and basing rights on lineage seems ripe for abuse. The second is much simpler, and allows us to generalize to other beings like brain emulations, aliens, hypothetical intelligent moral animals, or the Swampman.

This links to a question that might deserve a section on its own: who are the rightsholders? Normal human rights typically deal with persons, which at least includes adults capable of moral thinking and acting (they are moral agents). Someone who is incapable, for example due to insanity or being a child, has reduced rights but is still a moral patient (someone we have duties towards). A child may not have full privileges and powers, but they do have claims and immunities. I like to argue that once you can comprehend and make use of a right you deserve to have it, since you have capacity relative to the right. Some people also think prepersons like fertilized eggs are persons and have rights; I think this does not make much sense since they lack any form of mind, but others think that having the potential for a future mind is enough to grant immunity. Tricky border cases like persistent vegetative states, cryonics patients, great apes and weird neurological states keep bioethicists busy.

In the cyborg case the issue is what properties make something a potential rightsholder and how to delineate the border of the being. I would argue that if you have a moral agent system it is a rightsholder no matter what it is made of. That is fine, except that cyborgs might have interchangeable parts: if cyborg A gives her arm to cyborg B, has anything changed? I would argue that the arm switched from being a part of/property of A to being a part of/property of B, but the individuals did not change since the parts that make them moral agents are unchanged (just as transplants do not change identity). But what if A gave part of her brain to B? A turns into A’, B turns into B’, and these may be new agents. Or what if A has outsourced a lot of her mind to external systems running in the cloud or in B’s brain? We may still argue that rights adhere to being a moral agent and person rather than being the same person or a person that can easily be separated from other persons or infrastructure. But clearly we can make things really complicated through overlapping bodies and minds.


I have looked at the cyborg bill of rights and how it fits with rights in law, society and ethics. Overall it is a first stab at establishing social conventions for enhanced, modular people. It likely needs a lot of tightening up to work, and people need to actually understand and care about its contents for it to have any chance of becoming something legally or socially “real”. From an ethical standpoint one can motivate the bill in a lot of ways; for maximum acceptance one needs to use a wide and general set of motivations, but these will lead to trouble when we try to implement things practically since they give no way of trading one off against another one in a principled way. There is a fair bit of work needed to refine the incidents of the rights, not to mention who is a rightsholder (and why). That will be fun.

Doing right and feeling good

My panel at Hay-on-Wye (me, Elaine Glaser, Peter Dews and Simon Baron-Cohen) talked about compassion, the sentiment model of morality, effective altruism and how to really help the world. Now available as video!

My view is largely that moral action is strongly driven and motivated by emotions rather than reason, but outside the world of the blindingly obvious or everyday human activity our intuitions and feelings are not great guides. We do not function well morally when the numbers get too big or the cognitive biases become maladaptive. Morality may be about the heart, but ethics is in the brain.

Aristotle on trolling

Watchful Aristotle

A new translation of Aristotle’s classic “On trolling” by Rachel Barney! (Open Access)

That trolling is a shameful thing, and that no one of sense would accept to be called ‘troll’, all are agreed; but what trolling is, and how many its species are, and whether there is an excellence of the troll, is unclear. And indeed trolling is said in many ways; for some call ‘troll’ anyone who is abusive on the internet, but this is only the disagreeable person, or in newspaper comments the angry old man. And the one who disagrees loudly on the blog on each occasion is a lover of controversy, or an attention-seeker. And none of these is the troll, or perhaps some are of a mixed type; for there is no art in what they do. (Whether it is possible to troll one’s own blog is unclear; for the one who poses divisive questions seems only to seek controversy, and to do so openly; and this is not trolling but rather a kind of clickbait.)

Aristotle’s definition is quite useful:

The troll in the proper sense is one who speaks to a community and as being part of the community; only he is not part of it, but opposed. And the community has some good in common, and this the troll must know, and what things promote and destroy it: for he seeks to destroy.

He then goes on to analyse the knowledge requirements of trolling, the techniques, the types and motivations of trolls, the difference between a gadfly like Socrates and a troll, and which communities are vulnerable to trolls. All in a mere two pages.

(If only the medieval copyists had saved his other writings on the Athenian Internet! But the crash and split of Alexander the Great’s social media empire destroyed many of them before that era.)

The text reminds me of another must-read classic, Harry Frankfurt’s “On Bullshit”. There Frankfurt analyses the nature of bullshitting. His point is that normal deception cares about the truth: it aims to keep someone from learning it. But somebody producing bullshit does not care about the truth or falsity of the statements made, merely that they fit some manipulative, social or even time-filling aim.

It is just this lack of connection to a concern with truth – this indifference to how things really are – that I regard as of the essence of bullshit.

It is pernicious, since it fills our social and epistemic arena with dodgy statements whose value is uncorrelated to reality, and the bullshitters gain from the discourse being more about the quality (or the sincerity) of bullshitting than any actual content.

Both of these essays are worth reading in this era of the Trump candidacy and Dugin’s Eurasianism. Know your epistemic enemies.

Desperately Seeking Eternity

Circle of life

Me on BBC3 talking about eternity, the universe, life extension and growing up as a species.

Online text of the essay.

Overall, I am pretty happy with it (hard to get everything I want into a short essay and without using my academic caveats, footnotes and digressions). Except maybe for the title, since “desperate” literally means “without hope”. You do not seek eternity if you do not hope for anything.

Thanks for the razor, Bill!

Thermocouple, Piotr Kowalski

I like the idea of a thanksgiving day, leaving out all the Americana turkeys, problematic immigrant-native relations and family logistics: just the moment to consider what really matters to you and why life is good. And giving thanks for intellectual achievements and tools makes eminent sense: this thanksgiving Sean Carroll gave thanks for the Fourier transform.

Inspired by this, I want to give thanks for Occam’s razor.

These days a razor in philosophy denotes a rule of thumb that allows one to eliminate something unnecessary or unlikely. Occam’s is the most famous: William of Ockham (ca. 1285-1349) stated “Pluralitas non est ponenda sine neccesitate” (“plurality should not be posited without necessity.”) Today we usually phrase it as “the simplest theory that fits is best”.

Principles of parsimony have been suggested for a long time; Aristotle had one, so did Maimonides and various other medieval thinkers. But let’s give Bill from Ockham the name in the spirit of Stigler’s law of eponymy.

Of course, it is not always easy to use. Is the many worlds interpretation of quantum mechanics possible to shave away? It posits an infinite number of worlds that we cannot interact with… except that it does so by taking the quantum mechanics formalism seriously (each possible world is assigned a probability) and not adding extra things like wavefunction collapse or pilot waves. In many ways it is conceptually simpler: just because there are a lot of worlds doesn’t mean they are wildly different. Somebody claiming there is a spirit world is doubling the amount of stuff in the universe, but positing that there are a lot of ordinary worlds is not too different from positing a lot of planets.

Simplicity is actually quite complicated. One can argue about which theory has the fewest and most concise basic principles, but also about the number of kinds of entities a theory postulates. Not to mention why one should go for parsimony at all.

In my circles, we like to think of the principle in terms of Bayesian statistics and computational complexity. The more complex a theory is, the better it can typically fit known data – but it will also generalize worse to new data, since it overfits the first set of data points. Parsimonious theories have fewer degrees of freedom, so they cannot fit as well as complex theories, but they are less sensitive to noise and generalize better. One can operationalize the optimal balance using various statistical information criteria (AIC, which minimizes the expected information lost when fitting; BIC, which approximates the Bayesian evidence for the model). And Solomonoff gave a version of the razor in theoretical computer science: for computable sequences of bits there exists a unique (up to choice of Turing machine) prior that promotes sequences generated by simple programs and has awesome powers of inference.
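The trade-off between fit and parsimony can be made concrete with a toy example. This is a minimal sketch (my own illustration, not from the original text) using numpy: data generated from a noisy quadratic law, with polynomials of increasing degree scored by BIC. The high-degree fits reduce the residual error by chasing noise, but the complexity penalty steers the choice back toward a simple model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a quadratic law observed with Gaussian noise.
n = 200
x = np.linspace(-3, 3, n)
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(0.0, 1.0, n)

def bic(degree):
    """BIC for a least-squares polynomial fit: n*ln(RSS/n) + k*ln(n)."""
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    k = degree + 1  # number of fitted parameters
    return n * np.log(rss / n) + k * np.log(n)

scores = {d: bic(d) for d in range(1, 10)}
best_degree = min(scores, key=scores.get)
print(best_degree)  # the penalty favours the parsimonious model
```

A degree-9 polynomial always has a smaller residual than the quadratic, yet its BIC is worse: the extra parameters cost more than the noise they absorb, which is Occam's razor in statistical form.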

But in day-to-day life Occam works well, especially with a maximum probability principle (you are more likely to see likely things than unlikely ones; if you see hoofprints in the UK, think horses not zebras). A surprising number of people fall for the salient stories inherent in unlikely scenarios and then choose to ignore Occam (just think of conspiracy theories). If the losses from low-probability risks are great enough one should rationally focus on them, but then one must check one’s priors for such risks. Starting out with a possibilistic view that anything is possible (and hence everything has roughly equal chance) means that one becomes paranoid or frozen with indecision. Occam tells you to look for the simple, robust ways of reasoning about the world. When they turn out to be wrong, shift gears and come up with the next simplest thing.
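The hoofprints example can be run through Bayes' theorem explicitly. The numbers below are invented purely for illustration: even if some feature of a print is, say, sixteen times more likely under the zebra hypothesis, the overwhelming prior in favour of horses keeps the posterior for zebras tiny.

```python
# Made-up priors and likelihoods for the "hoofprints in the UK" example.
p_horse = 0.999                 # prior: horses are vastly more common here
p_zebra = 1.0 - p_horse
p_obs_given_horse = 0.05        # a slightly odd-looking print
p_obs_given_zebra = 0.80        # much more probable if a zebra made it

# Bayes' theorem: P(zebra | obs) = P(obs | zebra) P(zebra) / P(obs)
evidence = p_obs_given_horse * p_horse + p_obs_given_zebra * p_zebra
posterior_zebra = p_obs_given_zebra * p_zebra / evidence
print(posterior_zebra)  # still under 2%
```

This is the quantitative content of the slogan: strong priors from base rates do most of the work, and it takes correspondingly strong evidence to overturn them.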

Simplicity might sometimes be elegant, but that is not why we should choose it. To me it is the robustness that matters: given our biased, flawed thought processes and our limited and noisy data, we should not build too elaborate castles on those foundations.