Blueberry Earth

[Update: I have a paper version of this essay on arXiv:1807.10553, extending and correcting some of the results.]

On Physics Stackexchange billybodega asked the question:

Supposing that the entire Earth was instantaneously replaced with an equal volume of closely packed, but uncompressed blueberries, what would happen from the perspective of a person on the surface?

Unfortunately the site tends to frown on fun questions like this, so it was in my opinion prematurely closed while I was working out the answer. Here it is anyway, with some extensions:

The density of blueberries has been estimated at 625.56 kg/m3, and WillO on Stackexchange estimated it at 13% of Earth’s density (5510*0.13=716.3 kg/m3), so assuming \rho_{berries}=700 kg/m3 appears reasonable. Blueberry pulp has a density similar to water, 980 to 1050 kg/m3, although this depends on temperature and on how much solid material there is. The difference from the whole berries is due to the air between them. Note that these are likely the big, thick-skinned “American” blueberries rather than the small wild thin-skinned blueberries (bilberries) I grew up with; the latter would have higher density due to their smaller size and break far more easily.

So instantaneously turning Earth into blueberries will reduce its mass to 0.1274 of what it was. Gravity will become correspondingly weaker, g_{BE}=0.1274 g.

However, blueberries are not particularly sturdy. While there is a literature on blueberry mechanics (of course!), I did not manage to find a great source on their compressive strength. A rough estimate is possible: stacking a sugar cube (1 g) on a berry will not break it, while a milk carton (1 kg) will; 100 g has a decent but not certain chance. So if we assume the blueberry cross-section to be one square centimetre, the breaking pressure is on the order of P_{break}=(0.1 kg) g / 10^{-4} m^2 \approx 10^4 N/m2. This allows us to estimate at what depth the berries will start to break: z=P_{break}/(g_{BE}\rho_{berries}) \approx 11.4 m. So while the surface will be free blueberries, they will start pulping about ten metres down.
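A quick numerical check of these estimates, as a Matlab sketch using the round numbers above:

```matlab
% Order-of-magnitude check of the berry-breaking depth, using the post's numbers.
g       = 9.81;              % m/s^2, Earth surface gravity
g_BE    = 0.1274*g;          % blueberry-earth surface gravity
rho_b   = 700;               % kg/m^3, density of packed berries
P_break = 1e4;               % N/m^2, ~0.1 kg resting on ~1 cm^2, rounded
z_break = P_break/(g_BE*rho_b);
fprintf('Pulping starts at a depth of about %.1f m\n', z_break);
```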

This pulping has an important effect: the pulp separates from the air, coalescing into a smaller sphere. If we assume pulp to be an incompressible fluid, then a sphere of pulp with the same mass as the initial berries satisfies \rho_{pulp} r_{pulp}^3 = \rho_{berries} r_{earth}^3, or r_{pulp} = (\rho_{berries}/\rho_{pulp})^{1/3} r_{earth}. In this case we end up with a planet whose radius is 0.8879 times the original, about 5,657 km, surrounded by a vast atmosphere.
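The new radius follows directly; a one-line check (taking the pulp density to be 1000 kg/m3, in the middle of the range quoted above):

```matlab
% Radius of the coalesced pulp sphere (incompressible pulp, same mass as the berries).
rho_berries = 700;     % kg/m^3
rho_pulp    = 1000;    % kg/m^3, assumed; the post quotes 980-1050
r_earth     = 6371;    % km
r_pulp      = (rho_berries/rho_pulp)^(1/3)*r_earth;
fprintf('r_pulp = %.0f km (%.4f of the original radius)\n', r_pulp, r_pulp/r_earth);
```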

The freefall timescale for the planet is initially 41 minutes, but fairly soon the pulping interactions, the air convection and so on will slow things down in a complicated way. I expect that the actual coalescence will take hours, with some bubbles from the deep interior erupting fairly late.

The gravity on the pulp surface is just 1.5833 m/s2, 16% of normal gravity – almost exactly lunar gravity. This weakens convection currents and the speed with which bubbles move up. The scale height of the atmosphere, assuming the same composition and temperature as on Earth, will be 6.2 times higher. This means that pressure will decline much less with altitude, allowing far thicker clouds and weather systems. As we will see, the atmosphere will puff up more.
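Checking the surface gravity and the scale-height factor (the small differences from the figures above come from rounding the radius and mass):

```matlab
% Surface gravity of the pulp sphere and the resulting change in atmospheric scale height.
G      = 6.674e-11;           % m^3 kg^-1 s^-2
M_BE   = 0.1274*5.972e24;     % kg, blueberry-earth mass
r_pulp = 5.657e6;             % m, pulp-sphere radius from above
g_pulp = G*M_BE/r_pulp^2;
fprintf('g_pulp = %.2f m/s^2 (%.0f%% of g), scale height x%.1f\n', ...
        g_pulp, 100*g_pulp/9.81, 9.81/g_pulp);
```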

The separation has big consequences. Enormous amounts of air will be pushing out from the pulp as bubbles and jets, producing spectacular geysers (especially since the gravity is low). Even more dramatic is the heating: a lot of gravitational energy is released as the mass is compacted. The total gravitational binding energy of a constant density sphere of radius R is

\int_0^R G [4\pi r^2 \rho] [4 \pi r^3 \rho/3] / r dr  = (16\pi^2 G\rho^2/3) \int_0^R r^4 dr
=(16\pi^2 G/15)\rho^2 R^5

(the first factor in the integral is the mass of a spherical shell of radius r, the second the mass of the stuff inside, and the third the 1/r gravitational potential). If we ignore the mass of the air since it is small and we just want an order of magnitude estimate,  the compression of the berry mass gives energy

E=(16\pi^2 G/15)(\rho_{pulp}^2 r_{pulp}^5 - \rho_{berries}^2 r_{earth}^5) \approx 4.3594\times 10^{29} J.

This is roughly the total energy output of the sun over twenty minutes, nothing to sneeze at: blueberry earth will become hot. It works out to about 573,000 J per kg, enough to heat the blueberries from freezing to boiling.
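A check of the energy budget; the exact figure depends on the assumed pulp density, but the order of magnitude is robust:

```matlab
% Gravitational energy released by the collapse (uniform spheres of equal mass),
% and what it means per kilogram of berry mass.
G     = 6.674e-11;
rho_b = 700;   r_e = 6.371e6;                  % berries
rho_p = 1000;  r_p = (rho_b/rho_p)^(1/3)*r_e;  % pulp
U     = @(rho, R) (16*pi^2*G/15)*rho^2*R^5;    % binding energy of a uniform sphere
E     = U(rho_p, r_p) - U(rho_b, r_e);         % energy released by the compaction
M_BE  = (4*pi/3)*rho_b*r_e^3;
fprintf('E = %.1e J, %.0f kJ/kg (vs %.0f kJ/kg to heat water from 0 to 100 C)\n', ...
        E, 1e-3*E/M_BE, 1e-3*4186*100);
```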

The result is that blueberry earth will turn into a roaring ocean of boiling jam, with the geysers of released air and steam likely ejecting at least a few berries into orbit (escape velocity is just 4.234 km/s, and berries at the initial surface will be even higher up in the potential). As the planet evolves a thick atmosphere of released steam will add to the already considerable air from the berries. It is not inconceivable that the planet may heat up further due to a water vapour greenhouse effect, turning into a very odd Venusian world.

Meanwhile the jam ocean is very deep, and the pressure at depth will be enough to cause the formation of high pressure ice even if it is warm. If the formation process is slow there will be some separation of water into ice and a concentration of other chemicals in the jam ocean, but I suspect the rapid collapse will instead make some kind of composite pulp ice. Ice VII forms above 9 GPa, so if we just use constant gravity this happens at a depth z_{ice}=P_{VII}/(g_{BE}\rho_{pulp})\approx 1,909 km, putting the boundary at about two-thirds of the radius. This would make up a sizeable part of the interior. However, gravity is a bit weaker in the interior, so we need to take that into account. The pressure from all the matter above radius r is P(r) = (3GM^2/(8\pi R^4))(1-(r/R)^2), and the ice core will have radius r_{ice}=R\sqrt{1-P_{VII}/P(0)} \approx 3,258 km. This is smaller, about 57% of the radius, and just 20% of the total volume.

The coalescence will also speed up rotation. The original blueberry earth would of course make one rotation every 24 hours, but the smaller result would have a smaller moment of inertia. Conservation of angular momentum gives (2/5)MR_1^2(2\pi/T_1) = (2/5)MR_2^2(2\pi/T_2), or T_2 = (R_2/R_1)^2 T_1, in this case 18.9210 hours. This in turn will increase the oblateness a bit, to approximately 0.038 – an 8.8 times increase over Earth.
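Checking the spin-up, and the oblateness using the first-order estimate f \approx (5/4)\omega^2 R^3/GM for a homogeneous fluid body:

```matlab
% Spin-up from angular momentum conservation, plus the oblateness of a homogeneous
% rotating body to first order, f ~ (5/4)*omega^2*R^3/(G*M).
G     = 6.674e-11;
M_BE  = 0.1274*5.972e24;          % kg
R1    = 6.371e6;  R2 = 5.657e6;   % m, radius before and after coalescence
T1    = 24*3600;                  % s
T2    = (R2/R1)^2*T1;             % uniform spheres: L = (2/5)*M*R^2*(2*pi/T)
omega = 2*pi/T2;
f     = (5/4)*omega^2*R2^3/(G*M_BE);
fprintf('New day length %.2f h, oblateness %.3f\n', T2/3600, f);
```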

Another effect is the orbit of the Moon. Now the two bodies have far more comparable masses: the Moon is roughly a tenth of blueberry earth’s mass, rather than about 1/81 of Earth’s. Is the Moon bound to blueberry earth? A kilogram of lunar material has potential energy GM_{BE}/r_{moon} \approx 1.6925 \times 10^{5} J, while its kinetic energy is 2.6442\times 10^5 J – more than enough to escape. Had it remained bound, the jam ocean would have made an excellent tidal dissipation mechanism that would have slowed down rotation and moved blueberry earth towards tidal lock with the moon much earlier than the 50 billion years it would otherwise have taken.

So, to sum up, to a person standing on the surface of the Earth when it turns into blueberries, the first effect would be a drastic reduction of gravity. Standing on the blueberries might be possible in theory, except that almost immediately they begin to compress rapidly and air starts erupting everywhere. The effect is basically the worst earthquake ever, and it keeps on going until everything has fallen 714 km. While this is going on everything heats up drastically until the entire environment is boiling jam and steam. The end result is a world that has a steam atmosphere covering an ocean of jam on top of warm blueberry granita.

Why Cherry 2000 should not be banned, Terminator should, and what this has to do with Oscar Wilde

[This is what happens when I blog after two glasses of wine. Trigger warning for possibly stupid cultural criticism and misuse of Oscar Wilde.]

From robots to artificiality

On Practical Ethics I discuss what kinds of robots we ought to campaign against. I have signed up against autonomous military robots, but I think sex robots are fine. The dividing line is that for sex robots the harm done (if any) is indirect and victimless, and best handled through sociocultural means rather than legislation.

I think the campaign against sex robots has a point in that there are some pretty creepy ideas floating around in the world of current sex bots. But I also think it assumes these ideas are the only possible motivations. As I pointed out in my comments on another practical ethics post, there are likely people turned on by pure artificiality – human sexuality can be far queerer than most think.

Going off on a tangent, I am reminded of Oscar Wilde’s epigram

“The first duty in life is to be as artificial as possible. What the second duty is no one has as yet discovered.”

Being artificial is not the same thing as being an object. As noted by Barris, Wilde’s artificiality actually fits in with pluralism and liberalism. Things could be different. Yes, in the artificial world nothing is absolutely given, everything is the result of some design choices. But assuming some eternal Essence/Law/God is necessary for meaning or morality exposes one to a fruitless search for that Thing (or worse, a premature assumption that one has found It, typically when looking in the mirror). Indeed, as Dorian Gray muses, “Is insincerity such a terrible thing? I think not. It is merely a method by which we can multiply our personalities.” We are not single personas with unitary identities and well-defined destinies, and this is most clearly visible in our social plays.

Sex, power and robots

Continuing on my Wildean binge, I encountered another epigram:

“Everything in the world is about sex except sex. Sex is about power.”

I think this cuts close to the Terminator vs. Cherry 2000 debate. Most modern theorists of gender and sex are of course power-obsessed (let’s blame Foucault). The campaign against sex robots clearly sees the problem as the robots embodying and perpetuating a problematic, unequal power structure. I detect a whiff of paternalism there, where women and children – rather than people – seem to be assumed to be the victims, in need of being saved from this new technology (at least it is not going as far as some other campaigns that fully assume they are also suffering from false consciousness and must be saved from themselves, the poor things). But sometimes a cigar is just a cigar… I mean sex is sex: it is important to recognize that one of the reasons for sex robots (and indeed prostitution) is the desire for sex and the sometimes awkward social or biological obstacles to experiencing it.

The problem with autonomous weapons is that power really comes out of a gun. (Must resist making a Zardoz reference…) It might be wielded arbitrarily by an autonomous system with unclear or bad orders, or it might be wielded far too efficiently by an automated armed force perfectly obedient to its commanders – removing the constraint that soldiers might turn against their rulers if aimed against their own citizenry. Terminator is far more about unequal and dangerous power than sex (although I still have fond memories of seeing a naked Arnie back in 1984). The cultural critic may argue that the power games in the bedroom are more insidious and affect more of our lives than some remote gleaming gun-metal threat, but I think I’d rather have sexism than killing and automated totalitarianism. The uniforms of the killer robots are not even going to look sexy.

It is for your own good

Trying to ban sex robots is about trying to shape society in an appealing way – the goal of the campaign is to support “development of ethical technologies that reflect human principles of dignity, mutuality and freedom” and the right for everybody to have their subjectivity recognized without coercion. But while these are liberal principles when stated like this, I suspect the campaign, or groups like it, will have a hard time keeping out of our bedrooms. After all, they need to ensure that there is no lack of mutuality or creepy sex robots there. The liberal respect for mutuality can become a very non-liberal worship of Mutuality, embodied in requiring partners to sign consent forms, demanding trigger warnings, and treating everybody who does not respond correctly to its keywords as a suspect of future crimes. The fact that this absolutism comes from a very well-meaning impulse to protect something fine makes it even more vicious, since any criticism is easily mistaken for an attack on the very core Dignity/Mutuality/Autonomy of humanity (and hence any means of defence are OK). And now we have all the ingredients for a nicely self-indulgent power trip.

This is why Wilde’s pluralism is healthy. Superficiality, accepting the contrived and artificial nature of our created relationships, means that we become humble in asserting their truth and value. Yes, absolute relativism is stupid and self-defeating. Yes, we need to treat each other decently, but I think it is better to start from the Lockean liberalism that allows people to have independent projects rather than assume that society and its technology must be designed to embody the Good Values. Replacing “human dignity” with the word “respect” usually makes ethics clearer.

Instead of assuming we can figure out a priori how technology will change us and then select the right technology, we should try things and learn. We can make some predictions with reasonable accuracy, which is why trying to rein in autonomous weapons makes sense (the probability that they lead to a world of stability and peace seems remote). But predicting cultural responses to technology is not something we have a good track record of: most deliberate improvements of our culture have come from social means and institutions, not from banning technology.

“The fact is, that civilisation requires slaves. The Greeks were quite right there. Unless there are slaves to do the ugly, horrible, uninteresting work, culture and contemplation become almost impossible. Human slavery is wrong, insecure, and demoralising. On mechanical slavery, on the slavery of the machine, the future of the world depends.”

Awesome blogs

I recently discovered Alex Wellerstein’s excellent blog Restricted Data: The Nuclear Secrecy Blog. I found it while looking for nuclear stockpile data, but was drawn in by a post on the evolution of nuclear yield to mass. Then I started reading the rest of it. And finally, when reading this post about the logo of the IAEA, I realized I needed to mention to the world how good it is. Be sure to test the critical assembly simulator to learn just why critical mass is not the right concept.

Another awesome blog is Almost looks like work by Jasmcole. I originally found it through a wonderfully over-the-top approach to positioning a wifi router (solving Maxwell’s equations turns out to be easier than solving the Helmholtz equation!). But there are many other fascinating blog essays on physics, mathematics, data visualisation, and how to figure out propeller speeds from camera distortion.

 

Quantifying busyness

Tempus fugit

If I have one piece of advice to give to people, it is that they typically have way more time now than they will ever have in the future. Do not procrastinate, take chances when you see them – you might never have the time to do it later.

One reason is the gradual speeding up of subjective time as we age: one day is less time for a 40-year-old than for a 20-year-old, and way less than the eon it is to a 5-year-old. Another is that there is a finite risk that opportunities will go away (including through our own finite lifespans). The main reason is of course the planning fallacy: since we underestimate how long our tasks will take, our lives tend to crowd up. Agreeing to give a paper in several months’ time is easy, since there seems to be a lot of time to do it in between… which mysteriously disappears until you sit there doing an all-nighter. There is also the likely effect that as you grow in skill, reputation and career there will be more demands on your time. All in all, expect your time to grow in preciousness!

Mining my calendar

I recently noted that my calendar had filled up several weeks in advance, something I think did not happen to this extent a few years back. A sign of a career taking off, worsening time management, or just bad memory? I decided to do some self-quantification using my Google calendar. I exported the calendar as an .ics file and made a simple parser in Matlab.
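Since the parser itself is not shown, here is a minimal Matlab sketch of the kind of extraction involved (the filename is a placeholder, and using the CREATED field as a proxy for when an event was scheduled is an assumption):

```matlab
% Pull out, for each VEVENT, when it was created and when it takes place,
% and compute the scheduling-to-event interval in days. Field names per RFC 5545.
txt    = fileread('calendar.ics');
events = regexp(txt, 'BEGIN:VEVENT.*?END:VEVENT', 'match');
lead   = nan(numel(events), 1);
for k = 1:numel(events)
    ds = regexp(events{k}, 'DTSTART[^:]*:(\d{8})', 'tokens', 'once');
    cr = regexp(events{k}, 'CREATED[^:]*:(\d{8})', 'tokens', 'once');
    if ~isempty(ds) && ~isempty(cr)
        lead(k) = datenum(ds{1}, 'yyyymmdd') - datenum(cr{1}, 'yyyymmdd');
    end
end
lead = lead(~isnan(lead));
hist(lead, 50);   % histogram of scheduling-to-event intervals
```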

Histogram of time distance between scheduling time and actual event.

It is pretty clear from a scatter plot that most entries are for the near future – a few days or weeks ahead. Looking at a histogram shows that most are within a month (a few are in the past – I sometimes use my calendar to note when I have done something like an interview that I may want to remember later).

Log-log plot of the histogram of event scheduling intervals.

Plotting it as a log-log diagram suggests it is lighter-tailed than a power-law: there is a characteristic scale. And there are a few wobbles suggesting 1-week, 2-week and 3-week periodicities.

Mean and median distance to newly scheduled events (top), annual number of events scheduled (bottom). The eventual 2015 annual number has been estimated (dashed line).

Am I getting busier? Plotting the mean and median distance to scheduled events, and the number of events per year, suggests yes. The median distance to the things I schedule seems to be creeping downwards, while the number of events per year has clearly doubled from about 400 in 2008 to 800 in 2014 (and extrapolating 2015 suggests about 1000 scheduled events).

Number of calendar events per 14 day period. Red line marks present.

Plotting the number of events I had per 14-day period also suggests that I have way more going on now than a few years ago. The peaks are getting higher and the average period is busier.

When am I free?

A good measure of busyness would be the time horizon: how far ahead should you ask me for a meeting if you want to have a high chance of getting it?

One approach would be to look for the probability Q(t) that a day t days ahead is entirely empty. If the probability that I will fill in something i days ahead is P(i), then the chance for an empty day is Q(t) = \prod_{i=t}^\infty (1-P(i)). We can estimate P(i) by doing a curve-fit (a second degree curve works well), but we can of course just estimate from the histogram counts: \hat{P}(i)=N(i)/N.
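A Matlab sketch of this estimate, reusing the lead vector from the parser sketch above (the one-year cut-off is arbitrary):

```matlab
% Estimate the chance of a completely empty day t days ahead from the interval
% histogram: Q(t) = prod_{i>=t} (1 - P(i)), with P(i) estimated as N(i)/N.
maxlag = 365;
N = histc(lead(lead >= 0 & lead <= maxlag), 0:maxlag);   % counts N(i) for i = 0..maxlag
P = N(:)/sum(N);                                         % estimated P(i)
Q = flipud(cumprod(flipud(1 - P)));                      % Q(t) = prod_{i>=t} (1 - P(i))
plot(0:maxlag, Q); xlabel('days ahead'); ylabel('P(day completely free)');
```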

Probability that I will have an entirely free day a certain number of days ahead.

However, this method is slightly wrong. Some days are free, others have many different events. If I schedule twice as many events the chance of a free day should be lower. A better way of estimating Q(t) is to think in terms of the rate of scheduling. We can view this as a Poisson process, where the rate of scheduling \lambda(i) tells us how often I schedule something i days ahead. An approximation is \hat{\lambda}(i)=N(i)/T, where T is the time interval we base our estimate on. This way Q(t) = \prod_{i=t}^\infty e^{-\lambda(i)}.
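The Poisson-rate version is a small change; T here is a placeholder for the number of days the calendar data actually covers:

```matlab
% Poisson-rate estimate: lambda(i) = N(i)/T is the rate of scheduling something
% i days ahead, and Q(t) = prod_{i>=t} exp(-lambda(i)) = exp(-sum_{i>=t} lambda(i)).
T      = 365*8;                                    % placeholder: data covering ~8 years
lambda = N(:)/T;                                   % N(i) is the count vector from above
Q      = exp(-flipud(cumsum(flipud(lambda))));
plot(0:numel(Q)-1, Q); xlabel('days ahead'); ylabel('P(day completely free)');
```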

Probability that I will be free a certain number of days ahead for different years of my calendar, estimated using a Poisson rate model.

 

If we slice the data by year, then there seems to be a fairly clear trend towards the planning horizon growing – I have more and more events far into the future, and I have more to do. Oh, those halcyon days in 2007 when I presumably just lazed around…

Distance to first day where I have 50%, 75% or 90% chance of being entirely unscheduled.

 

If we plot when I have 50%, 75% and 90% chance of being free, the trend is even clearer. At present you need to ask about three weeks in advance to have a 50% chance of grabbing me, and 187 days in advance to be 90% certain (if you want an entire working week with 50% chance, this is close to where you should go). Back in 2008 the 50% point was about a week and the 90% point 1.5 months ahead. I have become around 3 times busier.
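Given Q(t), these horizons are just the first days where it crosses the chosen thresholds:

```matlab
% Planning horizon: the first day ahead where the chance of being completely free
% reaches a given level (using Q from the Poisson-rate estimate above).
tdays     = 0:numel(Q)-1;
horizon50 = tdays(find(Q >= 0.5, 1, 'first'));
horizon90 = tdays(find(Q >= 0.9, 1, 'first'));
fprintf('50%% free: ask %d days ahead; 90%% free: ask %d days ahead\n', ...
        horizon50, horizon90);
```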

Conclusions

So, I have become busier. This is of course no evidence of getting more done – a lot of events are pointless meetings, and who knows if I am doing anything helpful at the other events. Plus, I might actually be wasting my time doing statistics and blogging instead of working.

But the exercise shows that it is possible to automatically estimate necessary planning horizons. Maybe we should add this to calendar apps to help scheduling: my contact page or virtual secretary might give you an automatically updated estimate of how far ahead you need to schedule things to have a good chance of getting me. It doesn’t have to tell you my detailed schedule (in principle one could do a privacy attack on the schedule by asking for very specific dates and seeing if they were blocked).

We can also use this method to look at levels of busyness across organisations. Who has flexibility in their schedule, and who is so overloaded that they cannot be effectively involved in projects? In the past, tasks tended to be simple and the issue was just the amount of time people had. But today we work individually yet as part of teams, and coordination (meetings, seminars, lectures) provides the key links: figuring out how to schedule them right is important for effectiveness.

If team member j has scheduling rates \lambda_j(i) and they are uncorrelated (yeah, right), then Q(t)=\prod_{i=t}^\infty e^{-\sum_j\lambda_j(i)}. The most important lesson is that the chance of everybody being able to make it to any given meeting day declines exponentially with the number of people. If the \lambda_j(i) decline exponentially with time (plausible in at least my case) then scheduling a meeting requires the time ahead to be proportional to the number of people involved: double the meeting size, at least double the planning horizon. So if you want nimble meetings, make them tiny.
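A sketch of the team version, with the toy assumption of four people who all schedule like me:

```matlab
% Team scheduling: with independent members j, the chance that a day t ahead is free
% for everybody is Q(t) = prod_{i>=t} exp(-sum_j lambda_j(i)).
Lambda   = repmat(lambda, 1, 4);                 % columns = members; here 4 copies of my rates
lam_team = sum(Lambda, 2);
Q_team   = exp(-flipud(cumsum(flipud(lam_team))));
tdays    = 0:numel(Q_team)-1;
fprintf('Whole team 50%% free: ask %d days ahead\n', ...
        tdays(find(Q_team >= 0.5, 1, 'first')));
```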

In the end, I prefer to live by the advice my German teacher Ulla Landvik once gave me, glancing at the school clock: “I see we have 30 seconds left of the lesson. Let’s do this exercise – we have plenty of time!” Time not only flies, it can be stretched too.

Addendum 2015-05-01

Some further explorations.

Days until next completely free day as a function of time. Grey shows data day-by-day, blue averaged over 7 days, green 30 days and red one year.

Owen Cotton-Barratt pointed out that another measure of busyness might be the distance to the next free day. Plotting it shows a very bursty pattern, with noisy peaks. The mean time was about 2-3 days: even though a lot of the time the horizon is far away, an empty day often slips through too. It is just that it cannot be relied on.
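A sketch of that measure, assuming a logical vector busy with one entry per calendar day (true if anything at all is scheduled that day):

```matlab
% Days until the next completely free day, for each day in the calendar.
n         = numel(busy);
next_free = nan(n, 1);
for d = 1:n
    k = find(~busy(d:end), 1, 'first');   % nearest free day at or after day d
    if ~isempty(k), next_free(d) = k - 1; end
end
plot(next_free); xlabel('day'); ylabel('days until next free day');
```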

Histogram of the timing of events by weekday.

Are there periodicities? The most obvious is the weekly dynamics: Thursdays are busiest, the weekend least busy. I tend to do scheduling in a roughly similar manner, with Tuesday as the top scheduling day.

Number of events scheduled per day, plotted across my calendar.

Plotting the number of events per day (“event intensity”) across the years, it is also clear that there is a loose pattern. Back in 2008-2011 one can see a lower rate around day 75 – that is the break between Hilary and Trinity terms here in Oxford. There is another trough around days 200-250, the summer break and the time before the Michaelmas term. However, this is getting filled up over time.

Periodogram of event intensity, showing periodicities in my schedule. Note the weekly and yearly peaks.

Making a periodogram produces an obvious peak for 7 days, and a loose yearly periodicity. Between them there is a bunch of harmonics. The funny thing is that the week periodicity is very strong but hard to see in the map above.
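A simple FFT-based periodogram of the daily counts is enough to see these peaks (counts here stands for the events-per-day vector):

```matlab
% Periodogram of daily event counts, to pick out the weekly and yearly periodicities.
x   = counts(:) - mean(counts(:));
n   = numel(x);
X   = fft(x);
pw  = abs(X(2:floor(n/2))).^2;       % power, skipping the DC term
per = n./(1:floor(n/2)-1)';          % period in days for each frequency bin
loglog(per, pw); xlabel('period (days)'); ylabel('power');
```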

Anthropic negatives

Stuart Armstrong has come up with another twist on the anthropic shadow phenomenon. If existential risk needs two kinds of disasters to coincide in order to kill everybody, then observers will notice the disaster types to be anticorrelated.

The minimal example would be if each risk had a 50% independent chance of happening: then the observable correlation coefficient would be -0.5 (not -1, since there is a 1/3 chance of getting neither risk; the possible observed outcomes are: no event, risk A, and risk B). More generally, if among surviving observers the probability of no disaster is N/(N+2) and each single risk has probability 1/(N+2), then the observed correlation will be -1/(N+1).
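This is easy to check with a small Monte Carlo sketch, sampling the surviving outcomes directly:

```matlab
% Conditioned on observers surviving, the outcomes are: no disaster with probability
% N/(N+2), only risk A with probability 1/(N+2), only risk B with probability 1/(N+2).
% The observed correlation should then be -1/(N+1).
N = 2;  n = 1e6;
u = rand(n, 1);
A = double(u >= N/(N+2) & u < (N+1)/(N+2));   % only risk A happened
B = double(u >= (N+1)/(N+2));                 % only risk B happened
C = corrcoef(A, B);
fprintf('simulated %.3f, predicted %.3f\n', C(1,2), -1/(N+1));
```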

I tried a slightly more elaborate model. Assume X and Y to be independent power-law distributed disasters (say war and pestilence outbreaks), and that if X+Y is larger than seven billion no observers will remain to see the outcome. If we ramp up their size (by multiplying X and Y by some constant) we get the following behaviour (for alpha=3):

(Top) Correlation between observed power-law distributed independent variables multiplied by an increasing multiplier, where observation is contingent on their sum being smaller than 7 billion. Each point corresponds to 100,000 trials. (Bottom) Fraction of trials where observers were wiped out.
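A sketch of the kind of simulation described, under the assumption that the power law is a Pareto distribution with tail index alpha and minimum equal to the multiplier (the details of the original runs may differ):

```matlab
% Correlation between two independent Pareto disasters, observed only in runs where
% their sum stays below 7 billion, as the overall scale is ramped up.
alpha = 3;  n = 1e5;  pop = 7e9;
multipliers = logspace(6, 9, 20);
rho = nan(size(multipliers));  wiped = nan(size(multipliers));
for k = 1:numel(multipliers)
    X  = multipliers(k)*rand(n,1).^(-1/alpha);   % Pareto(alpha) with x_min = multiplier
    Y  = multipliers(k)*rand(n,1).^(-1/alpha);
    ok = (X + Y) < pop;                          % runs where observers survive
    C  = corrcoef(X(ok), Y(ok));
    rho(k)   = C(1,2);
    wiped(k) = 1 - sum(ok)/n;                    % fraction of runs with nobody left
end
semilogx(multipliers, rho); xlabel('multiplier'); ylabel('observed correlation');
```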

As the situation gets more deadly the correlation becomes more negative. This also happens when allowing the exponent to run from the very fat-tailed (alpha=1) to the thinner (alpha=3):

(Top) Correlation between observed independent power-law distributed variables (where observability requires their sum to be smaller than seven billion) for different exponents. (Bottom) Fraction of trials ending in existential disaster. Multiplier = 500 million.

The same thing also happens if we multiply X and Y.

I like the phenomenon: it gives us a way to look for anthropic effects by looking for suspicious anticorrelations. In particular, for the same pair of variables the correlation ought to shift from near zero for small events to negative for large ones. One prediction might be that periods of high superpower tension would be anticorrelated with mishaps in nuclear weapons control systems. Of course, getting the data might be another matter. We might start by looking at extant companies with multiple risk factors, like insurance companies, and see if capital risk becomes anticorrelated with insurance risk at the high end.