Large information processing objects have some serious limitations due to signal delays and heat production.
Latency
Consider a spherical “Jupiter-brain” of radius $R$. It will take maximally $2R/c$ seconds to signal across it, and the average time between two random points (selected uniformly) will be $36R/35c \approx 1.03R/c$.
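As a quick numerical check of the average-distance figure, here is a minimal Monte Carlo sketch (uniform sampling inside the ball by rejection):

```python
import numpy as np

# Monte Carlo estimate of the mean distance between two points drawn
# uniformly from a ball of radius R; the analytic value is 36R/35 ~ 1.029R.
rng = np.random.default_rng(0)
R, n = 1.0, 200_000

def sample_ball(n):
    # Rejection sampling: draw points in the bounding cube, keep those inside the ball.
    pts = rng.uniform(-R, R, size=(int(n * 2.2), 3))
    return pts[np.linalg.norm(pts, axis=1) <= R][:n]

a, b = sample_ball(n), sample_ball(n)
print(np.linalg.norm(a - b, axis=1).mean())  # ~1.029 * R
```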
Whether this is too much depends on the requirements of the system. Typically the relevant question is whether the transmission latency is long compared to the local processing time. In the human brain delays range from a few milliseconds up to about 100 milliseconds, and neurons have typical firing rates of at most about 100 Hz. The ratio between transmission time and a “processing cycle” will hence be between 0.1 and 10, i.e. not far from unity. In a microprocessor the processing time is on the order of $10^{-9}$ s and delays across the chip (assuming 10% c signals) are on the order of $10^{-10}$ s, again not far from unity.
If signals move at lightspeed and the system needs to maintain a ratio close to unity, then the maximal size will be $R \approx c\,t_{\mathrm{cycle}}$ (or $R \approx c\,t_{\mathrm{cycle}}/2$ if information must also be sent back after a request). For nanosecond cycles this is on the order of centimeters, for femtosecond cycles about 0.1 microns; conversely, for a planet-sized system (R = 6000 km) $t_{\mathrm{cycle}} \approx 2R/c \approx 0.04$ s, about 25 Hz.
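A small sketch of these bounds, assuming the round-trip criterion $R < c\,t_{\mathrm{cycle}}/2$:

```python
# Maximal synchronized radius for a given cycle time (round-trip criterion),
# and the cycle time forced on a planet-sized system. Plain speed-of-light
# arithmetic, nothing more.
c = 3.0e8  # m/s

def max_radius(t_cycle):
    return c * t_cycle / 2.0

for name, t in [("nanosecond", 1e-9), ("femtosecond", 1e-15)]:
    print(f"{name} cycle: R < {max_radius(t):.2e} m")

R_planet = 6.0e6  # 6000 km
t_min = 2 * R_planet / c
print(f"planet-sized: cycle > {t_min:.3f} s, i.e. < {1/t_min:.0f} Hz")
```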
The cycle size is itself bounded by lightspeed: a computational element such as a transistor needs to have a radius smaller than the distance a signal can cross during one cycle, otherwise it would not function as a unitary element. Hence it must be of size $r < c\,t_{\mathrm{cycle}}$ or, conversely, the cycle time must be slower than $r/c$ seconds. If a unit volume performs $C$ computations per second close to this limit, $C \approx (c/r)(1/r^3) = c/r^4$, or $r \approx (c/C)^{1/4}$. (More elaborate analysis can deal with quantum limitations to processing, but this post will be classical.)
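Rearranging, the element size implied by a given volumetric computation rate follows directly; a short sketch under the same classical packing assumption ($C = c/r^4$):

```python
# Element radius implied by a volumetric computation rate C (computations
# per second per cubic metre), using the classical packing estimate C = c/r^4.
# The sample values of C are illustrative only.
c = 3.0e8  # m/s

def element_radius(C):
    return (c / C) ** 0.25

for C in (1e30, 1e40, 1e50):
    print(f"C = {C:.0e} ops/s/m^3  ->  r ~ {element_radius(C):.2e} m")
```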
This does not mean larger systems are impossible, merely that the latency will be long compared to local processing (compare the Web). It is possible to split the larger system into a hierarchy of subsystems that are internally synchronized and communicate on slower timescales to form a unified larger system. It is sometimes claimed that very fast solid state civilizations will be uninterested in the outside world since it both moves immeasurably slowly and any interaction will take a long time as measured inside the fast civilization. However, such hierarchical arrangements may be both very large and arbitrarily slow: the civilization as a whole may find the universe moving at a convenient speed, despite individual members finding it frozen.
Waste heat dissipation
Information processing leads to waste heat production at some rate $P$ Watts per cubic meter.
Passive cooling
If the system just cools by blackbody radiation, the maximal radius for a given maximal temperature $T$ is

$$R < \frac{3\sigma T^4}{P}$$

where $\sigma$ is the Stefan–Boltzmann constant. This assumes heat is efficiently distributed in the interior.
If it does $D$ computations per volume per second, the total number of computations per second is $(4\pi/3)R^3 D = 36\pi D \sigma^3 T^{12}/P^3$ – it really pays off being able to run it hot!
Still, molecular matter will melt above roughly 3600 K, giving a max radius of around $3\times 10^4/P$ km for a power density of $P$ W/m^3. Current CPUs have power densities somewhat below 100 Watts per square centimetre of chip area; if we assume 100 W per cubic centimetre ($10^8$ W/m^3) then $R<29$ cm! If we assume a power dissipation similar to the human brain (about 20 W in 1.4 litres, i.e. $\sim 1.4\times 10^4$ W/m^3) the max size becomes about 2 km. Clearly the average power density needs to be very low to motivate a large system.
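A short sketch reproducing these figures; the brain-like power density (about 20 W per 1.4 litres) is my assumption for illustration:

```python
# Passive blackbody cooling: maximal radius R = 3*sigma*T^4 / P for a sphere
# producing P watts of waste heat per cubic metre at surface temperature T.
sigma = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4
T = 3600.0       # K, roughly where molecular matter melts

def max_radius(P):
    return 3 * sigma * T**4 / P

print(max_radius(1e8))          # ~0.29 m  (100 W per cubic centimetre)
print(max_radius(20 / 1.4e-3))  # ~2.0e3 m (brain-like ~20 W per 1.4 litres, assumed)
```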
Using quantum dot logic gives a power dissipation of 61,787 W/m^3 and a radius of 470 meters. However, reversible computing lets us trade speed for heat: slowing down operations to a fraction $f$ of full speed decreases the dissipation by a factor $f^2$. A reduction of speed to 3% gives a reduction of dissipation by a factor of about 1,000, enabling a 470 kilometre system. Since the radius scales as $R \propto 1/f^2$, the total number of computations per second for the whole system scales as $R^3 f \propto R^{2.5}$: slow reversible computing produces more computations per second in total than hotter computing. The slower clockspeed also makes it easier to maintain unitary subsystems. The maximal size of each such subsystem scales as $1/f \propto R^{1/2}$, and the total amount of computation inside it scales as $1/f^2 \propto R$. The number of subsystems in the total system changes as $R^3/R^{3/2} = R^{3/2}$: although they get larger, the whole system grows even faster and becomes less unified.
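A sketch of this trade-off under the stated assumptions (full-speed dissipation of 61,787 W/m^3, dissipation scaling as $f^2$, passive cooling at 3600 K):

```python
# Reversible-computing trade-off: slowing to a fraction f of full speed cuts
# the volumetric dissipation by f^2, which lets the passively cooled sphere
# grow as 1/f^2, so the total computation rate scales as R^3 * f ~ f^-5.
sigma, T = 5.67e-8, 3600.0
P_full = 61787.0   # W/m^3, quantum dot logic at full speed (from the text)

def radius(f):
    return 3 * sigma * T**4 / (P_full * f**2)

def relative_total_rate(f):
    # total computations per second relative to the full-speed system
    return radius(f)**3 * f / radius(1.0)**3

for f in (1.0, 0.1, 0.03):
    print(f"f={f:<5} R={radius(f):.2e} m   total rate x{relative_total_rate(f):.2e}")
```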
The limit on heat emissions is set by the Landauer principle: we need to pay at least $kT\ln(2)$ Joules for each erased bit. So the number of bit erasures per second and cubic meter will be less than $P/kT\ln(2)$. To get a planet-sized system P will be around 1–10 W per cubic meter, implying fewer than a few times $10^{19}$–$10^{20}$ erasures per second and cubic meter for a hot 3600 K system, and a few times $10^{22}$–$10^{23}$ for a cold 3 K system.
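The Landauer arithmetic, as a quick check:

```python
import math

# Landauer bound: each erased bit costs at least k*T*ln(2) joules, so a heat
# budget of P watts per cubic metre allows at most P/(k*T*ln 2) bit erasures
# per second per cubic metre.
k = 1.380649e-23  # Boltzmann constant, J/K

def max_erasures(P, T):
    return P / (k * T * math.log(2))

for T in (3600.0, 3.0):
    for P in (1.0, 10.0):
        print(f"T={T:>6} K  P={P:>4} W/m^3  ->  {max_erasures(P, T):.1e} erasures/s/m^3")
```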
Active cooling
Passive cooling just uses the surface area of the system to radiate away heat to space. But we can pump coolants from the interior to the surface, and we can use heat radiators much larger than the surface area. This is especially effective for low temperatures, where radiative cooling is very weak and heat flows are normally gentle (remember, they are driven by temperature differences: there is not much room for big differences when everything is close to 0 K).
If we have a sphere of radius $R$, and $V(r)$ is the volume of heat-emitting computronium within radius $r$, then the shell at radius $r$ must have an area $PV(r)/q$ devoted to cooling pipes to get rid of the heat, where $q$ is the number of Watts of heat that can be carried away by a square meter of piping. This can be formulated as the differential equation

$$V'(r) = 4\pi r^2 - \frac{P V(r)}{q}.$$

For large $R$ the solution approaches $V(R) \approx 4\pi q R^2/P$: the computronium volume grows only as $R^2$, so the average computronium density across the system falls as $1/R$ as the system becomes larger.
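A minimal sketch using the closed-form solution of this equation; the pipe capacity $q$ below is an assumed placeholder (the superfluid discussion that follows suggests what realistic values might look like):

```python
import math

# Closed-form solution of V'(r) = 4*pi*r^2 - (P/q)*V with V(0) = 0:
#   V(r) = 4*pi*( q*r^2/P - 2*q^2*r/P^2 + 2*q^3/P^3 * (1 - exp(-P*r/q)) )
# P is the full-speed quantum dot dissipation quoted in the text; q (watts of
# heat removable per square metre of piping) is an assumed placeholder value.
P = 61787.0   # W/m^3
q = 3.0e4     # W/m^2, assumed

def computronium_volume(r):
    a = q / P
    return 4 * math.pi * (a * r**2 - 2 * a**2 * r + 2 * a**3 * (1 - math.exp(-r / a)))

# The computronium fraction of the total volume falls roughly as 1/R:
# large actively cooled systems are mostly plumbing.
for R in (1e3, 1e4, 1e5):
    frac = computronium_volume(R) / (4 / 3 * math.pi * R**3)
    print(f"R = {R:.0e} m  ->  computronium fraction ~ {frac:.2e}")
```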
If we go for a cooling substance with great heat capacity per unit mass at 25 °C, hydrogen is best at 14.30 J/g/K, but in terms of volume water is better at 4.2 J/cm^3/K. However, near absolute zero heat capacities drop towards zero and there are few choices of fluid. One neat possibility is superfluid helium cooling. The superfluid component carries no thermal energy – it can however transport heat by being converted into normal fluid at the hot end, with a frictionless countercurrent bringing back superfluid from the cold end. The rate is limited by the viscosity of the normal fluid, and apparently there are critical velocities of the order of mm/s. A CERN paper gives the counterflow formula

$$q^3 = \frac{\rho_s^3 s^4 T^3}{A \rho_n}\,\frac{dT}{dx}$$

for the heat transport rate $q$ per square meter of pipe, where $A$ is about 800 m·s/kg at 1.8 K, $\rho_n$ is the density of the normal fluid, $\rho_s$ that of the superfluid, and $s$ is the entropy per unit mass. Looking at helium II as a technical coolant gives a steady-state heat flux along a pipe of around 1.2 W/cm^2 in a 1 meter pipe for a 1.9–1.8 K difference in temperature. There are various nonlinearities and limitations due to the need to keep things below the lambda point. Overall, this produces a heat transfer coefficient of the order of $10^4$ W/m^2/K, in line with the range 10,000–100,000 W/m^2/K found in forced convection (liquid metals have the best transfer ability).
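A rough sketch of the counterflow formula; the helium II property values (total density, normal-fluid fraction, entropy per unit mass) are approximate numbers I am assuming for illustration, not values from the article or the CERN paper:

```python
# Counterflow heat flux in He II:
#   q^3 = (rho_s^3 * s^4 * T^3) / (A * rho_n) * dT/dx
# Property values below are rough illustrative numbers around 1.8-1.9 K.
A    = 800.0    # m*s/kg, parameter near 1.8 K (from the text)
rho  = 145.0    # kg/m^3, total He II density (assumed)
x_n  = 0.45     # normal-fluid fraction near 1.85 K (assumed)
s    = 700.0    # J/(kg*K), entropy per unit mass (assumed)
T    = 1.85     # K, mean temperature
dTdx = 0.1      # K/m, 1.9 K -> 1.8 K over a 1 m pipe

rho_n, rho_s = x_n * rho, (1 - x_n) * rho
q_flux = ((rho_s**3 * s**4 * T**3) / (A * rho_n) * dTdx) ** (1 / 3)
# Roughly 1 W/cm^2, the same ballpark as the ~1.2 W/cm^2 quoted above.
print(f"q ~ {q_flux:.2e} W/m^2  (~{q_flux / 1e4:.1f} W/cm^2)")
```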
So if we assume about a 1 K temperature difference, then for quantum dots at full speed a one km system has a computational volume of 7.7 million cubic meters of computronium, or about 0.001 of the total volume. Slowing it down to 3% (reducing emissions by a factor of about 1,000) boosts the computronium density to 86%. At this intensity a 1000 km system would look the same as the previous low-density one.
Conclusion
If the figure of merit is just computational capacity, then obviously a larger computer is always better. But if it matters that parts stay synchronized, then there is a size limit set by lightspeed. Smaller components are better in this analysis, which leaves out issues of error correction – below a certain size, thermal noise, quantum tunneling and cosmic rays will start to induce errors. For a computer not limited by synchronization or latency, handling high temperatures well pays off enormously in terms of computational power; after that, reducing volumetric heat production has a larger influence on total computation than raw computational density.
Active cooling is better than passive cooling, but the cost is wasted volume, which means longer signal delays. In the above model there is more computronium at the centre than at the periphery, somewhat ameliorating the effect (the mean distance is just 0.03R). However, this ignores the key issue of wiring, which is likely to be significant if everything needs to be connected to everything else.
In short, building a Jupiter-sized computer is tough. Asteroid-sized ones are far easier. If we ever find or build planet-sized systems they will either be reversible computing, or mostly passive storage rather than processing. Processors by their nature tend to be hot and small.
[Addendum: this article has been republished in H+ Magazine thanks to Peter Rothman.]
Solid state cooling could be used: “shape changing asteroids” in very elliptical orbits that come close to the centre, collect the heat, and emit it later in far away cold space.
It could be much better than liquid cooling.
Maybe. This is essentially a giant radiator with moving parts. I did not go into detail of how to emit the waste heat to the cosmic thermal bath, since one can do it in a lot of ways.
Space is not really cold or hot; what matters is how much incident sunlight there is at any spot. Place a big reflector in front of the Jupiter brain radiators and they “see” just the 3 K background plus some starlight. So there is no real need to have them move far except to avoid them heating each other up. A flat radiator array works pretty well for that too.
I think the exception would be if one wanted a very cold system: there even the blackbody radiation from the back of the reflector might be troublesome. But at that point one would want to move the whole system to the outer part of the solar system anyway. However, a very big (lots of heat) yet cold system might indeed want to put radiators well away from the computational core, and then maybe the moving radiators are a good idea.
Not only moving, but also of a flexible shape: spreading across a million square kilometers out there to cool rapidly, and shrinking back to the size of a football when returning to heat up again.
For example.
However, this is not very different from having a fluid pick up the heat, get spread out across huge radiator fins, and then pumped back. Imagine your solid objects moving with shorter and shorter spacing, perhaps subdivided into smaller and smaller chunks: in the end you have a fluid. Smart matter will incur some energy losses when it changes shape; this is not very different from fluid viscosity.
What matters is the heat capacity per kilogram and how big a mass flow we can get. Whether the matter is solid or liquid, or whether it moves in continuous flows or discrete chunks, doesn’t matter.
You need some pipes for a liquid, but no pipes for the “shape changing asteroids”. The energy needed for reshaping them can be arbitrarily small.
I do agree with you, otherwise.
I’d be interested in republishing this article in h+ Magazine with your permission.
Sure! I am happy for a repub. I should just recheck my math, just in case… 🙂
I am not a scientist, but I am very curious about the Jupiter Brain concept. I am not so much interested in whether it can be done, or how it would be done, but more in the high-level picture: what the heck are you talking about?
If my brain is a single processor, is a Jupiter brain similar to all brains on Earth linked together, working like the Borg, as one big computer? Again, please pardon my ignorance. I’m just trying to put together, at a high level, what it is.
Thank you for indulging me.
Natalie
A Jupiter brain is essentially just a gigantic computer, but usually envisioned as home to software minds – perhaps a single gigantic intellect distributed across billions of processors, or a society of minds living in virtual realities. Of course, the computing might be used for other things too, like simulations.
The term is maybe a bit misleading, since the size may be nowhere near Jupiter’s and it might not even correspond to anything brain-like. It is just a term that stuck back in the early 1990s when we were discussing these things on the Extropians mailing list.