TIME WITHOUT END: PHYSICS AND BIOLOGY IN AN OPEN UNIVERSE (*) Freeman J. Dyson Institute for Advanced Study, Princeton, New Jersey 08540 Reviews of Modern Physics, Vol. 51, No. 3, July 1979 (c) 1979 American Physical Society Quantitative estimates are derived for three classes of phenomena that may occur in an open cosmological model of Friedmann type. (1) Normal physical processes taking place with very long time-scales. (2) Biological processes that will result if life adapts itself to low ambient temperatures according to a postulated scaling law. (3) Communication by radio between life forms existing in different parts of the universe. The general conclusion of the analysis is that an open universe need not evolve into a state of permanent quiescence. Life and communication can continue for ever, utilizing a finite store of energy, if the assumed scaling laws are valid. (*) This material was originally presented as four lectures, the "James Arthur Lectures on Time and its Mysteries" at New York University, Autumn 1978. The first lecture is addressed to a general audience, the other three to an audience of physicists and astronomers. CONTENTS Lecture I. Philosophy Lecture II. Physics A. Stellar evolution B. Detachment of planets from stars C. Detachment of stars from galaxies D. Decay of orbits by gravitational radiation E. Decay of black holes by the Hawking process F. Matter is liquid at zero temperature G. All matter decays to iron H. Collapse of iron star to neutron star I. Collapse of ordinary matter to black hole Lecture III. Biology Lecture IV. Communication References LECTURE I. PHILOSOPHY A year ago Steven Weinberg published an excellent book, _The First Three Minutes_, (Weinberg, 1977), explaining to a lay audience the state of our knowledge about the beginning of the universe. In his sixth chapter he describes in detail how progress in understanding and observing the universe was delayed by the timidity of theorists. 
"This is often the way it is in physics - our mistake is not that we take our theories too seriously, but that we do not take them seriously enough. It is always hard to realize that these numbers and equations we play with at our desks have something to do with the real world. Even worse, there often seems to be a general agreement that certain phenomena are just not fit subjects for respectable theoretical and experimental effort. Alpher, Herman and Gamow (1948) deserve tremendous credit above all for being willing to take the early universe seriously, for working out what known physical laws have to say about the first three minutes. Yet even they did not take the final step, to convince the radio astronomers that they ought to look for a microwave radiation background. The most important thing accomplished by the ultimate discovery of the 3 K radiation background (Penzias and Wilson, 1965) was to force all of us to take seriously the idea that there _was_ an early universe." Thanks to Penzias and Wilson, Weinberg and others, the study of the beginning of the universe is now respectable. Professional physicists who investigate the first three minutes or the first microsecond no longer need to feel shy when they talk about their work. But the end of the universe is another matter. I have searched the literature for papers about the end of the universe (Rees, 1969; Davies, 1973; Islam, 1977 and 1979; Barrow and Tipler, 1978). This list is certainly not complete. But the striking thing about these papers is that they are written in an apologetic or jocular style, as if the authors were begging us not to take them seriously. The study of the remote future still seems to be as disreputable today as the study of the remote past was thirty years ago. I am particularly indebted to Jamal Islam for an early draft of his 1977 paper which started me thinking seriously about the remote future. 
I hope with these lectures to hasten the arrival of the day when eschatology, the study of the end of the universe, will be a respectable scientific discipline and not merely a branch of theology. Weinberg himself is not immune to the prejudices that I am trying to dispel. At the end of his book about the past history of the universe, he adds a short chapter about the future. He takes 150 pages to describe the first three minutes, and then dismisses the whole of the future in five pages. Without any discussion of technical details, he sums up his view of the future in twelve words: "The more the universe seems comprehensible, the more it also seems pointless." Weinberg has here, perhaps unintentionally, identified a real problem. It is impossible to calculate in detail the long-range future of the universe without including the effects of life and intelligence. It is impossible to calculate the capabilities of life and intelligence without touching, at least peripherally, philosophical questions. If we are to examine how intelligent life may be able to guide the physical development of the universe for its own purposes, we cannot altogether avoid considering what the values and purposes of intelligent life may be. But as soon as we mention the words value and purpose, we run into one of the most firmly entrenched taboos of twentieth-century science. Hear the voice of Jacques Monod (1970), high priest of scientific rationality, in his book _Chance and Necessity_: "Any mingling of knowledge with values is unlawful, forbidden." Monod was one of the seminal minds in the flowering of molecular biology in this century. It takes some courage to defy his anathema. But I will defy him, and encourage others to do so. The taboo against mixing knowledge with values arose during the nineteenth century out of the great battle between the evolutionary biologists led by Thomas Huxley and the churchmen led by Bishop Wilberforce. 
Huxley won the battle, but a hundred years later Monod and Weinberg were still fighting Bishop Wilberforce's ghost. Physicists today have no reason to be afraid of Wilberforce's ghost. If our analysis of the long-range future leads us to raise questions related to the ultimate meaning and purpose of life, then let us examine these questions boldly and without embarrassment. If our answers to these questions are naive and preliminary, so much the better for the continued vitality of our science. I propose in these lectures to explore the future as Weinberg in his book explored the past. My arguments will be rough and simple but always quantitative. The aim is to establish numerical bounds within which the destiny of the universe must lie. I shall make no further apology for mixing philosophical speculations with mathematical equations. The two simplest cosmological models (Weinberg, 1972) describe a uniform zero-pressure universe which may be either closed or open. The closed universe has its geometry described by the metric ds^2 = R^2 [dpsi^2 - dchi^2 - sin^2 chi dOmega^2], (1) where chi is a space coordinate moving with the matter, psi is a time coordinate related to physical time t by t = T_0 (psi - sin psi), (2) and R is the radius of the universe given by R = c T_0 (1 - cos psi). (3) The whole universe is represented in terms of the coordinates (psi, chi) by a finite rectangular box 0 < psi < 2 pi, 0 < chi < pi. (4) This universe is closed both in space and in time. Its total duration is 2 pi T_0, (5) where T_0 is a quantity that is in principle measurable. If our universe is described by this model, then T_0 must be at least 10^10 years. The simple model of a uniform zero-pressure open universe has instead of (1) the metric ds^2 = R^2 [dpsi^2 - dchi^2 - sinh^2 chi dOmega^2], (6) where now t = T_0 (sinh psi - psi), (7) R = c T_0 (cosh psi - 1), (8) and the coordinates (psi, chi) extend over an infinite range 0 < psi < infinity, 0 < chi < infinity. 
(9) The open universe is infinite both in space and in time. The models (1) and (6) are only the simplest possibilities. Many more complicated models can be found in the literature. For my purpose it is sufficient to discuss (1) and (6) as representative of closed and open universes. The great question, whether our universe is in fact closed or open, will before long be settled by observation. I do not say more about this question, except to remark that my philosophical bias strongly favors an open universe and that the observational evidence does not exclude it (Gott, Gunn, Schramm, and Tinsley, 1974 and 1976). The prevailing view (Weinberg, 1977) holds the future of open and closed universes to be equally dismal. According to this view, we have only the choice of being fried in a closed universe or frozen in an open one. The end of the closed universe has been studied in detail by Rees (1969). Regrettably I have to concur with Rees' verdict that in this case we have no escape from frying. No matter how deep we burrow into the earth to shield ourselves from the ever-increasing fury of the blue-shifted background radiation, we can only postpone by a few million years our miserable end. I shall not discuss the closed universe in detail, since it gives me a feeling of claustrophobia to imagine our whole existence confined within the box (4). I only raise one question which may offer us a thin chance of survival. Supposing that we discover the universe to be naturally closed and doomed to collapse, is it conceivable that by intelligent intervention, converting matter into radiation and causing energy to flow purposefully on a cosmic scale, we could break open a closed universe and change the topology of space-time so that only a part of it would collapse and another part of it would expand forever? I do not know the answer to this question. 
If it turns out that the universe is closed, we shall still have about 10^10 years to explore the possibility of a technological fix that would burst it open. I am mainly interested in the open cosmology, since it seems to give enormously greater scope for the activities of life and intelligence. Horizons in the open cosmology expand indefinitely. To be precise, the distance to the horizon in the metric (6) is d = R psi, (10) with R given by (8), and the number of galaxies visible within the horizon is N = N_0 (sinh 2 psi - 2 psi), (11) where N_0 is a number of the order of 10^10. Comparing (11) with (7), we see that the number of visible galaxies varies with t^2 at late times. It happens by a curious numerical accident that the angular size of a typical galaxy at time t is delta ~ 10^5 t^(-1) rad, (12) with t measured in years. Since (11) and (7) give N ~ 10^(-10) t^2, N delta^2 ~ 1, (13) it turns out that _the sky is always just filled with galaxies_, no matter how far into the future we go. As the apparent size of each galaxy dwindles, new galaxies constantly appear at the horizon to fill in the gaps. The light from the distant galaxies will be strongly red-shifted. But the sky will never become empty and dark, if we can tune our eyes to longer and longer wavelengths as time goes on. I shall discuss three principal questions within the framework of the open universe with the metric (6). (1) Does the universe freeze into a state of permanent physical quiescence as it expands and cools? (2) Is it possible for life and intelligence to survive indefinitely? (3) Is it possible to maintain communication and transmit information across the constantly expanding distances between galaxies? These three questions will be discussed in detail in Lectures 2, 3 and 4. Tentatively, I shall answer them with a no, a yes, and a maybe. My answers are perhaps only a reflection of my optimistic philosophical bias. I do not expect everybody to agree with the answers. 
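The claim above that the sky remains just filled with galaxies can be checked numerically. Using (7) and (11), the number of visible galaxies grows as t^2 at late times, while the angular size (12) shrinks as t^(-1), so the covering factor N delta^2 stays of order unity. A minimal sketch (the values N_0 = 10^10, T_0 = 10^10 yr, and the factor 10^5 in (12) are the order-of-magnitude values quoted above):

```python
import math

N0 = 1e10   # galaxies within the horizon today, order of magnitude (Eq. 11)
T0 = 1e10   # time scale of the open model in years, assumed order of magnitude

def t_of_psi(psi):
    """Physical time, Eq. (7): t = T0 (sinh psi - psi)."""
    return T0 * (math.sinh(psi) - psi)

def N_of_psi(psi):
    """Galaxies visible within the horizon, Eq. (11): N = N0 (sinh 2psi - 2psi)."""
    return N0 * (math.sinh(2 * psi) - 2 * psi)

# At late times sinh(2 psi) ~ e^(2 psi)/2 and sinh(psi) ~ e^(psi)/2,
# so N ~ 2 N0 (t/T0)^2: the ratio N/t^2 approaches the constant 2 N0/T0^2.
for psi in (5.0, 8.0, 12.0):
    print(f"psi = {psi}: N/t^2 = {N_of_psi(psi) / t_of_psi(psi) ** 2:.3e}")

# Covering factor: with delta ~ 1e5/t rad (Eq. 12) and N ~ 1e-10 t^2 (Eq. 13),
# N * delta^2 ~ 1 at any late time t (in years): the sky stays just filled.
for t in (1e12, 1e15, 1e18):
    N = 1e-10 * t ** 2
    delta = 1e5 / t
    print(f"t = {t:.0e} yr: N*delta^2 = {N * delta ** 2:.1f}")
```

The printed ratios converge on 2 x 10^(-10) = 2 N_0/T_0^2, confirming the t^2 growth, and the covering factor is identically 1 by construction of (12) and (13).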
My purpose is to start people thinking seriously about the questions. If, as I hope, my answers turn out to be right, what does it mean? It means that we have discovered in physics and astronomy an analog to the theorem of Goedel (1931) in pure mathematics. Goedel proved [see Nagel and Newman (1956)] that the world of pure mathematics is inexhaustible; no finite set of axioms and rules of inference can ever encompass the whole of mathematics; given any finite set of axioms, we can find meaningful mathematical questions which the axioms leave unanswered. I hope that an analogous situation exists in the physical world. If my view of the future is correct, it means that the world of physics and astronomy is also inexhaustible; no matter how far we go into the future, there will always be new things happening, new information coming in, new worlds to explore, a constantly expanding domain of life, consciousness, and memory. When I talk in this style, I am mixing knowledge with values, disobeying Monod's prohibition. But I am in good company. Before the days of Darwin and Huxley and Bishop Wilberforce, in the eighteenth century, scientists were not subject to any taboo against mixing science and values. When Thomas Wright (1750), the discoverer of galaxies, announced his discovery, he was not afraid to use a theological argument to support an astronomical theory. "Since as the Creation is, so is the Creator also magnified, we may conclude in consequence of an infinity, and an infinite all-active power, that as the visible creation is supposed to be full of siderial systems and planetary worlds, so on, in like similar manner, the endless immensity is an unlimited plenum of creations not unlike the known.... 
That this in all probability may be the real case, is in some degree made evident by the many cloudy spots, just perceivable by us, as far without our starry Regions, in which tho' visibly luminous spaces, no one star or particular constituent body can possibly be distinguished; those in all likelyhood may be external creation, bordering upon the known one, too remote for even our telescopes to reach." Thirty-five years later, Wright's speculations were confirmed by William Herschel's precise observations. Wright also computed the number of habitable worlds in our galaxy: "In all together then we may safely reckon 170,000,000, and yet be much within compass, exclusive of the comets which I judge to be by far the most numerous part of creation." His statement about the comets may also be correct, although he does not tell us how he estimated their number. For him the existence of so many habitable worlds was not just a scientific hypothesis but a cause for moral reflection: "In this great celestial creation, the catastrophy of a world, such as ours, or even the total dissolution of a system of worlds, may possibly be no more to the great Author of Nature, than the most common accident in life with us, and in all probability such final and general Doomsdays may be as frequent there, as even Birthdays or mortality with us upon the earth. This idea has something so cheerful in it, that I know I can never look upon the stars without wondering why the whole world does not become astronomers; and that endowed with sense and reason should neglect a science they are naturally so much interested in, and so capable of enlarging their understanding, as next to a demonstration must convince them of their immortality, and reconcile them to all those little difficulties incident to human nature, without the least anxiety. 
"All this the vast apparent provision in the starry mansions seem to promise: What ought we then not to do, to preserve our natural birthright to it and to merit such inheritance, which alas we think created all to gratify alone a race of vain-glorious gigantic beings, while they are confined to this world, chained like so many atoms to a grain of sand." There speaks the eighteenth century. But Steven Weinberg says, "The more the universe seems comprehensible, the more it also seems pointless." If Weinberg is speaking for the twentieth century, I prefer the eighteenth. LECTURE II. PHYSICS In this lecture, following Islam (1977), I investigate the physical processes that will occur in an open universe over very long periods of time. I consider the natural universe undisturbed by effects of life and intelligence. Life and intelligence will be discussed in lectures 3 and 4. Two assumptions underlie the discussion. (1) The laws of physics do not change with time. (2) The relevant laws of physics are already known to us. These two assumptions were also made by Weinberg (1977) in his description of the past. My justification for making them is the same as his. Whether or not we believe that the presently known laws of physics are the final and unchanging truth, it is illuminating to explore the consequences of these laws as far as we can reach into the past or the future. It is better to be too bold than too timid in extrapolating our knowledge from the known into the unknown. It may happen again, as it happened with the cosmological speculations of Alpher, Herman, and Gamow (1948), that a naive extrapolation of known laws into new territory will lead us to ask important new questions. I have summarized elsewhere (Dyson, 1972, 1978) the evidence supporting the hypothesis that the laws of physics do not change. 
The most striking piece of evidence was discovered recently by Shlyakhter (1976) in the measurements of isotope ratios in ore samples taken from the natural fission reactor that operated about 2 billion years ago in the Oklo uranium mine in Gabon (Maurette, 1976). The crucial quantity is the ratio (149Sm/147Sm) between the abundances of two light isotopes of samarium which are not fission products. In normal samarium this ratio is about 0.9; in the Oklo reactor it is about 0.02. Evidently the 149Sm has been heavily depleted by the dose of thermal neutrons to which it was exposed during the operation of the reactor. If we measure in a modern reactor the thermal neutron capture cross section of 149Sm, we find the value 55 kb, dominated by a strong capture resonance at a neutron energy of 0.1 eV. A detailed analysis of the Oklo isotope ratio leads to the conclusion that the 149Sm cross section was in the range 55 +- 8 kb two billion years ago. This means that the position of the capture resonance cannot have shifted by as much as 0.02 eV over 2.10^9 yr. But the position of this resonance measures the difference between the binding energies of the 149Sm ground state and of the 150Sm compound state into which the neutron is captured. These binding energies are each of the order of 10^9 eV and depend in a complicated way upon the strengths of nuclear and Coulomb interactions. The fact that the two binding energies remained in balance to an accuracy of two parts in 10^11 over 2.10^9 yr indicates that the strength of nuclear and Coulomb forces cannot have varied by more than a few parts in 10^18 per year. This is by far the most sensitive test that we have yet found of the constancy of the laws of physics. The fact that no evidence of change was found does not, of course, prove that the laws are strictly constant. In particular, it does not exclude the possibility of a variation in strength of gravitational forces with a time scale much shorter than 10^18 yr. 
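The arithmetic behind the Oklo argument can be spelled out in a few lines (a sketch using only the order-of-magnitude values quoted above; the quoted limit of a few parts in 10^18 per year on the force strengths themselves is deliberately more conservative than the raw drift rate, allowing for the complicated dependence of the binding energies on the couplings):

```python
# Order-of-magnitude check of the Oklo constancy argument.
resonance_shift_limit_eV = 0.02   # maximum shift of the 149Sm capture resonance
binding_energy_eV = 1e9           # nuclear binding energies, order of magnitude
elapsed_yr = 2e9                  # time since the Oklo reactor operated

# Fractional accuracy to which the two binding energies stayed in balance:
balance = resonance_shift_limit_eV / binding_energy_eV
print(f"balance accuracy: {balance:.0e}")          # two parts in 10^11

# Averaged over the elapsed time, the balance can have drifted at most:
drift_per_yr = balance / elapsed_yr
print(f"maximum drift rate: {drift_per_yr:.0e} per year")
```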
For the sake of simplicity, I assume that the laws are strictly constant. Any other assumption would be more complicated and would introduce additional arbitrary hypotheses. It is in principle impossible for me to bring experimental evidence to support the hypothesis that the laws of physics relevant to the remote future are already known to us. The most serious uncertainty affecting the ultimate fate of the universe is the question whether the proton is absolutely stable against decay into lighter particles. If the proton is unstable, all matter is transitory and must dissolve into radiation. Some serious theoretical arguments have been put forward (Zeldovich, 1977; Barrow and Tipler, 1978; Feinberg, Goldhaber, and Steigman, 1978) supporting the view that the proton should decay with a long half-life, perhaps through virtual processes involving black holes. The experimental limits on the rate of proton decay (Kropp and Reines, 1965) do not exclude the existence of such processes. Again on grounds of simplicity, I disregard these possibilities and suppose the proton to be absolutely stable. I will discuss in detail later the effect of real processes involving black holes on the stability of matter in bulk. I am now ready to begin the discussion of physical processes that will occur in the open cosmology (6), going successively to longer and longer timescales. Classical astronomical processes come first, quantum-mechanical processes later. _Note added in proof._ Since these lectures were given, a spate of papers has appeared discussing grand unification models of particle physics in which the proton is unstable (Nanopoulos, 1978; Pati, 1979; Turner and Schramm, 1979). A. Stellar evolution The longest-lived low-mass stars will exhaust their hydrogen fuel, contract into white dwarf configurations, and cool down to very low temperatures, within times of the order of 10^14 years. 
Stars of larger mass will take a shorter time to reach a cold final state, which may be a white dwarf, a neutron star, or a black hole configuration, depending on the details of their evolution. B. Detachment of planets from stars The average time required to detach a planet from a star by a close encounter with a second star is T = (rho V sigma)^(-1), (14) where rho is the density of stars in space, V the mean relative velocity of two stars, and sigma the cross section for an encounter resulting in detachment. For the earth-sun system, moving in the outer regions of the disk of a spiral galaxy, approximate numerical values are rho = 3.10^(-41) km^(-3), (15) V = 50 km/sec, (16) sigma = 2.10^16 km^2, (17) T = 10^15 yr. (18) The time scale for an encounter causing serious disruption of planetary orbits will be considerably shorter than 10^15 yr. C. Detachment of stars from galaxies The dynamical evolution of galaxies is a complicated process, not yet completely understood. I give here only a very rough estimate of the time scale. If a galaxy consists of N stars of mass M in a volume of radius R, their root-mean-square velocity will be of order V = [GNM/R]^(1/2). (19) The cross section for a close encounter between two stars, changing their directions of motion by a large angle, is sigma = (GM/V^2)^2 = (R/N)^2. (20) The average time that a star spends between two close encounters is T = (rho V sigma)^(-1) = (NR^3/GM)^(1/2). (21) If we are considering a typical large galaxy with N = 10^11, R = 3.10^17 km, then T = 10^19 yr. (22) Dynamical relaxation of the galaxy proceeds mainly through distant stellar encounters with a time scale T_N = T (log N)^(-1) = 10^18 yr. (23) The combined effect of dynamical relaxation and close encounters is to produce a collapse of the central regions of the galaxy into a black hole, together with an evaporation of stars from the outer regions. 
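The time scales (18), (22), and (23) can be reproduced directly from the quoted inputs. A sketch (the solar mass and the value of G are the standard ones; all other numbers are those given in (15)-(17) and (21)):

```python
import math

YEAR_S = 3.15e7            # seconds per year

# --- B. Detachment of planets from stars, Eq. (14): T = 1/(rho V sigma) ---
rho = 3e-41                # stellar density, km^-3       (Eq. 15)
V = 50.0                   # relative velocity, km/s      (Eq. 16)
sigma = 2e16               # detachment cross section, km^2 (Eq. 17)
T_detach_yr = 1.0 / (rho * V * sigma) / YEAR_S
print(f"planet detachment: {T_detach_yr:.0e} yr")   # ~10^15 yr, Eq. (18)

# --- C. Close-encounter time in a galaxy, Eq. (21): T = (N R^3 / G M)^(1/2) ---
G = 6.67e-11               # m^3 kg^-1 s^-2
N = 1e11                   # stars in a typical large galaxy
R = 3e17 * 1e3             # galactic radius in m (3.10^17 km)
M = 2e30                   # one solar mass, kg
T_enc_yr = math.sqrt(N * R ** 3 / (G * M)) / YEAR_S
print(f"close-encounter time: {T_enc_yr:.0e} yr")   # ~10^19 yr, Eq. (22)

# Dynamical relaxation is faster by a factor log N, Eq. (23);
# Dyson rounds the result to 10^18 yr.
T_relax_yr = T_enc_yr / math.log(N)
print(f"relaxation time: {T_relax_yr:.0e} yr")
```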
The evaporated stars achieve escape velocity and become detached from the galaxy after a time of the order of 10^19 yr. We do not know what fraction of the mass of the galaxy ultimately collapses and what fraction escapes. The fraction escaping probably lies between 90% and 99%. The violent events which we now observe occurring in the central regions of many galaxies are probably caused by a similar process of dynamical evolution operating on a much shorter time scale. According to (21), the time scale for evolution and collapse will be short if the dynamical units are few and massive, for example compact star clusters and gas clouds rather than individual stars. The long time scale (22) applies to a galaxy containing no dynamical units larger than individual stars. D. Decay of orbits by gravitational radiation If a mass is orbiting around a fixed center with velocity V, period P, and kinetic energy E, it will lose energy by gravitational radiation at a rate of order E_g = (V/c)^5 (E/P). (24) Any gravitationally bound system of objects orbiting around each other will decay by this mechanism of radiation drag with a time scale T_g = (c/V)^5 P. (25) For the earth orbiting around the sun, the gravitational radiation time scale is T_g = 10^20 yr. (26) Since this is much longer than (18), the earth will almost certainly escape from the sun before gravitational radiation can pull it inward. But if it should happen that the sun escapes from the galaxy with the earth still attached to it, then the earth will ultimately coalesce with the sun after a time of order (26). The orbits of the stars in a galaxy will also be decaying by gravitational radiation with time scale (25), where P is now the period of their galactic orbits. For a galaxy like our own, with V = 200 km/sec and P = 2.10^8 yr, the time scale is T_g = 10^24 yr. (27) This is again much longer than (22), showing that dynamical relaxation dominates gravitational radiation in the evolution of galaxies. E. 
Decay of black holes by the Hawking process According to Hawking (1975), every black hole of mass M decays by emission of thermal radiation and finally disappears after a time T = (G^2 M^3 / hbar c^4). (28) For a black hole of one solar mass the lifetime is T = 10^64 yr. (29) Black holes of galactic mass will have lifetimes extending up to 10^100 yr. At the end of its life, every black hole will emit about 10^31 erg of high-temperature radiation. The cold expanding universe will be illuminated by occasional fireworks for a very long time. F. Matter is liquid at zero temperature I next discuss a group of physical processes which occur in ordinary matter at zero temperature as a result of quantum-mechanical barrier penetration. The lifetimes for such processes are given by the Gamow formula T = exp(S) T_0, (30) where T_0 is a natural vibration period of the system, and S is the action integral S = (2/hbar) INT (2MU(x))^(1/2) dx. (31) Here x is a coordinate measuring the state of the system as it goes across the barrier, and U(x) is the height of the barrier as a function of x. To obtain a rough estimate of S, I replace (31) by S = (8MUd^2/hbar^2)^(1/2), (32) where d is the thickness, and U the average height of the barrier, and M is the mass of the object that is moving across it. I shall consider processes for which S is large, so that the lifetime (30) is extremely long. As an example, consider the behavior of a lump of matter, a rock or a planet, after it has cooled to zero temperature. Its atoms are frozen into an apparently fixed arrangement by the forces of cohesion and chemical bonding. But from time to time the atoms will move and rearrange themselves, crossing energy barriers by quantum-mechanical tunneling. The height of the barrier will typically be of the order of a tenth of a Rydberg unit, U = (1/20)(e^4 m/hbar^2), (33) and the thickness will be of the order of a Bohr radius d = (hbar^2/me^2), (34) where m is the electron mass. 
The action integral (32) is then S = (2Am_p/5m)^(1/2) = 27 A^(1/2), (35) where m_p is the proton mass, and A is the atomic weight of the moving atom. For an iron atom with A = 56, S = 200, and (30) gives T = 10^65 yr. (36) Even the most rigid materials cannot preserve their shapes or their chemical structures for times long compared with (36). On a time scale of 10^65 yr, every piece of rock behaves like a liquid, flowing into a spherical shape under the influence of gravity. Its atoms and molecules will be ceaselessly diffusing around like the molecules in a drop of water. G. All matter decays to iron In matter at zero temperature, nuclear as well as chemical reactions will continue to occur. Elements heavier than iron will decay to iron by various processes such as fission and alpha emission. Elements lighter than iron will combine by nuclear fusion reactions, building gradually up to iron. Consider for example the fusion reaction in which two nuclei of atomic weight 1/2 A, charge 1/2 Z combine to form a nucleus (A,Z). The Coulomb repulsion of the two nuclei is effectively screened by electrons until they come within a distance d = Z^(-1/3) (hbar^2/me^2) (37) of each other. The Coulomb barrier has thickness d and height U = (Z^2 e^2 / 4d) = 1/4 Z^(7/3) (e^4 m/hbar^2). (38) The reduced mass for the relative motion of the two nuclei is M = 1/4 A m_p. (39) The action integral (32) then becomes S = (1/2 A Z^(5/3)(m_p/m))^(1/2) = 30 A^(1/2) Z^(5/6). (40) For two nuclei combining to form iron, Z = 26, A = 56, S = 3500, and T = 10^1500 yr. (41) On the time scale (41), ordinary matter is radioactive and is constantly generating nuclear energy. H. Collapse of iron star to neutron star After the time (41) has elapsed, most of the matter in the universe is in the form of ordinary low-mass stars that have settled down into white dwarf configurations and become cold spheres of pure iron. But an iron star is still not in its state of lowest energy. 
It could release a huge amount of energy if it could collapse into a neutron star configuration. To collapse, it has only to penetrate a barrier of finite height and thickness. It is an interesting question, whether there is an unsymmetrical mode of collapse passing over a lower saddle point than the symmetric mode. I have not been able to find a plausible unsymmetric mode, and so I assume the collapse to be spherically symmetrical. In the action integral (31), the coordinate x will be the radius of the star, and the integral will extend from r, the radius of a neutron star, to R, the radius of the iron star from which the collapse begins. The barrier height U(x) will depend on the equation of state of the matter, which is very uncertain when x is close to r. Fortunately the equation of state is well known over the major part of the range of integration, when x is large compared to r and the main contribution to U(x) is the energy of nonrelativistic degenerate electrons U(x) = (N^(5/3)hbar^2/2mx^2), (42) where N is the number of electrons in the star. The integration over x in (31) gives a logarithm log(R/R_0), (43) where R_0 is the radius at which the electrons become relativistic and the formula (42) fails. For low-mass stars the logarithm will be of the order of unity, and the part of the integral coming from the relativistic region x < R_0 will also be of the order of unity. The mass of the star is M = 2Nm_p. (44) I replace the logarithm (43) by unity and obtain for the action integral (31) the estimate S = N^(4/3) (8m_p/m)^(1/2) = 120N^(4/3). (45) The lifetime is then by (30) T = exp(120N^(4/3))T_0. (46) For a typical low-mass star we have N = 10^56, S = 10^77, T = 10^(10^76) yr. (47) In (46) it is completely immaterial whether T_0 is a small fraction of a second or a large number of years. We do not know whether every collapse of an iron star into a neutron star will produce a supernova explosion. 
At the very least, it will produce a huge outburst of energy in the form of neutrinos and a modest burst of energy in the form of x rays and visible light. The universe will still be producing occasional fireworks after times as long as (47). I. Collapse of ordinary matter to black holes The long lifetime (47) of iron stars is only correct if they do not collapse with a shorter lifetime into black holes. For collapse of any piece of bulk matter into a black hole, the same formulae apply as for collapse into a neutron star. The only difference is that the integration in the action integral (31) now extends down to the black hole radius instead of to the neutron star radius. The main part of the integral comes from larger values of x and is the same in both cases. The lifetime for collapse into a black hole is therefore still given by (46). But there is an important change in the meaning of N. If small black holes are possible, a small part of a star can collapse by itself into a black hole. Once a small black hole has been formed, it will in a short time swallow the rest of the star. The lifetime for collapse of any star is then given by T = exp(120N_B^(4/3)) T_0, (48) where N_B is the number of electrons in a piece of iron of mass equal to the minimum mass M_B of a black hole. The lifetime (48) is the same for any piece of matter of mass greater than M_B. Matter in pieces with mass smaller than M_B is absolutely stable. For a more complete discussion of the problem of collapse into black holes, see Harrison, Thorne, Wakano, and Wheeler (1965). The numerical value of the lifetime (48) depends on the value of M_B. All that we know for sure is 0 <= M_B <= M_c, (49) where M_c = (hbar c/G)^(3/2) m_p^(-2) = 4.10^33 g (50) is the Chandrasekhar mass. Black holes must exist for every mass larger than M_c, because stars with mass larger than M_c have no stable final state and must inevitably collapse. Four hypotheses concerning M_B have been put forward: (i) M_B = 0. 
Then black holes of arbitrarily small mass exist and the formula (48) is meaningless. In this case all matter is unstable with a comparatively short lifetime, as suggested by Zeldovich (1977). (ii) M_B is equal to the Planck mass M_B = M_PL = (hbar c/G)^(1/2) = 2.10^(-5) g. (51) This value of M_B is suggested by Hawking's theory of radiation from black holes (Hawking, 1975), according to which every black hole loses mass until it reaches a mass of order M_PL, at which point it disappears in a burst of radiation. In this case (48) gives N_B = 10^19, T = 10^(10^26) yr. (52) (iii) M_B is equal to the quantum mass M_B = M_Q = (hbar c/G m_p) = 3.10^14 g, (53) as suggested by Harrison, Thorne, Wakano, and Wheeler (1965). Here M_Q is the mass of the smallest black hole for which a classical theory is meaningful. Only for masses larger than M_Q can we consider the barrier penetration formula (31) to be physically justified. If (53) holds, then N_B = 10^38, T = 10^(10^52) yr. (54) (iv) M_B is equal to the Chandrasekhar mass (50). In this case the lifetime for collapse into a black hole is of the same order as the lifetime (47) for collapse into a neutron star. The long-range future of the universe depends crucially on which of these four alternatives is correct. If (iv) is correct, stars may collapse into black holes and dissolve into pure radiation, but masses of planetary size exist forever. If (iii) is correct, planets will disappear with the lifetime (54), but material objects with masses up to a few million tons are stable. If (ii) is correct, human-sized objects will disappear with the lifetime (52), but dust grains with diameter less than about 100 mu will last for ever. If (i) is correct, all material objects disappear and only radiation is left. If I were compelled to choose one of the four alternatives as more likely than the others, I would choose (ii). I consider (iii) and (iv) unlikely because they are inconsistent with Hawking's theory of black-hole radiation.
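The lifetimes in cases (ii) and (iii) follow by inserting the candidate masses (51) and (53) into (44) and (48); a small check, assuming the textbook proton mass and N_B = M_B/2m_p as in (44):

```python
import math

m_p = 1.67e-24  # proton mass, g
cases = {"(ii) Planck mass":   2e-5,   # g, from (51)
         "(iii) quantum mass": 3e14}   # g, from (53)

for label, M_B in cases.items():
    N_B = M_B / (2 * m_p)          # electrons in a minimum-mass piece of iron, as in (44)
    S = 120 * N_B**(4.0/3.0)       # exponent in the lifetime (48)
    log10_T = S / math.log(10.0)   # log10 of T/T_0; compare (52) and (54)
    print(f"{label}: N_B ~ 10^{math.log10(N_B):.0f}, T ~ 10^({log10_T:.1e}) T_0")
```

The inner exponents come out of order 10^26 and 10^52 respectively, the orders quoted in the text.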
I find (i) implausible because it is difficult to see why a proton should not decay rapidly if it can decay at all. But in our present state of ignorance, none of the four possibilities can be excluded. The results of this lecture are summarized in Table I. This list of time scales of physical processes makes no claim to be complete. Undoubtedly many other physical processes will be occurring with time scales as long as, or longer than, those I have listed. The main conclusion I wish to draw from my analysis is the following: So far as we can imagine into the future, things continue to happen. In the open cosmology, history has no end.

TABLE I. Summary of time scales.

Closed Universe
    Total duration                                        10^11 yr

Open Universe
    Low-mass stars cool off                               10^14 yr
    Planets detached from stars                           10^15 yr
    Stars detached from galaxies                          10^19 yr
    Decay of orbits by gravitational radiation            10^20 yr
    Decay of black holes by Hawking process               10^64 yr
    Matter liquid at zero temperature                     10^65 yr
    All matter decays to iron                             10^1500 yr
    Collapse of ordinary matter to black hole
        [alternative (ii)]                                10^(10^26) yr
    Collapse of stars to neutron stars or black holes
        [alternative (iv)]                                10^(10^76) yr

LECTURE III. BIOLOGY

Looking at the past history of life, we see that it takes about 10^6 years to evolve a new species, 10^7 years to evolve a genus, 10^8 years to evolve a class, 10^9 years to evolve a phylum, and less than 10^10 years to evolve all the way from the primeval slime to Homo sapiens. If life continues in this fashion in the future, it is impossible to set any limit to the variety of physical forms that life may assume. What changes could occur in the next 10^10 years to rival the changes of the past? It is conceivable that in another 10^10 years life could evolve away from flesh and blood and become embodied in an interstellar black cloud (Hoyle, 1957) or in a sentient computer (Capek, 1923). Here is a list of deep questions concerning the nature of life and consciousness.
(i) Is the basis of consciousness matter or structure? (ii) Are sentient black clouds, or sentient computers, possible? (iii) Can we apply scaling laws in biology? These are questions that we do not know how to answer. But they are not in principle unanswerable. It is possible that they will be answered fairly soon as a result of progress in experimental biology. Let me spell out more explicitly the meaning of question (i). My consciousness is somehow associated with a collection of organic molecules inside my head. The question is, whether the existence of my consciousness depends on the actual substance of a particular set of molecules or whether it only depends on the structure of the molecules. In other words, if I could make a copy of my brain with the same structure but using different materials, would the copy think it was me? If the answer to question (i) is "matter", then life and consciousness can never evolve away from flesh and blood. In this case the answers to questions (ii) and (iii) are negative. Life can then continue to exist only so long as warm environments exist, with liquid water and a continuing supply of free energy to support a constant rate of metabolism. In this case, since a galaxy has only a finite supply of free energy, the duration of life is finite. As the universe expands and cools, the sources of free energy that life requires for its metabolism will ultimately be exhausted. Since I am a philosophical optimist, I assume as a working hypothesis that the answer to question (i) is "structure". Then life is free to evolve into whatever material embodiment best suits its purposes. The answers to questions (ii) and (iii) are affirmative, and a quantitative discussion of the future of life in the universe becomes possible. 
If it should happen, for example, that matter is ultimately stable against collapse into black holes only when it is subdivided into dust grains a few microns in diameter, then the preferred embodiment for life in the remote future must be something like Hoyle's black cloud, a large assemblage of dust grains carrying positive and negative charges, organizing itself and communicating with itself by means of electromagnetic forces. We cannot imagine in detail how such a cloud could maintain the state of dynamic equilibrium that we call life. But we also could not have imagined the architecture of a living cell of protoplasm if we had never seen one. For a quantitative description of the way life may adapt itself to a cold environment, I need to assume a scaling law that is independent of any particular material embodiment that life may find for itself. The following is a formal statement of my scaling law: _Biological Scaling Hypothesis. If we copy a living creature, quantum state by quantum state, so that the Hamiltonian of the copy is H_c = lambda U H U^(-1), (55) where H is the Hamiltonian of the creature, U is a unitary operator, and lambda is a positive scaling factor, and if the environment is similarly copied so that the temperatures of the environments of the creature and the copy are respectively T and lambda T, then the copy is alive, subjectively identical to the original creature, with all its vital functions reduced in speed by the same factor lambda._ The structure of the Schroedinger equation, with time and energy appearing as conjugate variables, makes the form of this scaling hypothesis plausible. It is at present a purely theoretical hypothesis, not susceptible to any experimental test. To avoid misunderstanding, I should emphasize that the scaling law does not apply to the change of the metabolic rate of a given organism as a function of temperature.
For example, when a snake or a lizard changes its temperature, its metabolic rate varies exponentially rather than linearly with T. The linear scaling law applies to an ensemble of copies of a snake, each copy adapted to a different temperature. It does not apply to a particular snake with varying T. From this point on, I assume the scaling hypothesis to be valid and examine its consequences for the potentialities of life. The first consequence is that the appropriate measure of time as experienced subjectively by a living creature is not physical time t but the quantity u(t) = f INT(0,t) theta(t') dt', (56) where theta(t) is the temperature of the creature and f = (300 deg sec)^(-1) is a scale factor which it is convenient to introduce so as to make u dimensionless. I call u "subjective time". The second consequence of the scaling law is that any creature is characterized by a quantity Q which measures its rate of entropy production per unit of subjective time. If entropy is measured in information units or bits, and if u is measured in "moments of consciousness", then Q is a pure number expressing the amount of information that must be processed in order to keep the creature alive long enough to say "Cogito, ergo sum". I call Q the "complexity" of the creature. For example, a human being dissipates about 200 W of power at a temperature of 300 K, with each moment of consciousness lasting about a second. A human being therefore has Q = 10^23 bits. (57) This Q is a measure of the complexity of the molecular structures involved in a single act of human awareness. For the human species as a whole, Q = 10^33 bits. (58) a number which tells us the order of magnitude of the material resources required for maintenance of an intelligent society. A creature or a society with given Q and given temperature theta will dissipate energy at a rate m = kfQ theta^2. 
(59) Here m is the metabolic rate measured in ergs per second, k is Boltzmann's constant, and f is the coefficient appearing in (56). It is important that m varies with the square of theta, one factor theta coming from the relationship between energy and entropy, the other factor theta coming from the assumed temperature dependence of the rate of vital processes. I am assuming that life is free to choose its temperature theta(t) so as to maximize its chances of survival. There are two physical constraints on theta(t). The first constraint is that theta(t) must always be greater than the temperature of the universal background radiation, which is the lowest temperature available for a heat sink. That is to say theta(t) > aR^(-1), a = 3.10^28 deg cm, (60) where R is the radius of the universe, varying with t according to (7) and (8). At the present time the condition (60) is satisfied with a factor of 100 to spare. The second constraint on theta(t) is that a physical mechanism must exist for radiating away into space the waste heat generated by metabolism. To formulate the second constraint quantitatively, I assume that the ultimate disposal of waste heat is by radiation and that the only relevant form of radiation is electromagnetic. There is an absolute upper limit I(theta) < 2 gamma (Ne^2/m hbar^2 c^3) (k theta)^3 (61) on the power that can be radiated by a material radiator containing N electrons at temperature theta. Here gamma = max[x^3(e^x-1)^(-1)] = 1.42 (62) is the height of the maximum of the Planck radiation spectrum. Since I could not find (61) in the textbooks, I give a quick proof, following the Handbuch article of Bethe and Salpeter (1957). The formula for the power emitted by electric dipole radiation is I(theta) = SUM(p) INT dOmega SUM(i,j) rho_i (omega_ij^4/2 pi c^3) |D_ij|^2.
(63) Here p is the polarization vector of a photon emitted into the solid angle dOmega, i is the initial and j the final state of the radiator, rho_i = Z^(-1) exp(-E_i/k theta) (64) is the probability that the radiator is initially in state i, omega_ij = hbar^(-1) (E_i - E_j) (65) is the frequency of the photon, and D_ij is the matrix element of the radiator dipole moment between states i and j. The sum (63) is taken only over pairs of states (i,j) with E_i > E_j. (66) Now there is an exact sum rule for dipole moments, SUM(i) omega_ij |D_ij|^2 = (1/2i)[D, dD/dt]_jj = (N e^2 hbar / 2m). (67) But we have to be careful in using (67) to find a bound for (63), since some of the terms in (67) are negative. The following trick works. In every term of (63), omega_ij is positive by (66), and so (62) gives rho_i omega_ij^3 < gamma rho_i (k theta/hbar)^3 (exp(hbar omega_ij/k theta)-1) = gamma (rho_j - rho_i) (k theta / hbar)^3. (68) Therefore (63) implies I(theta) < gamma(k theta/hbar)^3 . SUM(p) INT dOmega [ SUM(i,j) (rho_j - rho_i) (omega_ij/2 pi c^3)|D_ij|^2 ] . (69) Now the summation indices (i,j) can be exchanged in the part of (69) involving rho_i. The result is I(theta) < gamma(k theta/hbar)^3 . SUM(p) INT dOmega [ SUM(i,j) rho_j (omega_ij/2 pi c^3)|D_ij|^2 ] , (70) with the summation now extending over all (i,j) whether (66) holds or not. The sum rule (67) can then be used in (70) and gives the result (61). This proof of (61) assumes that all particles other than electrons have so large a mass that they are negligible in generating radiation. It also assumes that magnetic dipole and higher multipole radiation is negligible. It is an interesting question whether (61) could be proved without using the dipole approximation (63). It may at first sight appear strange that the right side of (61) is proportional to theta^3 rather than theta^4, since the standard Stefan-Boltzmann formula for the power radiated by a black body is proportional to theta^4.
The Stefan-Boltzmann formula does not apply in this case because it requires the radiator to be optically thick. The maximum radiated power given by (61) can be attained only when the radiator is optically thin. After this little digression into physics, I return to biology. The second constraint on the temperature theta of an enduring form of life is that the rate of energy dissipation (59) must not exceed the power (61) that can be radiated away into space. This constraint implies a fixed lower bound for the temperature, k theta > (Q/N) epsilon = (Q/N) 10^(-28) erg, (71) epsilon = (137 / 2 gamma) (hbar f/k) mc^2, (72) theta > (Q/N) (epsilon / k) = (Q/N) 10^(-12) deg. (73) The ratio (Q/N) between the complexity of a society and the number of electrons at its disposal cannot be made arbitrarily small. For the present human species, with Q given by (58) and N = 10^42 (74) being the number of electrons in the earth's biosphere, the ratio is 10^(-9). As a society improves in mental capacity and sophistication, the ratio is likely to increase rather than decrease. Therefore (73) and (59) imply a lower bound to the rate of energy dissipation of a society of a given complexity. Since the total store of energy available to a society is finite, its lifetime is also finite. We have reached the sad conclusion that the slowing down of metabolism described by my biological scaling hypothesis is insufficient to allow a society to survive indefinitely. Fortunately, life has another strategy with which to escape from this impasse, namely hibernation. Life may metabolize intermittently, but may continue to radiate waste heat into space during its periods of hibernation. When life is in its active phase, it will be in thermal contact with its radiator at temperature theta. When life is hibernating, the radiator will still be at temperature theta but the life will be at a much lower temperature so that metabolism is effectively stopped.
Suppose then that a society spends a fraction g(t) of its time in the active phase and a fraction [1-g(t)] hibernating. The cycles of activity and hibernation should be short enough so that g(t) and theta(t) do not vary appreciably during any one cycle. Then (56) and (59) no longer hold. Instead, subjective time is given by u(t) = f INT(0,t) g(t') theta(t') dt', (74) and the average rate of dissipation of energy is m = kfQg theta^2. (75) The constraint (71) is replaced by theta(t) > (Q/N)(epsilon/k)g(t). (76) Life keeps in step with the limit (61) on radiated power by lowering its duty cycle in proportion to its temperature. As an example of a possible strategy for a long-lived society, we can satisfy the constraints (60) and (76) by a wide margin if we take g(t) = (theta(t)/theta_0) = (t/t_0)^(-alpha), (77) where theta_0 and t_0 are the present temperature of life and the present age of the universe. The exponent alpha has to lie in the range 1/3 < alpha < 1/2, (78) and for definiteness we take alpha = 3/8. (79) Subjective time then becomes by (74) u(t) = A(t/t_0)^(1/4), (80) where A = 4f theta_0 t_0 = 10^18 (81) is the present age of the universe measured in moments of consciousness. The average rate of energy dissipation is by (75) m(t) = kfQ theta_0^2 (t/t_0)^(-9/8). (82) The total energy metabolized over all time from t_0 to infinity is INT(t_0,infinity) m(t)dt = BQ, (83) B = 2Ak theta_0 = 6.10^4 erg. (84) This example shows that it is possible for life with the strategy of hibernation to achieve simultaneously its two main objectives. First, according to (80), _subjective time is infinite_; although the biological clocks are slowing down and running intermittently as the universe expands, subjective time goes on forever. Second, according to (83), _the total energy required for indefinite survival is finite_. The conditions (78) are sufficient to make the integral (83) convergent and the integral (74) divergent as t -> infinity.
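The constants (81) and (84) can be reproduced from present-day values; a rough numerical sketch, assuming theta_0 = 300 K, t_0 = 10^10 yr, and the species complexity Q = 10^33 bits of (58):

```python
k  = 1.381e-16        # Boltzmann constant, erg/deg
f  = 1.0/300.0        # scale factor from (56), (deg sec)^(-1)
theta0 = 300.0        # present temperature of life, deg K
t0 = 1e10 * 3.156e7   # present age of the universe, sec
Q  = 1e33             # complexity of the human species, bits (58)

A = 4 * f * theta0 * t0   # (81): present age in moments of consciousness
B = 2 * A * k * theta0    # (84): energy per unit of complexity, erg

print(f"A   ~ {A:.0e}")        # ~1e18, as in (81)
print(f"B   ~ {B:.0e} erg")    # ~1e5 erg, the order of (84)
print(f"B*Q ~ {B*Q:.0e} erg")  # ~1e38 erg, the order of (85)
```

The answers agree with (84) and (85) to within the factor-of-a-few accuracy of these order-of-magnitude estimates.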
According to (83) and (84), the supply of free energy required for the indefinite survival of a society with the complexity (58) of the present human species, starting from the present time and continuing forever, is of the order BQ = 6.10^37 erg, (85) about as much energy as the sun radiates in eight hours. The energy resources of a galaxy would be sufficient to support indefinitely a society with a complexity about 10^24 times greater than our own. These conclusions are valid in an open cosmology. It is interesting to examine the very different situation that exists in a closed cosmology. If life tries to survive for an infinite subjective time in a closed cosmology, speeding up its metabolism as the universe contracts and the background radiation temperature rises, the relations (56) and (59) still hold, but physical time t has only a finite duration (5). If tau = 2 pi T_0 - t, (86) the background radiation temperature theta_R(t) = a(R(t))^(-1) (87) is proportional to tau^(-2/3) as tau -> 0, by virtue of (2) and (3). If the temperature theta(t) of life remains close to theta_R as tau -> 0, then the integral (56) is finite while the integral of (59) is infinite. We have an infinite energy requirement to achieve a finite subjective lifetime. If theta(t) tends to infinity more slowly than theta_R, the total duration of subjective time remains finite. If theta(t) tends to infinity more rapidly than theta_R, the energy requirement for metabolism remains infinite. The biological clocks can never speed up fast enough to squeeze an infinite subjective time into a finite universe. I return with a feeling of relief to the wide open spaces of the open universe. I do not need to emphasize the partial and preliminary character of the conclusions that I have presented in this lecture. I have only delineated in the crudest fashion a few of the physical problems that life must encounter in its effort to survive in a cold universe.
I have not addressed at all the multitude of questions that arise as soon as one tries to imagine in detail the architecture of a form of life adapted to extremely low temperatures. Do there exist functional equivalents in low-temperature systems for muscle, nerve, hand, voice, eye, ear, brain, and memory? I have no answers to these questions. It is possible to say a little about memory without getting into detailed architectural problems, since memory is an abstract concept. The capacity of a memory can be described quantitatively as a certain number of bits of information. I would like our descendants to be endowed not only with an infinitely long subjective lifetime but also with a memory of endlessly growing capacity. To be immortal with a finite memory is highly unsatisfactory; it seems hardly worthwhile to be immortal if one must ultimately erase all trace of one's origins in order to make room for new experience. There are two forms of memory known to physicists, analog and digital. All our computer technology nowadays is based on digital memory. But digital memory is in principle limited in capacity by the number of atoms available for its construction. A society with finite material resources can never build a digital memory beyond a certain finite capacity. Therefore digital memory cannot be adequate to the needs of a life form planning to survive indefinitely. Fortunately, there is no limit in principle to the capacity of an analog memory built out of a fixed number of components in an expanding universe. For example, a physical quantity such as the angle between two stars in the sky can be used as an analog memory unit. The capacity of this memory unit is equal to the number of significant binary digits to which the angle can be measured. As the universe expands and the stars recede, the number of significant digits in the angle will increase logarithmically with time. 
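The logarithmic growth of analog capacity can be illustrated with a toy model (not a design): a register storing one real quantity, read to a resolution that improves linearly with time, holds a number of bits growing like log t. All the numbers below are purely illustrative assumptions:

```python
import math

def analog_bits(value, resolution):
    # capacity of one analog unit = number of significant binary digits of the reading
    return math.log2(value / resolution)

angle = 1.0e-3  # radians, the stored quantity (illustrative)
for t in (1e10, 1e20, 1e40):          # epochs in years (illustrative)
    resolution = 1.0e-6 * (1e10 / t)  # assumed to improve like 1/t
    print(f"t = {t:.0e} yr: {analog_bits(angle, resolution):.0f} bits")
```

The capacity grows by a fixed number of bits per decade of time, which is the (log t) behavior claimed in the text; the actual rate would depend on the physical measurement mechanism.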
Atomic frequencies and energy levels can also in principle be measured with a number of significant figures proportional to (log t). Therefore an immortal civilization should ultimately find ways to code its archives in an analog memory with capacity growing like (log t). Such a memory will put severe constraints on the rate of acquisition of permanent new knowledge, but at least it does not forbid it altogether.

LECTURE IV. COMMUNICATION

In this last lecture I examine the problem of communication between two societies separated by a large distance in the open universe with metric (6). I assume that they communicate by means of electromagnetic signals. Without loss of generality I suppose that society A, moving along the world-line chi = 0, transmits, while society B, moving along a world-line with the co-moving coordinate chi = eta, receives. A signal transmitted by A when the time coordinate psi = xi will be received by B when psi = xi + eta. If the transmitted frequency is omega, the received frequency will be red-shifted to omega' = omega/(1+z) = omega R_A / R_B, (88) R_A = cT_0 (cosh xi - 1), (89) R_B = cT_0 (cosh (xi+eta) - 1). (90) The bandwidths B and B' will be related by the same factor (1+z). The proper distance between A and B at the time the signal is received is d_L = R_B eta. However, the area of the sphere chi = eta at the same instant is 4 pi d_T^2, with d_T = R_B sinh eta. (91) If A transmits F photons per steradian in the direction of B, the number of photons received by B will be F' = (F Sigma' / d_T^2), (92) where Sigma' is the effective cross section of the receiver. Now the cross section of a receiver for absorbing a photon of frequency omega' is given by a formula similar to (63) in the previous lecture Sigma' = SUM(i,j) rho_i (4 pi^2 omega_ji/hbar c) |D_ij|^2 . delta(omega_ji - omega'), (93) with D_ij again a dipole matrix element between states i and j.
When this is integrated over all omega', we obtain precisely the left side of the sum rule (67). The contribution from negative omega' represents induced emission of a photon by the receiver. I assume that the receiver is incoherent with the incident photon, so that induced emission is negligible. Then the sum rule gives INT(0,infinity) Sigma' domega' = N' (2 pi^2 e^2 / mc), (94) where N' is the number of electrons in the receiver. If the receiver is tuned to the frequency omega' with bandwidth B', (94) gives Sigma' B' <= N' S_0, (95) S_0 = (2 pi^2 e^2 / mc) = 0.167 cm^2 sec^(-1). (96) To avoid confusion of units, I measure both omega' and B' in radians per second rather than in hertz. I assume that an advanced civilization will be able to design a receiver which makes (95) hold with equality. Then (92) becomes F' = (FN'S_0/d_T^2 B'). (97) I assume that the transmitter contains N electrons which can be driven in phase so as to produce a beam of radiation with angular spread of the order N^(-1/2). If the transmitter is considered to be an array of N dipoles with optimum phasing, the number of photons per steradian in the beam is F = (3N/8 pi) (E/hbar omega), (98) where E is the total energy transmitted. The number of received photons is then F' = (3 N N' E S_0 / 8 pi hbar omega d_T^2 B'). (99) We see at once from (99) that low frequencies and small bandwidths are desirable for increasing the number of photons received. But we are interested in transmitting information rather than photons. To extract information efficiently from a given number of photons we should use a bandwidth equal to the detection rate, B' = (F'/tau_B), B = (F'/tau_A), (100) where tau_B is the duration of the reception, and tau_A is the duration of the transmission. With this bandwidth, F' represents both the number of photons and also the number of bits of information received. 
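The numerical constants in this lecture follow from standard CGS values; a quick check of (96), together with the related quantities (105) and (115) used below, assuming textbook values for e, m, c, and hbar:

```python
import math

e    = 4.803e-10  # electron charge, esu
m    = 9.109e-28  # electron mass, g
c    = 2.998e10   # speed of light, cm/sec
hbar = 1.055e-27  # erg sec

S0 = 2 * math.pi**2 * e**2 / (m * c)       # (96)
Ec = 8 * math.pi * hbar * c**2 / (3 * S0)  # (105)
r0 = e**2 / (m * c**2)                     # classical electron radius (115)

print(f"S0 = {S0:.3f} cm^2/sec")  # 0.167, as in (96)
print(f"Ec ~ {Ec:.0e} erg")       # a few times 10^-5, the order of (105)
print(f"r0 ~ {r0:.0e} cm")        # ~3e-13, as in (115)
```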
It is convenient to express tau_A and tau_B as a fraction of the radius of the universe at the times of transmission and reception tau_A = (delta R_A/c), tau_B = (delta R_B/c). (101) The condition delta <= 1 (102) then puts a lower bound on the bandwidth B. I shall also assume for simplicity that the frequency omega is chosen to be as low as possible consistent with the bandwidth B, namely omega = B, omega' = B'. (103) Then (99), (100), (101) give F' = [ N N' delta^2 E / ((1+z) (sinh^2 eta) E_c) ]^(1/3), (104) where by (96) E_c = (8 pi hbar c^2 / 3 S_0) = (4/3 pi)137 mc^2 = 3.10^(-5) erg. (105) We see from (104) that the quantity of information that can be transmitted from A to B with a given expenditure of energy does not decrease with time as the universe expands and A and B move apart. The increase in distance is compensated by the decrease in the energy cost of each photon and by the increase of receiver cross section with decreasing bandwidth. The received signal is (104). We now have to compare it with the received noise. The background noise in the universe at frequency omega can be described by an equivalent noise temperature T_N, so that the number of photons per unit bandwidth per steradian per square centimeter per second is given by the Rayleigh-Jeans formula I(omega) = (kT_N omega/4pi^3 hbar c^2). (106) This formula is merely a definition of T_N, which is in general a function of omega and t. I do not assume that the noise has a Planck spectrum over the whole range of frequencies. Only a part of the noise is due to the primordial background radiation, which has a Planck spectrum with temperature theta_R. The primordial noise temperature theta_R varies inversely with the radius of the universe, (k theta_R R / hbar c) = Lambda = 10^29, (107) with R given by (8).
I assume that the total noise spectrum scales in the same way with radius as the universe expands, thus (T_N / theta_R) = f(x), x = (hbar omega / k theta_R), (108) with f a universal function of x. When x is of the order of unity, the noise is dominated by the primordial radiation, and f(x) takes the Planck form f(x) = f_p(x) = x (e^x - 1)^(-1), x ~ 1. (109) But there will be strong deviations from (109) at large x (due to red-shifted starlight) and at small x (due to nonthermal radio sources). Without going into details, we can say that f(x) is a generally decreasing function of x and tends to zero rapidly as x -> infinity. The total energy density of radiation in the universe is 4pi/c INT I(omega)hbar omega domega = (k theta_R)^4 I/(pi^2 hbar^3 c^3), (110) with I = INT(0,infinity) f(x) x^2 dx. (111) The integral I must be convergent at both high and low frequencies. Therefore we can find a numerical bound b such that x^3 f(x) < b (112) for all x. In fact (112) probably holds with b = 10 if we avoid certain discrete frequencies such as the 1420 MHz hydrogen line. The number of noise photons received during the time tau_B by the receiver with bandwidth B' and cross section Sigma' is F_N = 4 pi Sigma' B' tau_B I(omega'). (113) We substitute from (95), (96), (100), (103), (106), and (108) into (113) and obtain F_N = (2r_0/lambda_B)fN'F', (114) where r_0 = (e^2/mc^2) = 3.10^(-13) cm, (115) and lambda_B = (hc/k theta_R') = Lambda^(-1) R_B (116) is the wavelength of the primordial background radiation at the time of reception. Since F' is the signal, the signal-to-noise ratio is R_SN = (lambda_B/2f N' r_0). (117) In this formula, f is the noise-temperature ratio given by (108), N' is the number of electrons in the receiver, and r_0, lambda_B are given by (115), (116). Note that in calculating (117) we have not given the receiver any credit for angular discrimination, since the cross section Sigma' given by (95) is independent of direction.
I now summarize the conclusions of the analysis so far. We have a transmitter and a receiver on the world-lines A and B, transmitting and receiving at times t_A = T_0 (sinh xi - xi), t_B = T_0 (sinh(xi+eta) - (xi+eta)). (118) According to (89) and (101), tau_A = delta (dt_A/dxi), tau_B = delta (dt_B/dxi). (119) It is convenient to think of the transmitter as permanently aimed at the receiver, and transmitting intermittently with a certain duty cycle delta which may vary with xi. When delta = 1 the transmitter is on all the time. The number F' of photons received in the time tau_B can then be considered as a bit rate in terms of the variable xi. In fact, F' dxi is the number of bits received in the interval dxi. It is useful to work with the variable xi since it maintains a constant difference eta between A and B. From (100), (101), (103), (107), and (108) we derive a simple formula for the bit rate, F' = Lambda x delta. (120) The energy E transmitted in the time tau_A can also be considered as the rate of energy transmission per unit interval dxi. From (104) and (120) we find E = (Lambda^3/NN')(1+z)(sinh^2 eta)x^3 delta E_c. (121) We are still free to choose the parameters x [determining the frequency omega by (108)] and delta, both of which may vary with xi. The only constraints are (102) and the signal-to-noise condition R_SN >= 10, (122) the signal-to-noise ratio being defined by (117). If I assume that (112) holds with b=10, then (122) will be satisfied provided that x > (G/r)^(1/3), (123) with G = (200 r_0 / lambda_p) N' (1+z)^(-1) = 10^(-9) N' (1+z)^(-1), (124) r = (R_A/R_p) = (cosh xi - 1 ) / (cosh xi_p - 1). (125) Here lambda_p, R_p and xi_p are the present values of the background radiation wavelength, the radius of the universe, and the time coordinate psi. It is noteworthy that the signal-to-noise condition (123) may be difficult to satisfy at early times when r is small, but gets progressively easier as time goes on and the universe becomes quieter.
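To see how the condition (123) eases with time, one can evaluate the threshold for an illustrative receiver containing N' = 10^30 electrons (about 1 kg, as in (135)) at separation eta = 1, with (1+z) replaced by its asymptotic value e^eta:

```python
import math

N_prime = 1e30   # electrons in a 1-kg receiver, as in (135)
eta = 1.0        # separation of the two societies
G = 1e-9 * N_prime / math.exp(eta)  # (124), with (1+z) ~ e^eta

for r in (1.0, 1e10, 1e20):     # expansion ratio R_A/R_p of (125)
    x_min = (G / r)**(1.0/3.0)  # signal-to-noise condition (123)
    print(f"r = {r:.0e}: need x > {x_min:.1e}")
```

At the present epoch the transmission frequency must sit far out on the tail of the noise spectrum (x of order 10^7), but the requirement relaxes steadily as the universe expands and quiets.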
To avoid an extravagant expenditure of energy at earlier times, I choose the duty cycle delta to be small at the beginning, increasing gradually until it reaches unity. All the requirements are satisfied if we choose x = max [(G/r)^(1/3), xi^(-1/2)], (126) delta = min [(r/G)xi^(-3/2), 1], (127) so that x^3 delta = xi^(-3/2) (128) for all xi. The transition between the two ranges in (126) and (127) occurs at xi = xi_T ~ log G, (129) since xi increases logarithmically with r by (125). With these choices of x and delta, (120) and (121) become F' = Lambda min [(r/G)^(2/3) xi^(-3/2), xi^(-1/2)], (130) E = (Lambda^3/ NN') (1+z) (sinh^2 eta) E_c xi^(-3/2). (131) Now consider the total number of bits received at B up to some epoch xi in the remote future. According to (130), this number is approximately F_T = INT(.,xi) F' dxi = 2 Lambda xi^(1/2), (132) and increases without limit as xi increases. On the other hand, the total energy expended by the transmitter over the entire future is finite, E_T = INT(.,infinity) E dxi = 2(Lambda^3/NN')(e^eta sinh^2 eta) (xi_p)^(-1/2) E_c. (133) In (133) I have replaced the red shift (1+z) by its asymptotic value e^eta as xi -> infinity. I have thus reached the same optimistic conclusion concerning communication as I reached in the previous lecture about biological survival. It is in principle possible to communicate forever with a remote society in an expanding universe, using a finite expenditure of energy. It is interesting to make some crude numerical estimates of the magnitudes of F_T and E_T. By (107), the cumulative bit count in every communication channel is the same, of the order F_T = 10^29 xi^(1/2), (134) a quantity of information amply sufficient to encompass the history of a complex civilization. To estimate E_T, I suppose that the transmitter and the receiver each contain 1 kg of electrons, so that N = N' = 10^30. (135) Then (133) with (105) gives E_T = 10^23 (e^eta sinh^2 eta) erg.
(136) This is of the order of 10^9 W yr, an extremely small quantity of energy by astronomical standards. A society which has available to it the energy resources of a solar-type star (about 10^36 W yr) could easily provide the energy to power permanent communication channels with all the 10^22 stars that lie within the sphere eta < 1. That is to say, all societies within a red shift

z = e - 1 = 1.718 (137)

of one another could remain in permanent communication. On the other hand, direct communication between two societies with large separation would be prohibitively expensive. Because of the rapid exponential growth of E_T with eta, the upper limit to the range of possible direct communication lies at about eta = 10. Information can still be carried over distances greater than eta = 10 without great expenditure of energy if several societies en route serve as relay stations, receiving, amplifying, and retransmitting the signal in turn. In this way messages could be delivered over arbitrarily great distances across the universe. Every society in the universe could ultimately be brought into contact with every other society.

As I remarked in the first lecture [see Eq. (11)], the number of galaxies that lie within a sphere eta < psi grows like e^(2 psi) when psi is large. So, when we try to establish linkages between distant societies, there will be a severe problem of selection. There are too many galaxies at large distances. To which of them should we listen? To which of them should we relay messages? The more perfect our technical means of communication become, the more difficulty we shall have in deciding which communications to ignore.

In conclusion, I would like to emphasize that I have not given any definitive proof of my statement that communication of an infinite quantity of information at a finite cost in energy is possible.
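The identities and order-of-magnitude claims in (126)-(137) can be reproduced numerically. In the sketch below, the values chosen for G and xi_p are assumptions made for illustration; the identity x^3 delta = xi^(-3/2), the finiteness of the energy integral, and the watt-year arithmetic do not depend on them.

```python
import math

# Part 1: Eqs. (126)-(128).  With x and delta chosen as in the text,
# x^3 * delta = xi^(-3/2) holds in both regimes.  G and XI_P are
# assumed illustrative values, not quantities fixed by the text.
G, XI_P = 1.0e3, 1.0

def x_and_delta(xi):
    """x and delta from Eqs. (126)-(127), with r from Eq. (125)."""
    r = (math.cosh(xi) - 1.0) / (math.cosh(XI_P) - 1.0)
    x = max((G / r) ** (1.0 / 3.0), xi ** -0.5)
    delta = min((r / G) * xi ** -1.5, 1.0)
    return x, delta

for xi in (1.0, 2.0, 10.0, 100.0):
    x, delta = x_and_delta(xi)
    assert abs(x ** 3 * delta - xi ** -1.5) < 1e-9 * xi ** -1.5

# Part 2: convergence behind (132)-(133).  The energy rate falls like
# xi^(-3/2), whose integral from XI_P to infinity is 2*XI_P^(-1/2);
# the bit rate falls only like xi^(-1/2), so the bit count is unbounded.
total, xi, dxi = 0.0, XI_P, 0.01
while xi < 1.0e4:
    total += xi ** -1.5 * dxi
    xi += dxi
print(total)                  # approaches 2*XI_P**(-0.5) = 2

# Part 3: the arithmetic of (136)-(137).  E_T = 1e23 * e^eta * sinh(eta)^2
# erg, converted to watt-years (1 W yr = 1e7 erg/s * 3.156e7 s).
ERG_PER_WATT_YEAR = 1.0e7 * 3.156e7

def E_T_watt_years(eta):
    """Total transmitted energy of Eq. (136), in watt-years."""
    return 1.0e23 * math.exp(eta) * math.sinh(eta) ** 2 / ERG_PER_WATT_YEAR

print(E_T_watt_years(1.0))    # ~1e9 W yr at red shift z = e - 1
print(E_T_watt_years(10.0))   # grows like e^(3*eta): long ranges are costly
```

For eta = 1 the cost per channel comes out near 10^9 W yr, so the 10^36 W yr of a solar-type star could indeed fund channels to the 10^22 stars mentioned in the text; for large eta the cost grows like e^(3 eta)/4, which is why direct links much beyond eta = 10 become prohibitive.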
To give a definitive proof, I would have to design in detail a transmitter and a receiver and demonstrate that they can do what I claim. I have not even tried to design the hardware for my communications system. All I have done is to show that a system performing according to my specifications is not in obvious contradiction with the known laws of physics and information theory.

The universe that I have explored in a preliminary way in these lectures is very different from the universe which Steven Weinberg had in mind when he said, "The more the universe seems comprehensible, the more it also seems pointless." I have found a universe growing without limit in richness and complexity, a universe of life surviving forever and making itself known to its neighbors across unimaginable gulfs of space and time. Is Weinberg's universe or mine closer to the truth? One day, before long, we should know.

Whether the details of my calculations turn out to be correct or not, I think I have shown that there are good scientific reasons for taking seriously the possibility that life and intelligence can succeed in molding this universe of ours to their own purposes. As Haldane (1924) the biologist wrote fifty years ago, "The human intellect is feeble, and there are times when it does not assert the infinity of its claims. But even then:

Though in black jest it bows and nods,
I know it is roaring at the gods,
Waiting the last eclipse."

REFERENCES

Alpher, R.A., R.C. Herman, and G. Gamow, 1948, Phys.Rev. 74, 1198.
Barrow, J.D., and F.J. Tipler, 1978, "Eternity is Unstable", Nature (Lond.) 276, 453.
Bethe, H.A., and E.E. Salpeter, 1957, "Quantum Mechanics of One- and Two-Electron Systems", in Handbuch Phys. 35, 334-348.
Capek, K., _R.U.R._, translated by Paul Selver (Doubleday, Garden City, N.Y.).
Davies, P.C.W., 1973, Mon.Not.Roy.Astron.Soc. 161, 1.
Dyson, F.J., 1972, _Aspects of Quantum Theory_, edited by A. Salam and E.P. Wigner (Cambridge University, Cambridge, England), Chap. 13.
Dyson, F.J., 1978, "Variation of Constants", in _Current Trends in the Theory of Fields_, edited by J.E. Lannutti and P.K. Williams (American Institute of Physics, New York), pp. 163-167.
Feinberg, G., M. Goldhaber, and G. Steigman, 1978, "Multiplicative Baryon Number Conservation and the Oscillation of Hydrogen into Antihydrogen", Columbia University preprint CU-TP-117.
Goedel, K., 1931, Monatsh.Math.Phys. 38, 173.
Gott, J.R., III, J.E. Gunn, D.N. Schramm, and B.M. Tinsley, 1974, Astrophys.J. 194, 543.
Gott, J.R., III, J.E. Gunn, D.N. Schramm, and B.M. Tinsley, 1976, Sci.Am. 234, 62 (March, 1976).
Haldane, J.B.S., 1924, _Daedalus, or, Science and the Future_ (Kegan Paul, London).
Harrison, B.K., K.S. Thorne, M. Wakano, and J.A. Wheeler, 1965, _Gravitation Theory and Gravitational Collapse_ (University of Chicago, Chicago), Chap. 11.
Hawking, S.W., 1975, Commun.Math.Phys. 43, 199.
Hoyle, F., 1957, _The Black Cloud_ (Harper, New York).
Islam, J.N., 1977, Q.J.R.Astron.Soc. 18, 3.
Islam, J.N., 1979, Sky Telesc. 57, 13.
Kropp, W.P., and F. Reines, 1965, Phys.Rev. 137, 740.
Maurette, M., 1976, Annu.Rev.Nucl.Sci. 26, 319.
Monod, J., _Chance and Necessity_, translated by A. Wainhouse (Knopf, New York) [_Le Hasard et la Necessite_, 1970 (Editions du Seuil, Paris)].
Nagel, E., and J.R. Newman, 1956, Sci.Am. 194, 71 (June, 1956).
Nanopoulos, D.V., 1978, _Protons are not Forever_, Harvard Preprint HUTP-78/A062.
Pati, J.C., 1979, _Grand Unification and Proton Stability_, University of Maryland Preprint No. 79-171.
Penzias, A.A., and R.W. Wilson, 1965, Astrophys.J. 142, 419.
Rees, M.J., 1969, Observatory 89, 193.
Shlyakhter, A.I., 1976, Nature (Lond.) 264, 340.
Turner, M.S., and D.N. Schramm, 1979, _The Origin of Baryons in the Universe and the Astrophysical Implications_, Enrico Fermi Institute Preprint No. 79-10.
Weinberg, S., 1972, _Gravitation and Cosmology_ (Wiley, New York), Chap. 15.
Weinberg, S., 1977, _The First Three Minutes_ (Basic, New York).
Wright, T., 1750, _An Original Theory or New Hypothesis of the Universe_, facsimile reprint with introduction by M.A. Hoskin, 1971 (MacDonald, London, and American Elsevier, New York).
Zeldovich, Y.B., 1977, Sov.Phys.-JETP 45, 9.