Copernicus taught us that we don’t live in a special place in space. Translated into time, that led to the very important Copernican principle that all points in time are the same. Now we’ve discovered that the universe is accelerating, and we do live in a special place in time. We’re right near the transition point between deceleration and acceleration, and not all times are the same. I think that is something that has to have profound meaning for science.
Paul Steinhardt, Princeton cosmologist
After Einstein completed general relativity, he was satisfied but rather exhausted. The intensity of the project had taken a toll on his health. Nevertheless, he felt intellectually compelled to apply his masterwork toward unraveling one of the deepest mysteries of science: the shape and form of the cosmos itself.
Today, the notion of galaxies as immense groupings of stars is so familiar that it’s hard to believe the concept is less than a century old. Before Hubble measured the distances to Andromeda and other spiral forms in the sky in 1924 and established them as “island universes” in their own right, many astronomers thought they were
simply nebulas (gas clouds) within the Milky Way itself. In other words, astronomers believed that the Milky Way constituted the entire universe and that all celestial bodies belonged to it. The cosmos, they thought, was a homogeneous sea of stars (and other formations) that had remained roughly the same since the beginning of time.
Before Hubble’s discoveries, Einstein shared this early perception, believing that the overall distribution of material in space was essentially static. Therefore, when he applied general relativity to the universe, he was astonished to discover that his result was highly unstable. Like an acrobat teetering on a wire, a slight push in any direction would send his model flying. A bit too much matter and his solution collapsed. A bit too little and it blew up. In either case, the universe seemed a fleeting creation, not a rock of the ages.
Reluctantly, the German physicist felt compelled to supplement his elegant equation with an extra term, known as the cosmological constant and denoted by the Greek letter Λ (lambda). This addition served to stabilize his model of the universe by counteracting gravitational attraction with a kind of antigravitational repulsion. It effectively offered a balancing pole to the teetering acrobat. Where the antigravity came from, Einstein couldn’t say. Finding it a bit crazy, he informed his friend Paul Ehrenfest that he had “committed something in the theory of gravitation that threatens to get me interned in a lunatic asylum.”
The geometry Einstein had chosen for his model of the universe was also rather unusual. Instead of a stretched-out, speckled sheet, as we often imagine the canopy of the heavens to be, it resembled a polka-dot balloon. Rather than infinite, it was closed and finite. A beam of light heading in any direction would circumnavigate the entire universe and eventually return to its starting place.
Einstein selected a bounded, rather than unlimited, cosmos purely for philosophical reasons. He ardently wanted general relativity to obey Mach’s principle—with the distant stars guiding
local inertia—but found that he couldn’t do so for an infinite collection of stars. A finite universe would fit that model much more easily. Naturally, though, the universe couldn’t end with a wall. It would be far more elegant to imagine the cosmos as sufficiently curved that it connects up with itself—in other words, as what mathematicians call a “hypersphere.”
A hypersphere is a higher-dimensional version of an ordinary sphere. Take a dot, spin it around a loop, and it becomes a circle. Twirl that circle about an axis and it becomes a sphere. Now choose an additional dimension, perpendicular to the ordinary three dimensions of space, and whirl that sphere around. It traces out a higher-dimensional object. Naturally, that last step of this extrapolation is hard to fathom. Yet there are creative ways of picturing higher dimensions and of determining the actual geometry of the universe.
Let’s say that you’ve never heard of the game of basketball. You come from a tiny island nation where the only two sports are synchronized and unsynchronized swimming. Suppose you enter a gym in the United States and see a basketball on the floor. Without picking it up, how do you know it’s spherical?
The answer is trickier than you might think. Our vision carves out two-dimensional planes in three-dimensional space. Yet nuances of shade and color, the diminution of apparent size with distance, and varied perceptions from each eye offer us a sense of depth. These optical tools help us ascertain objects’ shapes and positions. Artists make use of such subtle cues to enliven their works, lending them an extra dimension. Such illusions leap out at us most vividly in 3D movies.
How then can we really be certain that a basketball is spherical, not just a cleverly disguised orange pancake? A sure way of telling involves measuring angles on its surface. If you trace out a triangle
on a flat pancake and add up its angles, the sum is precisely 180 degrees. Do the same for the exterior of a ball, and you arrive at a figure greater than 180 degrees. The differences between flat and curved surfaces were first explored by the mathematicians Gauss, Lobachevsky, and Bolyai in the early nineteenth century, and form the basis of the subject known as non-Euclidean geometry.
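The angle-sum test is easy to try for yourself numerically. The following Python sketch (an illustration added here, not part of the historical account) places a triangle on a unit sphere, with one corner at the pole and two corners on the equator a quarter turn apart, and sums its three angles:

```python
import numpy as np

def angle_at(p, q, r):
    """Angle of the spherical triangle at vertex p, measured between
    the great-circle arcs p->q and p->r (all points on the unit sphere)."""
    tq = q - np.dot(p, q) * p          # tangent vector at p, pointing toward q
    tr = r - np.dot(p, r) * p          # tangent vector at p, pointing toward r
    tq /= np.linalg.norm(tq)
    tr /= np.linalg.norm(tr)
    return np.degrees(np.arccos(np.clip(np.dot(tq, tr), -1.0, 1.0)))

# A triangle with one corner at the "north pole" and two on the "equator",
# a quarter turn apart -- much like the ant's triangle on the basketball.
A = np.array([0.0, 0.0, 1.0])
B = np.array([1.0, 0.0, 0.0])
C = np.array([0.0, 1.0, 0.0])

total = angle_at(A, B, C) + angle_at(B, A, C) + angle_at(C, A, B)
print(total)   # sum is 270 degrees, not the flat-space 180
```

On a flat sheet the same three corners would total exactly 180 degrees; the 90-degree excess is the unmistakable signature of positive curvature.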
Now imagine an ant crawling along the basketball—perhaps the same savvy insect that rode on top of Newton’s bucket. Constrained on the ball’s surface, it would be unaware that the ball has depth. It would believe that it lives in a two-dimensional world. However, two unmistakable facts would strongly suggest that the ball is a sphere. First, the ant could easily circumnavigate the surface and return to its original position. That would at least tell it that the ball is finite, not stretched out indefinitely. Second, it could measure out its own triangle and sum up the angles. A quick calculation would prove that the surface is curved.
But suppose the ant was easily distracted and somehow never completed a full circle around the ball. If it could never physically enter the ball’s interior, how could it really be sure that it lived in a three-dimensional world? Maybe it would even discount the presence of a third dimension, since it couldn’t actually see it. Dismissing mysterious, unseen directions, perhaps the ant would conclude that it resided on a two-dimensional pancake with strange geometric properties. Becoming an expert in non-Euclidean geometry, it would consider the basketball’s inside merely a hypothetical construct, lacking a physical basis.
Similarly, what if astronomical observations show that the four-dimensional space-time of general relativity is actually curved? Then we are led to ask: curved into what? The logical answer is that space-time bends into a fifth dimension, one we may not be able to sample directly because we cannot step out of our world. This leads us to a fundamental question that is not so much technical as conceptual: are extra dimensions merely hypothetical constructs, convenient for mathematical discourse, or are they in some sense real?
The nature of reality in science is a tricky game. For example, in quantum physics, particles are described by wave functions, containing all manner of information. However, at any given time not all these data can be accessed. According to Heisenberg’s uncertainty principle, by taking a measurement of one quantity of a particle (position, for instance), other quantities (such as momentum) tend to blur. Hence, these measured quantities, called observables, generally don’t constitute a complete picture of the particle. One might ask, then, which is the true physical reality—the shadowy realm of wave functions or the incomplete set of observables?
Astronomers face a similar dilemma when they examine phenomena that cannot be directly observed. Consider, for example, the hundreds of planets discovered during the past decade orbiting distant stars. Most of them were detected through their gravitational tugs on their suns. Because (as in most cases) the planets themselves are too dim to be seen, astronomers must infer their existence. They are presumed real because that’s the best explanation researchers have developed to account for their parent stars’ slight movements. In our own solar system, for many years Neptune’s existence was merely presumed. Well before its image was seen with a telescope, astronomers surmised its presence from perturbations in the orbits of the other planets. Did those observations alone make Neptune real, or did its light have to be detected first?
Most physicists and astronomers today would say that something is real if they infer its existence through a logical explanation that preserves the established laws of nature. The subatomic particle called the neutrino is a good example of this philosophy. Theorist Wolfgang Pauli postulated its existence through applying the principles of the conservation of energy and momentum. Although his peers gently taunted him about his advocacy of a particle that had never been seen, Pauli stood his ground. Almost two decades later,
experimenters finally proved him right. The neutrino, though notoriously one of the most elusive particles in nature, was indeed real.
Einstein went back and forth throughout his career on whether or not extra dimensions were real. This indecision related to his mixed feelings about the role of experimentation in physics. Philosophically, he had one foot firmly in each of two camps. He often argued that experimentation was needed to establish any proposition. That’s why he breathed easier once the Mercury precession and light-bending measurements seemed to confirm general relativity. On the other hand, he spent much of his later years trying to use his own intuition to surmise the deep mathematical principles underlying reality. At least to the outside world, these musings seemed to have little to do with what was experimentally known at the time.
Einstein’s propositions that the universe is shaped like a hypersphere, and that a cosmological constant is needed to keep it from either expanding or collapsing, could not be tested for many years. Only in recent times have astronomers been able to map the likely shape of the cosmos and consider the likelihood of an antigravity term. Nevertheless, just by bringing up these issues Einstein ushered in a new age for cosmology. For the first time, science addressed the possibility that space itself has an overall shape.
A sphere is not the only way a surface can be curved. Saddles, for example, are often curved one way on the bottom, to accommodate the horse, and another way on the top, to provide comfort to the rider. Similarly, three-dimensional spaces can curve several ways into a higher dimension while preserving constant curvature. Besides a hypersphere (known as closed or positively curved), spaces can be saddle-shaped hyperboloids (known as open or negatively curved). The third possibility is for the space to be completely flat (known as zero curvature).
In 1922, Russian mathematician Alexander Friedmann explored each of these geometric possibilities for the universe. In the absence of a cosmological constant, he found that they corresponded to three
distinct cosmologies. He characterized these by a parameter, known as the scale factor, that measures the size of space. If space grows, for instance, the scale factor increases over time. This results in its content (galaxies in the present era) moving farther apart.
The universe, as Friedmann envisioned, started out extremely small. Then, like a pumped-up balloon, it began to expand. If the universe’s overall geometry is closed, this expansion will eventually reverse itself—like air being let out of a balloon—and collapse it back down. This catastrophic demise is often called the Big Crunch. If, in contrast, the universe is open or flat, it will expand forever. The difference between the two models pertains to how quickly the scale factor grows; it grows faster for open than flat geometries. These three possibilities (closed, open, and flat) delineate what are known as the Friedmann cosmological models.
Which geometry is feasible for a particular universe depends on its overall density. Denser universes follow a closed scenario, while sparser ones obey an open scenario. Universes with densities precisely equal to a critical value are flat. The ratio between the actual density and the critical density is called the omega parameter. For omega greater than one, the cosmos is closed; for omega less than one, it is open; and for omega exactly equal to one, it is flat.
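The omega test can be made concrete with a few lines of arithmetic. In the sketch below (added for illustration; the Hubble constant of 70 kilometers per second per megaparsec is an assumed modern round value, not a figure from the text), the critical density follows from the standard formula rho_c = 3H0^2 / (8 pi G):

```python
import math

G = 6.674e-11                # gravitational constant, m^3 kg^-1 s^-2
H0 = 70 * 1000 / 3.086e22    # ~70 km/s per megaparsec, converted to 1/s

# Critical density separating open from closed universes
rho_crit = 3 * H0**2 / (8 * math.pi * G)
print(f"critical density ~ {rho_crit:.1e} kg/m^3")

def geometry(rho):
    """Classify a Friedmann universe by its omega parameter."""
    omega = rho / rho_crit
    if omega > 1:
        return "closed"
    if omega < 1:
        return "open"
    return "flat"

print(geometry(2 * rho_crit))    # denser than critical: closed
print(geometry(0.3 * rho_crit))  # sparser than critical: open
print(geometry(rho_crit))        # exactly critical: flat
```

The answer, a few times 10^-27 kilograms per cubic meter, amounts to only a handful of hydrogen atoms in every cubic meter: the knife edge between open and closed universes is astonishingly sparse.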
According to physicist George Gamow, Friedmann sent his results to Einstein, pointing out inaccuracies in Einstein’s static model. Einstein did not reply for quite some time. Finally, he responded with a “grumpy letter,” reluctantly agreeing with Friedmann’s conclusions. Although Friedmann published his results in a prestigious German journal, they were overlooked for several years—until Hubble’s remarkable findings brought them to prominence.
Hubble’s discovery, in 1929, of the expansion of the universe came at a fortuitous moment for cosmology. By that time, general relativity
was a well-established theory with a host of solutions. In addition to Einstein’s static universe and the dynamic Friedmann models, de Sitter had proposed a curious cosmological model that was completely empty but expanded anyway. Thus, scientists wishing to describe the cosmos could choose from possibilities galore: stable, dynamic, expanding, collapsing, full of matter, bereft of matter, and so on.
Consequently, when Hubble revealed that all the galaxies in creation were fleeing from each other like a roomful of angry solipsists, theorists were well prepared. They dusted off expansion scenarios and put them to good use—relegating static models to the bottom drawers of musty filing cabinets. The final vestige of the Newtonian cosmos—the notion that space doesn’t evolve—crumpled under the weight of immutable facts.
It did not take long for Einstein to realize he had erred in presuming that the cosmos was immutable. In January 1931, during a trip to the United States, he visited Mount Wilson Observatory in California to see for himself the instrument that had provided a window to cosmic truth. By that time Einstein was extraordinarily famous, so film crews accompanied him as he rode the elevator up to the 100-inch Hooker telescope and glanced through its eyepiece. Hubble was beaming with pride as he showed the German physicist the most powerful telescope on Earth and the spectral evidence he had gathered with it. Paying tribute to Hubble’s work, Einstein admitted that the cosmological constant had been a mistake. “Not for a moment,” said Einstein, “did I doubt that this formalism was merely a makeshift to give the general principle of relativity a preliminary closed form.” The purest form of the general relativistic equations, he declared, had been the correct one. The age of the expanding universe had begun.
One of the articles unearthed at that time was remarkable for its prescience. Written in 1927, it came into prominence in 1931, when Eddington had it translated from French into English and published
in the Monthly Notices of the Royal Astronomical Society. The piece was remarkable for predicting not only a growing universe but also one whose growth is tempered or accelerated during various phases of its life. Moreover, it suggested a seeming connection between cosmological theories and the biblical notion of a moment of creation. Perhaps this was not surprising, given that the author of the paper, Georges Lemaitre, was an ordained priest as well as a scientist.
Born in Belgium in 1894, Lemaitre studied math and physics while attending seminary, devouring all he could read about general relativity. After his ordination in 1923, he attended the University of Cambridge, where he took courses under Eddington. He completed his education at MIT, obtaining his Ph.D. in 1927.
In his seminal research paper, Lemaitre devised a hybrid between the cosmological theories of Einstein and Friedmann. Adding a cosmological constant to Friedmann’s disparate geometries, Lemaitre found that Einstein’s equations produced a curious assortment of behaviors. The resulting solutions became known as the Friedmann-Lemaitre models.
The solution Lemaitre found the most promising is sometimes called the “hesitation universe.” According to his theory, all of space and time began with a solitary burst of energy—a singular moment of genesis. Before that explosive instant, absolutely nothing existed. Afterward, the universe was a rapidly growing fireball, hurled outward by the blast. Fred Hoyle, a leading critic of this idea, later dubbed it the Big Bang. Lemaitre preferred to call the initial state the primordial atom. (It was sometimes also called the cosmic egg.)
During its initial era, the cosmos was very dense. Consequently, the sticky force of its gravity was strong enough to slow the expansion. As the universe got bigger and bigger, its expansion became slower and slower. Eventually, its expansion was languid enough that galaxies could assemble from the hot matter. The galaxies in this model were distributed like a fluid with no center and no edge. As in
the Einstein universe, they resembled dots painted on the surface of a higher-dimensional “balloon.” Just as every point on a balloon is as central as every other point, no galaxy can rightly claim to be in the middle of the universe.
According to Lemaitre, the lazy-growth period or “hesitation era” lasted for billions of years, allowing for the formation of all the galaxies we see in the sky. Then a new force began to dominate cosmic dynamics—the repulsive power of the cosmological constant term. We now call this extra push the “dark energy.”
One of the advantages of Lemaitre’s proposal was its flexibility. By tinkering with the value of the cosmological constant, one could reduce or extend the hesitation era as much as one wanted—like tuning a radio dial to produce the best reception. Presumably the optimal time frame of Lemaitre’s model would be one that reproduced the known age of the universe and other observed astrophysical facts.
Although Eddington helped bring Lemaitre’s paper to publication, he vehemently disagreed with its premise of a universal beginning. The British astronomer found distasteful the idea that time could have a starting gate, preferring to believe that the cosmos existed eternally. To remove the concept of genesis from the equations, Eddington pondered an infinitely long quiescent period, similar to Einstein’s static realm, in which the universe was like a solid lump of dough. This space-time dough would have persisted in the same state forever, except that somehow a disturbance (acting as a kind of cosmic yeast) caused it to rise. It expanded, under the influence of a cosmological constant, until it reached its present-day size—hence the colossal cosmos we observe today.
Why would a sleeping cosmos of infinite duration suddenly wake up? This profound philosophical question dates at least as far back as St. Augustine of Hippo. In City of God, he argued that there was no contradiction between an immortal creator and a finite creation at a fixed instant in time. Eddington believed the awakening stemmed
from a chance event that could have occurred at any moment. If someone were to play the lottery an infinite number of times, eventually they’d win and their life would be changed forever. The universe simply won the lottery.
A third alternative, to both a singular explosion and a slow waking up, dates back (at least philosophically) to traditional Eastern notions of eternal cycles. The Hindus, Babylonians, dynastic Chinese, ancient Greeks, and many other cultures have advocated an ever-repeating universe in which the slate is periodically wiped clean. In the mid-1930s, Caltech physicist Richard Tolman explored a similar concept with his “oscillatory universe.” According to this model, instead of a universal beginning, the Big Bang was preceded by the “Big Crunch” of an earlier cycle. That crunch stemmed from the earlier era’s collapse, which was precipitated by a previous Big Bang, and so forth. Each era resembled a closed Friedmann model, glued by fate to its predecessors and successors.
Tolman realized, however, that his model could not produce an endless succession of viable worlds. Rather than starting afresh, each era would preserve the entropy (amount of disorder) of the previous era. Like a movie theater that never sweeps up between screenings, the universe would accumulate more and more disorderly energy. Tolman calculated that this entropy increase would make each cycle longer and longer, with higher and higher temperatures, and less and less hospitable to the development of galaxies, stars, planets, and life. Ultimately, the cosmos would recycle itself into an indefinite array of lifeless stages. We might ask a philosophical question: If a universe arises that no living being is around to observe, does it truly exist?
Note that the various cosmological theories of that period had markedly different suppositions. Both Lemaitre’s model and Eddington’s model made use of a cosmological constant term. Even though Einstein called this term his greatest blunder, it offered cosmologists greater freedom to “fine-tune” each universe model to
bring it into line with astrophysical predictions. Tolman’s model, on the other hand, based on an extrapolation of Friedmann’s universe indefinitely into the past and future, did not have such a term. Hence, it was simpler but did not possess the same flexibility.
Einstein expressed interest in the variety of cosmological models that attempted to explain Hubble’s discovery, but he did not step into the fray and advocate a particular scenario. He consulted with both Tolman and Lemaitre but did very little research himself on this question. (A paper Einstein published on the topic mainly summarized what was known at the time.) While this discourse was taking place, Einstein had become intensely focused on a different goal: to describe two of the known forces of nature (electromagnetism and gravitation) by means of a unified field theory that would replace quantum theory with an alternative explanation of atomic phenomena. Ironically, while Einstein pressed on with this goal, it was a third interaction—the nuclear force—that would come to dominate discussions in physics for quite some time.
By the 1940s it became clear to the astronomical community that any credible theory of the universe would need not only to have expansion but also to address the origin of the chemical elements. Hans Bethe (who passed away recently at the age of 98—still productive in his later years) had proposed a brilliant model of stellar nucleosynthesis, showing how helium, carbon, and other higher elements could be built up from hydrogen through the process of fusion. Hydrogen nuclei (protons) could weld together to form deuterium (a heavier form of hydrogen, with both a neutron and a proton in each core). Deuterium, in turn, could meld with hydrogen, to form helium-3 (two protons and one neutron per nucleus). Helium-3 nuclei could fuse into helium-4 (two protons and two neutrons per nucleus), releasing protons in the process, and so forth.
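One way to see that such a chain is consistent is to do the nuclear bookkeeping: in every step, the total number of protons plus neutrons (the baryon number) must balance. A small Python check, added here as an illustration of the steps just described:

```python
# Each nucleus represented as (protons, neutrons); baryon number = protons + neutrons
H   = (1, 0)   # hydrogen nucleus (a lone proton)
D   = (1, 1)   # deuterium
He3 = (2, 1)   # helium-3
He4 = (2, 2)   # helium-4

def baryons(*nuclei):
    return sum(p + n for p, n in nuclei)

# The chain described in the text, step by step.  (In the first step one
# proton converts into a neutron, emitting a positron and a neutrino, so
# charge is carried off even though baryon number balances.)
steps = [
    ((H, H),     (D,)),         # p + p         -> deuterium
    ((D, H),     (He3,)),       # d + p         -> helium-3
    ((He3, He3), (He4, H, H)),  # He-3 + He-3   -> helium-4 + 2 protons
]

for inputs, outputs in steps:
    assert baryons(*inputs) == baryons(*outputs)
print("baryon number balances at every step")
```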
These processes could not occur just anywhere, however. They needed extremely high energies to overcome the electrical repulsion of protons—enabling these particles to be close enough to feel the attractive nuclear force. At the time of Bethe’s proposal, it was unclear if stars were hot enough to produce all the higher elements in the universe (beyond helium). It was also uncertain how such material, once created, could be disseminated.
Russian physicist George Gamow, a former student of Friedmann’s, found in Lemaitre’s notion of a “primordial atom” a perfect opportunity to explain how the ultrahigh energies needed for nucleosynthesis could arise. Along with Ralph Alpher and Robert Herman, young researchers at Johns Hopkins, Gamow proposed that all the known elements, from hydrogen to uranium, were forged in the blazing furnace of the Big Bang. The Big Bang, they reckoned, was hot enough to allow for the assembly of dozens of elements out of hydrogen building blocks.
Gamow had a splendid sense of humor and could not resist a good joke. When in 1948 he submitted his paper, placing Alpher’s name first and his own name last, he couldn’t help inserting Bethe’s appellation in the middle. This did not reflect an actual contribution by Bethe to the project. It was just so that the “authorship” of the paper—Alpher, Bethe, and Gamow—could resemble the first three letters of the Greek alphabet: alpha, beta, and gamma. Like a mischievous schoolboy who had just pulled off a prank, Gamow sent a copy of the paper to his friend Oskar Klein. He included this personal message to Klein: “It seems that this ‘alphabetical’ article may represent alpha to omega of the element production. How do you like it?”
Perhaps not quite realizing the importance of the paper, Klein wrote back: “Thank you very much for sending me your charming alphabetical paper. Will you allow me, however, to have some doubt as to its representing ‘the alpha to omega of the element production.’ As far as gamma goes, I agree of course completely with you and that
this bright beginning looks most promising indeed, but as to the further development I see difficulties.” In pointing out that Gamow hadn’t accounted for all the Greek letters, just three of them, Klein was only kidding. Ironically, there were indeed real physical difficulties with the paper, going beyond its “bright beginning,” but Klein didn’t find them. It took a rival group of scientists to point out some of the model’s limitations.
That rival group, including British cosmologists Hermann Bondi, Thomas Gold, and later Fred Hoyle, advanced what is known as the “steady state theory.” Their theory derived from profound philosophical objections to the Big Bang. Like Eddington, they couldn’t imagine the cosmos emerging in a flash. Given the time-tested laws governing the conservation of matter and energy, they found it preposterous that all the material in the cosmos could suddenly arise from nothing, like a magician’s trick. Why should one moment in the universe’s history be so radically different from all the other moments?
Bondi and Gold proposed a new law of nature, called the “perfect cosmological principle,” stating that the universe has maintained a largely consistent appearance for all times. Billions of years ago, according to this view, there were different stars and galaxies—perhaps even different life forms—but the overall distribution of these objects was roughly the same as it is now. This is an extension of what is called the “cosmological principle,” the Copernican notion that Earth has no special place in space. The perfect cosmological principle further supposes that Earth has no special place in time as well.
If the cosmos has been as consistent throughout the ages as Dorian Gray’s visage, how can we explain the Hubble recession of the galaxies? Doesn’t universal expansion imply change? The steady state theory addresses this issue by positing that as the galaxies move away from each other, a gradual infusion of new material would fill in the gaps, leaving everything pretty much the same. This process is called “continuous creation.” The amount of new matter
needed to restock the vacant regions and maintain consistency over time would be extraordinarily tiny. Just one hydrogen atom per cubic mile of space would need to materialize each year. Eventually, these newly created atoms would coalesce into new galaxies, replacing the older ones that have moved away.
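It is worth checking just how tiny this creation rate really is. Converting one hydrogen atom per cubic mile per year into everyday units (a back-of-the-envelope sketch added here; the constants are standard rounded values):

```python
# How small is "one hydrogen atom per cubic mile per year"?
m_H = 1.67e-27          # mass of a hydrogen atom, kg
mile = 1609.34          # meters in a mile
cubic_mile = mile**3    # ~4.2e9 cubic meters
year = 3.156e7          # seconds in a year

rate = m_H / (cubic_mile * year)   # kg of new matter per cubic meter per second
print(f"{rate:.1e} kg/m^3/s")
```

At roughly 10^-44 kilograms per cubic meter per second, no laboratory experiment could ever hope to catch an atom in the act of materializing, which is precisely why the proposal was so hard to rule out directly.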
Critics of the steady state theory pointed to its continuous creation of atoms as an egregious violation of the conservation of mass. How could atoms simply appear out of nowhere? Proponents of the theory responded that a minuscule violation of a physical law, spread out over the eons, was far superior to a colossal breakdown at a single point in time. If matter is to be pulled out of a hat, they argued, isn’t it better done as a slow trickle than with a Big Bang?
Throughout his career, Hoyle (later joined by astronomers Geoffrey Burbidge and Jayant Narlikar) advanced every conceivable argument for various versions of steady state cosmology. He developed the machinery for a “creation field” that would explain how new material could arise from nowhere. The stretching out of space would enrich this energy pool, providing a source for new particles. (Later a very similar mechanism would be used to explain the much more popular “inflationary” model.) Technically, one chooses the average pressure of the universe to be exactly equal in value but opposite in sign to the density, for all times. In this manner, matter would be produced at just the right rate to offset the dilution caused by the expansion and keep the density constant: a steady state.
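Hoyle’s balancing act can be illustrated with the standard fluid equation of cosmology, drho/dt = -3H(rho + p) in units where c = 1, with H the expansion rate. If the pressure is chosen equal and opposite to the density, as described above, the right-hand side vanishes and the density never dilutes. A toy numerical sketch (arbitrary units, added here for illustration):

```python
# Fluid equation: drho/dt = -3 * H * (rho + p), with H the expansion rate.
# The steady state demands p = -rho, so the right-hand side vanishes and
# expansion never dilutes the density.  Toy units, crude Euler integration.
H = 0.5        # expansion rate (arbitrary units)
rho = 1.0      # initial density
dt = 0.01

for _ in range(1000):
    p = -rho                          # the steady-state equation of state
    rho += -3 * H * (rho + p) * dt    # source term exactly cancels dilution

print(rho)   # still 1.0: however fast space stretches, the density stays put
```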
A critical byproduct of the steady state theory was the development of a viable model of how the heavier elements came to be. In 1957 Hoyle, along with E. Margaret Burbidge, Geoffrey Burbidge, and William Fowler, wrote an extraordinary paper detailing the synthesis of elements. Elements are germinated, they proposed, in the fiery bellies of stars and released in catastrophic supernova explosions. The cores of stars, they showed, have high enough temperatures to forge each atomic nucleus from simpler ones. Thus, virtually everyone around us was once embryonic material in stellar wombs. This
provocative idea was later proven correct by detailed calculations performed by Donald Clayton and other researchers. The original paper (dubbed “B2FH” after the initials of its authors) became a landmark in astrophysics. Fowler (but not the others) received a Nobel Prize in 1983 for this discovery.
As astrophysicists measured the relative abundance of various elements in space and determined the energies required to synthesize these, they realized that nature used two different means of assembly. For lighter elements, such as deuterium, helium, and lithium, Gamow’s fireball proposal seemed to account well for the recorded amounts. One could use the theory, for example, to calculate the amount of helium synthesized by nuclear reactions during the fireball. This figure nicely matched observed quantities. For elements heavier than lithium, however (such as carbon, oxygen, and so forth), the fireball explanation did not suffice and the supernova theory fit well. Hence, at a birthday party, while the helium in a balloon may well have been multi-billion-year-old Big Bang leftovers, the cake’s ingredients were certainly more freshly made in a stellar oven.
By the early 1960s, the debate between Big Bangers and steady staters had assumed epic proportions. Without sufficient evidence supporting either position, discussions of the issue veered toward the philosophical rather than the physical. Those who liked to think of time as precious and unique tended to agree with Gamow, while those who preferred imagining it as copious and indistinguishable tended to support Hoyle. It would take a buzz from the distant past to help settle the matter.
The old Bell Labs was well known in the 1960s (and a number of years thereafter) as a haven for unfettered basic research. Though privately owned (by the phone company, no less), it kept its employees on looser reins than the government or even many academic settings.
Researchers were largely free to follow their own creative instincts as long as a reasonable number of their projects eventually bore fruit. With enough brilliant people pursuing their dreams, the result was a steady stream of groundbreaking achievement in fields from linguistics to physics.
Arno Penzias and Robert Wilson are two of Bell Labs’ most famous sons. In 1965, while scanning for radio emissions from a gaseous ring surrounding the Milky Way with the giant Horn Antenna in Holmdel, New Jersey, they uncovered veiled truths about the essence of deepest space. Designed for satellite communications as part of NASA’s Project Echo, the antenna assumed profound astronomical importance in the hands of these capable researchers. Unexpectedly, the funnel-shaped aluminum structure acted like an ear to the distant past. To their amazement, instead of satellite signals or more conventional reverberations, they encountered the echoes of the early universe. Their unprecedented findings demonstrated that the cosmos is bathed in the cooling afterglow of a searing earlier epoch.
Detecting and analyzing astronomical radio waves is a tricky business. There are many different types of earthly noise (such as radio and television broadcasts) that can mask celestial signals. Consequently, when Penzias and Wilson were preparing the Horn Antenna for their sky scan and heard a strange persistent background hiss, their first thoughts were to rule out a variety of mundane possibilities. Aiming the receiver in a wide range of directions, they were surprised that the background noise did not vary at all. It seemed to be coming from everywhere. As a last stab at eliminating the peculiar sound, they decided to investigate the possibility that “white dielectric material” was fouling up the receiver. You may have seen such a substance on windshields from time to time. It drops out of pigeons. But no, after thoroughly cleaning every square inch of the antenna, the hiss remained.
Finally, Penzias and Wilson decided to consult with Dicke, just down the road at Princeton. Dicke, as it turned out, was planning to
search for relics of the Big Bang in the high-wavelength (radio and microwave) region of the spectrum. He had long suspected that hot primordial radiation, cooled over time through cosmological expansion, would be present throughout the cosmos. Along with young astronomers P. J. E. (Jim) Peebles and David Wilkinson, he was developing a radiometer to scan for such remnant signals. They were astonished to learn that they had been beaten to the punch.
The Princeton group quickly calculated the temperature of the radiation that would produce the signals that Penzias and Wilson observed. It turned out to be roughly three degrees Kelvin (three degrees above absolute zero, or minus 454 degrees Fahrenheit). Then, employing techniques in thermal physics, they determined the temperature of a fireball that had been chilled by billions of years of expansion. That value also turned out to be a few degrees Kelvin. Hence, Dicke and his co-workers proclaimed Penzias and Wilson’s findings as proof that the universe was once enormously hot and dense. The low-temperature radiation that fills all of space became known as the cosmic microwave background (CMB).
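The cooling the Princeton group computed follows a simple rule: as the universe expands, the radiation temperature falls in inverse proportion to how much space has stretched. A minimal sketch of that arithmetic (the figure of 3,000 degrees Kelvin for the fireball when it became transparent, and a stretch factor of about 1,100 since then, are standard textbook values assumed here, not numbers from this chapter):

```python
# Radiation temperature falls as 1 / (stretch factor of the universe).
# Assumed standard textbook values, for illustration only:
T_fireball = 3000.0   # temperature (kelvin) when the universe became transparent
stretch = 1100.0      # how much the universe has expanded since then

T_today = T_fireball / stretch
print(f"Predicted background temperature today: {T_today:.1f} K")
# A few degrees above absolute zero -- matching the hiss Penzias and Wilson heard.
```

Running this gives roughly 2.7 degrees Kelvin, in line with the temperature Dicke's group inferred from the Horn Antenna signal.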
When Penzias, Wilson, and the Princeton group published these results, they were hailed as the most important cosmological discovery since the time of Hubble. In a stunning omission, however, the articles did not cite key work by Gamow, Alpher, and Herman regarding the temperature and content of the early universe. Gamow hurriedly pointed out that he and his colleagues had predicted the relic radiation back in 1948. In his memoirs, Dicke later wrote:
There is one unfortunate and embarrassing aspect of our work on the fireball radiation. We failed to make an adequate literature search and missed the more important papers of Gamow, Alpher and Herman. I must take the major blame for this, for the others in our group were too young to know these old papers. In ancient times I had heard Gamow talk at Princeton but I had remembered his model universe as cold and initially filled only with neutrons.
By the late 1960s, the steady state theory was tottering. Despite repeated attempts to amend it by Hoyle and his associates, it lost considerable support. Lacking a fireball stage, it simply could not account for the origin of the CMB. The battle seemed to have been won by the Big Bangers—at least for the moment. For their epic discovery Penzias and Wilson would receive the Nobel Prize for physics in 1978.
For about a decade after the CMB was discovered, the astronomical community (for the most part) stood entranced by its achievement— so awestruck by the Big Bang model that few among them pointed out any flaws. At last humankind could delve into the first instants of time and pen a new scientific Genesis. The major issue that needed to be worked out, many scientists seemed to argue, was the precise timing of cosmological events.
In 1977 a book by physicist Steven Weinberg, audaciously entitled The First Three Minutes, celebrated humankind’s newfound ability to map the infant moments of the cosmos. It provided a remarkably detailed picture, dating as far back as one-hundredth of a second after the initial explosion. Weinberg explained:
Throughout most of the history of modern physics and astronomy, there simply has not existed an adequate observational and theoretical foundation on which to build a history of the early universe. Now, in just the past decade, all this has changed. A theory of the early universe has become so widely accepted that astronomers often call it the “standard model.”
Weinberg proceeded to explain the step-by-step process by which matter was created—from elementary particles such as photons, electrons, protons, neutrons, and neutrinos, to deuterium and then
higher elements. Each successive phase occurred as the universe cooled enough to accommodate more complex structures. Yet even he admitted, “I cannot deny a feeling of unreality in writing about the first three minutes as if we really know what we are talking about.” However, the unanswered questions seemed mainly to concern the fate of the universe rather than its origins, because researchers of the time shared a feeling that the universe could well be modeled by the dynamics prescribed by Friedmann. Recall that there are three basic Friedmann models: closed, open, and flat. These models are characterized by a parameter “omega” that relates the actual density of the universe to a critical value. If omega is greater than one, the universe is closed and doomed eventually to collapse. If, on the other hand, omega is less than or equal to one, the universe is open or flat, respectively, and fated to expand forever. Thus, the burning question of the time concerned the exact value of omega.
The omega question is akin to asking whether or not a rocket has enough impetus to clear Earth’s gravity and blast off into space. If its initial thrust is piddling, there’s no way it can make the jump. Rather, it will arc back down toward the ground and crash. With sufficient liftoff speed, however, its momentum will take it well past Earth’s gravitational pull and deep into the interplanetary void. These two possibilities are analogous to omega greater than or less than one, respectively. A third possibility, analogous to omega equals one, is that the rocket would have just the right initial push to propel itself into orbit. It would neither be forced down by Earth’s gravity nor escape it. Instead, it would be forever at the brink of conquest and freedom—allowed to sail but forbidden to follow an independent course. Such is the fate of a flat universe. What destiny then is in store for us?
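The rocket analogy can be made concrete. Escape hinges on whether the rocket's motion beats Earth's gravitational grip, and the ratio between the two plays the role of omega. A toy calculation (the Earth figures are standard values; the "omega-analog" label is this sketch's own invention, not the chapter's):

```python
import math

# Standard Earth values (assumptions for illustration):
G = 6.674e-11        # gravitational constant, m^3 / (kg s^2)
M_earth = 5.972e24   # mass of Earth, kg
R_earth = 6.371e6    # radius of Earth, m

v_escape = math.sqrt(2 * G * M_earth / R_earth)   # about 11.2 km/s

# omega-analog > 1: gravity wins and the rocket falls back ("closed")
# omega-analog < 1: motion wins and the rocket escapes ("open")
# omega-analog = 1 exactly: the knife edge between the two ("flat")
for v in (8_000.0, 15_000.0):                     # launch speeds, m/s
    omega_analog = (v_escape / v) ** 2
    fate = "falls back (closed)" if omega_analog > 1 else "escapes (open)"
    print(f"v = {v/1000:.0f} km/s: omega-analog = {omega_analog:.2f} -> {fate}")
```

Only a launch speed tuned exactly to the escape value yields an omega-analog of precisely one, which is why the flat case is such a delicate balance.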
Even in the 1960s and 1970s, before the advent of space telescopes and other precision instruments, astronomers knew something about omega. They realized that it was probably not minuscule or enormous (one-thousandth or one million, let's say) but rather stood
reasonably close to one (within what scientists call an “order of magnitude” or factor of 10). This extraordinary proximity to a particular value introduced a thorny theoretical problem known as the “flatness dilemma.”
How special is the universe? Are its features akin to a Rolls Royce or a Yugo—meticulously assembled to order or common mass production? Western religious tradition suggests that the cosmos was custom-made for man. If it weren’t for the slipping in of sin by slithering agents, we’d be living in a state of sheer perfection. Eastern belief is perhaps less egocentric, positing that our race and civilization comprise but a minute component of endless creations. Scattered through space and time, like myriad shiny pebbles on a surf-scrubbed beach, every conceivable possibility exists.
Since the age of Copernicus, science has veered steadily away from specialness. Earth, it asserts, is but an ordinary rock tucked into an average corner of the cosmos—a speck in the dustbin of the utility closet of space’s arena. Life is but a random brew, concocted by the blind chefs of time. Consciousness is merely a curious combination of chemicals whose interactions cause awareness. Art and poetry stem from particular firings of neurons that trigger pleasing receptors in the brain—and so forth.
Perhaps the ultimate expression of this philosophy is the so-called “chaotic cosmology programme”—a phrase coined by cosmologist John Barrow to characterize a far-reaching scientific goal. Given the complete range of possible characteristics of the early universe, it says, which of these could have resulted in the current cosmic state of affairs? Barrow designated the complete set of initial possibilities to be the collection of all possible solutions to Einstein’s equations of general relativity. More recently, astrophysicist Max Tegmark has extended this to include all conceivable laws of nature.
Speculative writers and historians often consider such “what if” scenarios—applied to Earth, that is. If the plague hadn’t decimated Europe (or, in the other extreme, if it had left but a small percentage of survivors), would there still have been a Renaissance? Or would medieval institutions have lingered for many more centuries— possibly even until the present day? Just as one might contemplate alternative histories of Earth, one might consider disparate scenarios for the universe itself.
How wide a range, for example, could the value of omega have been in the earliest stages of the universe and still lead to the current state of affairs? If omega started out as one-half (representing an open universe) or two (representing a closed universe), to pick some values, could the cosmos have evolved over billions of years into present-day conditions? Or to put this question another way: If a cosmic designer threw a dart to select the initial value of omega, how close to bull’s eye would it need to land?
The answer, according to theorists’ calculations, is astonishing. If, by the end of the first second after the Big Bang, omega differed from one by as little as one part in one quadrillion (the digit one, followed by 15 zeros), this minute discrepancy would have ballooned over time. In the eons that followed, the ensuing dynamics would bear little resemblance to those of the actual cosmos. Omega, by now, would be either much too large or way too small. That is, if the universe wasn’t extraordinarily close to flat to begin with, it could not possibly be anywhere close to flat today. (Note that “flat” in this context refers to the shape of the ordinary spatial part of four-dimensional space-time.) This conundrum, first posed by Dicke in the late 1960s, is called the “flatness problem.”
To picture this bizarre situation, imagine if the Three Bears opened a bed and breakfast and Goldilocks was one of their customers. Upon arriving at the inn, she found that her bed had no linens on it. Next to it, on top of a table, was a folded pile of sheets. A sign hung above it: “We are currently hibernating and shouldn’t be
disturbed. Please help yourself to a sheet. Note that these are enchanted sheets and must be placed perfectly flat upon the bed.”
Goldilocks was very sleepy. Ignoring the sign’s warnings, she picked up a sheet and placed it loosely on the mattress. “Flat enough,” she thought, and then fell asleep. Imagine her horror when an hour later she woke up completely entangled in the sheet. It had curled up and was starting to squeeze tighter and tighter. “This sheet is too snug!” she screamed as she pulled herself away.
As soon as Goldilocks left the bed, the sheet folded itself and then hopped back up on the table. She decided to try again. “This time I’ll make it really flat,” she muttered to herself. “Great idea,” the sheet echoed back. Goldilocks picked it up, placed it back on the bed, and then gently tucked in its corners. Trying to smooth it out, she failed to notice that one of the previous guests (a princess) had left a pea under the mattress. This tiny legume caused a minute bump in the fabric, so subtle that it could scarcely be noticed.
Nevertheless, barely an hour’s time after she fell asleep again, Goldilocks woke up feeling quite odd. Suddenly, she realized that she was floating close to the ceiling. The sheet, not being perfectly flat, had billowed outward, becoming puffier and puffier until it lifted off the bed. “This sheet is too bloated!” she cried out. Cautiously, Goldilocks climbed down the sheet back to the floor. Once again, the sheet folded itself up and resumed its perch on the table.
Now, being a savvy girl, she lifted the mattress to find the source of the problem. Discovering the pea, she chucked it out the window into the garden (where it promptly grew into an ornamental stalk). Then she replaced the mattress, carefully making sure it was absolutely flat. Next, she spread the sheet out, eyeing it from every direction to be certain it showed not the slightest slant or kink. Once she completed her inspection, she decided to chance slumber once more. This time she was successful—the sheet was flat enough to stay that way throughout the night. Waking up, feeling the most rested she
had ever been, she exclaimed, “This sheet is just right,” and then checked off five happy stars on her Zagat’s survey.
Just like Goldilocks, we want conditions in our world to be just right. If omega diminishes or balloons, that corresponds respectively to the universe bursting outward either too quickly or too slowly (the smaller the omega, the lower the cosmic density and the faster the expansion). In the former case, stable structures such as galaxies would not be able to form. In the latter, the universe would expand for a relatively brief period, run out of steam, and then collapse back down to an ultradense state. Either way, Earth would not have been able to form. Thus, life as we know it is predicated on the universe starting out as flat as a Kansas cornfield.
Why should the cosmos be so flat in the beginning? Could it be that flatness is an inherent feature of the universe? Perhaps. But making such a special assumption seems most at odds with the idea that early conditions were a chaotic jumble. To resolve this contradiction, one might imagine a way of stretching out the rumpled bedsheets of the universe and eliminating all its wrinkles. Such a cosmological process would not only help ameliorate the flatness issue, it would also address another thorny dilemma, the “horizon problem.”
When astronomers point their telescopes in any direction and map out the average distribution of galaxies, they find an astonishing degree of uniformity. Detailed galaxy counts yield essentially the same values no matter in which quadrant they are taken. Statistically, for instance, the northern sky looks virtually the same as the southern. Researchers recognize, however, that this approximation holds only for the largest scales of viewing. A more focused look reveals that many galaxies are actually in clumps, such as groups, clusters, and superclusters. Moreover, certain segments of the sky, the voids, are relatively empty, and other regions appear like the
surfaces of bubbles. There is even a long sheet of galaxies stretching out hundreds of millions of light-years, known as the “Great Wall.” The existence of these structures implies a certain degree of irregularity on smaller scales.
Nevertheless, the greater the scope of sky surveys, the smoother the picture of the cosmos they reveal. The superclusters, for instance, appear to be randomly distributed. Over the largest distances we can probe with telescopes, their density has only small fluctuations. Moreover, on the greatest scales, each sector of space has roughly the same temperature and composition. This smoothness can similarly be seen in statistical averages of the microwave background. Astronomers call this situation “isotropic,” meaning the same in all directions.
Given the notion that everything emerged from the same fireball, 13.7 billion years ago, is such uniformity surprising? Indeed, it is, considering that according to the Big Bang model, light was not always free to move throughout space. Models of universal evolution indicate that photons bounced from particle to particle in a cosmic pinball game for about 300,000 years after the initial burst. Only then did atoms form, in the process known as “recombination.” (This is a misnomer, since atoms were never really together in the first place.) Atomic nuclei grabbed up available electrons, leaving photons free to move through space. While the atoms would eventually coalesce into the seeds of stellar and other material, the photons would cool over time and form the basis of the observed microwave background radiation. Thus, it was during the era of recombination, not during the initial blast, that the universe’s profile established itself.
The trouble is that by that era different parts of the universe had theoretically long been out of contact with each other. Various regions of space lay well beyond each other’s “horizons”—the maximum reach of light (and all other forms of communication) during a particular time interval. Therefore, there should be no
reason to expect the temperature of the fireball radiation to be the same in all directions. Yet the observed cosmic microwave background does have nearly the same temperature throughout the heavenly dome. Current data indicate that temperatures fluctuate only a few parts in 100,000. Physicists call this situation the horizon problem.
Imagine that 100 high school alumni arrived at their 10-year reunion each clad in ruffled purple dresses or suits and that you found out that the classmates had been completely out of touch for the entire decade. No phone calls, e-mails or letters had been exchanged, except to announce the time and place of the event. How would you explain such a startling wardrobe synchronicity?
You could chalk it up to pure coincidence or shared fashion sense. Or if you did some detective work, perhaps you might discover hidden commonalities that led to such color uniformity. For example, maybe a mixer was held shortly before graduation that brought all the seniors together. Suppose the sponsors of the event asked students to dress like the pop star Prince, whose favorite color is purple. At the mixer, students shared smiles and came to associate their outfits with graduation. Therefore, even when they were beyond communication for years, they retained certain commonalities.
Different parts of the universe have been out of touch for far longer than that. Nevertheless, they are all costumed the same. Could there have been some kind of cosmic “mixer” well before the photons “graduated” and moved away?
In 1969, physicist Charles Misner of the University of Maryland proposed the Mixmaster universe as a potential way of resolving the horizon problem. The Mixmaster universe is an anisotropic variation of the Big Bang theory. In the standard Big Bang, the universe bursts forth at equal rates in all directions, like an evenly gushing fountain. The Mixmaster universe, on the other hand, behaves far more erratically. It expands in certain directions while contracting in others. Furthermore, the directions of expansion and contraction keep changing in an essentially unpredictable manner. Misner
believed that this chaotic behavior would help smooth out the universe, like the action of a blender, and explain why it currently appears pretty much uniform in all directions. He dubbed it “Mixmaster” after a kitchen mixer advertised heavily at the time.
Subsequent research, however, showed that the Mixmaster wasn’t quite as effective as first thought. Like broken thermostats in a massive apartment complex, it didn’t level off the temperatures in different spaces enough. Some sectors would be hotter and others colder—unlike what astronomers observe today. Furthermore, an important paper by Stephen Hawking and C. B. (Barry) Collins, published in 1973, placed a damper on ideas that the universe ever was less isotropic than it is now. Entitled, appropriately enough, “Why Is the Universe Isotropic?,” the article made a strong case that the chances of any anisotropic universe (such as the Mixmaster model) evolving into what we currently observe were effectively zero.
The authors of the paper suggested one way of handling the matter, an argument known as the anthropic principle. Established by Brandon Carter, the anthropic principle asserts that the universe is the way it is because if it were any different it couldn’t possibly support advanced life. Conditions in an anisotropic cosmos would be too nasty and brutish to allow reasonable planetary systems to form. If there were any deviation from flatness and isotropy, there wouldn’t be cognizant beings, and no one would live to tell the tale. It’s the same reason why no one has written a book called “True Tales from the Earth’s Core.” No one lives in the core and no one could write such a book. Hence, by anthropic reasoning, we all live on the planetary surface.
Many physicists have dismissed the anthropic argument as too philosophical, even too religious, in nature—a far cry from the careful results of calculations. It smacks, they feel, of the line in Candide: “All’s for the best in this best of all possible worlds.” At least one physicist, Max Dresden of Stony Brook, jokingly attributed it to a modern-day version of anglocentrism. Just as the Victorians thought
that England was the most civilized of all places, he commented, anthropic reasoning purports that our universe is the most civilized of all possibilities.
Other scientists, such as Roger Penrose, suggested a more mathematical way of constraining the initial state of the universe to be isotropic. In the “Weyl curvature hypothesis,” he proposed that a segment of the Riemann tensor, called the Weyl tensor (after Hermann Weyl), must be zero at the beginning of time. A zero Weyl tensor is tantamount to pure isotropy. However, his and other people’s erudite arguments would soon be overwhelmed by a simple device.
It would be Russians and Americans, still vying in the Cold War, who would arrive at a forceful solution. “Just blow it up,” members of these nuclear superpowers proposed. No civilized selection process would be needed if the universe once underwent a period of ultrarapid expansion, much faster than the initial blast of the Big Bang. Everything would simply be evened out, like a forest after a tornado.
Like many aspects of science, the origins of the inflationary model of the universe are somewhat complex. In 1981 physicist Alan Guth proposed the term “inflation” to describe an early period of extremely rapid expansion. His goal was to help resolve the flatness, horizon, and other problems plaguing the standard Big Bang model. Thus the scientific community considers him the father of inflation.
Largely unknown to the West at that time, however, Russian scientists had developed aspects of this notion even earlier. In the early 1970s, Andrei Linde and David Kirzhnits, of Moscow’s prestigious Lebedev Physical Institute, first investigated the cosmological consequences of symmetry breaking in the very early universe. Symmetry breaking is a transformation in particle physics that creates a favored direction, akin to a spinning top falling over to one side. These ideas
led Linde and Gennady Chibisov to explore the implications of such changes in the vacuum of space and inspired Alexei Starobinsky to propose a theory independently very much like inflation.
When Guth started thinking about the maladies ailing the standard Big Bang model, he had little background in cosmology and was unaware of these alternative theories. Born in 1947 in the town of New Brunswick, New Jersey, to a family with a small grocery store, Guth became interested in science when in high school. An astronomy book inspired him to contemplate possible beginnings of the universe. Nevertheless, when he began his undergraduate studies at MIT, he decided to work with Aaron Bernstein, an experimental nuclear physicist. Guth stayed at MIT to receive his Ph.D. and then began research fellowships at Columbia, Cornell, and Stanford.
It was a talk by Dicke at Cornell that changed Guth’s career path, inspiring him to revisit his youthful interest. As Guth remembered it:
One of the things he talked about was the flatness problem…. The problem was that if you looked at the universe one second after the Big Bang, the expansion rate had to be exactly what it was to an accuracy of about one part in 10¹⁴, or else the universe would have either flown apart without ever forming galaxies or quickly recollapsed…. At the time I didn’t even understand how to derive that fact, but I believed it and was startled by it.
Shortly thereafter, while working with researcher Henry Tye on the problem of magnetic monopoles (hypothetical magnets with only single poles), Guth discovered a mechanism in field theory that would cause the fabric of the universe to stretch by at least the gargantuan factor of 10²⁵ (one followed by 25 zeros) in the exceedingly brief interval of 10⁻³⁰ seconds. He realized that this ultrarapid expansion would offer a credible solution to the flatness dilemma. Thus, cosmological inflation was born.
Linde followed these developments with intense interest. Born
in Moscow in 1948, he came to physics by way of philosophy. In particular, ancient Indian notions of an endless succession of worlds fascinated him. This abstract curiosity about an eternal universe led him to explore the tangible realm of physics, where he quickly became adept at the formalism. Soon after Guth published his paper, “Inflationary Universe: A Possible Solution to the Horizon and Flatness Problems,” Linde published his own work, entitled “A New Inflationary Universe Scenario: A Possible Solution of the Horizon, Flatness, Homogeneity, Isotropy and Primordial Monopole Problems.” Linde’s paper addressed some of the issues raised by Guth. Guth acknowledged the importance of Linde’s contributions and later wrote of Linde that “he was generous in giving credit to my work.”
All these papers centered on the idea of a phase transition from one particle state to another in the nascent instants of the cosmos. Everybody is familiar with certain kinds of phase change—water crystallizing into ice, for example. As you lower the temperature of a glass of water, eventually it locks into certain patterns. Many properties of liquid water and solid ice are very different. Blocks of ice, for instance, are less dense than cold water and thus can float.
The type of phase transition that could have happened in the very early universe is more complex in origin than familiar processes such as freezing, melting, and boiling, but shares some of the same characteristics. At very high temperatures, it is believed, elementary particles interact according to laws based on certain symmetry principles. If the temperature declines, the relevant symmetry alters, and the fluid comprised of the particles undergoes a phase change. This phenomenon, called “symmetry breaking,” raises the intriguing possibility that the primordial cosmos experienced one or more phase changes as it expanded and cooled. During such a phase change, bubbles of the new phase might have appeared in the old phase, like steam bubbles in boiling water.
In the original Guth model, the current epoch of the universe originated in an amalgam of such bubbles, spawned during an earlier
stage. However, this hypothesis proved awkward because it led to walls between the regions represented by different bubbles. These walls would lock up tremendous quantities of energy for which there would be no natural means of release. This was called the “graceful exit problem.” Astronomers have never detected such bubble walls. Consequently, the new inflationary scenario proposed that all we see around us emerged from a single bubble, created during a phase transition a tiny fraction of a second after the initial burst.
The boundaries of the great bubble would mark the limit between two different types of spatial vacuum. Given that “vacuum” means emptiness, you might wonder how there could be different varieties. Modern field theory postulates that vacuum regions are not bleak deserts but rather ponds brimming with particle activity. The uncertainty principle allows for the creation of “virtual particles” that leap from the waters temporarily before rejoining the sea. The level of such activity depends upon the vacuum’s overall temperature. Sometimes, like ice and water at zero degrees Celsius, two different vacuum phases can coexist, each with distinct properties. In the case of the new inflationary model, these are called the “false vacuum” (surrounding the bubble) and the “true vacuum” (inside the bubble).
Within the great bubble, the dynamics of the universe would take on an explosive pace because the false vacuum would be unstable and would want to turn into a true vacuum as quickly as possible. In less than a millionth of a trillionth of a trillionth of a second, the bubble of true vacuum would increase in volume exponentially, until the universe blew up into something like the size of a baseball. Finally, colossal stores of energy would be released from the vacuum, and the universe would revert to the far slower process of conventional Hubble expansion. From that point on, the inflationary picture matches the standard Big Bang scenario.
Think of an inflationary burst in terms of price wars among gas stations. Suppose all gas stations in a city charged five dollars per
gallon. Customers would stoically go to the station nearest them and pay the high cost. However, imagine that they all found out about one station charging only two dollars per gallon. Many of them would flock to that station. This situation could spur stations in the immediate vicinity to lower their prices, then stations near those, and so forth. Very rapidly, the bubble of discount prices could well expand to encompass the whole city, vanquishing the domain of high prices.
Similarly, in the inflationary picture, a region initially so small that it was causally connected (that is, contained within its own horizon) would expand so rapidly that it would become the entire visible universe. Since all parts of the initially tiny region would be within communication range, they’d exchange photons and thus reach a level temperature (like the temperature consistency of the human body). Then, as the great bubble blew up, regions of similar temperature and other properties would be thrust far away from each other. Once ordinary expansion kicked in, these similarities would be preserved, resulting in large-scale spatial uniformity, thereby solving the horizon problem of why widely separated parts of space are so similar.
The inflationary picture would also resolve the flatness problem by smoothing out all the wrinkles of space’s fabric. Any bumps or indentations would be stretched out so quickly they’d be indiscernible. It would be like taking a rubber sheet and attaching it to trucks facing north, south, east, and west. As the trucks pulled away from each other, the surface (assuming it didn’t break) would become absolutely taut. Similarly, the cosmos would become perfectly flat, represented by an omega value of one. Such absolute flatness is one of the key predictions of inflation.
Another welcome product of inflation is the magnification of minute quantum fluctuations. Ironically, although ripples in the very early universe would be stretched out, new perturbations would emerge through the random actions of quantum physics. These
pockets of energy would grow rapidly during the inflationary period, attract matter through their gravitational effects, and eventually form the seeds of structure in the universe (galaxies, clusters, etc.). Hence, inflation is not just a way of justifying the large-scale smoothness of the cosmos; it also explains the universe’s smaller-scale diversity.
At the same time Linde was developing the new inflationary universe, a young physicist from the University of Pennsylvania, Paul Steinhardt, along with his graduate student Andreas Albrecht, proposed an independent version of the same theory. Like Linde’s version, it avoided the graceful exit problem. Steinhardt and Albrecht joined Guth and Linde in pointing the way for a radical new conception of the early universe.
Currently at Princeton, Steinhardt has moved around quite a bit—from field to field (he’s also a renowned expert in quasicrystals) and from place to place. He has vivid memories of growing up as an “Air Force brat”—relocating from base to base every three years. When he was in fourth grade, his family settled in Miami. Around that time he began to nurture a fledgling fascination for science. “Astronomy was my first interest,” Steinhardt recalled. “Then I dropped it for many years. Many of the first books I read were on astronomy. That was really fascinating to me. But then I got interested in other things. I always liked science in general, as far back as I can remember.”
“I had a telescope, a chemistry set, a biology lab and did physics experiments. Anything that was scientific I was interested in. Doing astronomy in Miami was difficult, because you either had to go where the lights were or where the mosquitoes were. I remember going out to the Everglades, literally running out of the car, setting up the telescope, running back to the car, then putting all kinds of stuff on myself trying to fight the mosquitoes.”
During much of the 1980s and early 1990s, along with various collaborators, Steinhardt tried to perfect inflationary cosmology (henceforth we will use the term “inflation” to refer to all variations, not just Guth’s original model). Many issues remained unresolved and could be settled only through astronomical data. But such information wasn’t available—not just yet. For example, researchers didn’t know which particular phase transition in the infant universe triggered inflation. The universe could go through several such jumps, as unified forces broke down into their constituents. Many theorists believe that the universe initially had just one type of interaction—an amalgamation of the four natural forces. Such a state represented the maximum possible symmetry, or sameness. Then, somehow gravity separated from the other three. Next, the strong force pulled away, and finally the electroweak interaction divided into electromagnetism and the weak force. The end product was the current quartet of forces. In each transition a type of symmetry spontaneously broke down.
For technical reasons, spontaneous symmetry breaking requires a type of scalar field, called a Higgs field (its quantum is the Higgs boson). Mathematical fields are classified as scalar, vector, tensor, or other categories depending on how they behave under transformations of the coordinate system. In physics, scalar fields represent particles, such as the Higgs, with unique physical properties, particularly a total spin of zero and invariance under certain coordinate transformations. During a phase transition this field undergoes a change in potential energy, akin to the plunging of a barrel over a waterfall. As it plummets, it cedes mass to one or more hitherto massless exchange particles. Exchange particles are the intermediaries of the natural forces. For example, the W and Z bosons, volleyed about by other particles like balls in a tennis match, generate the weak force. Once exchange particles acquire mass, the forces associated with them change character, decreasing in range. Consequently the weak force is a short-range interaction.
The mix of particles during a particular phase of the cosmos affects the dynamics of its expansion. The contents of the universe during any given period can be described as a fluid with certain physical features. Each type of fluid has an “equation of state” designating the precise relationship between its density and pressure. The form of this equation sets the universe’s rate of growth during that era.
For extremely early times, we are not sure of the equation of state. However, the processes that the universe could have undergone—standard Big Bang expansion, a brief era of ultrarapid inflation, then a slowing down of growth to the current rate—all delineate possibilities for the fluid dynamics during that interval. By working together, particle physicists and cosmologists can attempt to piece together the puzzle of the dynamics of the very early universe. For instance, we can model inflation using a classical description of a fluid if the pressure is assumed to be negative. This model leads to an equation of state in which, in contrast to ordinary matter or radiation, the pressure is proportional to minus the energy density. (Normal materials have pressures that are fractions of a positive-valued density.) Under such circumstances, Einstein’s equations of general relativity mandate exponential growth. Physicists refer to a field that could do this as an “inflaton.”
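In standard textbook notation (not the author's own, and with units in which the speed of light is one), the reasoning above can be written compactly. The Friedmann acceleration equation relates the cosmic scale factor a(t) to the density and pressure of the fluid filling space:

```latex
% Friedmann acceleration equation for a homogeneous cosmic fluid
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + 3p\right)

% Inflationary equation of state: pressure equals minus the energy density
p = -\rho
\quad\Longrightarrow\quad
\frac{\ddot{a}}{a} = +\frac{8\pi G}{3}\,\rho > 0

% For constant \rho, the solution is exponential (de Sitter) growth
a(t) \propto e^{Ht},
\qquad
H = \sqrt{\frac{8\pi G \rho}{3}}
```

Ordinary matter and radiation have positive pressure, so the right-hand side of the first equation is negative and the expansion decelerates; only when the pressure is sufficiently negative does the sign flip and exponential growth take over.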
Imagine a filled balloon surrounded by air. If the air pressure outside the balloon is slowly lowered, the balloon will expand. If it decreases enough it will approach zero—a classical vacuum. If it could be lowered even further, into the negative zone, imagine how large the balloon would get (assuming that it didn’t pop). Thus, negative pressure would cause inflation.
Steinhardt and his co-workers worked closely with field theorists to try to match inflationary dynamics to models of particle creation and structure formation in the early universe. This scheme had to be “fine-tuned” to a certain extent. If the inflationary epoch was too short, the cosmos would not have had enough time to smooth out.
If, on the other hand, it was too long, there would be little sign of structure today. These delimiters still allowed for a number of options that, researchers hoped, more data would whittle down. Steinhardt joked that constructing the right inflationary model was like picking from a Chinese menu—selecting one item from column A, one from column B, and so on. The approach that led to the correct solution would have very particular properties and would match up precise models of the microscopic and macroscopic worlds.
Linde, on the other hand, was convinced that inflation was a ubiquitous and natural phenomenon, akin to Darwinian evolution in biology. All one needed was the tabula rasa of empty space. On this blank pad the quill of quantum randomness would sketch fluctuations of various sizes. Through pure chance, at least one of these fluctuations would produce a scalar field able to spark the fuse of an inflationary blast. That region of space would expand exponentially, thereby dominating less explosive sectors. As it blew up, it would produce the familiar byproducts of inflation—flatness, correlations between remote domains, and so forth. Because of its reliance on sheer randomness, Linde dubbed his model “chaotic inflation.”
Steinhardt, Linde, and their various collaborators spent much of the 1980s and early 1990s developing alternative models of inflation. Many others joined in on the quest. Like the makers of Coke and Pepsi, each research team produced various concoctions, hoping that one of them would pass the taste test of astronomical inquiry. Some of the models du jour developed by various groups included extended inflation (where a field interacts with gravity, causing it to eventually put the brakes on inflation), hyperextended inflation, power law inflation, natural inflation, hybrid inflation, eternal inflation, and so on. Each posited a distinct mechanism for inducing, then halting, a burst of ultrarapid expansion. An exponential proliferation of papers filled the science journals to the bursting point—leading to a curvature dilemma for the flimsy shelves in researchers’ offices. Physicists eagerly awaited experimental data that
would help distinguish these approaches—and confirm or disprove the theory itself.
One critical source of information would be provided by new observations of the profile of the cosmic microwave background (CMB). From the time of Penzias and Wilson until the end of the 1980s, the only features known about that radiation were its average temperature, spectral distribution, and overall isotropy. Experiments also accounted for a small discrepancy in the radiation’s temperature in opposite directions of the sky due to Doppler shifting caused by the Milky Way’s motion through space. Researchers, though, believed that a more precise examination would yield evidence for the seeds of structure formation in the universe. These seeds would be minute anisotropies due to slight differences in the early distribution of matter. Because inflationary theorists aspired to explain the process of structure formation in terms of stretched-out quantum fluctuations, they hoped such anisotropies would soon be found. Conversely, if no such bumps existed, advocates of any variation of the Big Bang model would have a hard time explaining how galaxies and other structures emerged.
On November 18, 1989, NASA launched the Cosmic Background Explorer (COBE) satellite, designed to take an unprecedented look at the primordial radiation bathing the cosmos. It carried several instruments, including a differential microwave radiometer, able to discern anisotropies in the background spectrum as tiny as a few parts per million. A team led by George Smoot of Lawrence Berkeley Laboratory (LBL) analyzed and interpreted the data. Physicists nervously awaited the experiment’s results. Would it confirm the Big Bang picture of a fiery beginning? Would it detect the minute raisins in the tapioca pudding of uniformity?
Tension mounted as months passed by with no wrinkles to be
found. Smoot urged patience, realizing that it would take some time for the data collected by the satellite to reach levels of statistical significance. Even after some signs of anisotropy appeared, about a year after the launch, he was reluctant to publish any results until he was absolutely certain they were valid.
During this wait, some science journalists stirred up a storm with dire warnings that the Big Bang theory was in jeopardy. A Sky and Telescope news item inquired, “The Big Bang: Dead or Alive?” A popular book by the physicist Eric J. Lerner, reviving an outdated plasma cosmology, proclaimed in its title that The Big Bang Never Happened.
Steady state theorists waited in the wings, eager to come to the rescue with alternative hypotheses. In an ironic twist, Narlikar and Burbidge each pointed to the smoothness of the microwave background as evidence against the Big Bang theory. Along with Hoyle they developed what came to be known as quasi-steady state cosmology. Unlike the original model, it predicted an isotropic radiation spectrum—albeit produced in “mini big bangs” rather than a single explosion. Few mainstream cosmologists, however, rallied to their cause.
Finally, on April 23, 1992, Smoot enthralled scientists at a meeting of the American Physical Society with his long-awaited announcement of success. He had kept his results top-secret until the very end, checking and double-checking them to eliminate ambiguities. At one point he had even flown to Antarctica (where the cold night sky is especially clear) to take extra measurements. By the time he stood on the podium he was confident that his team had recorded the stunning visage of the early universe.
The wrinkles that the COBE group found matched up beautifully with the concept that galaxies were seeded in the early universe. Corresponding to slightly hotter or colder regions of the background, the COBE picture identified primordial zones of greater or lesser density. The denser areas constituted the kernels of cosmic structure. Nevertheless, these results still weren’t precise enough to nail down key cosmological parameters and distinguish particular early-universe
models (such as various inflationary scenarios). Consequently, astronomers began planning in earnest for a more detailed probe.
Meanwhile, other researchers at LBL and elsewhere pursued a wholly different way of investigating the early universe. Their investigations of distant supernovas would soon jolt the field of cosmology.
Saul Perlmutter, leader of the Supernova Cosmology Project, grew up in a family of respected academicians. His father, Daniel, was a professor of chemical engineering at the University of Pennsylvania, and his mother, Felice, was a professor of social administration at Temple University. Nurtured in a supportive, intellectual household, his interests turned to science at an early age. As a child, recalled Perlmutter, “I always enjoyed looking at the sky, but I was never one of those people who had their own backyard telescope. It was only because I started needing telescopes to answer the fundamental questions that I started learning much about astronomy.”
In addition to his scientific talents, Perlmutter became adept at music—corroborating popular theories that the two abilities go hand in hand. He’s an avid violinist and enjoys playing in orchestras. Blending his talents with others—whether harmonizing in music or collaborating in science—has become an important part of his personal philosophy. “I was somebody who had fewer individual heroes and more collective heroes,” he stated. “The idea that people working together could understand the world and that no single one of them by themselves could understand the world, that really captured my imagination.”
After receiving a Ph.D. from the University of California at Berkeley in 1986, he was appointed to a position at LBL. Along with an international team of astronomers, including Berkeley astronomers Carl Pennypacker and Gerson Goldhaber, he set out to measure the overall dynamics of the universe and the change in its expansion rate over time. This measurement would provide a way of delving into
the past and predicting the fate of the cosmos. To perform this task, they developed powerful techniques to measure the energy output of Type Ia supernovas in extremely remote galaxies.
Type Ia supernovas, the catastrophic explosions of a certain kind of star, are valuable to astronomers because they can serve as “standard candles.” Standard candles are objects with well-known energy output. Imagine walking along a dark desert road and seeing a faint street lamp way off in the distance. If you know the intrinsic power of the lamp, you can deduce from its dimness how far away it is. Type Ia supernovas serve a similar purpose for astronomers eager to map the scale of the universe. By matching the apparent brightness of such stellar blasts to their actual luminosities, astronomers can reliably ascertain their distances. Light curves, indicating the progression of each burst, offer added information. Thus, they are solid celestial yardsticks, useful for measuring the remoteness of the galaxies in which they are situated.
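The dimness-to-distance step can be sketched with the standard distance-modulus formula. This is an illustrative calculation, not the supernova teams' actual pipeline (which must also correct for light-curve shape, dust, and redshift effects); the magnitudes used are hypothetical example values.

```python
import math

def distance_parsecs(apparent_mag: float, absolute_mag: float) -> float:
    """Distance implied by how much dimmer an object appears than its known
    intrinsic output, via the distance modulus m - M = 5*log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag) / 5.0 + 1.0)

# Type Ia supernovas peak near absolute magnitude M of about -19.3
# (a commonly quoted figure, used here for illustration).
M_TYPE_IA = -19.3

# A supernova observed at apparent magnitude 24 would then lie at
# roughly 4.6 billion parsecs: an extremely remote galaxy.
d = distance_parsecs(apparent_mag=24.0, absolute_mag=M_TYPE_IA)
```

The key point is that only the difference between apparent and intrinsic brightness enters; that is what makes a standard candle a usable yardstick.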
Once astronomers know the distances to the galaxies in a given region of space, they can readily determine the expansion rate of that region. Each galaxy’s spectral lines are shifted by the Doppler effect. By measuring this shift, they can assess the galaxies’ velocities. Finally, by combining this information with the distance data, they can calculate how fast each part of the universe is pulling away.
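That chain of reasoning, redshift to velocity to expansion rate, can be sketched in a few lines. This is a low-redshift approximation with made-up example numbers, not a real survey calculation.

```python
C_KM_PER_S = 299_792.458  # speed of light in km/s

def recession_velocity_km_s(z: float) -> float:
    """For small redshifts z, the Doppler recession velocity is roughly c*z."""
    return C_KM_PER_S * z

def expansion_rate(z: float, distance_mpc: float) -> float:
    """Hubble-style expansion rate, in km/s per megaparsec,
    inferred from a single galaxy's redshift and distance."""
    return recession_velocity_km_s(z) / distance_mpc

# A hypothetical galaxy: redshift 0.0168 at a distance of 70 megaparsecs.
H = expansion_rate(z=0.0168, distance_mpc=70.0)
# H comes out near 72 km/s per Mpc, of the order astronomers measure.
```

In practice many galaxies are combined statistically, but each one contributes exactly this velocity-over-distance ratio.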
In astronomy the farther out you look, the deeper into the past you see. Therefore, Perlmutter and his colleagues realized they had the perfect tool for determining how the cosmological expansion rate has altered over the eons. This tool could allow them to assess omega, the universe’s density parameter, and help them decide how much of its dynamics is driven by visible material versus dark matter. A second team, led by Brian Schmidt of Australian National University and Nicholas Suntzeff of Cerro Tololo Inter-American Observatory in Chile, conducted an independent program with a similar purpose. Throughout the 1990s the two groups jockeyed for valuable telescope time and competed in a race for publications.
One aspect of the cosmos that the researchers did not expect to challenge was its deceleration. The simplest Friedmann models portrayed a universe slowing down with time. The only difference lay in how quickly this braking would occur—with the closed model representing the extreme. Adding a cosmological constant would change the situation, allowing for the option of speeding up, but few physicists saw a point to doing that. After all, even Einstein had discarded the term.
Supernova mapping is an arduous process, given that such explosions are rare and unpredictable. It’s like knocking on doors all around the country hoping to find a family with quintuplets who had just won the lottery. If observers anywhere in the world spot a distant burst, researchers everywhere must leap into action. They may need to redirect a telescope to track the supernova’s light curve, with no time to spare. Then they can use that information to plot just one more point on their charts. As data trickle in, statistical significance builds over many years.
In 1998 each supernova team felt it had enough evidence to render a verdict. In startling announcements the groups proclaimed that the universe is not currently decelerating at all but rather is speeding up. Thus, not only will the cosmos expand forever, its expansion is accelerating. Each remote galaxy is moving farther and faster away from the others, with no end in sight.
By the end of the 20th century, scientists realized that many previous assumptions about the behavior of the cosmos were dead wrong. Along with the COBE data, the supernova results pointed to a flat universe. However, unlike the simplest flat Friedmann model, with omega equal to one, the density parameter measured for matter was only about three-tenths. In other words, the universe had approximately 30 percent of the material density associated with flat
cosmologies. Something else must be hidden in the blackness of space. That extra component was named dark energy.
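The inference is a simple budget calculation, sketched below with round illustrative numbers: if the total density parameter must equal one for flatness, and matter supplies only about three-tenths of that, something else must make up the rest.

```python
# Back-of-the-envelope cosmic budget (illustrative round numbers).
OMEGA_TOTAL = 1.0    # flatness, as indicated by the microwave background data
OMEGA_MATTER = 0.3   # visible plus dark matter, roughly three-tenths

# The shortfall is attributed to the new component: dark energy.
omega_dark_energy = OMEGA_TOTAL - OMEGA_MATTER
```

The arithmetic is trivial; the physics is not. Identifying what carries that missing seven-tenths became one of the central problems of cosmology.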
The simplest way of incorporating dark energy into cosmology is to reinstate the cosmological constant term, also known by the Greek letter Λ (lambda). Although that makes general relativity less elegant, it also makes it more accurate. Such a mathematical patch is one thing, but a true physical explanation is another. Theorists scrambled to try to explain the origins of cosmological antigravity.
It would be incorrect, though, to picture the universe as always speeding up in its expansion. Additional supernova measurements by Schmidt’s group and Perlmutter’s group have revealed that the universe began to accelerate relatively recently in its history—within a few billion years of the present day. Before then the universe was dense enough so that matter terms dominated the cosmological constant term. The attraction of gravity overpowered the repulsion of lambda, slowing the expansion. Therefore, space was decelerating before it began to accelerate. Only when the universe’s matter was dispersed enough did lambda begin to dominate and the universe start to speed up.
As Steinhardt has pointed out, the outstanding coincidence that we live within a few billion years of the turnaround time of the universe seems to contradict the Copernican ideal that humankind occupies no special place or time. Thus, the new results cry out for a wholesale rethinking of our basic concept of the universe. As he has remarked:
I think people are really missing the boat on this. This is truly a revolution of Copernican nature; this is not just another addition. What the cosmology community has done for the most part is say, “Oops, we’re missing an ingredient. Let’s add that ingredient. Everything fits beautifully. We have a wonderful model.” My reaction is: time to step back and reevaluate.
The full extent of the implications hasn’t been worked out yet. If
you were around at the time of Copernicus you might have said, “He wants to make the Sun the center. You want to make the Earth the center. It doesn’t mean too much.” But then by the time you get to Kepler and Newton it means a lot. So it wasn’t just another detail. I can imagine this will be a very profound thing by the time it’s through.
Steinhardt has proposed that the dark energy is a hitherto-unknown substance, called “quintessence.” Its name hearkens back to the ancient notion of four natural elements—earth, air, water, and fire. Quintessence would be the fifth. Instead of a steady cosmological constant, it would be a field that kicked in during a particular epoch of the universe, causing a far milder version of an inflationary burst. Using a variable field offers greater flexibility in modeling different cosmic phases. However, current observations have not been able to distinguish between variable and constant forms of dark energy.
To resolve these and other vital issues, astronomers have pressed on with further testing. The supernova teams have continued their endeavors, accumulating a bevy of examples to enhance their data. The LBL group has proposed a space-based mission, called the Supernova Acceleration Probe, to improve their capability by 20-fold. Meanwhile, spectacular new results from the Wilkinson Microwave Anisotropy Probe (WMAP), launched in 2001, have uncovered a treasure trove of critical information about the young cosmos.
The beginning of the 21st century has witnessed cosmology becoming an exacting enterprise, with ample tools to elucidate the state of the observable universe. It has also ushered in considerable confusion as to the future direction of the field. A snapshot of the early
universe captures this mixture of profound new knowledge and grave uncertainty. The stunning “baby picture” of the cosmos produced by WMAP represents one of the landmark scientific images of our times—akin to the double helix or the first photos of the Martian surface. When represented in color, like a weather map of hotter and colder sites, it is a fantastically intricate mosaic of multihued spots. Clearly the background radiation’s artist painted in pointillism.
Paintings capture moods, and the WMAP portrait is no exception. It shows the cooled-down form of a once-scalding universe, releasing long-pent-up energy into the gaps between atoms. The atoms were slightly clumped together, in patterns that depended in part on the geometry of space. Their particular arrangements indicated that they were happily settled into a flat, expanding hyperplane—with omega exactly equal to one. Perhaps they were especially content because they recalled a far more explosive period earlier on that flattened their vistas. But now they could move away from each other at a gentler pace, awaiting the day their gravitational attraction would compel them to reunite into myriad celestial bodies.
At a 2003 conference of the American Physical Society, physicist Michael Turner reveled in the high precision of the new data. He emphasized that, for the first time in the history of cosmology, researchers were able to perform exact-enough statistics to present their results with error bars (precise ranges of values). Turner also pointed out that the WMAP results ruled out the simplest inflationary models. He counseled, however, that there were other possibilities. “Fortunately, Andrei [Linde] had another 300 models left,” Turner joked.
The combined power of the supernova and microwave background observations enables cosmologists to define a “concordance model” of the universe. Any theory that satisfies known results about the geometry, age, and content of the observable universe falls into this category. You would think that this would narrow things considerably. However, it still leaves the door open to diverse explanations—an inflationary era being only one of many possibilities.
Although we are now reasonably sure that the universe is flat, we still don’t know exactly what caused it to be flat. (To recall, we mean here that the ordinary three-dimensional part of a four-dimensional Friedmann model is flat.) Was it born that way, molded through inflation, or smoothed out through another mechanism? Data from WMAP and other sources have converged on an estimate of 13.7 billion years for the age of the observable universe, measured from the time of the Big Bang. But what about eras that may have preceded that colossal burst of energy? Perhaps there was no Big Bang singularity at all, just a transition between different phases of the universe. And could the observable universe be part of a greater whole, conceivably in higher dimensions? What of the dark energy that constitutes some 73 percent of the substance of the cosmos? Could it be a sign of something missing in our concept of gravitation? Could fundamental constants, such as Newton’s gravitational constant or the speed of light, actually change over time? We will consider these disparate possibilities in chapters to come.
Finally, let’s remember that, in addition to visible matter and dark energy, the cosmos appears to contain a third major component—dark matter. Readings from WMAP indicate that this hidden material represents 23 percent of the universe. Although theories abound, no one has yet developed a satisfactory explanation of what dark matter actually is. This enigma has grown even deeper with the recent discovery of an entire galaxy as inscrutable as the Cheshire cat.