Cosmologists are often in error, but never in doubt.
—Lev Landau1
In the standard model of cosmology, about seventy percent of the energy density of the universe—the dark energy driving its accelerating rate of expansion—is described by Albert Einstein’s cosmological constant.2 In this essay, I argue that the standard model of cosmology is wrong. This should come as no surprise. “The history of science,” Georges Lemaître remarked, “provides many instances of discoveries which have been made for reasons which are no longer considered satisfactory.” It may be, he added suggestively, “that the discovery of the cosmological constant is such a case.”3
Einstein published the general theory of relativity in 1915; and in 1917, he attempted to apply his theory to the cosmos as a whole. The result was a universe that was either expanding or contracting, and, in any case, not static. Dissatisfied with this implication, Einstein added a constant to his field equation, its repulsive force balancing the otherwise attractive force of gravity.4 The result was a static, spherical, spatially closed but unstable universe. At almost the same time, Willem de Sitter provided a maximally symmetric vacuum solution of Einstein’s field equations. This too was initially believed to be static. The ensuing universe contained neither radiation nor matter, but, given its positive cosmological constant, it was obliged to expand at an accelerating rate. It is de Sitter’s universe that is effectively the basis of the current standard model, which was subsequently constructed by Alexander Friedmann and Lemaître.5
The Standard Model
A standard model for cosmology is nothing new. In Europe, Ptolemy’s geocentric theory held sway for nearly two thousand years. If its underlying assumptions had no physical basis, the theory was better than the competition.6 “Absolutely all phenomena are in contradiction,” Ptolemy wrote, “to any of the alternate notions that have been propounded.” Like the geocentric model, the underlying assumptions of the standard model have no physical basis. Cosmologists still lack an understanding of dark energy, and they do not know what is driving the accelerating expansion of the universe.
The standard model embodies a number of assumptions.
First, space-time is described by the Friedmann–Lemaître–Robertson–Walker (FLRW) metric:7
- \[{\text{d}}{s^2} \equiv {g_{\mu \nu }}\,{\text{d}}{x^\mu }{\text{d}}{x^\nu } = {a^2}\left( \eta \right)\left[ {{\text{d}}{\eta ^2} - {\text{d}}{{\bar x}^2}} \right], \qquad {a^2}\left( \eta \right){\text{d}}{\eta ^2} \equiv {\text{d}}{t^2},\]
where $g_{\mu\nu}$ is the metric tensor, a is a scale factor describing the expansion or contraction of space, and η is conformal time. The conformal transformation denoted by $a^2$ stretches space without distorting shapes, and the redshift z measures the wavelength of light from distant objects. It reflects the ratio of the present scale factor to its value at the time that the light was emitted: 1 + z $≡$ $a_0$/a.
Second, gravity is described by general relativity:
- \[{R_{\mu \nu }} - \tfrac{1}{2}R\,{g_{\mu \nu }} + \lambda {g_{\mu \nu }} = 8{\text{π }}{G_{\text{N}}}{T_{\mu \nu }},\]
where the cosmological constant, $\lambda$, acts as a repulsive force that increases with distance, thus counteracting the attractive force of gravitation.8
Third, the only contents of the universe are ideal fluids possessing pressure p and energy density $\rho$, but neither viscosity nor vorticity nor any other dissipative properties.
With the publication of Einstein’s field equations, cosmologists originally argued that matter could be modeled as a pressureless gas—dust, in effect. That is still their assumption with respect to dark matter. It is no longer a general assumption because the early universe was dominated by relativistic radiation. Even earlier, the universe was hot enough to melt all particles of matter. A description of Tμv in terms of quantum field theory is obligatory.
The complicated relationship between the geometry of space-time and the matter that it contains now admits of simplification:
- \[{H^2} \equiv {\left( {\frac{{\dot a}}{a}} \right)^2} = \frac{{8{\text{π }}{G_{\text{N}}}{\rho _{\text{m}}}}}{3} - \frac{k}{{{a^2}}} + \frac{\Lambda }{3}\] \[= H_0^2\left[ {{\Omega _{\text{m}}}{{\left( {1 + z} \right)}^3} + {\Omega _k}{{\left( {1 + z} \right)}^2} + {\Omega _\Lambda }} \right],\]
where
- Ωm $≡$ $\rho_{\text{m}}$/(3$H_0^2$/8π$G_{\text{N}}$),
- Ωk $≡$ –k/($H_0^2$$a_0^2$), and
- ΩΛ $≡$ Λ/(3$H_0^2$).
This is the Friedmann–Lemaître equation. It describes the expansion rate—the Hubble parameter, H—as a function of the energy density of matter $\rho_{\text{m}}$, the curvature of spatial sections k, and the cosmological constant itself. A similar term also arises from the right-hand side of this equation when $T_{\mu\nu}$ describes quantum fields, a point first realized by Wolfgang Pauli and Yakov Zeldovich. If all inertial observers must see the same vacuum state, ground-state quantum fluctuations behave like a cosmological constant with negative pressure:
- \[{T_{\mu \nu }} = - {\left\langle \rho \right\rangle _{{\text{fields}}}}\,{g_{\mu \nu }}.\]
It is this vacuum term that enters the Friedmann–Lemaître equation, which has been rewritten above in terms of the fractional contributions made to the total energy density by matter, curvature, and the cosmological constant.9
The effective cosmological constant, Λ, is the sum of these unrelated terms:
- \[\Lambda = \lambda + 8{\text{π }}{G_{\text{N}}}{\left\langle \rho \right\rangle _{{\text{fields}}}}.\]
The cosmic sum rule follows upon division by $H_0^2$:
Ωm + Ωk + ΩΛ = 1.
This simple equation encapsulates the standard FLRW cosmological model, and it is this equation that is used to deduce the values of various cosmological parameters. Since Ωm is ostensibly composed of cold—i.e., nonrelativistic—dark matter (CDM), this is called the ΛCDM model.
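The sum rule and the Friedmann–Lemaître equation together are simple enough to evaluate directly. The following is a minimal numerical sketch, assuming the illustrative parameter values quoted later in this essay (Ωm ≈ 0.3, Ωk ≈ 0, h ≈ 0.7) rather than any particular fit to data.

```python
# Minimal sketch of the Friedmann-Lemaitre equation written in terms of the
# fractional densities.  Parameter values are illustrative, not a fit.
import numpy as np

H0 = 70.0                            # km/s/Mpc, i.e., h ~ 0.7
Omega_m, Omega_k = 0.3, 0.0          # matter and curvature fractions
Omega_L = 1.0 - Omega_m - Omega_k    # cosmic sum rule fixes Omega_Lambda

def hubble(z):
    """Expansion rate H(z) in km/s/Mpc for the LambdaCDM parametrization."""
    return H0 * np.sqrt(Omega_m * (1 + z) ** 3
                        + Omega_k * (1 + z) ** 2
                        + Omega_L)

for z in (0.0, 0.5, 1.0, 3.0):
    print(f"z = {z:3.1f}   H(z) = {hubble(z):7.1f} km/s/Mpc")
```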
The Bone in Our Throat
In the late 1990s, observations of Type Ia supernovae (SNe Ia) indicated that the expansion of the universe was speeding up. These stellar thermonuclear explosions are expected to release a standard amount of visible energy, and so make for a standard candle in astronomy.10 Such supernovae are detectable up to cosmological distances and, in conjunction with the redshift, provide a means for tracking the Hubble expansion rate beyond z ~ 1.11 The observed magnitude $\mu$ is a measure of the luminosity distance, $\mu$ $≡$ 5log$_{10}$(d$_{\text{L}}$/10 pc), which is, in turn, related to various cosmological parameters by
- \[{d_{\text{L}}} = c\frac{{\left( {1 + z} \right)}}{{\sqrt {{\Omega _k}} {H_0}}}{\text{sinn}}\left( {\sqrt {{\Omega _k}} \int_0^z {{\text{d}}z'{H_0}/H\left( {z'} \right)} } \right),\]
where sinn → sinh for Ωk > 0 and sinn → sin for Ωk < 0 (1 parsec $≃$ 3.3 light years).
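For readers who want concrete numbers, here is a short sketch evaluating the luminosity distance and the corresponding distance modulus; the cosmological parameters are illustrative, and in the flat limit Ωk → 0 the sinn factor reduces to the plain integral.

```python
# Sketch of the luminosity distance defined above and the distance modulus
# mu = 5 log10(d_L / 10 pc).  Parameter values are illustrative.
import numpy as np

C = 299_792.458                      # speed of light, km/s
H0, Om, Ok = 70.0, 0.3, 0.0          # assumed parameters; flat if Ok = 0
OL = 1.0 - Om - Ok

def E(z):
    return np.sqrt(Om * (1 + z) ** 3 + Ok * (1 + z) ** 2 + OL)

def d_L(z, n=2048):
    zp = np.linspace(0.0, z, n)
    f = 1.0 / E(zp)
    chi = np.sum(f[1:] + f[:-1]) * (zp[1] - zp[0]) / 2.0   # integral of dz'/E(z')
    if Ok > 0:
        chi = np.sinh(np.sqrt(Ok) * chi) / np.sqrt(Ok)     # sinn = sinh
    elif Ok < 0:
        chi = np.sin(np.sqrt(-Ok) * chi) / np.sqrt(-Ok)    # sinn = sin
    return C * (1 + z) * chi / H0                          # in Mpc

def mu(z):
    return 5 * np.log10(d_L(z) * 1.0e6 / 10.0)             # Mpc -> pc

for z in (0.1, 0.5, 1.0):
    print(f"z = {z:3.1f}   d_L = {d_L(z):7.1f} Mpc   mu = {mu(z):5.2f}")
```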
These observations indicate that 0.8Ωm – 0.6ΩΛ = –0.2 ± 0.1.12 The typical angular scale of temperature fluctuations in the cosmic microwave background shows that Ωk $≃$ 0: the universe is spatially flat. Attempts to weigh massive clusters of galaxies, on the other hand, indicate that Ωm = 0.3 ± 0.1—not enough to provide the critical density for a flat universe. Adjusting yin to yang requires that ΩΛ = 1 – Ωm – Ωk $≃$ 0.7. The universe is apparently dominated by a cosmological constant with the value Λ ~ 2$H_0^2$. The scale of Λ is thus set by H0, which is measured to be ~70 km/s/Mpc (i.e., h $≡$ H0/100 km/s/Mpc $≃$ 0.7). Its inverse, the Hubble radius $H_0^{ - 1}$ $≃$ 3000/h Mpc, corresponds to the tiny energy scale of ~$10^{-42}$ GeV in fundamental physics units. Although neither fundamental nor a constant, it enters into every cosmological measurement. Any inference of Λ from the data in the FLRW framework will thus yield a value of order $H_0^2$.
Alarm bells ought to be ringing.
On the other hand, the acceleration of the expansion rate is driven by the negative pressure of the quantum vacuum: –$p_\Lambda$ = $\rho_\Lambda$ ~ Λ/8π$G_{\text{N}}$ ~ ($10^{-12}$ GeV)$^4$, since 1/8π$G_{\text{N}}$ $≡$ M$_{{\text{Planck}}}^2$ $≃$ (2.4 × $10^{18}$ GeV)$^2$. This corresponds to a dark-energy scale of ~$10^{-12}$ GeV. With respect to Λ, which is it to be: ~$10^{-42}$ GeV or ~$10^{-12}$ GeV? Cosmological and quantum-field theoretic interpretations of Λ reveal a discrepancy of between 15 and 30 orders of magnitude, corresponding to a factor of $10^{60}$ to $10^{120}$ in the energy density.
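The order-of-magnitude arithmetic behind these two scales can be spelled out explicitly. The sketch below uses only standard unit conversions together with the values quoted above (H0 ~ 70 km/s/Mpc, Λ ~ 2$H_0^2$, and the reduced Planck mass of 2.4 × $10^{18}$ GeV).

```python
# The order-of-magnitude arithmetic behind the two scales quoted above.
HBAR_GEV_S = 6.582e-25            # hbar in GeV s
MPC_IN_KM  = 3.086e19             # kilometres per megaparsec
M_PLANCK   = 2.4e18               # reduced Planck mass in GeV

H0_per_s = 70.0 / MPC_IN_KM       # H0 = 70 km/s/Mpc expressed in 1/s
H0_GeV   = H0_per_s * HBAR_GEV_S  # ~1.5e-42 GeV: the "cosmological" scale

rho_L    = 2.0 * H0_GeV**2 * M_PLANCK**2   # Lambda/(8 pi G_N) with Lambda ~ 2 H0^2
de_scale = rho_L ** 0.25                   # ~2e-12 GeV: the "dark energy" scale

print(f"H0 as an energy scale : {H0_GeV:.1e} GeV")
print(f"(rho_Lambda)^(1/4)    : {de_scale:.1e} GeV")
```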
Not surprisingly, this has been called “the worst theoretical prediction in the history of physics.”13
If dramatic, the statement is misleading. The vacuum energy in quantum field theory cannot be formally calculated because it is a super-renormalizable term in the Lagrangian. Only the difference in energy density between two vacuum states can be computed. If there are changes in the vacuum state, further ambiguities arise in calculating the expected vacuum energy density. For astronomers, Λ is just another cosmological parameter, similar to $\rho$m, the matter density, or k, the curvature of spatial sections. From the viewpoint of a quantum field theorist, Λ represents a very delicate balancing act between the bare λ derived from general coordinate invariance and the divergent contributions from various quantum fields. These must yield an overall value of Λ that is consistent with cosmological observations.
The less appreciated point is that in quantum field theory, all contributions to the vacuum energy can be formally canceled order by order by adding appropriate counterterms to the Lagrangian. This would have no effect on any quantity measured in the laboratory. It is only when the standard model of particle physics is supplemented by gravity that the quantum vacuum can have a possible effect on the universe as a whole.
When these implications were first realized in the 1930s, they were quickly swept under the rug. “[A]s is obvious from experience,” Pauli confidently remarked, “the [zero-point energy] does not produce any gravitational field.”14 This is of course true. If the zero-point energy of the standard model were to gravitate, it would have dominated the universe when it cooled to temperatures of ~$10^{15}$ K (⇒ ~100 GeV) at ~$10^{-12}$ s after the Big Bang. Depending on the sign, this would have either sent the universe off into an endless, exponentially fast expansion, or caused it to immediately re-collapse. Pauli offered no reason why the vacuum energy density should not gravitate, and overlooked, or ignored, the obvious conflict with general relativity. In Einstein’s theory, all forms of energy must gravitate.
This is the bone in our throat.15
The Cosmological Principle
The Oxford mathematician Edward Arthur Milne formulated the modern version of the cosmological principle in 1933. “The Universe,” he wrote, “must appear the same to all observers.”16 This assumption was, in fact, implicit in models constructed a century ago. The observed universe is not quite isotropic, and neither is it homogeneous. The interpretation of data in terms of the FLRW model has proceeded under the weaker assumptions of statistical isotropy and homogeneity. Although there are known inhomogeneities in the distribution of matter, homogeneity is restored when averaged on sufficiently large scales. Number counts of galaxies in spheres of increasing radius r are said to have demonstrated a close approach to homogeneity on spatial scales above ~70/h Mpc.17
For all that, we are manifestly not at rest in the cosmic frame in which the universe is homogeneous and isotropic and in which data could be analyzed directly according to the Friedmann–Lemaître equation. If we were, then all galaxies would be receding from the earth at a rate proportional to their distance, following Hubble’s law. But it has been known since the 1960s that we have a peculiar, non-Hubble velocity due to local inhomogeneities. The idealized Hubble flow can emerge only after averaging over sufficiently large scales, on which peculiar velocities have died away. This scale should be of the same order as the scale on which the distribution of matter, as traced by the visible galaxies, is sensibly homogeneous, around ~70/h Mpc.
Since we are not in the cosmic rest frame, the cosmic microwave background cannot look isotropic. It ought to exhibit a pronounced dipole anisotropy. The expected anisotropy is due to aberration. An isotropically distributed set of objects should acquire a dipole anisotropy—more than average density in the direction of motion, less in the opposite direction. There is a second effect that boosts distortion: photons coming toward us from the direction of our motion are Doppler-shifted to the blue, while the photons coming from behind are Doppler-shifted to the red. The strength of both effects should be proportional to β, the ratio of our velocity to the speed of light. This was first recognized by Dennis Sciama and John Stewart, who emphasized that, “If the microwave blackbody radiation is both cosmological and isotropic, it will only be isotropic to an observer who is at rest in the rest frame of distant matter which last scattered the radiation.”18 Jim Peebles and David Wilkinson then calculated that an inertial observer moving with velocity β in an isotropic blackbody radiation bath of temperature T0 will measure an effective temperature that varies with respect to the direction of motion by the angle θ:19
- \[T\left( \theta \right) = {T_0}\sqrt {1 - {\beta ^2}} /\left( {1 - \beta {\text{cos}}\theta } \right).\]
Since our peculiar velocity was estimated to be a few hundred km/s, the amplitude of the dipole in the cosmic microwave background temperature should then be 𝒟 = β $\simeq$ $10^{-3}$. This predicted anisotropy was indeed detected soon afterward.20 It has been measured most precisely using the Planck satellite.21 Its amplitude is (1.2336 ± 0.0004) × $10^{-3}$, implying that the sun is moving with respect to the cosmic rest frame at 369.82 ± 0.11 km/s toward galactic longitude and latitude: l(deg) = 264.021 ± 0.011, b(deg) = 48.253 ± 0.005, which is in the constellation of Crater.
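To first order in β, the Peebles–Wilkinson formula reduces to T(θ) ≃ T0(1 + β cos θ), so the fractional dipole amplitude is simply β. A short sketch, using the Planck amplitude quoted above and the standard monopole temperature T0 ≈ 2.7255 K (a value not given in the text), recovers the quoted velocity:

```python
# First-order reading of the Peebles-Wilkinson formula: the fractional
# dipole amplitude equals beta, the observer's velocity in units of c.
import numpy as np

C_KM_S = 299_792.458
T0     = 2.7255                 # CMB monopole temperature in K (standard value)
D_CMB  = 1.2336e-3              # measured dipole amplitude quoted in the text

beta = D_CMB
print(f"implied solar velocity : {beta * C_KM_S:.2f} km/s")    # ~369.8 km/s
print(f"temperature dipole     : {beta * T0 * 1e3:.2f} mK")

# The exact formula differs from the linear reading only at order beta^2:
T_exact = T0 * np.sqrt(1 - beta**2) / (1 - beta)               # theta = 0
print(f"exact minus linear     : {T_exact - T0 * (1 + beta):.1e} K")
```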
For this reason, the kinematic interpretation of the cosmic microwave background dipole has been widely accepted. Cosmological data are now routinely corrected by a special-relativity boost transforming the measured redshift and magnitudes of distant objects to the presumptively isotropic cosmic rest frame. In the cosmic microwave background frame, the large-scale averaged distribution of matter is also assumed to be isotropic. The Friedmann–Lemaître equations can then be applied to the transformed magnitudes and redshifts.
No Cosmic Rest Frame, No Convergence
These assumptions are no longer tenable. Several independent data sets now argue against the existence of a cosmic rest frame. At low redshift (z ≲ 0.1), all matter in our local supercluster of galaxies has a coherent bulk flow approximately aligned with the direction of the cosmic microwave background dipole: no convergence to the cosmic rest frame is seen on scales as large as ~300/h Mpc. At high redshift (z > 1), the observed dipole in the sky-distribution of distant radio sources and quasars is significantly larger than expected according to the kinematic interpretation of the cosmic microwave background dipole. Phenomena are in conflict with the cosmological principle. They directly challenge the claim that the universe is dominated by vacuum energy, which rests on its assumed large-scale homogeneity and isotropy.
These are potentially paradigm-changing developments.
Since the pioneering studies by the astronomer Vera Rubin and her collaborators in the 1970s, it has been known that our Local Group of galaxies, which includes Andromeda and the Magellanic Clouds, participates in a fast coherent bulk flow.22 This departure from the uniform Hubble expansion happens because the local distribution of matter is inhomogeneous. The flow would be expected to become negligible by ~100 Mpc, after which the averaged universe becomes homogeneous. The measurement of the cosmic microwave background dipole yields the sun’s peculiar velocity relative to the cosmic rest frame. The sun orbits around the Milky Way in nearly the opposite direction at ~200 km/s, and the Milky Way is moving toward the center of the Local Group at ~40 km/s. All this sums to a net motion of the Local Group at 620 ± 15 km/s toward l(deg) = 271.9 ± 2.0, b(deg) = 29.6 ± 1.4, which is not far from the direction of the cosmic microwave background dipole. This was ascribed to the attraction of a gigantic mass concentration dubbed the Great Attractor, weighing ~5 × $10^{16}$ solar masses at a distance of ~65/h Mpc.23
Subsequently, an infrared survey by the Infrared Astronomical Satellite reached a depth of ~100/h Mpc. It showed that while the summed effects of the known superclusters of galaxies could account for much of the observed bulk flow, there was still no convergence to the cosmic rest frame. Infrared and X-ray surveys have brought further evidence that the flow continues out to the Shapley Supercluster at ~180/h Mpc.24 We confirmed this using the Union 2 catalog of Type Ia supernovae to obtain distances to their host galaxies in order to perform tomography of the local velocity field. Using the same technique, the Nearby Supernova Factory collaboration has shown that the bulk flow continues even beyond Shapley, out to ~300 Mpc, thus requiring an even bigger inhomogeneity to drive it.25 A detailed map of local structures by Brent Tully and collaborators uses direct distance measurements to determine these peculiar velocities. It shows that this motion is in fact coherent across the Laniakea Supercluster, in which we live.26
So far as the universe has been mapped in detail, there is no convergence to the cosmic microwave background frame.
A Direct Test
In a review of these puzzling observations, the astronomer James Gunn expressed a radical thought: “Most of the problem, it seems to me, would disappear if the [cosmic microwave background] did not, in fact, provide a rest frame.”27
A direct test of this had, in fact, been formulated in 1984 when George Ellis and John Baldwin argued that if
the standard interpretation of the dipole anisotropy in the microwave background radiation as being due to our peculiar velocity in a homogeneous isotropic universe is correct, then radio-source number counts must show a similar anisotropy. Conversely, determination of a dipole anisotropy in those counts determines our velocity relative to their rest frame; this velocity must agree with that determined from the microwave background radiation anisotropy.28
Unlike the cosmic microwave background, radio sources have a nonthermal spectrum. Their flux density $S_\nu$—the power received per unit area and frequency, in units of W/m$^2$/Hz—is a decreasing function of the frequency of observation: $S_\nu$ $∝$ $\nu^{-\alpha}$. The number of sources above some limiting flux density, per unit solid angle Ω, is also a decreasing function of the threshold flux: dN/dΩ(> $S_\nu$) $∝$ $S_\nu^{-x}$. The net effect is that sources normally too faint to be detected in a flux-limited survey should be boosted above the threshold if they are in the forward direction, and dropped below the threshold if they are behind. This enhances the distortion considerably.
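To first order in β, the two effects can be combined in a single line—an illustrative reconstruction of the Ellis–Baldwin argument rather than a quotation of it. Doppler boosting rescales each observed flux by a factor (1 + β cos θ)$^{1+\alpha}$, lifting a fraction x(1 + α)β cos θ of sources above the threshold, while aberration compresses the solid angle occupied by sources in the forward direction by a factor (1 + β cos θ)$^2$. The observed counts per unit solid angle are therefore modulated as
- \[\frac{{{\text{d}}N}}{{{\text{d}}\Omega }}\left( { > {S_\nu }} \right) \propto {\left( {1 + \beta \cos \theta } \right)^{2 + x\left( {1 + \alpha } \right)}} \simeq 1 + \left[ {2 + x\left( {1 + \alpha } \right)} \right]\beta \cos \theta .\]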
The predicted dipole anisotropy has amplitude
𝒟 = [2 + x(1 + α)]β,
corresponding to a maximum change of about 0.5% in the density of radio sources that have on average $〈$α$〉$ ~ 0.75 and $〈$x$〉$ ~ 1.29
The data available at that time were inadequate to perform this test—as they noted. Measuring the expected dipole requires counting the density on the sky of at least several hundred thousand sources at high redshift, in order to adequately suppress random fluctuations.
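A rough numerical check of both statements—the ~0.5% amplitude and the required catalog size—is shown below, assuming the simple linear estimator 𝒟 = 3$〈$$\hat n$$〉$ over the full sky; this is an illustrative simplification, not the published analysis.

```python
# Expected Ellis-Baldwin dipole for the stated mean source properties, plus
# a rough shot-noise estimate assuming the linear estimator D = 3<n_hat>.
import numpy as np

beta     = 1.2336e-3              # kinematic boost from the CMB dipole
alpha, x = 0.75, 1.0              # mean radio-source properties quoted above
D_kin    = (2 + x * (1 + alpha)) * beta
print(f"expected radio dipole : {D_kin:.4f}")          # ~0.005, i.e., ~0.5%

# How many sources before shot noise stops masking a signal of this size?
for N in (1e4, 1e5, 1e6):
    print(f"N = {N:9.0f}   random dipole ~ {3 / np.sqrt(N):.4f}")
```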
The first sufficiently large catalog of radio sources became available only in this millennium. It is the National Radio Astronomy Observatory (NRAO) Very Large Array (VLA) Sky Survey, which mapped the entire sky north of –40 degrees declination at 1.4 GHz. Surprisingly, the radio dipole was found to be over twice the predicted value, although consistent with the cosmic microwave background dipole in direction.30 This exercise has been redone by many researchers who have found similar results, albeit all of less than three sigma in significance.31 The anomaly has not been accepted as genuine. Some researchers argue that it is due to unidentified systematics in the mapmaking or galactic contaminants such as nearby radio sources in the Milky Way. Such concerns can be addressed by making suitable cuts on the minimum flux and by masking out the Milky Way. The latter also helps to eliminate a clustering dipole—the accidental proximity of a random fluctuation in the source density. This was in fact guarded against in some of the cited analyses by cross-correlating with other catalogs of nearby galaxies and excluding any sources in common.
It remains true that the distribution in redshift of the radio sources is not directly measured but only inferred from their flux distribution. One cannot be certain that some of them do not happen to be accidentally nearby. One might even wonder if it is just such a local cluster that is also responsible for pulling us in the faster-than-expected bulk flow that stretches out farther than expected.
Distant Sources
To eliminate this possibility, it is necessary to establish that sky-sources are cosmologically distant. Doing so requires measuring their redshifts spectroscopically, rather than by more uncertain photometric estimates. This is a laborious process. To date, redshifts have been measured for just over a million sources; and even the largest surveys, such as the Sloan Digital Sky Survey, cover a relatively small portion of the sky. The Ellis and Baldwin test requires a similar number of sources spread uniformly on the sky. Fortunately, such a catalog has recently become available from the Wide-Field Infrared Survey Explorer (WISE) satellite. These data were used by my collaborator Nathan Secrest and his coauthors to construct a uniform catalog of 1.36 million active galactic nuclei.32 With our collaborators Jacques Colin, Sebastian von Hausegger, Roya Mohayaee, and Mohamed Rameez, we applied various quality cuts on the CatWISE2020 catalog and masked out problematic regions, including a band of ±30° around the galactic plane.33 This revealed a strong dipole signature, its direction consistent with the cosmic microwave background dipole. But the amplitude of 𝒟 = 0.0154 was over twice as large as the expected value of 0.0072 using the mean values of $〈$α$〉$ ~ 1.26 and $〈$x$〉$ ~ 1.7 appropriate for this catalog. We thus confirmed the anomaly first revealed by the NRAO VLA Sky Survey catalog. A representative subset of these quasars has spectroscopic redshift measurements in the Sloan Digital Sky Survey. Their mean redshift is $〈$z$〉$ = 1.2. Fewer than 1% of the sources are at z < 0.1.
The anomalously large dipole is not of local origin.
Mock Skies and Alien Metrics
To quantify this discrepancy, we simulated ten million mock skies by randomly sampling the quasar catalog according to the distributions of flux densities and spectral indices. We applied the same masks and flux cuts as for the real sky, and then introduced relativistic aberration and Doppler boosting with β = 0.00123 as per the kinematic interpretation of the cosmic microwave background dipole. Such was our null hypothesis. The calculation yielded a distribution of expectations for the quasar dipole. Only 5 out of 10 million simulated skies exhibit a dipole as large as the real one. The null hypothesis was rejected with a p-value of 5 × $10^{-7}$, which corresponds to a significance of 4.9σ for the normal distribution. The CatWISE2020 catalog has almost no objects in common with the NRAO VLA Sky Survey catalog. Combining their results forces us to reject the standard cosmological model expectation at nearly 6σ.34
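A toy version of such a null test can be written in a few lines. The sketch below is far simpler than the published procedure—no sky mask, no sampling of fluxes or spectral indices, and the kinematic dipole applied only at first order—and the estimator, seed, and number of mock skies are illustrative assumptions; only the catalog size, the boost, the expected amplitude, and the observed amplitude are taken from the text.

```python
# Toy Monte Carlo of the null hypothesis: isotropic sources modulated by the
# expected kinematic dipole, compared with the observed quasar dipole.
# This is an illustrative simplification of the procedure described above.
import numpy as np

rng = np.random.default_rng(42)

BETA   = 0.00123                       # kinematic boost (CMB dipole)
ALPHA, X = 1.26, 1.7                   # mean quasar properties quoted above
D_KIN  = (2 + X * (1 + ALPHA)) * BETA  # expected dipole, ~0.0072
D_OBS  = 0.0154                        # amplitude found for the real catalog
N_SRC  = 1_360_000                     # sources in the catalog
N_MOCK = 1000                          # mock skies (the published test used 10^7)

def mock_dipole_amplitude():
    # Source directions drawn from an isotropic sky modulated, to first
    # order, by a dipole of amplitude D_KIN along the z-axis.
    cos_t = rng.uniform(-1.0, 1.0, N_SRC)
    keep = rng.uniform(0.0, 1.0, N_SRC) < (1 + D_KIN * cos_t) / (1 + D_KIN)
    cos_t = cos_t[keep]
    phi = rng.uniform(0.0, 2 * np.pi, cos_t.size)
    sin_t = np.sqrt(1.0 - cos_t**2)
    n_hat = np.stack([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t], axis=1)
    return np.linalg.norm(3.0 * n_hat.mean(axis=0))   # linear estimator D = 3<n_hat>

amps = np.array([mock_dipole_amplitude() for _ in range(N_MOCK)])
print(f"expected kinematic dipole      : {D_KIN:.4f}")
print(f"mean amplitude over mock skies : {amps.mean():.4f}")
print(f"mock skies with D >= observed  : {(amps >= D_OBS).sum()} / {N_MOCK}")
```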
This anomaly can no longer be dismissed. It appears that the cosmic rest frames of matter traced by quasars and the cosmic microwave background do not coincide. This strikes at the very heart of the FLRW models, as Ellis and Baldwin noted.35 And it calls into question all the inferences drawn from analysis of cosmological data in this framework: in particular, the inference that the universe is dominated by a cosmological constant.
Still, none of these probes is directly sensitive to Λ. Since the value of Λ is inferred from the cosmic sum rule, their analysis relies on the assumptions of isotropy and homogeneity. There is no independent evidence of accelerated expansion from any of the low redshift probes. They are equally consistent with expansion at a constant rate.36
In 2014, a collaboration of nearly all of the world’s supernova astronomers made public a catalog of 740 spectroscopically confirmed SNe Ia compiled from all available surveys and uniformly calibrated in a joint light-curve analysis.37 The measured redshifts and magnitudes had all been boosted to the cosmic microwave background frame. Corrections were made for the peculiar velocities of the supernova host galaxies in our bulk flow, assuming that there is convergence to the cosmic microwave background frame beyond ~150 Mpc.
Motivated by the work of Christos Tsagas, we had reasons to question this analysis.38 Tsagas had observed that tilted observers embedded in a bulk flow may erroneously conclude that the expansion is accelerating even when it is globally decelerating. A clear signature would be a dipole asymmetry in the derived q0 toward the bulk-flow direction.39 In the ΛCDM model, the SNe Ia data are fitted to the magnitude–redshift relationship given above. On the other hand, to determine a kinematic quantity like acceleration, the luminosity distance can be expanded in terms of successively higher derivatives of the scale factor: the Hubble parameter H0, the deceleration parameter q0, and the jerk parameter j0 = $\dddot a/aH^3|_0$. The result is a power series that is adequately accurate for z ≲ 1:
- \[{d_L} = \frac{{cz}}{{{H_0}}}\left[ {1 + \frac{1}{2}\left( {1 - {q_0}} \right)z - \frac{1}{6}\left( {1 - {q_0} - 3q_0^2 + {j_0} + \frac{{k{c^2}}}{{H_0^2a_0^2}}} \right){z^2} + \ldots } \right].\]
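As a sanity check on the claim that this expansion is adequate for z ≲ 1, one can compare it against the exact luminosity distance of a flat ΛCDM model, for which q0 = Ωm/2 − ΩΛ and the jerk is exactly j0 = 1; the parameter values below are illustrative.

```python
# Compare the second-order kinematic expansion of d_L with the exact flat
# LambdaCDM distance, to check that it is adequate out to z ~ 1.
import numpy as np

C_KM_S, H0 = 299_792.458, 70.0
OM, OL = 0.3, 0.7                 # flat, so the curvature term vanishes
q0 = 0.5 * OM - OL                # deceleration parameter of flat LambdaCDM
j0 = 1.0                          # jerk parameter is exactly 1 in flat LambdaCDM

def dl_exact(z, n=4096):
    zp = np.linspace(0.0, z, n)
    f = 1.0 / np.sqrt(OM * (1 + zp) ** 3 + OL)
    integral = np.sum(f[1:] + f[:-1]) * (zp[1] - zp[0]) / 2.0
    return C_KM_S * (1 + z) * integral / H0

def dl_series(z):
    return (C_KM_S * z / H0) * (1 + 0.5 * (1 - q0) * z
                                - (1 / 6) * (1 - q0 - 3 * q0**2 + j0) * z**2)

for z in (0.1, 0.5, 1.0):
    ex, se = dl_exact(z), dl_series(z)
    print(f"z = {z:3.1f}   exact = {ex:7.1f} Mpc   series = {se:7.1f} Mpc"
          f"   ({100 * (se / ex - 1):+.1f}%)")
```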
In order to be model-independent, we fitted the data to this kinematic expansion, but allowed the deceleration parameter to have both direction and scale dependence:
q = qm + qd $\cdot$ $\hat n$ ℱ(z, S),
where qm and qd are the monopole and dipole components, $\hat n$ is the direction of the dipole axis, and ℱ(z, S) describes its scale dependence. The scale dependence was best fitted by an exponential form, ℱ = exp(–z/S). To determine the best fit to the joint light-curve analysis data, we used the statistically principled maximum likelihood estimator we had earlier constructed,40 rather than adjusting error bars to fit the ΛCDM model.41 This revealed that the inferred acceleration is indeed a dipole and is ~50 times larger than the monopole. It falls off exponentially with a scale length of ~80/h Mpc. The monopole qm differs from zero by only 1.4σ—that is, it is consistent with no acceleration. The dipole component qd, however, differs from zero at 3.9σ.42 This is in accordance with Tsagas’s expectations from the Raychaudhuri equation of general relativity and clearly demonstrates that the inferred acceleration is not due to a cosmological constant.43
It exists because we are non-Copernican observers embedded in a deep bulk flow.44
The original analyses by the Supernova Cosmology Project and High-z Supernova Search teams had employed, respectively, just 60 and 50 SNe Ia, 17 of which were in common. We found that most of these SNe Ia were also in the general direction where qd peaks.45 Forthcoming data from the Dark Energy Spectroscopic Instrument and the Rubin Observatory Legacy Survey of Space and Time will establish definitively whether the inferred cosmic acceleration is indeed anisotropic. If so, dark energy will be ruled out as the explanation.
In FLRW cosmology, the value assigned to Λ is inferred from the cosmic sum rule, which itself follows from the assumptions of homogeneity and isotropy encapsulated in the FLRW metric. More complicated metrics can be formulated for inhomogeneous and anisotropic universes,46 but such models have not been explored in as much detail as the FLRW models, especially with regard to structure formation. In such extended models, Einstein’s equations do not simplify to a straightforward sum rule. Additional terms are required and the inference of Λ becomes ambiguous. The Lemaître–Tolman–Bondi model provides an exact solution of Einstein’s equations for a radially inhomogeneous but isotropic universe. The same data can be fitted by an appropriate radial variation of the metric, rather than with Λ.47 Though we see gravitational instability develop from the small density fluctuations imprinted on the cosmic microwave background, the evolution of cosmic structure in such a model has yet to be fully explored. By contrast, the FLRW model successfully describes the entire evolution of the universe. Using the fundamental laws encoded in the standard model of particle physics, we can in fact extrapolate reliably as far back as ~$10^{-12}$ seconds after the Big Bang.48
The universe becomes simpler as we go back to such early epochs. The energy density becomes dominated by matter, and then by radiation, which scales even faster with the redshift, when Ωm(1 + z)$^3$ is overtaken by Ωr(1 + z)$^4$ at z ≳ $10^4$. The energy density is then well described by the Einstein–de Sitter model, which has zero curvature and zero Λ. It is only in the late universe at z ≲ 1 that this oversimplified framework leads us to infer domination by unphysical dark energy. It is here that the cosmological model must be made more sophisticated to take into account the inhomogeneities that arise due to gravitational collapse.
Major attempts along these lines are the backreaction program led by Thomas Buchert and the ambitious timescape cosmology of David Wiltshire.49 These proposals are controversial and have not been widely accepted.50 Perhaps that is simply because the ΛCDM model is so much simpler and easier to confront with a wide variety of data such as gravitational lensing, baryon acoustic oscillations, galaxy clusters, and fluctuations in the cosmic microwave background. All are supposedly concordant in this framework.51
Conclusion
It is not new, this realization that the universe we inhabit does not look like the idealized FLRW model. Rubin, when interviewed for an oral history project in 1998, was asked,
Taking into account a large body of work besides the Geller, de Lapparent, Huchra work—your own work on the large-scale motions and the work of the Seven Samurai and all of that work which has shown that the universe is more inhomogeneous than might have been present in simple models—has that altered your view of the big bang model at all, or of the validity of the model, the assumptions of the model, that kind of thing?
To this her reply was,
It certainly has convinced me that we’re not living in a homogeneous, isotropic [universe]. I mean these things that I really suspected in the back of my mind, I can now say publicly. I’m not sure the Robertson–Walker universe exists. … If someone came out with a different model that could incorporate such large-scale inhomogeneities, I would be delighted to see it, but until then I will just live with the big bang model.52
We do not yet have the definitive different model, but we hope to have further motivated crucial tests of the current model with forthcoming data, which will provide the clues necessary to formulate it.53 We believe that the discovery of dark energy was an accident waiting to happen given the oversimplified framework of the standard model of cosmology.