The universe is expanding. We have known this since Edwin Hubble’s 1929 paper, which measured recession velocities of galaxies using Cepheid variable stars and established what we now call Hubble’s law:
$$v = H_0 \, d$$

The proportionality constant $H_0$ — the Hubble constant — is the current rate of expansion, in units of km/s/Mpc (kilometres per second per megaparsec, where 1 Mpc $\approx 3.086 \times 10^{22}$ m). Hubble’s original measurement gave $H_0 \approx 500$ km/s/Mpc. It was wrong by a factor of about seven, which is understandable given that he was measuring distances to galaxies in the 1920s using photographic plates and a lot of courage. Over the following decades, as techniques improved, the value converged toward 70 km/s/Mpc. By the 1990s, many people considered the question largely settled: $H_0$ was somewhere between 60 and 80, and the main arguments were about where exactly in that range.
Those arguments have sharpened considerably. We now have two extremely precise, extremely well-scrutinised measurements of $H_0$, and they disagree:
$$H_0^{\text{late}} = 73.04 \pm 1.04 \text{ km/s/Mpc} \qquad \text{(distance ladder)}$$

$$H_0^{\text{early}} = 67.4 \pm 0.5 \text{ km/s/Mpc} \qquad \text{(CMB)}$$

The discrepancy is $73.04 - 67.4 = 5.64$ km/s/Mpc. The combined uncertainty is $\sqrt{1.04^2 + 0.5^2} \approx 1.16$ km/s/Mpc. The significance is approximately $4.9\sigma$ — a discrepancy that cosmologists have taken to calling, with some grimness, the Hubble tension.
$5\sigma$ is what particle physicists require before claiming a discovery. It is the threshold designed to exclude chance statistical fluctuations at the level of one in 3.5 million. When the Hubble tension first emerged in the early 2010s it stood at $2$–$3\sigma$, which is “interesting.” It has since grown monotonically as both measurement chains have been refined. This is not the behavior you want from a systematic error that you are hoping will go away.
This is the kind of problem that keeps me reading papers at unreasonable hours. The rest of this post is an attempt to explain why both measurements are trustworthy, why the disagreement is therefore a genuine crisis, and what proposals exist for resolving it.
What $H_0$ Actually Measures and Why It Matters
Hubble’s law $v = H_0 d$ is valid in the nearby universe, for galaxies whose recession velocities are much less than the speed of light. In the full relativistic framework, the expansion is described by the scale factor $a(t)$, which encodes how distances between comoving points grow with time. The Hubble parameter is defined as
$$H(t) = \frac{\dot{a}}{a}$$

and $H_0 = H(t_0)$ is its value today. The Friedmann equation — derived from general relativity applied to a homogeneous and isotropic universe — gives
$$H(z)^2 = H_0^2 \left[ \Omega_m (1+z)^3 + \Omega_r (1+z)^4 + \Omega_k (1+z)^2 + \Omega_\Lambda \right]$$

where $z$ is the redshift (related to the scale factor by $1 + z = 1/a$), and the $\Omega$ parameters are the present-day fractional energy densities of matter, radiation, spatial curvature, and the cosmological constant (dark energy), respectively. Our standard cosmological model, $\Lambda$CDM, assumes a spatially flat universe ($\Omega_k = 0$), negligible radiation today ($\Omega_r \approx 0$), and
$$\Omega_\Lambda \approx 0.68, \qquad \Omega_m \approx 0.31, \qquad \Omega_b \approx 0.049$$where $\Omega_b$ is the baryon (ordinary matter) density. Dark matter makes up most of $\Omega_m$.
$H_0$ is not just one number among many. It is the normalization of the entire cosmological distance scale. It appears in the age of the universe:
$$t_0 = \int_0^\infty \frac{dz}{(1+z)\,H(z)}$$

which, for $\Lambda$CDM with the above parameters, gives $t_0 \approx 13.8$ Gyr. A higher $H_0$ means a faster expansion rate, which means — for fixed $\Omega$ values — a younger universe. An error in $H_0$ propagates into every cosmological distance and every age estimate. It is not a detail.
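This integral is easy to check numerically. The sketch below (plain Python, no dependencies) substitutes $a = 1/(1+z)$ to tame the infinite upper limit and evaluates the age for the parameters above; radiation is neglected, which matters only in the first few tens of thousands of years.

```python
import math

H0 = 67.4            # km/s/Mpc (early-universe value; try 73 to see the age shrink)
Om = 0.31            # matter density today
OL = 1.0 - Om        # flatness: Omega_Lambda = 1 - Omega_m

# Substituting a = 1/(1+z) turns the age integral into
#   t0 = (1/H0) * ∫_0^1 sqrt(a) / sqrt(Om + OL * a^3) da
N = 100_000
integral = 0.0
for i in range(N):
    a = (i + 0.5) / N                      # midpoint rule; integrand -> 0 at a = 0
    integral += math.sqrt(a) / math.sqrt(Om + OL * a**3)
integral /= N

Mpc_km = 3.0857e19                         # kilometres per megaparsec
Gyr_s = 3.1557e16                          # seconds per gigayear
hubble_time_Gyr = Mpc_km / H0 / Gyr_s      # 1/H0, roughly 14.5 Gyr

t0 = integral * hubble_time_Gyr
print(f"t0 ≈ {t0:.2f} Gyr")                # close to the quoted 13.8 Gyr
```

Rounding $\Omega_m$ to 0.31 lands within a percent of the Planck value; the point is the scaling, $t_0 \propto 1/H_0$ at fixed $\Omega$ parameters.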
Ladder 1: The Late-Universe Measurement ($H_0 = 73$)
The distance ladder is the name for the chain of calibrated distance measurements that extends from the Earth to the far reaches of the universe. Each rung calibrates the next.
Rung 1: Geometric parallax. For stars within roughly 1–2 kpc, we can measure distance directly from the shift in apparent position as the Earth orbits the Sun. The parallax angle $\pi$ (in arcseconds) gives the distance $d = 1/\pi$ parsecs. This is pure geometry — it follows from Euclid and Kepler, not from any physical model of stars. The Gaia space mission has measured parallaxes for more than 1.5 billion stars with precisions reaching $\sim 10\,\mu$as for the brightest objects, providing the geometric foundation of the entire ladder.
Rung 2: Cepheid variables. These are pulsating giant stars whose oscillation period $P$ is tightly correlated with their intrinsic luminosity $L$ — the Leavitt period-luminosity relation, discovered by Henrietta Swan Leavitt in 1912. The relation takes the form
$$M = \alpha \log_{10}(P/\text{days}) + \beta + \gamma \left[ \text{Fe/H} \right]$$where $M$ is the absolute magnitude, and the metallicity term $\gamma[\text{Fe/H}]$ accounts for the chemical composition of the star. Once calibrated against nearby Cepheids with known parallax distances, this relation allows the distance to any galaxy hosting Cepheids to be inferred from period measurements alone. Cepheids are luminous enough to be resolved in galaxies out to $\sim 50$ Mpc.
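In code, the Leavitt law plus the distance modulus gives a one-line distance estimator. The coefficients below are illustrative placeholders of roughly the right magnitude — real calibrations (SH0ES fits them jointly with the rest of the ladder) differ by photometric band and methodology:

```python
import math

def cepheid_distance_mpc(m_app, period_days, feh,
                         alpha=-3.3, beta=-2.6, gamma=-0.2):
    """Distance to a Cepheid from the Leavitt period-luminosity law.

    alpha, beta, gamma are ILLUSTRATIVE placeholder coefficients, not an
    actual published calibration.
    """
    M = alpha * math.log10(period_days) + beta + gamma * feh  # absolute magnitude
    mu = m_app - M                       # distance modulus: mu = 5 log10(d / 10 pc)
    d_pc = 10.0 ** ((mu + 5.0) / 5.0)
    return d_pc / 1e6                    # parsecs -> megaparsecs

# A 30-day Cepheid observed at apparent magnitude 26, solar metallicity:
d = cepheid_distance_mpc(26.0, 30.0, 0.0)
print(f"{d:.1f} Mpc")                    # roughly 50 Mpc, near the practical limit
```

Magnitude 26 is about the faint end of what HST can photometer reliably, which is why the text's $\sim 50$ Mpc limit falls out naturally here.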
Rung 3: Type Ia supernovae. These are thermonuclear explosions of white dwarf stars that occur near the Chandrasekhar mass limit ($\approx 1.44 M_\odot$), and consequently near a characteristic peak luminosity. Their light curves are not perfectly standard, but the peak luminosity correlates tightly with the rate at which brightness declines after peak — the Phillips relation. After this standardisation, Type Ia SNe serve as “standardisable candles” reaching to redshifts $z \sim 2$, far beyond the reach of Cepheids.
The logic of the ladder: Gaia calibrates nearby Cepheids; those Cepheids calibrate Cepheids in SN Ia host galaxies; those SN Ia establish the absolute luminosity of the standard candle; that calibrated SN Ia sample gives recession velocities (from spectroscopic redshifts) and distances (from luminosities) simultaneously, yielding $H_0$.
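The final step of that chain is just a straight-line fit through the origin: recession velocity against calibrated distance. A toy version with mock data (generated assuming a true value of 73, plus standardisation scatter) shows the mechanics:

```python
import random

random.seed(42)
H0_true = 73.0                     # km/s/Mpc, used only to fabricate mock data

# Mock Hubble-flow sample: calibrated SN Ia distances (Mpc) with ~5%
# standardisation scatter, and exact spectroscopic velocities (km/s).
dists, vels = [], []
for _ in range(200):
    d_true = random.uniform(50.0, 600.0)
    dists.append(d_true * random.gauss(1.0, 0.05))
    vels.append(H0_true * d_true)

# Least-squares slope through the origin (minimising sum of (v - H*d)^2):
#   H0 = sum(v*d) / sum(d*d)
H0_fit = sum(v * d for v, d in zip(vels, dists)) / sum(d * d for d in dists)
print(f"H0 ≈ {H0_fit:.1f} km/s/Mpc")
```

The real analysis is far more involved (covariances, peculiar-velocity corrections, simultaneous calibration of all rungs), but the estimator at the top of the ladder really is this simple in spirit.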
The SH0ES collaboration (Supernovae and $H_0$ for the Equation of State of dark energy) has driven this measurement for the past fifteen years. Their 2022 result (Riess et al., 2022), using Hubble Space Telescope Cepheid calibrations in 37 galaxies hosting Type Ia SNe, found
$$H_0 = 73.04 \pm 1.04 \text{ km/s/Mpc}$$

This is a 1.4% measurement. For context, Hubble’s original measurement had an error of several hundred percent.
The JWST confirmation. One candidate systematic error was crowding: in HST images of distant galaxies, Cepheids might be blended with unresolved neighbouring stars, artificially brightening them and biasing the distance estimate. JWST’s larger mirror and infrared sensitivity resolve individual Cepheids more cleanly in the same host galaxies. The results from JWST observations in 2023 confirmed the HST Cepheid distances. The crowding concern was not the answer. The distance ladder value is not a systematic artifact of HST resolution.
An independent late-universe measurement comes from time-delay cosmography. Gravitational lensing of a background quasar by an intervening galaxy produces multiple images; the arrival times of photons along different paths differ by an amount that depends on $H_0$. The TDCOSMO collaboration (Birrer et al., 2020) found $H_0 = 74.5^{+5.6}_{-6.1}$ km/s/Mpc from seven lensed quasars, entirely independently of the distance ladder. This is a completely different physical observable. It agrees with SH0ES.
Ladder 2: The Early-Universe Measurement ($H_0 = 67$)
The cosmic microwave background (CMB) is the thermal radiation left over from recombination — the epoch at $z \approx 1100$ (about 380,000 years after the Big Bang) when the universe had cooled enough for protons and electrons to combine into neutral hydrogen, allowing photons to free-stream for the first time. The CMB is extraordinarily uniform in temperature ($T \approx 2.725$ K) but carries tiny fluctuations at the level of $\Delta T / T \sim 10^{-5}$.
Before recombination, the universe was a tightly coupled photon-baryon fluid. Perturbations in this fluid oscillated: gravity pulled baryons into overdense regions, while radiation pressure resisted compression. The competition set up acoustic waves — sound waves in the plasma of the early universe. These waves travelled at the sound speed
$$c_s = \frac{c}{\sqrt{3(1 + R)}}, \qquad R = \frac{3 \rho_b}{4 \rho_\gamma}$$

where $\rho_b$ and $\rho_\gamma$ are the baryon and photon energy densities. The waves propagated until recombination froze them in place. The characteristic length they had traversed — the sound horizon — is
$$r_s = \int_0^{t_{\text{rec}}} \frac{c_s \, dt}{a(t)}$$

This is a physical length scale set by the microphysics of the early universe, which depends only on $\Omega_b$, $\Omega_m$, and the expansion rate $H(z)$ at $z \gtrsim 1100$ (well before any dark energy becomes relevant). For the best-fit $\Lambda$CDM parameters, $r_s \approx 147$ Mpc.
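This integral can be evaluated with a few lines of Python. The density parameters below are nominal Planck-era values ($\Omega_b h^2 \approx 0.0224$, $\Omega_\gamma h^2 \approx 2.47 \times 10^{-5}$, $\Omega_r h^2 \approx 4.15 \times 10^{-5}$ including neutrinos — assumed inputs, not fitted here). The result is $\approx 144$ Mpc to last scattering; the oft-quoted 147 Mpc refers to the slightly later baryon drag epoch relevant for BAO.

```python
import math

c = 299792.458            # km/s
H0 = 67.4                 # km/s/Mpc
h = H0 / 100.0
Om = 0.31                 # matter
Or = 4.15e-5 / h**2       # radiation incl. neutrinos (nominal, assumed)
Og = 2.47e-5 / h**2       # photons only (nominal, assumed)
Ob = 0.0224 / h**2        # baryons (nominal, assumed)
a_rec = 1.0 / 1101.0      # scale factor at z = 1100

# r_s = ∫ c_s dt / a  =  (c/H0) ∫_0^{a_rec} da / [sqrt(3(1+R)) * sqrt(Om*a + Or)]
# using R = (3/4)(Ob/Og) * a and H(a) = H0 * sqrt(Om*a^-3 + Or*a^-4);
# dark energy is utterly negligible at these epochs.
N = 100_000
integral = 0.0
for i in range(N):
    a = (i + 0.5) * a_rec / N            # midpoint rule
    R = 0.75 * (Ob / Og) * a
    integral += 1.0 / (math.sqrt(3.0 * (1.0 + R)) * math.sqrt(Om * a + Or))
integral *= a_rec / N

r_s = (c / H0) * integral
print(f"r_s ≈ {r_s:.0f} Mpc (comoving, to last scattering)")
```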
The frozen acoustic oscillations imprint a characteristic angular scale on the CMB temperature fluctuations. The angular size of the sound horizon as seen from today is
$$\theta_s = \frac{r_s}{D_A(z = 1100)}$$

where $D_A$ is the angular diameter distance to the last-scattering surface. The Planck satellite measured this angle with extraordinary precision: $\theta_s = 0.59656°$ — approximately 0.6 degrees, the scale that sets the positions of the acoustic peaks in the angular power spectrum. This is the most precisely measured quantity in cosmology.
Now, here is the key point. $\theta_s$ is measured directly from the CMB. But to extract $H_0$, we must model both $r_s$ (which depends on the early-universe physics) and $D_A(z=1100)$ (which depends on the late-universe expansion, and therefore on $H_0$). We fit $H_0$, $\Omega_m$, $\Omega_b$, and a handful of other parameters simultaneously to the entire CMB power spectrum.
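To make the lever arm concrete, here is a minimal sketch of the geometric half of the calculation: given a sound horizon and candidate $\Lambda$CDM parameters, compute the comoving distance to last scattering and the predicted angle. Radiation is neglected in the distance integral (a sub-percent effect here), and $r_s$ is taken as a given input rather than computed.

```python
import math

c = 299792.458            # km/s
H0 = 67.4                 # km/s/Mpc
Om = 0.31
OL = 1.0 - Om             # flat; radiation neglected for this distance
z_rec = 1100.0
r_s = 147.0               # comoving sound horizon in Mpc, taken as given

# Comoving distance D_M = c ∫_0^z dz' / H(z').  In flat space
# theta_s = r_s(comoving) / D_M, equivalently r_s(physical) / D_A.
N = 200_000
integral = 0.0
for i in range(N):
    z = (i + 0.5) * z_rec / N            # midpoint rule
    integral += 1.0 / math.sqrt(Om * (1.0 + z)**3 + OL)
integral *= z_rec / N

D_M = (c / H0) * integral                # roughly 14,000 Mpc
theta_deg = math.degrees(r_s / D_M)
print(f"theta_s ≈ {theta_deg:.2f} degrees")
```

Re-running with $H_0 = 73$ and everything else held fixed gives $\theta_s \approx 0.66°$, in gross conflict with the measured angle — which is why, within $\Lambda$CDM, the CMB demands the lower value. (The percent-level offset from Planck's 0.59656° at $H_0 = 67.4$ comes from the rounded inputs used here.)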
The Planck 2018 result (Planck Collaboration, 2020):
$$H_0 = 67.4 \pm 0.5 \text{ km/s/Mpc}$$

This is a 0.7% measurement — even tighter than SH0ES. And it assumes $\Lambda$CDM. That assumption is crucial, and we will return to it.
Independent CMB experiments — the Atacama Cosmology Telescope (ACT) and the South Pole Telescope (SPT) — give consistent results, ruling out Planck-specific instrumental systematics. The CMB measurement is robust.
The Tension: Numbers and What They Mean
The discrepancy is:
$$\Delta H_0 = 73.04 - 67.4 = 5.64 \text{ km/s/Mpc}$$

The combined statistical uncertainty is:

$$\sigma_{\text{comb}} = \sqrt{1.04^2 + 0.5^2} \approx 1.16 \text{ km/s/Mpc}$$

The significance:

$$\frac{\Delta H_0}{\sigma_{\text{comb}}} \approx 4.9\sigma$$

rising to $\sim 5\sigma$ when additional late-universe calibrators and improved analyses are included. To put this in perspective: the probability of a $5\sigma$ discrepancy arising by chance from two correct measurements of the same quantity is roughly $3 \times 10^{-7}$. Those are lottery odds. You probably should not bet your cosmological model on them.
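The arithmetic, for the record — the tail probability here is the one-sided Gaussian tail, computed from the complementary error function:

```python
import math

h0_late, sig_late = 73.04, 1.04      # SH0ES distance ladder
h0_early, sig_early = 67.4, 0.5      # Planck CMB

delta = h0_late - h0_early
sigma = math.sqrt(sig_late**2 + sig_early**2)
n_sigma = delta / sigma

# One-sided Gaussian tail probability: P(X > n_sigma) for standard normal X
p = 0.5 * math.erfc(n_sigma / math.sqrt(2.0))

print(f"tension: {n_sigma:.1f} sigma, p ≈ {p:.1e}")
```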
The tension has grown monotonically over a decade. In 2016 it was $3.4\sigma$. In 2019 it was $4.4\sigma$. Now it sits at $\sim 5\sigma$. This is the opposite of what happens when a systematic error is found: systematics tend to get corrected, reducing the tension. Instead, as each new systematic hypothesis has been tested and rejected, the significance has crept upward.
What Could Explain It
The community has not been idle. The number of proposed solutions runs into the hundreds. They can be organised into a few broad categories.
Systematic Errors (Increasingly Unlikely)
The distance ladder has multiple candidate systematics that have been carefully evaluated:
Cepheid metallicity dependence: the period-luminosity relation shifts with iron abundance $[\text{Fe/H}]$. This has been calibrated from first principles and from Gaia observations of Milky Way Cepheids. The residual uncertainty is $\lesssim 0.5$ km/s/Mpc.
Photometric crowding: addressed by JWST.
LMC geometry and distance: the Large Magellanic Cloud is the anchor for Cepheid calibrations. Its distance is now known from eclipsing binary stars and from the time delay of SN 1987A’s light echo to better than 1%.
SN Ia physics: host galaxy dependence of SN Ia peak luminosity, Malmquist bias in flux-limited surveys, potential evolution with redshift. These have been studied extensively. Residual effects are estimated at $\sim 1$ km/s/Mpc.
On the CMB side, Planck foreground subtraction has been audited, beam calibration has been checked, and independent experiments agree. There is no credible Planck systematic that could shift $H_0$ by $5.6$ km/s/Mpc.
The conclusion of most working cosmologists is that neither measurement chain contains a systematic error large enough to resolve the tension. This leaves us with physics.
Early Dark Energy
This is currently the most discussed new-physics solution (Poulin et al., 2019). The idea is to introduce a new energy component that becomes dynamically important at $z \sim 3000$–$5000$ — well before recombination but after matter-radiation equality. This “Early Dark Energy” (EDE) temporarily increases the expansion rate $H(z)$ at early times.
Why does this help? Recall that the CMB measures $\theta_s = r_s / D_A$ directly and precisely. The sound horizon is
$$r_s \propto \int_{z_{\text{rec}}}^\infty \frac{c_s \, dz}{H(z)}$$

A faster expansion rate (higher $H(z)$ at early times) reduces the integral, shrinking $r_s$. The angular diameter distance $D_A$ also changes, but less sensitively. A smaller $r_s$ means that, to reproduce the same observed angle $\theta_s$, the model requires $D_A$ to be correspondingly smaller. Smaller $D_A$ implies a higher $H_0$.
The schematic: if we boost $H(z)$ at $z \sim 3500$ by $\sim 10\%$, the inferred $H_0$ from the CMB shifts from 67 toward 71–72 km/s/Mpc. This can be implemented by an axion-like scalar field $\phi$ that rolls down a periodic potential
$$V(\phi) = \Lambda^4 \left[1 - \cos\left(\frac{\phi}{f}\right)\right]^n$$

Once the field begins oscillating around the potential minimum, its energy density dilutes away at least as fast as radiation (for $n \geq 2$), leaving no late-universe signature.
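To see the mechanism quantitatively, one can evaluate the sound-horizon integral above with and without a boost to $H(z)$. The top-hat boost window below is a crude toy stand-in for an actual EDE energy-injection profile (an assumed shape, chosen purely for illustration), and the density parameters are nominal Planck-era values; even this toy shrinks $r_s$ by a few percent, which is the direction — if not quite the magnitude — needed.

```python
import math

c, H0 = 299792.458, 67.4          # km/s, km/s/Mpc
h = H0 / 100.0
Om = 0.31
Or = 4.15e-5 / h**2               # radiation incl. neutrinos (nominal, assumed)
Ob, Og = 0.0224 / h**2, 2.47e-5 / h**2   # baryons, photons (nominal, assumed)
a_rec = 1.0 / 1101.0              # scale factor at z = 1100

def sound_horizon(boost=1.0, z_lo=2000.0, z_hi=10000.0):
    """Comoving r_s, multiplying H(z) by `boost` inside [z_lo, z_hi].

    The top-hat window is a TOY stand-in for an EDE bump, not the real
    scalar-field energy-injection profile.
    """
    N = 100_000
    total = 0.0
    for i in range(N):
        a = (i + 0.5) * a_rec / N                  # midpoint rule
        z = 1.0 / a - 1.0
        R = 0.75 * (Ob / Og) * a
        cs = 1.0 / math.sqrt(3.0 * (1.0 + R))      # sound speed in units of c
        E = math.sqrt(Om * a**-3 + Or * a**-4)     # H(a) / H0
        if z_lo < z < z_hi:
            E *= boost
        total += cs / (a * a * E)                  # c_s da / (a^2 H), in (c/H0)
    return (c / H0) * total * a_rec / N

rs_base = sound_horizon()
rs_ede = sound_horizon(boost=1.10)    # 10% faster expansion in the window
shrink = 100.0 * (1.0 - rs_ede / rs_base)
print(f"r_s: {rs_base:.1f} -> {rs_ede:.1f} Mpc  ({shrink:.1f}% smaller)")
```

A real EDE fit injects energy with a specific redshift profile and refits all the other parameters simultaneously, which is how it reaches the several-percent shift in $r_s$ needed to push the CMB-inferred $H_0$ toward 71–72.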
EDE is not without problems. The required EDE fraction ($f_{\text{EDE}} \sim 0.1$ at peak) requires fine-tuning the initial field value. More seriously, EDE models generally worsen the $S_8$ tension — the $\sim 2$–$3\sigma$ discrepancy between CMB and weak gravitational lensing measurements of the parameter $S_8 = \sigma_8 \sqrt{\Omega_m / 0.3}$ (where $\sigma_8$ is the amplitude of matter fluctuations on 8 Mpc/$h$ scales). Fixing one tension while worsening another is not the behaviour of a correct theory.
Modified Gravity and Interacting Dark Energy
A zoo of alternatives modifies either the late-time or early-time expansion history. These include:
- Phantom dark energy: $w < -1$, which increases $H_0$ inferred from CMB fits
- Dynamical dark energy: $w \neq -1$, potentially evolving
- Interacting dark matter/dark energy: momentum transfer between sectors modifying both the background expansion and perturbation growth
- Modified gravity theories (Horndeski, bimetric gravity, $f(R)$ theories): these change the relationship between curvature and matter, altering $H(z)$
None of these is clearly preferred by the data in isolation, but several of them become more interesting in light of DESI.
The Local Void
A tempting classical explanation: if we happen to live inside a large underdense region (a “Hubble bubble”), the local expansion rate measured by the distance ladder would be higher than the cosmic mean. In an underdense region, there is less gravitational deceleration, so things expand faster locally.
The problem is scale. To shift $H_0$ by $5$ km/s/Mpc, the void would need to extend to $\gtrsim 300$ Mpc and have an underdensity of $\sim 20$%. Neither is consistent with the observed large-scale structure of the universe, where surveys of galaxy distributions show we are not in an anomalously underdense region at that scale.
DESI: Dark Energy May Not Be Constant
In 2024, the Dark Energy Spectroscopic Instrument (DESI) published its first-year results (DESI Collaboration, 2024). DESI is measuring baryon acoustic oscillations (BAOs) in the distribution of millions of galaxies — the same acoustic physics as the CMB, but imprinted in the late-universe galaxy distribution rather than in photons at $z = 1100$.
The BAO standard ruler is the sound horizon $r_s$: the same $\sim 147$ Mpc scale imprinted at recombination appears as a preferred separation between galaxy pairs in the low-redshift universe. By measuring the angular size and redshift separation of the BAO peak at multiple redshifts, DESI traces $H(z)$ across cosmic time.
DESI DR1 measured BAOs in over 6 million extragalactic objects spanning $0.1 < z < 4.2$ and found a $2.5$–$3.9\sigma$ preference for dark energy that evolves with time (the significance depending on which supernova dataset is used in the combination). The standard model assumes $w = P/\rho = -1$ exactly (a cosmological constant). DESI’s data is better fit by the $w_0$–$w_a$ parameterisation:
$$w(a) = w_0 + w_a (1 - a)$$

where $a = 1/(1+z)$ is the scale factor. The DESI DR1 best-fit values, combined with CMB and SN Ia data, give $w_0 \approx -0.73$ and $w_a \approx -1.0$ — a dark energy that was more negative (more repulsive) in the past and is becoming less negative today. DESI DR2 (released in March 2025) raised the significance of this preference to $4.2\sigma$.
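The parameterisation is trivial to evaluate. With the rough best-fit values quoted above, $w$ crosses $-1$ (the “phantom divide”) near $z \approx 0.4$:

```python
def w_cpl(z, w0=-0.73, wa=-1.0):
    """CPL equation of state w(a) = w0 + wa * (1 - a), with a = 1/(1+z).

    Defaults are the rough DESI DR1 best-fit values quoted in the text.
    """
    a = 1.0 / (1.0 + z)
    return w0 + wa * (1.0 - a)

for z in (0.0, 0.5, 1.0, 3.0, 1100.0):
    print(f"z = {z:6.1f}:  w = {w_cpl(z):+.2f}")
# w runs from -0.73 today toward w0 + wa = -1.73 in the distant past,
# crossing w = -1 near z ≈ 0.4.
```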
The connection to the Hubble tension is direct. The CMB’s inference of $H_0 = 67.4$ km/s/Mpc is derived assuming $w = -1$ exactly throughout cosmic history. If dark energy is not a cosmological constant — if $w(z)$ varies — then the Friedmann equation at late times is different, the angular diameter distance $D_A(z=1100)$ is different, and the CMB-inferred $H_0$ changes. A dynamical dark energy that is less dominant at early times and more dominant at late times (which the DESI $w_0$–$w_a$ parameters suggest) tends to shift the CMB-inferred $H_0$ upward.
DESI may be showing us the resolution of the Hubble tension: not a systematic error in either measurement chain, but a genuine departure from $\Lambda$CDM that biases both inferences in opposite directions. The distance ladder measures $H_0$ today from local observations. The CMB infers $H_0$ from a model that assumes $w = -1$ everywhere. If the model is wrong, the inference is wrong.
This is not yet settled. The DESI results are also consistent with systematic errors in the SN Ia data used in combination with BAOs. The statistical significance is below $5\sigma$ for the individual datasets. But the direction of the deviation is consistent across data combinations, and it points toward the same part of parameter space that would ease the Hubble tension.
JWST and the Early Galaxy Problem
A brief digression — or perhaps not a digression. JWST was designed partly to study the first galaxies. What it has found is unexpected: there are galaxies at $z > 10$ (less than 500 million years after the Big Bang) that are more massive and more luminous than standard $\Lambda$CDM galaxy formation models predicted. Early headlines announced that JWST was “breaking cosmology.” The reality is more nuanced: $\Lambda$CDM is not broken by these observations, and some of the most extreme early candidates have been revised to lower redshifts as spectra were taken. But genuine tension persists for some objects.
The important point is the accumulation. The Hubble tension is a $5\sigma$ discrepancy in $H_0$. The $S_8$ tension is a $2$–$3\sigma$ discrepancy in matter clustering. The early galaxy problem is a qualitative excess at high redshift. DESI shows $4.2\sigma$ evidence for evolving dark energy. None of these individually is an unambiguous model-breaking crisis. Together, they are multiple independent data sets all pointing in the same direction: $\Lambda$CDM is under pressure. That is not a coincidence you should ignore.
What It Would Mean if $\Lambda$CDM Is Wrong
Let me be clear about what “wrong” means here. $\Lambda$CDM is an extraordinarily successful model. It predicted the angular positions of the CMB acoustic peaks before they were measured. It correctly describes the large-scale structure of the universe across twelve billion years of cosmic history. It accounts for the primordial abundances of helium ($\sim 25\%$ by mass) and deuterium through Big Bang nucleosynthesis (lithium-7 remains a long-standing discrepancy of its own). The 2011 Nobel Prize in Physics was awarded for the discovery of accelerated expansion — the $\Lambda$ in $\Lambda$CDM.
Abandoning this model is not a decision to take lightly, and no serious cosmologist is proposing to do so. What is being proposed is that $\Lambda$CDM is incomplete in a specific way.
$\Lambda$CDM is not a fundamental theory. It is an empirical model with six free parameters: $H_0$, $\Omega_b h^2$, $\Omega_c h^2$, $A_s$ (scalar amplitude), $n_s$ (spectral index), $\tau$ (optical depth to reionization). It does not explain what dark matter actually is — whether it is a WIMP, an axion, a primordial black hole, or something stranger. It does not explain why the cosmological constant $\Lambda$ has the value it does (a separate crisis, the cosmological constant problem: naive quantum field theory estimates overshoot the observed value by some 120 orders of magnitude). The model describes; it does not explain.
The Hubble tension, if it survives further scrutiny and grows in significance, would tell us something specific: the expansion history of the universe is not what $\Lambda$CDM predicts. Either there is new physics at early times (EDE, modified gravity before recombination) or the dark energy is not a cosmological constant (DESI). In either case, the fix is a modification of the Friedmann equation — a correction to how we model the $\Omega$ parameters or their time dependence. This is science working as it should: a model that has been extraordinarily successful is now encountering its limits, and those limits are pointing toward something new.
Closing Thoughts
I write on this blog about transit photometry — measuring the dimming of starlight as a planet crosses its star’s disk (see, for instance, exoplanet hunting with smartphones or the gift of transits). Those observations work because we trust the geometric relationship between angular size, distance, and physical size. The same geometric trust is what underpins Rung 1 of the distance ladder: parallax.
What is striking to me, as someone trained in physics, is that the Hubble tension sits at the top of the same ladder I describe at the bottom. Parallax gives distances to nearby stars. Those calibrate Cepheids. Those calibrate supernovae. Those supernovae reach to $z \sim 2$, and their recession velocities — measured by spectroscopy and interpreted through general relativity — give $H_0 = 73$. Meanwhile, the acoustic oscillations in the CMB that I described in astro-lab at home as a snapshot of the early universe give $H_0 = 67$ by a completely independent method. The two answers disagree at $5\sigma$.
The ladder is not broken. Both rungs have been checked, rechecked, and cross-checked. JWST has confirmed the Cepheid distances. Independent CMB experiments confirm Planck. DESI finds that dark energy may not be constant. Everything points in the same direction: the universe is telling us something.
We do not yet know what.
I find this situation — a clean empirical crisis, well-measured, unexplained — to be among the most exciting things happening in physics. Not because I enjoy confusion, but because clean empirical crises are where physics makes progress. The anomalous perihelion precession of Mercury was an annoying discrepancy with Newtonian gravity until Einstein showed it was a signature of spacetime curvature. The ultraviolet catastrophe in blackbody radiation was an embarrassing failure until Planck (Max, not the satellite) introduced the quantum hypothesis. The Hubble tension may be the next one. Or it may turn out to be a mundane systematic that everyone missed. Either answer would be interesting.
For now, the universe has two expansion rates and one of them is wrong. We are working on finding out which.
References
Planck Collaboration. (2020). Planck 2018 results. VI. Cosmological parameters. Astronomy & Astrophysics, 641, A6. DOI: 10.1051/0004-6361/201833910
Riess, A. G., et al. (2022). A Comprehensive Measurement of the Local Value of the Hubble Constant with 1 km/s/Mpc Uncertainty from the Hubble Space Telescope and the SH0ES Team. The Astrophysical Journal Letters, 934, L7. DOI: 10.3847/2041-8213/ac5c5b
DESI Collaboration. (2024). DESI 2024 VI: Cosmological Constraints from the Measurements of Baryon Acoustic Oscillations. arXiv:2404.03002. Retrieved from https://arxiv.org/abs/2404.03002
Poulin, V., Smith, T. L., Karwal, T., & Kamionkowski, M. (2019). Early Dark Energy can resolve the Hubble tension. Physical Review Letters, 122, 221301. DOI: 10.1103/PhysRevLett.122.221301
Birrer, S., et al. (TDCOSMO). (2020). TDCOSMO IV: Hierarchical time-delay cosmography — joint inference of the Hubble constant, mass density profile and external convergence. Astronomy & Astrophysics, 643, A165. DOI: 10.1051/0004-6361/202038861
Freedman, W. L. (2021). Measurements of the Hubble Constant: Tensions in Perspective. The Astrophysical Journal, 919, 16. DOI: 10.3847/1538-4357/ac0e95
Di Valentino, E., et al. (2021). In the realm of the Hubble tension — a review of solutions. Classical and Quantum Gravity, 38(15), 153001. DOI: 10.1088/1361-6382/ac086d
Changelog
- 2026-03-22: Updated TDCOSMO quasar count to seven lensed systems and the $H_0$ value to match the TDCOSMO-only analysis. Updated DESI DR1 galaxy count to over 6 million extragalactic objects (the previous figure of 14 million corresponds to DR2). Added qualification that the 3.9$\sigma$ significance for evolving dark energy is dataset-dependent (ranging from 2.5$\sigma$ to 3.9$\sigma$ depending on the supernova sample used).