Briefing
This Physics Reports white paper update addresses a central question in precision particle physics: what is the Standard Model (SM) prediction for the muon anomalous magnetic moment, and how reliably can its dominant hadronic uncertainties be controlled in light of new experimental and theoretical progress? The muon anomalous magnetic moment, defined as $a_\mu = (g_\mu - 2)/2$, matters because the comparison between a high-precision measurement and an equally precise SM prediction provides one of the most sensitive indirect probes of physics beyond the SM (BSM). The paper’s motivation is sharpened by the Fermilab Muon $g-2$ program (E989), which has achieved a final experimental precision of 127 parts per billion (ppb), and by the expectation that upcoming independent measurements (e.g., at J-PARC) will further test the SM.
The SM prediction is organized as a perturbative QED series plus electroweak (EW) contributions and hadronic effects. The hadronic sector is split into hadronic vacuum polarization (HVP)—with a dominant leading-order (LO) term—and hadronic light-by-light scattering (HLbL). QED and EW uncertainties are already at the few-ppb level, so the dominant theory uncertainty is driven by nonperturbative QCD dynamics in HVP and HLbL. The update’s key methodological shift is that, for HVP LO, the consensus evaluation now relies primarily on lattice QCD rather than the data-driven dispersive method used in the previous white paper (WP20). This is not a minor technical change: it alters both the central value and the uncertainty budget.
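In standard notation, the decomposition just described reads:

```latex
a_\mu^{\mathrm{SM}}
  = a_\mu^{\mathrm{QED}} + a_\mu^{\mathrm{EW}}
  + a_\mu^{\mathrm{HVP}} + a_\mu^{\mathrm{HLbL}},
\qquad
a_\mu^{\mathrm{HVP}}
  = a_\mu^{\mathrm{HVP,\,LO}} + a_\mu^{\mathrm{HVP,\,NLO}} + a_\mu^{\mathrm{HVP,\,NNLO}},
```

with HVP LO the dominant hadronic piece and HLbL the other irreducibly nonperturbative contribution.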
Methodologically, the paper is a consensus review rather than a single new experiment or a single new lattice calculation. It synthesizes multiple approaches:
1) Data-driven dispersive HVP: evaluates $a_\mu^{\mathrm{HVP,\,LO}}$ from the hadronic $R$-ratio via a dispersive “master formula” with a kernel that strongly weights low-energy $e^+e^- \to \mathrm{hadrons}$ cross sections. The dominant channel is $e^+e^- \to \pi^+\pi^-$, which must be known at few-per-mille precision to match the experimental sensitivity. The paper emphasizes that recent cross-section measurements have increased tensions among datasets, especially in the $\pi^+\pi^-$ channel, preventing a meaningful average and hence a precise dispersive HVP LO.
2) Lattice QCD HVP: computes the Euclidean correlator of two electromagnetic currents and integrates it with a kernel (often implemented via the time-momentum representation). To manage technical systematics, the review highlights the use of “window observables” that partition the Euclidean-time integral into short-distance (SD), intermediate (W), and long-distance (LD) regions. This enables tailored control of discretization effects, finite-volume (FV) effects, and statistical noise. The paper also stresses blinding procedures to avoid confirmation bias and describes how results from multiple lattice collaborations are combined using FLAG-style averaging with correlation assumptions.
3) HLbL: updates both dispersive/analytic evaluations and lattice QCD evaluations. The dispersive framework is refined to avoid kinematic singularities (notably for spin-1 and higher-spin intermediate states) and to improve short-distance constraints using operator product expansion (OPE) and matching. Lattice HLbL calculations are updated with improved treatments of QED in infinite volume and with a focus on the dominant connected and (2+2) disconnected light-quark contributions.
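The SD/W/LD partition described in item 2 can be sketched with the smoothed step functions commonly used to define window observables; the boundaries $t_0 = 0.4$ fm, $t_1 = 1.0$ fm and smearing $\Delta = 0.15$ fm below follow the widely used RBC/UKQCD choice, but the code is an illustration, not the paper's implementation:

```python
import math

def theta(t, t_star, delta=0.15):
    """Smoothed step function (arguments in fm) used to define Euclidean-time windows."""
    return 0.5 * (1.0 + math.tanh((t - t_star) / delta))

def window_weights(t, t0=0.4, t1=1.0, delta=0.15):
    """Weights partitioning the time-momentum integrand into SD, W, and LD parts."""
    w_sd = 1.0 - theta(t, t0, delta)                 # short distance: t below t0
    w_w = theta(t, t0, delta) - theta(t, t1, delta)  # intermediate window
    w_ld = theta(t, t1, delta)                       # long distance: t above t1
    return w_sd, w_w, w_ld

# The three weights sum to one at every t, so the windowed pieces
# recombine exactly into the full HVP LO integral.
for t in (0.1, 0.7, 2.5):
    assert abs(sum(window_weights(t)) - 1.0) < 1e-12
```

Because the weights sum to unity by construction, each window can be computed, blinded, and compared across collaborations separately, then recombined without double counting.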
Key numerical results are presented as consolidated SM inputs. For HVP LO, the paper adopts a consensus lattice value (in its preferred scheme) with sub-percent relative precision, and it also provides the total HVP contribution including higher-order iterations (NLO and NNLO), quoted in units of $10^{-11}$ for the full HVP block in the SM summary. For HLbL, the update reports a combined lattice+dispersive average alongside a lattice-only average, both with reduced uncertainty. The EW contribution is quoted with small uncertainty.
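The bookkeeping behind such a summary table is simple arithmetic: the contributions add linearly in central value and, when treated as uncorrelated, in quadrature in uncertainty. The numbers below are placeholders for illustration, not the paper's values:

```python
import math

# Illustrative SM budget in units of 1e-11 (placeholder numbers, NOT the
# paper's quoted values), showing the bookkeeping behind a summary table.
contributions = {
    "QED":  (116584718.9, 0.1),
    "EW":   (153.6, 1.0),
    "HVP":  (7000.0, 60.0),
    "HLbL": (115.0, 10.0),
}

total = sum(central for central, _ in contributions.values())
error = math.sqrt(sum(err ** 2 for _, err in contributions.values()))
print(f"a_mu^SM = {total:.1f}({error:.1f}) x 1e-11")
```

The quadrature step makes the structure of the error budget explicit: with hadronic uncertainties at this placeholder level, HVP alone dominates the total SM error.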
The paper’s overall SM prediction is summarized in its Table 1 and Sec. 9, and the difference between experiment and SM is defined as $\Delta a_\mu = a_\mu^{\mathrm{exp}} - a_\mu^{\mathrm{SM}}$, with the paper’s narrative indicating that the experimental world average is updated with the final Fermilab E989 result. While the exact sigma-level is not explicitly reproduced in the excerpted text, the paper’s qualitative conclusion is that the SM prediction remains in tension with experiment, and the dominant limitation is still hadronic theory uncertainty.
Limitations are central to the review. For the data-driven HVP dispersive method, the authors argue that the dataset tensions have grown to the point that no defensible average can be formed; they also stress that no “scientific grounds” have been identified to discard relevant datasets. They further discuss that radiative corrections and Monte Carlo generator systematics (e.g., Phokhara’s treatment of higher-order ISR/FSR configurations) can affect extracted cross sections and must be better controlled. For lattice HVP, limitations include residual uncertainties in long-distance QED and isospin-breaking (IB) corrections, scheme dependence in separating “isoQCD” from QED+SIB, and the challenge of FV corrections in the LD region. For HLbL, limitations include the remaining model dependence in matching to short-distance constraints and the fact that lattice calculations still have sign/discrepancy issues in some disconnected components across groups.
Practical implications are immediate for both theory and experiment. The paper provides a consensus SM number and a roadmap for what must improve to match the experimental precision: roughly a factor of four reduction in the total SM uncertainty is needed. The most urgent experimental-theory interface is the resolution of cross-section tensions and radiative-correction systematics in the dispersive program. On the lattice side, the focus shifts toward more precise IB and QED corrections, especially in the LD region, and toward further consolidation using multiple lattice actions and window-based cross-checks. HLbL is comparatively better controlled: its uncertainty has been reduced to below 10% in the combined dispersive+lattice average.
Who should care? Precision phenomenologists and BSM model builders care because the tension constrains new contributions to muon dipole operators. Experimental collaborations care because the dominant hadronic uncertainties determine how strongly their measurements translate into BSM sensitivity. Lattice and dispersive communities care because the review identifies the specific technical bottlenecks—radiative corrections in ISR analyses, and long-distance IB/QED in lattice HVP—that must be solved to enable a definitive SM test.
Cornell Notes
This white paper update compiles a consensus SM prediction for the muon anomalous magnetic moment with emphasis on the dominant hadronic uncertainties. It argues that HVP LO should now be taken from lattice QCD rather than the dispersive data-driven method due to unresolved tensions in the $e^+e^- \to \pi^+\pi^-$ channel, while HLbL uncertainty has been reduced through improved dispersive and lattice inputs.
What is the central research question of the paper?
What is the most reliable Standard Model prediction for $a_\mu$ (including QED, EW, HVP, and HLbL), and what are the dominant sources of uncertainty limiting its precision?
Why does $a_\mu$ matter for physics beyond the Standard Model?
Because both the experimental measurement and the SM prediction reach very high precision, any remaining discrepancy between them sharply constrains new-physics contributions to muon dipole operators.
How is the HVP LO contribution computed in the data-driven dispersive approach?
Through a dispersion integral over the hadronic $R$-ratio, $a_\mu^{\mathrm{HVP,\,LO}} = \frac{\alpha^2}{3\pi^2} \int_{s_{\mathrm{thr}}}^{\infty} \frac{ds}{s}\, \hat K(s)\, R(s)$, where the kernel $\hat K(s)$ strongly weights low-energy $e^+e^- \to \mathrm{hadrons}$ cross sections, especially $e^+e^- \to \pi^+\pi^-$.
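The low-energy weighting can be checked numerically with the standard one-parameter integral representation $K(s) = \int_0^1 dx\, x^2(1-x)/[x^2 + (1-x)\,s/m_\mu^2]$; the midpoint-rule quadrature below is a sketch, not a production integrator:

```python
M_MU = 0.1056584  # muon mass in GeV (PDG value, truncated)

def kernel_K(s, n=20000):
    """Dispersive kernel K(s) from the integral representation
    K(s) = int_0^1 dx x^2 (1-x) / (x^2 + (1-x) s/m_mu^2),
    evaluated with a simple midpoint rule (s in GeV^2)."""
    r = s / M_MU ** 2
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += x * x * (1.0 - x) / (x * x + (1.0 - x) * r)
    return total * h

# K(s)/s falls steeply with s, so low-energy data (e.g. the rho peak
# near sqrt(s) ~ 0.77 GeV) dominate the dispersive integral.
for sqrt_s in (0.5, 1.0, 2.0):
    s = sqrt_s ** 2
    print(f"sqrt(s) = {sqrt_s:.1f} GeV   K(s)/s = {kernel_K(s) / s:.3e}")
```

At large $s$ the kernel approaches $m_\mu^2/(3s)$, so the rescaled $\hat K(s) = 3s\,K(s)/m_\mu^2$ stays of order one while the explicit $1/s$ in the master formula suppresses high-energy contributions.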
What prevents forming a precise dispersive average for HVP LO in this update?
The paper reports that tensions among datasets have increased after WP20, reaching a level that prevents a meaningful average; no dataset can be justified as discardable.
What is the lattice-QCD strategy for controlling HVP systematics?
Compute Euclidean current-current correlators and integrate using window observables (SD, W, LD) to tailor control of discretization, finite-volume effects, and noise; use blinding and FLAG-style averaging with correlation assumptions.
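The FLAG-style combination mentioned above can be sketched as a covariance-weighted (BLUE-style) average with an assumed uniform correlation between the quoted errors of different results; both the correlation model and the input numbers below are illustrative, not the paper's:

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def correlated_average(values, errors, rho=0.5):
    """Covariance-weighted average assuming a uniform correlation rho
    between the quoted errors of different results (illustrative model,
    not the paper's actual correlation assumptions)."""
    n = len(values)
    C = [[errors[i] * errors[j] * (1.0 if i == j else rho) for j in range(n)]
         for i in range(n)]
    w = solve(C, [1.0] * n)  # w_i proportional to sum_j (C^-1)_ij
    norm = sum(w)
    mean = sum(wi * v for wi, v in zip(w, values)) / norm
    return mean, math.sqrt(1.0 / norm)

# Hypothetical lattice results (value, error), units of 1e-10:
mean, err = correlated_average([707.5, 713.2, 715.4], [5.5, 6.1, 7.0])
```

The key design point is that positive assumed correlations inflate the combined uncertainty relative to a naive uncorrelated average, which is why the correlation model matters for a conservative consensus number.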
What lattice value does the paper adopt for HVP LO?
It adopts the consolidated lattice average, with sub-percent relative uncertainty, as the HVP LO input to the SM prediction.
How is HLbL treated and what is the updated combined result?
The paper combines improved dispersive/analytic evaluations with lattice-QCD results; the combined HLbL value carries an uncertainty reduced to below 10%.
What is the final SM prediction and the implied discrepancy definition?
The paper summarizes its total SM prediction in Table 1 and defines the discrepancy as $\Delta a_\mu = a_\mu^{\mathrm{exp}} - a_\mu^{\mathrm{SM}}$; the excerpted text indicates a positive difference without reproducing the exact value.
What are the main limitations and remaining bottlenecks?
Unresolved cross-section tensions and radiative-correction uncertainties in the dispersive HVP program; in lattice HVP, remaining long-distance QED/IB uncertainties; in HLbL, residual matching/model uncertainties and disconnected-component systematics.
Review Questions
Explain why the paper switches the HVP LO consensus input from dispersive data to lattice QCD, and what specific experimental issue drives this change.
Describe the purpose of SD/W/LD window observables in lattice HVP and how they reduce different classes of systematics.
Summarize how HLbL uncertainty is reduced compared with WP20, and identify the dominant contributions that remain most important.
What technical improvements are required to reduce the total SM uncertainty by the factor of about four needed to match the final Fermilab experimental precision?
Key Points
- 1
The update provides a new consensus SM prediction for $a_\mu$ with a major change: HVP LO is taken from lattice QCD rather than the dispersive data-driven method.
- 2
Unresolved tensions among $e^+e^- \to \mathrm{hadrons}$ cross-section datasets, most acutely in the $\pi^+\pi^-$ channel, prevent forming a defensible dispersive average for a precise HVP LO.
- 3
Lattice-QCD HVP LO is consolidated using window observables (SD/W/LD), blinding, and FLAG-style averaging, yielding a consensus value with sub-percent relative uncertainty.
- 4
HLbL uncertainty is reduced through improved dispersive frameworks and lattice calculations, giving a combined dispersive+lattice value with uncertainty below 10%.
- 5
The paper’s consolidated SM prediction, summarized in its Table 1, leaves a positive difference $\Delta a_\mu = a_\mu^{\mathrm{exp}} - a_\mu^{\mathrm{SM}}$ relative to the updated experimental world average.
- 6
The dominant remaining theory bottlenecks are hadronic: dispersive radiative-correction systematics and dataset tensions; lattice long-distance QED/IB corrections; and HLbL matching/model uncertainties.