

Abstract: While machine learning has revolutionised simulation, reconstruction, and signal extraction in particle physics experiments, the core methodology for hypothesis testing has remained largely unchanged. The likelihood ratio test (LRT) is often colloquially described as 'the optimal test statistic for frequentist analyses'. As it turns out, this is not the case: the Neyman–Pearson lemma guarantees optimality only for a simple null hypothesis against a simple alternative, whereas realistic analyses involve composite hypotheses with nuisance parameters, for which no uniformly most powerful test exists. By focusing the statistical power in physics-motivated regions of parameter space, significant sensitivity gains can be achieved across experiments. We demonstrate these improvements in two case studies: a Higgs boson measurement at the Large Hadron Collider and a WIMP search at a dark matter experiment. Our method also enables rapid construction of confidence intervals, which are traditionally obtained through computationally expensive Monte Carlo procedures in the Feldman–Cousins framework.
These general-purpose statistical developments hold the key to enhancing physics sensitivity across experiments, from neutrino oscillations to dark matter searches. I would be glad to discuss their broader implications after the talk.
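
As a brief sketch of the standard formalism behind the claim above (the notation, $\mu$ for a parameter of interest such as a signal strength and $\theta$ for nuisance parameters, is conventional and not taken from the abstract itself): for a simple null hypothesis $H_0$ against a simple alternative $H_1$, the Neyman–Pearson lemma does guarantee optimality, in the sense that rejecting $H_0$ whenever
\[
  \lambda(x) \;=\; \frac{L(x \mid H_0)}{L(x \mid H_1)} \;\le\; k_\alpha
\]
is the most powerful test of size $\alpha$. In practice, however, analyses use the profile likelihood ratio over composite hypotheses,
\[
  \lambda(\mu) \;=\; \frac{L\big(\mu,\, \hat{\hat{\theta}}(\mu)\big)}{L\big(\hat{\mu},\, \hat{\theta}\big)},
\]
where single hats denote the unconditional maximum-likelihood estimators and the double hat the conditional estimator at fixed $\mu$. In this composite setting no uniformly most powerful test exists, which is the gap that a method redistributing statistical power towards physics-motivated regions of parameter space can exploit.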