This talk was part of the series Uncertainty Quantification and Machine Learning for Complex Physical Systems.

FAIR Universe: Benchmarks for Systematics-Aware Machine Learning in Particle Physics and Cosmology

Po-Wen Chang, Lawrence Berkeley National Laboratory

Tuesday, May 20, 2025



Slides
Abstract: Measurements and observations in particle physics fundamentally depend on one's ability to quantify their uncertainty and, thereby, their significance. Therefore, as machine learning (ML) methods become more prevalent in high energy physics, being able to determine the uncertainties of an ML method becomes more important. A wide range of possible approaches has been proposed; however, there has been no comprehensive comparison of individual methods. To address this, the FAIR Universe project organized the HiggsML Uncertainty Challenge, which ran from September 2024 to March 2025; the dataset and performance metrics of the challenge will serve as a permanent benchmark for further developments. Additionally, the Challenge was accepted as an official NeurIPS 2024 competition. The goal of the challenge was to measure the Higgs signal strength using a dataset of simulated $pp$ collision events observed at the LHC. Participants were evaluated both on their ability to precisely determine the correct signal strength and on their ability to report correct and well-calibrated uncertainty intervals. In this talk, we present an overview of the competition itself and of the infrastructure that underpins it. Further, we present the winners of the competition and discuss their winning uncertainty quantification approaches. The HiggsML Uncertainty Challenge itself can be found at https://www.codabench.org/competitions/2977/ and more details are available at https://arxiv.org/abs/2410.02867
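To make the two evaluation criteria mentioned above concrete, here is a minimal illustrative sketch (not the challenge's official scoring code; the function name and the toy numbers are invented for this example) of how one could check the empirical coverage and the average width of submitted confidence intervals for a signal strength mu over a set of pseudo-experiments:

```python
import numpy as np

def interval_score(mu_true, lo, hi):
    """Toy check of reported uncertainty intervals (illustrative only).

    For a set of pseudo-experiments, compute:
      - coverage: fraction of intervals [lo, hi] that contain the true mu
        (for well-calibrated 68% intervals this should be close to 0.68)
      - mean width: average interval length; at fixed coverage,
        narrower intervals indicate a more precise method
    """
    lo, hi = np.asarray(lo, dtype=float), np.asarray(hi, dtype=float)
    covered = (lo <= mu_true) & (mu_true <= hi)
    return covered.mean(), (hi - lo).mean()

# Toy example: true signal strength mu = 1.0, three submitted intervals.
coverage, width = interval_score(1.0, lo=[0.9, 1.1, 0.8], hi=[1.2, 1.3, 1.05])
# Two of the three intervals contain mu = 1.0, so coverage is 2/3;
# the mean width is (0.3 + 0.2 + 0.25) / 3 = 0.25.
```

The actual competition metric combines precision and calibration into a single score; this sketch only separates the two ingredients to show what "well-calibrated" means operationally.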