This talk was part of the programme "Kernel Methods in Uncertainty Quantification and Experimental Design".

Adaptive Sampling Methods for Inference Involving Computationally Intensive Models

Andrew Duncan, Imperial College

Thursday, April 3, 2025



Abstract: We consider the general problem of solving Bayesian inverse problems in settings where the forward model is computationally expensive and/or viewed as a black-box simulation. In these settings, both maximum a posteriori (MAP) estimation and full posterior inference can pose substantial challenges, even with access to HPC resources. For the former, we explore the use of batch Bayesian optimisation (BO) methods as a general strategy for inversion, noting that the 'inner-loop' optimisation can struggle due to the non-convexity of the acquisition function, particularly in the batch setting, where multiple points are selected at every step. We propose reformulating batch BO as an optimisation problem over the space of probability measures, and construct a new acquisition function, based on multipoint expected improvement, which is convex over this space. Practical schemes for solving this 'inner' optimisation problem arise naturally as gradient flows of the objective function, and we show empirically that these are effective across a range of challenging problems. In the final part of the talk, I will cover work in progress on similar kernel-based approaches in the setting of full posterior inference.
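
As a rough sketch of the construction described in the abstract (the notation below is mine, and the precise convex functional built in the talk may differ), multipoint expected improvement and its lift to probability measures have roughly the following shape:

```latex
% Notation is illustrative, not taken verbatim from the talk.
% Multipoint (q-point) expected improvement for a candidate batch:
\[
  \alpha_q(x_1,\dots,x_q)
    = \mathbb{E}_{f}\!\left[\max\!\Big(y^{*} - \min_{1\le i\le q} f(x_i),\; 0\Big)\right],
  \qquad y^{*} = \min_j y_j,
\]
% where f is the GP surrogate conditioned on the data observed so far.
% Lifting the batch to a probability measure \mu on the design space X
% turns the acquisition into a functional of \mu, schematically
\[
  \mathcal{F}(\mu)
    = \mathbb{E}_{(x_1,\dots,x_q)\sim \mu^{\otimes q}}\big[\alpha_q(x_1,\dots,x_q)\big],
\]
% and the practical inner-loop schemes arise as (ascending) Wasserstein
% gradient flows of this objective,
\[
  \partial_t \mu_t
    = -\,\nabla\!\cdot\!\Big(\mu_t\, \nabla \frac{\delta \mathcal{F}}{\delta \mu}(\mu_t)\Big),
\]
% discretised in practice with an interacting particle system.
```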
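
For contrast with the gradient-flow schemes, the minimal Python sketch below estimates multipoint expected improvement by Monte Carlo and optimises the batch with naive random search, illustrating the non-convex 'inner-loop' problem the talk addresses. All names (rbf_kernel, gp_posterior, q_expected_improvement) and the toy objective are assumptions for illustration, not the method from the talk.

```python
# Minimal, self-contained sketch of one step of standard batch Bayesian
# optimisation with a Monte Carlo estimate of multipoint (q-point) expected
# improvement. The inner loop here is plain random search, NOT the
# measure-space gradient flow proposed in the talk.

import numpy as np

def rbf_kernel(A, B, lengthscale=0.2):
    """Squared-exponential kernel matrix between rows of A and rows of B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X_train, y_train, X_query, noise=1e-6):
    """GP posterior mean and covariance at X_query (zero prior mean)."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_query)
    Kss = rbf_kernel(X_query, X_query)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    v = np.linalg.solve(L, Ks)
    return Ks.T @ alpha, Kss - v.T @ v

def q_expected_improvement(batch, X_train, y_train, n_mc=2000, rng=None):
    """Monte Carlo estimate of E[max(y* - min_i f(x_i), 0)] for a batch."""
    rng = np.random.default_rng() if rng is None else rng
    mean, cov = gp_posterior(X_train, y_train, batch)
    cov += 1e-8 * np.eye(len(batch))  # jitter for numerical stability
    samples = rng.multivariate_normal(mean, cov, size=n_mc)  # joint GP draws
    improvement = np.maximum(y_train.min() - samples.min(axis=1), 0.0)
    return improvement.mean()

# Toy 1D objective standing in for an expensive black-box forward model.
def objective(x):
    return np.sin(3 * x) + x**2

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(5, 1))
y_train = objective(X_train).ravel()

# Non-convex inner loop: pick the best of many random candidate batches.
q, best_qei, best_batch = 3, -np.inf, None
for _ in range(200):
    batch = rng.uniform(-1, 1, size=(q, 1))
    qei = q_expected_improvement(batch, X_train, y_train, rng=rng)
    if qei > best_qei:
        best_qei, best_batch = qei, batch

print("selected batch:\n", best_batch, "\nestimated q-EI:", best_qei)
```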