Description
Gaussian processes are widely used for prior modeling in data-centric applications including regression, classification, interpolation of computer output, and Bayesian inverse problems. However, despite their widespread adoption, practical and efficient ways of specifying flexible Gaussian processes are still lacking, particularly in the nonstationary case. In addition, Gaussian process methodology suffers from pressing computational challenges concerning its scalability to large datasets and high-dimensional settings. This workshop will bring together computational and applied mathematicians, statisticians and subject matter researchers to advance modeling and computation with Gaussian processes, their novel use in classical scientific computing tasks, and the emerging theoretical analysis of the associated methodology.
A related but distinct challenge facing Gaussian process methodology is its computational scalability to large datasets. This tractability challenge, which stems from the need to compute the Cholesky factorization of a dense covariance matrix, has been at the forefront of researchers' minds for decades, resulting in methods such as circulant embedding, Vecchia approximations, and the use of sparse representations. In recent times, new classes of approximation have appeared that have the potential to vastly extend the scalability of Gaussian processes, including Hutchinson estimators for the trace terms of the likelihood and hierarchical off-diagonal low-rank approximations of the covariance matrix, which exploit the smoothness of the covariance kernel between well-separated regions. These and other recent computational developments have facilitated the use of Gaussian processes with larger datasets and have also prompted a renewed interest in employing Gaussian processes in several classical scientific computing tasks such as the numerical solution of partial differential equations, dimension reduction, and experimental design. However, a full understanding of the error caused by such computational techniques is still missing, and their scalability to high-dimensional settings needs further investigation.
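As a small illustration of one of the techniques mentioned above, the following sketch (an assumption-laden toy example, not workshop code) implements the Hutchinson estimator: it estimates the trace of a matrix using only matrix-vector products, which is what makes it attractive for the trace terms in the GP likelihood when the covariance matrix is only available through fast matvecs.

```python
import numpy as np

def hutchinson_trace(matvec, n, num_probes=500, seed=None):
    """Estimate trace(A) given only matvec(v) -> A @ v.

    Uses Rademacher probe vectors z with entries +/-1, for which
    E[z @ A @ z] = trace(A); averaging over probes reduces variance.
    """
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe
        total += z @ matvec(z)
    return total / num_probes

# Toy check against a dense symmetric positive semi-definite matrix.
rng = np.random.default_rng(0)
B = rng.standard_normal((100, 100))
A = B @ B.T
estimate = hutchinson_trace(lambda v: A @ v, n=100, seed=1)
exact = np.trace(A)
```

In practice the matvec would come from a structured representation (e.g. a hierarchical or sparse factorization) rather than a dense matrix, so each probe costs far less than the O(n^2) product used in this toy check.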
There is a clear need to bring together computational and applied mathematicians, statisticians and subject matter researchers to push the field beyond its present practices, and in particular to investigate (a) new models that move beyond the very narrow classes of covariance functions in current use, even under stationarity; (b) how to express and understand the structure in the modeled process and exploit it in the computation phase; (c) novel applications of Gaussian processes in classical scientific computing tasks; and (d) rigorous error analysis of the associated methodology.