Description
Climate models are important tools for understanding past, current, and future global climate variability, yet they exhibit key uncertainties that limit their applicability to fine-scale analysis and future projections. Key sources of uncertainty include coarse grid resolution, inadequate representation of relevant physics and interactions, overfitting from downscaling and bias correction, a lack of observations with which to calibrate and evaluate models, uncertain model parameters, and differing model structures. In addition, coupled climate models are computationally expensive and thus difficult to use for uncertainty analysis, while reduced-complexity models are fast and flexible but highly parameterized and lacking in physics. These computational tradeoffs pose major challenges for evaluating and comparing model results, constructing reliable projections, and quantifying relevant uncertainties. The workshop will bring together researchers from multiple disciplines to highlight new mathematical and statistical methods for climate model evaluation and uncertainty quantification across spatial and temporal scales, and to advance our understanding of the physical processes leading to model errors, biases, and uncertainty.
This workshop will include a poster session. The form for submitting a poster proposal will be available below after registration.
Organizers
Speakers
Poster Session
The posters that have been submitted for the poster session are available on the poster session page.
Schedule
Speaker: Dorit Hammerling (Colorado School of Mines)
Speaker: Matthias Katzfuss (Texas A&M University, College Station)
Speaker: Linda Mearns (UCAR)
Speaker: Steve Sain (Jupiter Intelligence)
Speaker: Noah Diffenbaugh (Stanford University)
Noah S. Diffenbaugh and Elizabeth A. Barnes

Leveraging artificial neural networks (ANNs) trained on climate model output, we use the spatial pattern of historical temperature observations to predict the time until critical global warming thresholds are reached. Although no observations are used during training, validation, or testing, the ANNs accurately predict the timing of historical global warming from maps of historical annual temperature. The central estimate for the 1.5°C global warming threshold is between 2033 and 2035, with a ±1 sigma range of 2028 to 2039, in the Intermediate (SSP2-4.5) climate forcing scenario, consistent with previous assessments. However, our data-driven approach also suggests a substantial probability of exceeding the 2°C threshold even in the Low (SSP1-2.6) climate forcing scenario. While there are limitations to our approach, our results suggest a higher likelihood of reaching 2°C in the Low scenario than indicated in previous assessments, though the possibility that 2°C could be avoided is not completely ruled out. Explainable AI (XAI) methods reveal that the ANNs focus on particular geographic regions to predict the time until the global threshold is reached. Our framework provides a unique, data-driven approach for quantifying the signal of climate change in historical observations and for constraining the uncertainty in climate model projections. Given the substantial existing evidence of accelerating risks to natural and human systems at 1.5°C and 2°C, our results provide further evidence for high-impact climate change over the next three decades.
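The training setup the abstract describes can be sketched, in heavily simplified form, as a small feed-forward network that regresses "years until a warming threshold" on flattened temperature-anomaly maps. Everything below is synthetic and illustrative: the data generator, network size, and learning rate are assumptions, and the actual study trains convolutional networks on climate-model output rather than the toy fields used here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for climate-model output (an illustrative assumption):
# each sample is a flattened 8x16 annual temperature-anomaly "map" whose
# amplitude along a fixed spatial pattern grows as the threshold approaches.
n_samples, n_grid = 200, 8 * 16
pattern = rng.normal(size=n_grid)      # fixed spatial warming fingerprint
y = rng.uniform(5.0, 40.0, n_samples)  # target: years until the threshold
X = (40.0 - y)[:, None] * pattern[None, :] + rng.normal(scale=2.0, size=(n_samples, n_grid))
X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize each grid cell

# Minimal one-hidden-layer ANN trained by full-batch gradient descent
# on mean-squared error.
W1 = rng.normal(scale=0.05, size=(n_grid, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.05, size=16); b2 = 0.0
lr = 1e-3
losses = []
for epoch in range(500):
    h = np.maximum(X @ W1 + b1, 0.0)   # ReLU hidden layer
    pred = h @ W2 + b2
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagate the mean-squared-error gradient
    gW2 = h.T @ err / n_samples
    gb2 = err.mean()
    dh = np.outer(err, W2) * (h > 0.0)
    gW1 = X.T @ dh / n_samples
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

The training loss should decrease monotonically for a small enough learning rate; the XAI step mentioned in the abstract would then interrogate which grid cells the trained network relies on, which is beyond this sketch.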
Speaker: Doug Nychka (Colorado School of Mines)
Speaker: Tapio Schneider (Caltech)
Speaker: Peter F. Craigmile (The Ohio State University)
Speaker: Gavin Schmidt (NASA Goddard Institute for Space Studies)
Speaker: Patrick Heimbach (University of Texas, Austin)
Speaker: Chris Smith (University of Leeds)
Speaker: Trevor Harris (University of Illinois at Urbana-Champaign)
Speaker: Chris Wikle (University of Missouri)
Environmental spatio-temporal processes are governed by complex dynamical interactions across multiple scales and multiple processes. Often the time scales or multi-process complexity of such processes preclude the effective specification of a mechanistic motivation for the model, particularly in low-order and stochastic representations. It is particularly challenging to specify parameterizations for nonlinear dynamic spatio-temporal models (DSTMs) that are simultaneously scientifically useful, computationally efficient, and amenable to proper uncertainty quantification. We describe a recent approach that utilizes a deep convolutional neural network to learn the kernel mapping function in a computationally efficient, state-dependent integro-difference equation (IDE) DSTM. The most important aspect of this work is that it demonstrates remarkable “transfer learning” potential: it can predict a geophysical system quite different from the one that produced its training data. However, this approach does not learn the functional form of the relevant dynamics. Recently, there has been interest in several academic communities in using data-driven discovery methods to learn the fundamental dynamical mechanisms present in data. These approaches have shown promise in simulations of low-dimensional systems without a great deal of noise or missing data in the observations or process. Here, we present a statistical spatio-temporal dynamics discovery approach that accommodates noisy observations and stochastic dynamics.
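As a concrete illustration of the integro-difference dynamics underlying such a DSTM, the following minimal NumPy sketch simulates one spatial dimension with a hand-fixed Gaussian redistribution kernel. In the approach described above, the kernel mapping is instead state-dependent and learned from data by a deep convolutional neural network (not shown here); all grid sizes and parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-D spatial grid (a real application would use a 2-D lat-lon field);
# grid size and kernel parameters are illustrative assumptions.
n = 64
s = np.linspace(0.0, 1.0, n)

def ide_kernel(grid, shift, scale):
    """Row-normalized Gaussian redistribution kernel k(s, r).
    `shift` produces advection; `scale` produces diffusive spread."""
    diff = grid[:, None] - grid[None, :] - shift
    K = np.exp(-0.5 * (diff / scale) ** 2)
    return K / K.sum(axis=1, keepdims=True)

# One step of the discretized stochastic IDE:
#   Y_{t+1}(s_i) = sum_j k(s_i, r_j) Y_t(r_j) + eta_t(s_i)
K = ide_kernel(s, shift=0.02, scale=0.03)
Y = np.exp(-0.5 * ((s - 0.3) / 0.05) ** 2)  # initial Gaussian blob at s = 0.3
for t in range(10):
    Y = K @ Y + rng.normal(scale=0.01, size=n)

# The kernel's shift advects the blob to the right while its scale spreads it.
peak_location = s[np.argmax(Y)]
```

With the shift of 0.02 per step, ten steps move the blob's peak from near 0.3 to near 0.5; a shifted, asymmetric kernel is the standard IDE device for encoding advective transport, which is why learning the kernel map amounts to learning the dynamics.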