Research Workshop

Confronting Climate Change

March 1-5, 2021

Registration will close on February 25, 2021

This workshop will take place online.

Organizers

  • Vera Hur (Mathematics, UIUC)
  • Bo Li (Statistics, UIUC)
  • Robert Rosner (Astrophysics, Chicago)
  • Ryan Sriver (Atmospheric Sciences, UIUC)
  • Robert Trapp (Atmospheric Science, UIUC)

Description

The workshop will bring together leaders in mathematics, statistics, and atmospheric sciences to confront grand climate challenges and their impacts. It will also serve as a precursor to a semester-long program. The program has three major goals: to develop next-generation suites of science-driven mathematical and statistical tools and capabilities for addressing decision-relevant climate hazards and impacts; to foster new multidisciplinary collaborations through workshops between host universities and partner institutions; and to integrate young scientists and researchers into academic, national-laboratory, and industry research through workshops and embedded research projects with affiliated universities, national labs, and private companies.

The frequency, duration, and intensity of climate and weather extremes, such as extreme precipitation events, hurricanes, droughts, heat waves, floods, and severe weather outbreaks, are changing. Extremes such as these pose major risks to natural and human systems at local to regional scales. New mathematical and statistical techniques are crucial to understanding the dynamics and interactions between global climate and decision-relevant regional impacts and human health hazards. New methods and diagnostic tools are needed to evaluate weather and climate properties and extremes using a combination of observations, models, and downscaled products, focusing on decision-relevant time scales and with an expanded sampling of known uncertainties.

Confirmed Speakers

  • Amy Braverman (Jet Propulsion Laboratory, Caltech)
  • Tamma Carleton (University of California – Santa Barbara)
  • Edwin Gerber (New York University)
  • Chris Jones (University of North Carolina, Chapel Hill)
  • Mikyoung Jun (Texas A&M University/University of Houston)
  • Klaus Keller (Penn State)
  • Boualem Khouider (University of Victoria)
  • Robert Lund (University of California – Santa Cruz)
  • Raymond Pierrehumbert (Oxford)
  • Leslie Smith (University of Wisconsin – Madison)
  • Richard Smith (University of North Carolina, Chapel Hill)
  • Susan Solomon (MIT)
  • Michael Stein (Rutgers)
  • Laure Zanna (New York University)

Monday, March 1

All times CST
9:15-9:30  Introductory remarks
9:30-10:30  Robert Lund (University of California – Santa Cruz): Changepoints in Climate Data

This talk presents methods to estimate the number of changepoint times and their locations in time-ordered (correlated) data sequences. A penalized likelihood objective function is developed from minimum description length (MDL) information theory principles; optimizing it yields estimates of the number of changepoints and their locations. Unlike classical AIC and BIC penalties, which depend only on the total number of model parameters, our penalty also incorporates information on where the changepoints lie: changepoints that occur close together are penalized more heavily.

The methods are used to analyze two climate series. The first is a time series of annual precipitation totals from New Bedford, Massachusetts. The second is the North Atlantic Basin tropical cyclone record. In the latter data set, we find that the US entered a period of enhanced tropical cyclone activity circa 1993 that prevails today.
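
As a rough illustration of the flavor of this approach (a minimal sketch in Python, not the speaker's formulation: the Gaussian mean-shift segment cost, the particular code lengths, and the minimum segment length are all simplifying assumptions), a dynamic program can minimize a fit-plus-description-length objective in which closely spaced changepoints pay more:

    import numpy as np

    def segment_cost(y):
        # Negative Gaussian log-likelihood of one segment with its own mean
        # (variance fixed at 1, constants dropped) -- a toy fitting criterion.
        return 0.5 * np.sum((y - y.mean()) ** 2)

    def mdl_changepoints(y, min_len=5):
        # Optimal partitioning by dynamic programming: best[t] is the minimum
        # cost of the first t observations. A changepoint at time s > 0 pays
        # a code length log2(s) for its location plus 0.5 * log2(length) for
        # the new segment's mean, so short, crowded segments are expensive.
        n = len(y)
        best = np.full(n + 1, np.inf)
        best[0] = 0.0
        prev = np.zeros(n + 1, dtype=int)
        for t in range(min_len, n + 1):
            for s in range(0, t - min_len + 1):
                pen = 0.0 if s == 0 else np.log2(s) + 0.5 * np.log2(t - s)
                c = best[s] + segment_cost(y[s:t]) + pen
                if c < best[t]:
                    best[t], prev[t] = c, s
        cps, t = [], n
        while t > 0:            # walk the backpointers to recover locations
            t = prev[t]
            if t > 0:
                cps.append(t)
        return sorted(cps)

    # One simulated mean shift at t = 60; the method should recover it.
    rng = np.random.default_rng(0)
    y = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(2.0, 1.0, 40)])
    print(mdl_changepoints(y))  # e.g. [60], up to noise
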
11:00-12:00  Laure Zanna (New York University): A Probabilistic View of the Oceans in Climate Change
12:00-1:30  Lunch
1:30-2:30  Richard Smith (University of North Carolina, Chapel Hill): Detection and Attribution for Spatial Extremes

"Detection and attribution" is a statistical technique used in climate science to determine the extent to which climate change is due to human causes, by comparing observed climate trends with climate model projections under both anthropogenic and non-anthropogenic forcing scenarios. In recent years a large literature, including an NRC report, has applied these concepts to climate extremes. Nevertheless, it is my view that the methods are still far from their final form. In this talk, I will discuss methodologies for applying the formal statistical methods of extreme value theory to this problem, and also discuss extensions that take into account the spatial nature of extreme events. The methods will be illustrated with reference to Hurricane Harvey and, if time permits, some recently started analyses assessing the hurricane risk to North Carolina.
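
For background, the basic event-attribution calculation that extreme value theory enables can be sketched as follows (synthetic numbers throughout; a generic illustration, not the speaker's Harvey analysis): fit generalized extreme value (GEV) distributions to ensembles with and without anthropogenic forcing, then compare exceedance probabilities for the observed event.

    import numpy as np
    from scipy.stats import genextreme

    # Synthetic stand-ins for annual-maximum rainfall (mm) from climate model
    # ensembles with and without anthropogenic forcing -- invented numbers.
    rng = np.random.default_rng(1)
    natural = genextreme.rvs(c=-0.1, loc=100, scale=15, size=500, random_state=rng)
    forced = genextreme.rvs(c=-0.1, loc=112, scale=17, size=500, random_state=rng)

    threshold = 170.0  # magnitude of the observed extreme event

    # Fit a GEV to each ensemble, then compare exceedance probabilities.
    p1 = genextreme.sf(threshold, *genextreme.fit(forced))
    p0 = genextreme.sf(threshold, *genextreme.fit(natural))
    print(f"risk ratio p1/p0 = {p1 / p0:.2f}")
    print(f"fraction of attributable risk 1 - p0/p1 = {1 - p0 / p1:.2f}")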

Tuesday, March 2

9:30-10:30  Chris Jones (University of North Carolina, Chapel Hill): Rate-Induced Tipping and its Relevance to Climate

11:00-12:00  Leslie Smith (University of Wisconsin – Madison): Probing the Dynamical Role of Water using a Balanced-Unbalanced Decomposition

Atmospheric variables are often decomposed into balanced and unbalanced components that represent, respectively, low-frequency variability and high-frequency dispersive waves. Such a decomposition underlies theory, modeling, and forecasting, but traditionally does not account for phase changes of water, since the latter create a piecewise operator that changes across phase boundaries (dry versus cloudy air). Here we present a balanced-unbalanced decomposition for moist dynamics with phase changes and rainfall, and discuss some applications for understanding canonical weather phenomena and water transport.
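
For orientation, the dry prototype of such a decomposition (standard rotating shallow water theory; the talk's moist, phase-changing extension is substantially more involved) splits the state into a balanced vortical mode, recovered from potential vorticity, and unbalanced inertia-gravity waves:

    % Dry prototype: linearized rotating shallow water (f-plane, mean depth H).
    \partial_t u - f v + g\,\partial_x h = 0, \quad
    \partial_t v + f u + g\,\partial_y h = 0, \quad
    \partial_t h + H(\partial_x u + \partial_y v) = 0
    % The linearized potential vorticity is exactly conserved:
    q = \partial_x v - \partial_y u - \tfrac{f}{H}\,h, \qquad \partial_t q = 0
    % Balanced part: invert q subject to geostrophic balance; the remainder
    % consists of inertia-gravity waves with frequencies at least f:
    f u_b = -g\,\partial_y h_b, \quad f v_b = g\,\partial_x h_b, \qquad
    \omega^2 = f^2 + gH\,|\mathbf{k}|^2
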
12:00-1:30  Lunch
1:30-2:30  Amy Braverman (Jet Propulsion Laboratory, Caltech): Uncertainty Quantification for Remote Sensing Data Products used in Climate Science and Application

Remote sensing data sets produced by NASA and other space agencies are a vast resource for the study of climate change and the physical processes that drive it. However, no remote sensing instrument actually observes these processes directly; the instruments collect electromagnetic spectra aggregated over two-dimensional ground footprints or three-dimensional voxels, or sometimes just at a single point location. Inference on physical state based on these spectra occurs via a complex ground data processing infrastructure featuring a retrieval algorithm (so named because it retrieves latent true states from spectra), which typically provides point estimates and sometimes accompanying uncertainties. The method and the rigor by which uncertainties are derived vary by mission, and a key challenge is keeping up with the volume of data that needs to be processed. In this talk I will use an upcoming mission that is currently in the planning stages as a vehicle to explain both traditional and newer approaches to uncertainty quantification (UQ) for remote sensing data products. I hope that delving into some of the details will provide a better understanding of the strengths and weaknesses of remote sensing data for climate change research.
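
As background, the traditional retrieval formulation (Rodgers-style optimal estimation, stated generically here rather than for any particular mission) poses the problem as regularized inversion of a forward model, with the posterior spread supplying the reported uncertainty:

    % Observed radiances y, latent geophysical state x, forward model F:
    y = F(x) + \epsilon, \qquad \epsilon \sim N(0, S_\epsilon), \qquad
    x \sim N(x_a, S_a)\ \text{(prior)}
    % The retrieval is the maximum a posteriori state; the curvature of the
    % objective at \hat{x} yields the reported posterior uncertainty:
    \hat{x} = \arg\min_x\;
    (y - F(x))^\top S_\epsilon^{-1} (y - F(x))
    + (x - x_a)^\top S_a^{-1} (x - x_a)
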

Wednesday, March 3

9:30-10:30  Michael Stein (Rutgers): Parametric Models for Distributions When Extremes Are of Interest

For many problems of inference about a marginal distribution function, while the entire distribution is important, extreme quantiles are of particular interest because rare outcomes may have large consequences. In climatological applications, extremes in both tails of the distribution can be impactful. A possible approach in this setting is to use parametric families of distributions that have flexible behavior in both tails. One way to quantify this property is to require that, for any two generalized Pareto distributions, there is a member of the parametric family that behaves like one of the generalized Pareto distributions in the upper tail and like the negative of the other generalized Pareto distribution in the lower tail. This talk describes some specific quantifications of this notion and proposes parametric families of distributions that satisfy them. These families all have closed-form expressions for their densities and, hence, are convenient for likelihood-based inference. An application to climate model output shows the family works well when applied to daily average January temperature near Calgary, for which the evolving distribution over time due to climate change is difficult to model accurately with any standard parametric family. Time permitting, work by Mitchell Krock on extensions of this model to multivariate distributions will be described.
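
To make the two-tail requirement concrete (notation added here; the abstract states the property in words), recall the generalized Pareto survival function and what is being asked of the family:

    % Generalized Pareto survival function (scale \sigma > 0, shape \xi):
    \bar F_{\sigma,\xi}(x) = \bigl(1 + \xi x/\sigma\bigr)_+^{-1/\xi}, \qquad
    \bar F_{\sigma,0}(x) = e^{-x/\sigma}
    % Two-tail flexibility: for any (\sigma_1,\xi_1) and (\sigma_2,\xi_2),
    % the family should contain an X whose tails satisfy, as x \to \infty,
    P(X > x) \sim c_1\,\bar F_{\sigma_1,\xi_1}(x), \qquad
    P(X < -x) \sim c_2\,\bar F_{\sigma_2,\xi_2}(x)
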
11:00-12:00  Boualem Khouider (University of Victoria): Improving Tropical Climate Simulations with Stochastic Models for Clouds

Clouds and moist convection in the tropics are among the largest sources of uncertainty in the state-of-the-art earth system models (ESMs) used for long-range weather prediction and climate change projections. The difficulty arises from the fact that these models are based on a discretization of the equations of motion with horizontal grid mesh sizes between 10 km and 200 km. These grids are too coarse to resolve clouds and moist dynamics in the tropics and the associated dynamical and thermodynamical processes, such as convective flows and latent heat exchange with the environment due to the phase changes of water. As in many applications involving turbulent and multi-scale flows, the unresolved processes, or rather their effect on the resolved scales, are handled by sub-grid scale models often called parameterizations.

The state-of-the-art parameterizations of clouds and moist convection in the tropics are based on a theory of large ensembles, known as quasi-equilibrium (QE) theory, which fails dramatically to capture the most apparent modes of climate and weather variability in the tropics, operating on scales of thousands to tens of thousands of kilometres, such as the celebrated Madden-Julian oscillation (MJO) and monsoon intra-seasonal oscillations with periods of 40 to 100 days. Contrary to the QE assumption, which requires some sort of scale separation between the resolved and the parameterized scales, convection in the tropics is organized on a hierarchy of scales ranging from cloud cells of 1 km to 10 km to planetary-scale disturbances such as the MJO and monsoon oscillations. The dynamical and thermodynamical interactions across this vast range of temporal and spatial scales involve multi-scale tropical convective systems, known as convectively coupled waves, that are embedded in and interact with each other. The QE approximation tacitly makes the unresolved processes dynamically slaved to the resolved waves, and is thus unable to capture or represent the small-scale fluctuations and their impact on the large-scale waves.

To overcome this dilemma, we use the framework of stochastic interacting particle systems to build stochastic birth-death models for the multiple cloud types that are known to dominate organized tropical convection. Bayesian (machine learning-like) inference techniques are used in tandem to learn key parameters of the cloud-cloud interactions, namely the associated transition time scales from one cloud type to another, based on radar data. The resulting stochastic multi-cloud models have been successfully tested and implemented in research and operational ESMs, and important improvements in the simulation of both the mean climatology and the large-scale tropical modes of variability, such as the MJO and monsoons, have been established.
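
A toy version of the stochastic ingredient (a minimal sketch: the congestus/deep/stratiform cloud types are standard, but the rates, lattice size, and absence of coupling to the resolved dynamics are invented simplifications) evolves aggregate cloud-type counts by exact Gillespie simulation of a birth-death chain:

    import numpy as np

    rng = np.random.default_rng(2)

    # States per lattice site: 0 clear, 1 congestus, 2 deep, 3 stratiform.
    # Per-site transition rates (1/hour) -- invented values; in the real
    # models these time scales are inferred from radar data.
    RATES = {(0, 1): 0.5, (1, 2): 0.3, (1, 0): 0.2, (2, 3): 0.4, (3, 0): 0.25}

    def gillespie(n_sites=400, t_end=48.0):
        # Exact simulation of the Markov jump process on aggregate counts.
        moves = list(RATES.items())
        counts = np.array([n_sites, 0, 0, 0], dtype=int)
        t, history = 0.0, [(0.0, counts.copy())]
        while t < t_end:
            props = np.array([counts[a] * r for (a, b), r in moves])
            total = props.sum()
            if total == 0:
                break
            t += rng.exponential(1.0 / total)  # waiting time to next event
            (a, b), _ = moves[rng.choice(len(moves), p=props / total)]
            counts[a] -= 1
            counts[b] += 1
            history.append((t, counts.copy()))
        return history

    final_t, final_counts = gillespie()[-1]
    print(f"t = {final_t:.1f} h, [clear, cong, deep, strat] = {final_counts}")

In the multicloud framework proper, the transition rates are not constants but depend on the resolved large-scale state, which is what lets the stochastic cloud field feed back on the convectively coupled waves.
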
12:00-1:30  Lunch

Thursday, March 4

9:30-10:30  Klaus Keller (Penn State): From Decision Making to Basic Research (and Back)
11:00-12:00  Tamma Carleton (University of California – Santa Barbara): TBD
12:00-1:30  Lunch
1:30-2:30  Susan Solomon (MIT): Climate Change Science and Policy: Hope for Our Planet

Friday, March 5

9:30-10:30  Mikyoung Jun (Texas A&M University/University of Houston): Statistical and Machine Learning Methods Applied to the Prediction of Tropical Rainfall

We explore the use of three statistical and machine learning methods (a generalized linear model, random forest, and neural network) to predict the occurrence and rain rate distribution of three tropical rain types (deep convective, stratiform, and shallow convective) observed by the radar onboard the GPM satellite over the Pacific. Three-hourly temperature and moisture fields from MERRA-2 were used as predictors. While all three methods perform reasonably well at predicting the occurrence of each rain type, they all struggle to reproduce the heavy-tailed rain rate distributions of the three types, as well as their spatial patterns. While the neural net is the only method that can produce extreme rain amounts, it suffers from a serious overfitting problem even with a moderate number of hidden layers. We will discuss these challenges and the directions we are currently taking to overcome them.
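
A skeletal version of the occurrence-prediction step (a minimal sketch: synthetic data stand in for the GPM rain-type labels and MERRA-2 predictors, and the feature construction is a placeholder) looks like this in scikit-learn:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)
    n = 5000

    # Placeholder predictors: e.g. column moisture and two temperature levels;
    # a synthetic rule stands in for "deep convective rain occurred".
    X = rng.normal(size=(n, 3))
    y = ((0.8 * X[:, 0] - 0.5 * X[:, 1]
          + 0.3 * rng.normal(size=n)) > 0.5).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print(f"held-out occurrence accuracy: {clf.score(X_te, y_te):.3f}")

Predicting occurrence this way is the easy half; as the abstract notes, reproducing the heavy-tailed rain rate distributions conditional on occurrence is where all three methods struggle.
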
11:00-12:00  Raymond Pierrehumbert (Oxford): What happens on the fat-tail of high climate sensitivity?

Climate sensitivity (the amount of global mean warming expected from a unit change in top-of-atmosphere radiative forcing) is the key diagnostic determining the severity of global warming, and the "fat tail" corresponding to very high climate sensitivity but low (or unquantifiable) a priori probability has deep implications for climate impacts, since expected harms are typically dominated by fat-tail events. The standard analysis of climate sensitivity is based on linearisation of the energy balance equation, but in fact as (linearised) climate sensitivity increases, the next-order nonlinear terms assume increasing importance. We discuss several implications of this elementary fact for the transient and equilibrium behaviour of climate in response to elevated CO2.

First, as linear climate sensitivity increases, detecting the magnitude of climate sensitivity through observations of the Earth's climate trajectory and its energy budget becomes increasingly difficult; we call this the Early Warning Problem. In the extreme, the Earth could be headed for a runaway greenhouse (we emphasise we do not think this at all likely), but the evidence for this trajectory would not become apparent for a century or more. Second, very high climate sensitivity affects the question of whether temperature continues to rise after net CO2 emissions are brought to zero. We will digress and discuss other factors that can cause committed warming after cessation of emissions of long-lived gases, notably those related to the land carbon cycle. Third, when linearised climate sensitivity is high, the system is generically close to a bifurcation, but there is no advance warning of the "width" of the bifurcation; the bifurcation could be "small" (loss of Arctic sea ice) or "large" (dissipation of low clouds, or even worse, transition to a runaway greenhouse). We will digress to a general discussion of the way low cloud feedbacks affect the bifurcation structure of the climate system. Finally, we will discuss the behaviour of stochastically forced climate-like systems near a bifurcation, and speculate on the relation of these results to the behaviour of high-dimensional climate models.
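
The elementary fact the talk starts from can be written down in one line (a schematic energy balance; the quadratic coefficient a simply stands for the leading nonlinearity):

    % Top-of-atmosphere imbalance N with forcing F, feedback \lambda > 0,
    % and a schematic quadratic correction a > 0:
    N = F - \lambda\,\Delta T + a\,\Delta T^2
    % Setting N = 0 gives the equilibrium warming
    \Delta T_{\mathrm{eq}} = \frac{\lambda - \sqrt{\lambda^2 - 4aF}}{2a}
    \ \xrightarrow{\ a \to 0\ }\ \frac{F}{\lambda}
    % High linear sensitivity means small \lambda, and the equilibrium exists
    % only while \lambda^2 > 4aF: the system then sits near the saddle-node
    % bifurcation at \lambda^2 = 4aF, beyond which warming runs away.
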
12:00-1:30  Lunch
1:30-2:30  Edwin Gerber (New York University): Atmospheric model hierarchies: A bridge from theory to climate prediction

To apply for this program, you must first register for an account and then log in. Refreshing this page should then bring up the application form. Note that, due to requirements related to our NSF grant, you will only be able to apply for funding to attend if you have linked an ORCID® iD to your account. You will have an opportunity to create (if necessary) and connect an ORCID iD to your account once you’ve registered.