March 1-5, 2021

The workshop will bring together leaders in mathematics, statistics, and atmospheric sciences to confront grand climate challenges and their impacts. A major goal of the program will be to develop next-generation suites of science-driven mathematical and statistical tools and capabilities to address decision-relevant climate hazards and impacts.

**Organizers**

- Vera Hur (Mathematics, UIUC)
- Bo Li (UIUC)
- Robert Rosner (Astrophysics, Chicago)
- Ryan Sriver (UIUC)
- Robert Trapp (Atmospheric Science, UIUC)

**Description**

The workshop will bring together leaders in mathematics, statistics, and atmospheric sciences to confront grand climate challenges and their impacts. It will also serve as a precursor to a semester-long program. A major goal of the program will be to develop next-generation suites of science-driven mathematical and statistical tools and capabilities to address decision-relevant climate hazards and impacts, foster new multidisciplinary collaborations through workshops between host universities and partner institutions, and integrate young scientists and researchers into industry, private sector, and academic research through workshops and embedded research projects with affiliated universities, national labs, and private industry.

The frequency, duration, and intensity of climate and weather extremes, such as extreme precipitation events, hurricanes, droughts, heat waves, floods, and severe weather outbreaks, are changing. Climate extremes such as these pose major risks to natural and human systems at local to regional scales. New mathematical and statistical techniques are crucial to understanding the dynamics and interactions between global climate and decision-relevant regional impacts and human health hazards. New methods and diagnostic tools are needed to evaluate weather/climate properties and extremes using a combination of observations, models, and downscaled products, focusing on decision-relevant time scales and with an expanded sampling of known uncertainties.

**Confirmed Speakers**

- Amy Braverman (Jet Propulsion Laboratory, Caltech)
- Tamma Carleton (University of California – Santa Barbara)
- Edwin Gerber (New York University)
- Chris Jones (University of North Carolina, Chapel Hill)
- Mikyoung Jun (Texas A&M University/University of Houston)
- Klaus Keller (Penn State)
- Boualem Khouider (University of Victoria)
- Robert Lund (University of California – Santa Cruz)
- Raymond Pierrehumbert (Oxford)
- Leslie Smith (University of Wisconsin – Madison)
- Richard Smith (University of North Carolina, Chapel Hill)
- Susan Solomon (MIT)
- Michael Stein (Rutgers)
- Laure Zanna (New York University)

9:15-9:30 | Introductory remarks |

9:30-10:30 | Robert Lund (University of California – Santa Cruz) Changepoints in Climate Data. This talk presents methods to estimate the number of changepoint times and their locations in time-ordered (correlated) data sequences. A penalized likelihood objective function is developed from minimum description length information theory principles. Optimizing the objective function yields estimates of the changepoint numbers and location time(s). Our model penalty incorporates information on where the changepoint(s) lie, and is not solely based on the total number of model parameters (unlike classical AIC and BIC penalties). Specifically, changepoints that occur relatively close together are penalized more heavily. The methods are used to analyze two climate series. The first is a time series of annual precipitations from New Bedford, Massachusetts. The second is the North Atlantic Basin tropical cyclone record. In the latter data set, we find that the US entered a period of enhanced tropical cyclone activity circa 1993 that prevails today. |
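A toy, single-changepoint version of the penalized-likelihood idea in the abstract above can be sketched as follows. The penalty here (log terms in the two segment lengths, so that changepoints near either end cost more) is a simplified stand-in for the MDL criterion, not Lund's actual method, and the data are synthetic:

```python
import numpy as np

def mdl_single_changepoint(x):
    """Scan for one mean-shift changepoint by penalized likelihood.

    seg_cost is the profiled Gaussian negative log-likelihood of a segment
    (up to constants); the penalty charges log(tau) + log(n - tau), so short
    segments cost more.  A simplified stand-in for an MDL penalty.
    """
    n = len(x)

    def seg_cost(seg):
        return 0.5 * len(seg) * np.log(np.var(seg) + 1e-12)

    best_tau = None
    best_score = seg_cost(x) + np.log(n)          # no-changepoint model
    for tau in range(5, n - 5):
        score = (seg_cost(x[:tau]) + seg_cost(x[tau:])
                 + np.log(tau) + np.log(n - tau) + np.log(n))
        if score < best_score:
            best_tau, best_score = tau, score
    return best_tau

rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0.0, 1.0, 60),
                         rng.normal(2.5, 1.0, 60)])
tau_hat = mdl_single_changepoint(series)   # should land near the break at 60
```

A changepoint is accepted only when the likelihood gain outweighs the description-length penalty, which is what distinguishes this from a plain likelihood-ratio scan.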

11:00-12:00 | Laure Zanna (New York University) A Probabilistic View of the Oceans in Climate Change |

12:00-1:30 | Lunch |

1:30-2:30 | Richard Smith (University of North Carolina, Chapel Hill) Detection and Attribution for Spatial Extremes. “Detection and attribution” is a statistical technique used in climate science to determine the extent to which climate change is due to human causes, by comparing observed climate trends with climate model projections under both anthropogenic and non-anthropogenic forcing scenarios. In recent years, a huge literature, including an NRC report, has developed on the application of these concepts to climate extremes. Nevertheless, it is my view that the methods are still far from their final form. In this talk, I will discuss methodologies for applying the formal statistical methods of extreme value theory to this problem, and also discuss extensions that take into account the spatial nature of extreme events. The methods will be illustrated with reference to Hurricane Harvey and also, if time permits, some recently started analyses assessing the hurricane risk to North Carolina. |

9:30-10:30 | Chris Jones (University of North Carolina, Chapel Hill) Rate-Induced Tipping and its Relevance to Climate |

11:00-12:00 | Leslie Smith (University of Wisconsin – Madison) Probing the Dynamical Role of Water using a Balanced-Unbalanced Decomposition. Atmospheric variables are often decomposed into balanced and unbalanced components that represent, respectively, low-frequency variability and high-frequency dispersive waves. Such a decomposition underlies theory, modeling and forecasting, but traditionally does not account for phase changes of water, since the latter create a piecewise operator that changes across phase boundaries (dry versus cloudy air). Here we present a balanced-unbalanced decomposition for moist dynamics with phase changes and rainfall, and discuss some applications for understanding canonical weather phenomena and water transport. |

12:00-1:30 | Lunch |

1:30-2:30 | Amy Braverman (Jet Propulsion Laboratory, Caltech) Uncertainty Quantification for Remote Sensing Data Products used in Climate Science and Application. Remote sensing data sets produced by NASA and other space agencies are a vast resource for the study of climate change and the physical processes which drive it. However, no remote sensing instrument actually observes these processes directly; the instruments collect electromagnetic spectra aggregated over two-dimensional ground footprints or three-dimensional voxels, or sometimes just at a single point location. Inference on physical state based on these spectra occurs via a complex ground data processing infrastructure featuring a retrieval algorithm (so named because it retrieves latent true states from spectra), which typically provides point estimates and sometimes accompanying uncertainties. The method and the rigor by which uncertainties are derived varies by mission, and a key challenge is keeping up with the volume of data that needs to be processed. In this talk I will use an upcoming mission, currently in the planning stages, as a vehicle to explain both traditional and newer approaches to uncertainty quantification (UQ) for remote sensing data products. I hope that delving into some of the details will provide a better understanding of the strengths and weaknesses of remote sensing data for climate change research. |
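The retrieval step described above, reduced to its simplest textbook form (a linear Gaussian forward model with a Gaussian prior, i.e. the "optimal estimation" setting), is a Bayesian update. Every number below, including the Jacobian `K`, is invented for illustration:

```python
import numpy as np

# Hypothetical 3-state, 8-channel linear retrieval problem: y = K x + noise.
rng = np.random.default_rng(1)
K = rng.normal(size=(8, 3))        # Jacobian of the (linearized) forward model
S_a = np.eye(3)                    # prior covariance of the atmospheric state
S_e = 0.1 * np.eye(8)              # instrument noise covariance
x_true = np.array([0.5, -1.0, 2.0])
y = K @ x_true + rng.multivariate_normal(np.zeros(8), S_e)

# Posterior ("retrieved") state and covariance: the linear Gaussian update,
# with a zero prior mean for simplicity.
S_post = np.linalg.inv(np.linalg.inv(S_a) + K.T @ np.linalg.inv(S_e) @ K)
x_hat = S_post @ (K.T @ np.linalg.inv(S_e) @ y)
```

`S_post` is the per-retrieval uncertainty that, as the abstract notes, operational pipelines may or may not propagate to users; real retrievals are nonlinear and far higher-dimensional.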

9:30-10:30 |
Michael Stein (Rutgers)
Parametric Models for Distributions When Extremes Are of Interest
For many problems of inference about a marginal distribution function, while the entire distribution is important, extreme quantiles are of particular interest because rare outcomes may have large consequences. In climatological applications, extremes in both tails of the distribution can be impactful. A possible approach in this setting is to use parametric families of distributions that have flexible behavior in both tails. One way to quantify this property is to require that, for any two generalized Pareto distributions, there is a member of the parametric family that behaves like one of the generalized Pareto distributions in the upper tail and like the negative of the other generalized Pareto distribution in the lower tail. This talk describes some specific quantifications of this notion and proposes parametric families of distributions that satisfy these specifications. These families all have closed-form expressions for their densities and, hence, are convenient for likelihood-based inferences. An application to climate model output shows this family works well when applied to daily average January temperature near Calgary, for which the evolving distribution over time due to climate change is difficult to model accurately by any standard parametric family. Time permitting, work by Mitchell Krock on extensions of this model to multivariate distributions will be described. |
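The two-tailed flexibility described in the abstract can be illustrated with a toy spliced quantile function. Note that the splice is only a caricature: the families proposed in the talk are smooth with closed-form densities, whereas this construction simply glues a generalized Pareto upper tail to a negated one below the median:

```python
import numpy as np

def gpd_quantile(p, xi, sigma=1.0):
    """Quantile function of the generalized Pareto distribution (mu = 0)."""
    if abs(xi) < 1e-12:
        return -sigma * np.log1p(-p)               # exponential limit xi -> 0
    return sigma / xi * ((1.0 - p) ** (-xi) - 1.0)

def spliced_quantile(p, xi_lower, xi_upper):
    """Toy two-tailed quantile: negative GPD below the median, GPD above.

    Only the tail-flexibility *property* is illustrated here; this is not
    one of the closed-form families proposed in the talk.
    """
    if p >= 0.5:
        return gpd_quantile(2.0 * p - 1.0, xi_upper)
    return -gpd_quantile(1.0 - 2.0 * p, xi_lower)
```

The shape parameters `xi_lower` and `xi_upper` control the two tails independently, which is the requirement the abstract places on its parametric families.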

11:00-12:00 |
Boualem Khouider (University of Victoria)
Improving Tropical Climate Simulations with Stochastic Models for Clouds
Clouds and moist convection in the tropics are among the largest sources of uncertainty in state-of-the-art earth system models (ESMs) used for long-term weather predictions and climate change projections. The difficulty arises from the fact that these models are based on a discretization of the equations of motion using grid mesh sizes ranging between 10 km and 200 km in the horizontal. These grids are too coarse to resolve clouds and moist dynamics in the tropics, and the associated dynamical and thermodynamical processes such as convective flows and latent heat exchange with the environment due to the phase change of water substances. As in many applications involving turbulent and multi-scale flows, the unresolved scale processes, or rather their effect on the resolved scales, are handled by sub-grid scale models often called parameterizations. The state-of-the-art parameterizations of clouds and moist convection in the tropics are based on a theory of large ensembles, known as the quasi-equilibrium (QE) theory, which fails dramatically to capture the most apparent modes of climate and weather variability in the tropics that operate on scales of thousands to tens of thousands of kilometres, such as the celebrated Madden–Julian oscillation (MJO) and monsoon intra-seasonal oscillations with periods of 40 to 100 days. Contrary to the QE assumption, which requires some sort of scale separation between the resolved and the parameterized scales, convection in the tropics is organized on a hierarchy of scales ranging from cloud cells of 1 km to 10 km to planetary-scale disturbances such as the MJO and monsoon oscillations. The dynamical and thermodynamical interactions across this vast range of temporal and spatial scales involve multi-scale tropical convective systems, known as convectively coupled waves, that are embedded in and interact with each other.
The QE approximation tacitly makes the unresolved processes dynamically slaved to the resolved waves and thus unable to capture or represent the small-scale fluctuations and their impact on the large-scale waves. To overcome this dilemma, we use the framework of stochastic interacting particle systems to build stochastic birth and death models for the multiple cloud types that are known to dominate organized tropical convection. Bayesian (machine-learning-like) inference techniques are used in tandem to learn key parameters of the cloud-cloud interactions, namely the associated transition time scales from one cloud type to another, based on radar data. The resulting, so-called stochastic multicloud models have been successfully tested and implemented in research and operational ESMs, and important improvements in the simulation of both the mean climatology and the large-scale tropical modes of variability, such as the MJO and monsoons, have been established. |
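A caricature of the stochastic multicloud idea: treat one lattice site's cloud type as a continuous-time Markov chain and simulate it exactly (Gillespie's algorithm). The rate matrix below is invented; in the actual model the transition rates depend on large-scale predictors and are learned from radar data:

```python
import numpy as np

# Hypothetical transition rates (per hour) for one site's cloud state:
# 0 = clear, 1 = congestus, 2 = deep convective, 3 = stratiform.
Q = np.array([[-0.5,  0.4,  0.1,  0.0],
              [ 0.3, -0.8,  0.5,  0.0],
              [ 0.0,  0.0, -1.0,  1.0],
              [ 0.6,  0.0,  0.0, -0.6]])

def gillespie(Q, state, t_end, rng):
    """Exact (Gillespie) simulation of the CTMC with generator Q;
    returns the fraction of time spent in each state."""
    t = 0.0
    occupancy = np.zeros(Q.shape[0])
    while t < t_end:
        rate = -Q[state, state]                  # total rate of leaving state
        dt = rng.exponential(1.0 / rate)
        occupancy[state] += min(dt, t_end - t)   # truncate the final hold
        t += dt
        probs = Q[state].clip(min=0.0) / rate    # jump distribution
        state = rng.choice(len(probs), p=probs)
    return occupancy / t_end

rng = np.random.default_rng(2)
frac = gillespie(Q, 0, 20000.0, rng)             # long-run cloud fractions
```

Over a long run the time fractions approach the chain's stationary distribution; in an ESM, area fractions like these feed back into the resolved heating.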

12:00-1:30 | Lunch |

9:30-10:30 |
Klaus Keller (Penn State)
From Decision Making to Basic Research (and Back) |

11:00-12:00 |
Tamma Carleton (University of California – Santa Barbara)
Title and abstract TBD |

12:00-1:30 | Lunch |

1:30-2:30 |
Susan Solomon (MIT)
Climate Change Science and Policy: Hope for Our Planet |

9:30-10:30 |
Mikyoung Jun (Texas A&M University/University of Houston)
Statistical and Machine Learning Methods Applied to the Prediction of Tropical Rainfall
We explore the use of three statistical and machine learning methods (a generalized linear model, random forest, and neural network) to predict the occurrence and rain rate distribution of three tropical rain types (deep convective, stratiform, and shallow convective) observed by the radar onboard the GPM satellite over the Pacific. Three-hourly temperature and moisture fields from MERRA-2 were used as predictors. While all three methods perform reasonably well at predicting the occurrence of each rain type, they all struggle to reproduce the heavy-tailed rain rate distributions for all three types, as well as their spatial patterns. While the neural network is the only method that can produce extreme rain amounts, it suffers from a serious overfitting problem even with a moderate number of hidden layers. We will discuss these challenges and the current directions we are taking to overcome them. |
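The GLM component of the comparison can be sketched on synthetic data: a logistic regression for the occurrence of a single rain type, fit by gradient descent. The two predictors and their coefficients are invented stand-ins for the MERRA-2 temperature and moisture fields:

```python
import numpy as np

# Synthetic occurrence data for one rain type, driven by two invented
# predictors (think column humidity and low-level temperature).
# True coefficients: 1.5 and -1.0, with intercept -0.5.
rng = np.random.default_rng(3)
n = 2000
X = rng.normal(size=(n, 2))
logits = 1.5 * X[:, 0] - 1.0 * X[:, 1] - 0.5
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(float)

def fit_logistic(X, y, lr=0.1, iters=2000):
    """Fit the logistic GLM by full-batch gradient descent on the log-loss."""
    Xb = np.hstack([X, np.ones((len(X), 1))])    # append intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)        # gradient of mean log-loss
    return w

w = fit_logistic(X, y)    # should recover roughly (1.5, -1.0, -0.5)
```

Occurrence is the easy half of the problem; the abstract's harder target, the heavy-tailed rain *rate* distribution, is exactly what a model like this cannot represent.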

11:00-12:00 |
Raymond Pierrehumbert (Oxford)
What happens on the fat-tail of high climate sensitivity?
Climate sensitivity (the amount of global mean warming expected from a unit change in top-of-atmosphere radiative forcing) is the key diagnostic determining the severity of global warming, and the “fat tail” corresponding to very high climate sensitivity but low (or unquantifiable) a priori probability has deep implications for climate impacts, since expected harms are typically dominated by fat-tail events. The standard analysis of climate sensitivity is based on linearisation of the energy balance equation, but in fact, as (linearised) climate sensitivity increases, the next-order nonlinear terms assume increasing importance. We discuss several implications of this elementary fact for the transient and equilibrium behaviour of climate in response to elevated CO2. First, it is pointed out that as linear climate sensitivity increases, detecting the magnitude of climate sensitivity through observations of the Earth’s climate trajectory and its energy budget becomes increasingly difficult. We call this the Early Warning Problem. In the extreme, the Earth could be headed for a runaway greenhouse (we emphasise we do not think this at all likely), but the evidence for this trajectory would not become apparent for a century or more. Second, we point out that very high climate sensitivity affects the question of whether temperature continues to rise after net CO2 emissions are brought to zero. We will digress and discuss other factors that can cause committed warming after cessation of emissions of long-lived gases, notably those related to the land carbon cycle. Third, we will point out that when linearised climate sensitivity is high, the system is generically close to a bifurcation, but there is no advance warning of the “width” of the bifurcation; the bifurcation could be “small” (loss of Arctic sea ice) or “large” (dissipation of low clouds, or even worse, transition to a runaway greenhouse).
We will digress to a general discussion of the way low cloud feedbacks affect the bifurcation structure of the climate system. Finally, we will discuss the behaviour of stochastically forced climate-like systems that are near a bifurcation, and speculate on the relation of these results to the behaviour of high-dimensional climate models. |
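A minimal energy-balance calculation shows the nonlinearity the abstract describes. Assume (illustratively) that forcing is balanced as F = lam*dT - a*dT^2: the stable equilibrium warming is the smaller root, it exceeds the linear estimate F/lam by more and more as lam shrinks, and past the bifurcation at lam^2 = 4aF no equilibrium exists (the runaway regime). All numbers below are illustrative:

```python
import numpy as np

def equilibrium_warming(F, lam, a):
    """Smaller (stable) root of F = lam*dT - a*dT**2.

    F   : radiative forcing, W m^-2
    lam : linear feedback parameter, W m^-2 K^-1
    a   : quadratic destabilizing term, W m^-2 K^-2 (assumed value)
    Returns nan past the bifurcation lam**2 < 4*a*F (runaway regime).
    """
    disc = lam**2 - 4.0 * a * F
    if disc < 0.0:
        return float("nan")
    return (lam - np.sqrt(disc)) / (2.0 * a)

F, a = 3.7, 0.04     # roughly a CO2 doubling; 'a' is invented for illustration
lams = (1.2, 0.9, 0.8)
linear = [F / lam for lam in lams]                       # linearised estimate
nonlinear = [equilibrium_warming(F, lam, a) for lam in lams]
# The gap between the two grows rapidly as lam approaches 2*sqrt(a*F) ~ 0.77.
```

This is the sense in which "high linear sensitivity" and "proximity to a bifurcation" are two descriptions of the same regime.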

12:00-1:30 | Lunch |

1:30-2:30 |
Edwin Gerber (New York University)
Atmospheric model hierarchies: A bridge from theory to climate prediction |

In order to apply for this program, you must first register for an account and then login. Refreshing this page should then bring up the application form. Note that, due to requirements related to our NSF grant, you will only be able to apply for funding to attend if you have linked an ORCID® iD to your account. You will have an opportunity to create (if necessary) and connect an ORCID iD to your account once you’ve registered.

February 15-19, 2021

Computational Materials Science is a branch of the engineering sciences that lies at the intersection of many disciplines. It describes how materials deform, are damaged, and age. This workshop will identify questions where mathematics can play a significant role in the future.

**Organizers**

- Qiang Du (Applied Physics and Applied Mathematics, Columbia)
- Irene Fonseca (Mathematics, CMU)
- Richard James (Aerospace Engineering Mechanics, Minnesota)
- Claude Le Bris (CERMICS, Ecole Nationale des Ponts et Chaussees Paris)
- Jianfeng Lu (Mathematics, Duke)
- Danny Perez (Los Alamos National Lab)

**Description**

Computational Materials Science is a well-established branch of the engineering sciences that lies at the intersection of many disciplines: mechanics, computational techniques, numerical analysis, and mathematical theory. It describes how materials deform, are damaged, and age. These phenomena can be studied at various scales, from microscopic scales described using the framework of quantum mechanics, to macroscopic scales modeled with continuum mechanics, via intermediate mesoscopic scales where atomistic and molecular dynamics techniques are key. This workshop aims to bring together leading experts from all these disciplines, in order to identify challenging practical questions of major relevance where mathematics can play a significant role in the future.

**Confirmed Speakers**

- Grégoire Allaire (Ecole Polytechnique)
- Kaushik Bhattacharya (Caltech)
- Ludovic Chamoin (ENS Paris-Saclay)
- Selim Esedoglu (University of Michigan)
- Manuel Friedrich (Universität Münster)
- Vikram Gavini (University of Michigan)
- Miranda Holmes-Cerfon (New York University)
- Tony Lelièvre (Ecole des Ponts ParisTech)
- Lin Lin (UC Berkeley)
- Chun Liu (Illinois Institute of Technology)
- Mitchell Luskin (University of Minnesota)
- Noa Marom (Carnegie Mellon University)
- Maria Giovanna Mora (Università degli Studi di Pavia)
- Cyrill Muratov (New Jersey Institute of Technology)
- Felix Otto (Max Planck Institute, Leipzig)
- Christoph Ortner (University of Warwick)
- Sylvia Serfaty (New York University)
- Xiaochuan Tian (UC San Diego)
- Peter Voorhees (Northwestern University)
- Michael Weinstein (Columbia University)
- Barbara Zwicknagl (Technische Universität Berlin)

8:45-9:00 | Introductory remarks: IMSI Director and workshop organizers |

Morning session chair: Dick James (Aerospace Engineering and Mechanics, Minnesota) | |

9:00-10:00 | Felix Otto (Max Planck Institute, Leipzig) Bias in the Representative Volume Element method: periodize the ensemble instead of its realizations. We study the Representative Volume Element (RVE) method, which is a method to approximately infer the effective behavior $a_{\mathrm{hom}}$ of a stationary random medium, described by a coefficient field $a(x)$ and the corresponding linear elliptic operator $-\nabla\cdot a\nabla$. In line with the theory of homogenization, the method proceeds by computing $d=3$ ($d$ denoting the space dimension) correctors, however on a “representative” volume element, i.e. a box with, say, periodic boundary conditions. The main message of this talk is: periodize the ensemble instead of its realizations. By this we mean that it is better to sample from a suitably periodized ensemble than to periodically extend the restriction of a realization $a(x)$ from the whole-space ensemble $\langle\cdot\rangle$. We make this point by investigating the bias (or systematic error), i.e. the difference between $a_{\mathrm{hom}}$ and the expected value of the RVE method, in terms of its scaling w.r.t. the lateral size $L$ of the box. In the case of periodizing $a(x)$, we heuristically argue that this error is generically $O(L^{-1})$. In the case of a suitable periodization of $\langle\cdot\rangle$, we rigorously show that it is $O(L^{-d})$. In fact, we give a characterization of the leading-order error term for both strategies. We carry out the rigorous analysis in the convenient setting of ensembles $\langle\cdot\rangle$ of Gaussian type, which allow for a straightforward periodization, passing via the (integrable) covariance function. This setting also has the advantage of making Malliavin calculus available for optimal stochastic estimates of correctors. We actually need control of second-order correctors to capture the leading-order error term in the presence of cancellations due to point symmetry. This is joint work with Nicolas Clozeau, Marc Josien, and Qiang Xu. |
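A one-dimensional caricature of the RVE bias (not the talk's setting, where $d=3$ and correctors must be computed): in 1D the effective coefficient of a layered medium is the harmonic mean, and the naive RVE estimator over a box of L independent cells carries an O(1/L) systematic error that is visible numerically:

```python
import numpy as np

# 1D caricature: layered medium with iid lognormal coefficients per unit
# cell; homogenization gives the harmonic mean a_hom = 1 / E[1/a].  The
# naive RVE estimate over a box of L cells is biased at order 1/L.
rng = np.random.default_rng(4)
a_hom = 1.0 / np.exp(0.5 * 0.5**2)   # E[1/a] = exp(sigma^2/2), lognormal(0, 0.5)

def rve_bias(L, n_samples=20000):
    a = rng.lognormal(0.0, 0.5, size=(n_samples, L))
    est = 1.0 / np.mean(1.0 / a, axis=1)    # per-box RVE estimate of a_hom
    return np.mean(est) - a_hom             # systematic error (bias)

biases = [rve_bias(L) for L in (4, 16, 64)]  # expect roughly 4x decay per step
```

The talk's point is that a cleverer periodization of the *ensemble* improves this O(1/L) scaling to O(1/L^d); the sketch only exhibits the baseline rate.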

10:15-11:15 | Christoph Ortner (University of British Columbia) Interatomic Potentials from First Principles. Accurate molecular simulation requires computationally expensive quantum chemistry models, which makes simulating complex material phenomena or large molecules intractable. However, if no chemistry is required, but only interatomic forces, then it should in principle be possible to construct much cheaper surrogate models (interatomic potentials) that capture full QM accuracy. This talk will review recent attempts, with a focus on my personal analysis and numerical analysis perspectives, to achieve this. Specifically, I will explore whether we can rigorously justify the extremely low-dimensional functional forms proposed for interatomic potentials, and whether we can construct practical approximation schemes that can, in principle at least, close the complexity gap between density functional theory and interatomic potentials. |

11:15-12:45 | Lunch |

Afternoon session chair: Qiang Du (Applied Physics and Applied Mathematics, Columbia) | |

12:45-1:45 | Noa Marom (Carnegie Mellon University) Applications of Machine Learning in Materials Simulations (abstract TBD) |

2:00-3:00 | Kaushik Bhattacharya (Caltech) Multi-scale modeling of materials revisited: Accelerated computing and machine learning. The talk will describe some recent work on the application of accelerated computing and machine learning for multi-scale modeling of materials. We will demonstrate the ideas with illustrative applications and discuss open issues. |

Morning session chair: Claude Le Bris (CERMICS, Ecole Nationale des Ponts et Chaussées Paris) | |

9:00-10:00 | Grégoire Allaire (Ecole Polytechnique) Optimal design of lattice materials. This work is concerned with the topology optimization of so-called lattice materials, i.e., porous structures made of periodically perforated material, where the microscopic periodic cell can be macroscopically modulated and oriented. Lattice materials are becoming increasingly popular since they can be built by additive manufacturing techniques. The main idea is to optimize the homogenized formulation of this problem, which is an easy task of parametric optimization, and then to project the optimal microstructure at a desired length scale, which is a delicate issue, albeit computationally cheap. The main novelty of our work is, in a plane setting, the conformal treatment of the optimal orientation of the microstructure. In other words, although the periodicity cell has varying parameters and orientation throughout the computational domain, the angles between its members or bars are conserved. Several numerical examples are presented for compliance minimization in 2-d. Extension to the 3-d case will also be discussed. This is joint work with Perle Geoffroy-Donders and Olivier Pantz. |

10:15-11:15 | Barbara Zwicknagl (Humboldt-Universität zu Berlin) On variational models for the geometry of martensite needles (abstract TBD) |

11:15-12:45 | Lunch |

Afternoon session chair: Jianfeng Lu (Mathematics, Duke University) | |

12:45-1:45 | Vikram Gavini (University of Michigan) Large-scale electronic structure calculations. Electronic structure calculations, especially those using density functional theory (DFT), have been very useful in understanding and predicting a wide range of materials properties. The importance of DFT calculations to engineering and physical sciences is evident from the fact that ~20% of computational resources on some of the world’s largest public supercomputers are devoted to DFT calculations. Despite the wide adoption of DFT, and the tremendous progress in theory and numerical methods over the decades, the following challenges remain. Firstly, the state-of-the-art implementations of DFT suffer from cell-size and geometry limitations, with the widely used codes in solid-state physics being limited to periodic geometries and typical simulation domains containing a few hundred atoms. This limits the complexity of materials systems that can be treated using DFT calculations. Secondly, there are many materials systems (such as strongly correlated systems) where the widely used model exchange-correlation functionals, which account for the many-body quantum mechanical interactions between electrons, are inaccurate. Addressing these challenges will enable large-scale quantum-accuracy DFT calculations, and will significantly advance our predictive modeling capabilities to treat complex materials systems. This presentation will discuss our recent advances towards addressing the aforementioned challenges. In particular, the development of computational methods and numerical algorithms for conducting fast and accurate large-scale DFT calculations using adaptive finite-element discretization will be presented, which form the basis for the recently released DFT-FE open-source code. The computational efficiency, scalability and performance of DFT-FE will be presented, demonstrating a significant outperformance of widely used plane-wave DFT codes.
DFT studies on dislocations and biomolecules will be presented to showcase the capability of DFT-FE in handling large-scale systems. Finally, recent efforts, and related thoughts, towards developing a framework for a data-driven approach to improve the exchange-correlation description in DFT will also be discussed. This is joint work with Phani Motamarri (IISc/U. Michigan), Sambit Das (U. Michigan) and Bikash Kanungo (U. Michigan). |

2:00-3:00 | Xiaochuan Tian (University of California, San Diego) Numerical methods for nonlocal models: asymptotically compatible schemes and multiscale modeling. Nonlocal continuum models are in general integro-differential equations in place of the conventional partial differential equations. While nonlocal models show their effectiveness in modeling a number of anomalous and singular processes in physics and materials science, for example, the peridynamics model of fracture mechanics, they also come with increased computational difficulty due to the nonlocality involved. In this talk, we will give a review of asymptotically compatible schemes for nonlocal models with a parameter dependence. Such numerical schemes are robust under changes of the nonlocal length parameter and are suitable for multiscale simulations where nonlocal and local models are coupled. We will discuss finite difference, finite element and collocation methods for nonlocal models, as well as the related open questions for each type of numerical method. |
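The parameter robustness at issue can be probed in a small 1D experiment. This is a sketch only (uniform kernel, periodic domain, second moment normalized so the operator is consistent with u''), not a scheme from the talk: as the horizon delta shrinks with the mesh tied to it, the nonlocal operator approaches the local Laplacian:

```python
import numpy as np

def nonlocal_laplacian(u, h, r):
    """Nonlocal diffusion operator with horizon delta = r*h and a uniform
    kernel on a periodic grid.  The denominator normalizes the second
    moment, making the operator consistent with u'' up to O(delta^2)."""
    num = np.zeros_like(u)
    for k in range(1, r + 1):
        num += np.roll(u, -k) + np.roll(u, k) - 2.0 * u
    m2 = h**2 * r * (r + 1) * (2 * r + 1) / 6.0     # sum_k (k*h)^2
    return num / m2

def max_error(delta, r=8):
    """Error vs the local Laplacian for u = sin(2*pi*x) on [0, 1)."""
    h = delta / r                                   # mesh tied to the horizon
    x = np.arange(int(round(1.0 / h))) * h
    u = np.sin(2.0 * np.pi * x)
    exact = -(2.0 * np.pi) ** 2 * u
    return np.max(np.abs(nonlocal_laplacian(u, h, r) - exact))

errs = [max_error(d) for d in (0.064, 0.016, 0.004)]   # shrink the horizon
```

The error drops by roughly a factor of 16 per step here (O(delta^2)); asymptotically compatible schemes are designed so that this kind of limit behavior survives general refinement paths in h and delta.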

Morning session chair: Danny Perez (Los Alamos National Lab) | |

9:00-10:00 | Tony Lelièvre (Ecole des Ponts ParisTech) We will discuss models used in classical molecular dynamics, and some mathematical questions raised by their simulations. In particular, we will present recent results on the connection between a metastable Markov process with values in a continuous state space (satisfying e.g. the Langevin or overdamped Langevin equation) and a jump Markov process with values in a discrete state space. This is useful to analyze and justify numerical methods which use the jump Markov process underlying a metastable dynamics as a support to efficiently sample the state-to-state dynamics (accelerated dynamics techniques à la A.F. Voter). It also provides a mathematical framework to justify the use of transition state theory and the Eyring-Kramers formula to build kinetic Monte Carlo or Markov state models. |
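The jump-process picture can be sketched as a small kinetic Monte Carlo simulation with Eyring-Kramers/Arrhenius-type rates k = nu*exp(-dE/kT); the states, barriers, and prefactor below are made up for illustration:

```python
import numpy as np

nu, kT = 1.0, 0.4                       # attempt frequency and temperature
barriers = {(0, 1): 1.0, (1, 0): 0.5,   # invented energy barriers dE between
            (1, 2): 0.8, (2, 1): 0.3}   # metastable states 0, 1, 2

def rates_from(state):
    """Arrhenius-type jump rates out of `state`."""
    return {j: nu * np.exp(-dE / kT)
            for (i, j), dE in barriers.items() if i == state}

def kmc(state, n_steps, rng):
    """Kinetic Monte Carlo: draw an exponential holding time, then jump
    with probability proportional to the rate."""
    t = 0.0
    visits = np.zeros(3)
    for _ in range(n_steps):
        ks = rates_from(state)
        total = sum(ks.values())
        t += rng.exponential(1.0 / total)
        u, acc = rng.random() * total, 0.0
        for j, k in ks.items():
            acc += k
            if u <= acc:
                state = j
                break
        visits[state] += 1
    return visits / n_steps, t

rng = np.random.default_rng(5)
visits, t_final = kmc(0, 20000, rng)
```

The mathematical question in the talk is when, and in what sense, the continuous metastable dynamics is faithfully represented by a discrete chain of this form.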

10:15-11:15 | Maria Giovanna Mora (Università di Pavia) Equilibrium measures for nonlocal interaction energies: The role of anisotropy. Particle systems subject to long-range interactions can be described, for large numbers of particles, in terms of continuum models involving nonlocal energies. For radially symmetric interaction kernels, several authors have established qualitative properties of minimizers for this kind of energy. But what can be said for anisotropic kernels? Starting from an example describing dislocation interactions in metals, I will discuss how the anisotropy may affect the equilibrium measure and, in particular, its dimensionality. |

11:15-12:45 | Lunch |

Afternoon session chair: Qiang Du (Applied Physics and Applied Mathematics, Columbia) | |

12:45-1:45 | Selim Esedoglu (University of Michigan) Variational extrapolation of numerical schemes for gradient flows. A natural property to demand from discrete-in-time approximations to gradient flows is energy stability: just like the exact evolution, the approximate evolution should decrease the cost function from one time step to the next. Often, approximation schemes with desirable (e.g. unconditional) energy stability, such as minimizing movements, are only first-order accurate in time. We will discuss general (problem-independent) procedures for boosting the order of accuracy of existing implicit and semi-implicit variational schemes for gradient flows while preserving their desirable stability properties, such as unconditional energy stability. The resulting high-order versions are formulated only in terms of multiple calls of the original scheme per time step, and therefore also essentially preserve its per-time-step complexity. |
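The energy-stability property in question is easy to see for the simplest minimizing-movements scheme, implicit Euler on a quadratic energy (a stand-in example, not from the talk): at a time step far beyond the explicit stability limit, the implicit scheme still decreases the energy at every step while explicit Euler blows up:

```python
import numpy as np

# Gradient flow x' = -A x of the quadratic energy E(x) = x^T A x / 2.
# Minimizing movements = implicit Euler; compare against explicit Euler
# at a time step far beyond the explicit stability limit (2/100 here).
A = np.diag([1.0, 10.0, 100.0])
E = lambda x: 0.5 * x @ A @ x
x0 = np.array([1.0, 1.0, 1.0])
dt = 0.1

def run(step, x, n=50):
    energies = [E(x)]
    for _ in range(n):
        x = step(x)
        energies.append(E(x))
    return np.array(energies)

implicit = run(lambda x: np.linalg.solve(np.eye(3) + dt * A, x), x0)
explicit = run(lambda x: x - dt * (A @ x), x0)
# implicit decreases monotonically; explicit diverges on the stiff mode.
```

The talk's theme is boosting such a scheme past first-order accuracy via extrapolation, i.e. by combining several calls to it per step, without losing this unconditional decrease.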

2:00-3:00 | Lin Lin (University of California, Berkeley) Quantum computation of Green’s functions. Green’s functions play a central role in describing excited-state electronic structure in quantum chemistry and materials science. At the heart of Green’s function computation is the solution of a linear system of size $2^N \times 2^N$, where $N$ is the number of spin-orbitals in the quantum system. We will discuss how to use quantum linear system solvers to compute Green’s functions, and how to accelerate such computations using a new quantum primitive called fast inversion, as well as preconditioning techniques. |

Morning session chair: Jianfeng Lu (Mathematics, Duke University) | |

9:00-10:00 | Mitchell Luskin (University of Minnesota) Mathematical Models for Moiré Physics. Layers of two-dimensional materials stacked with a small twist angle give rise to periodic beating patterns on a “moiré superlattice” scale much larger than the original lattice. This effective large-scale fundamental domain allows phenomena such as the fractal Hofstadter butterfly to be observed in crystalline materials at experimental magnetic fields. More recently, this new length scale has allowed experimentalists to observe new correlated electronic phases, such as superconductivity, at a lower electron density than previously accessible, and has motivated an intense focus by theorists to develop models for this correlated behavior. We will present some mathematical and computational models for these experimental platforms and theoretical models. |

10:15-11:15 | Michael Weinstein (Columbia University) Continuum and discrete models of waves in 2D materials. We discuss continuum Schroedinger operators which are basic models of 2D materials, like graphene, in its bulk form or deformed by edges (sharp terminations or domain walls). For non-magnetic and strongly non-magnetic systems we discuss the relationship to effective tight-binding (discrete) Hamiltonians through a result on strong resolvent convergence. An application of this convergence is a result on the equality of topological (Fredholm) indices associated with continuum and discrete models (for bulk and edge systems). Finally, we discuss the construction of edge states in continuum systems with domain walls. Away from the tight-binding regime there are resonant phenomena, and we conjecture that there are meta-stable (finite lifetime, but long-lived) edge states which slowly diffract into the bulk. |

11:15-12:45 | Lunch |

Afternoon session chair: Danny Perez (Los Alamos National Lab) | |

12:45-1:45 | Peter Voorhees (Northwestern University), *Grain Growth in Polycrystals*. Simulations can be used to measure the properties of interfaces in materials. The central role of quantitative phase field simulations in this effort is illustrated by a rapid-throughput method to determine grain boundary properties. By comparing the evolution of experimentally determined three-dimensional grain structures to those derived from simulation, we measure the reduced mobilities of thousands of grain boundaries. Using a time step from the experiment as an initial condition in a phase-field simulation, the computed structure is compared to that measured experimentally at a later time. An optimization technique is then used to find the reduced grain boundary mobilities of over 1300 grain boundaries in iron that yield the best match to the simulated microstructure. We find that the grain boundary mobilities are largely independent of the five macroscopic degrees of freedom given by the misorientation of the grains and the inclination of the grain boundary. The challenge of developing quantitatively accurate phase field simulations of grain growth will be highlighted, with an emphasis on the novel PDEs suggested by methods that can account for the five degrees of freedom of the grain boundary energy. |
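As background, a minimal 1D Allen–Cahn sketch (my own toy setup, not the quantitative grain-growth model of the talk) shows how a mobility parameter `M` enters a phase-field evolution; fitting such mobilities to 3D experimental grain structures is the inverse problem described above.

```python
import numpy as np

def allen_cahn_step(phi, M, eps=1.0, dx=0.1, dt=1e-3):
    """One explicit Euler step of dphi/dt = M*(eps^2*lap(phi) - W'(phi)),
    with double-well W(phi) = (phi^2 - 1)^2 / 4 and fixed (Dirichlet) ends."""
    new = phi.copy()
    lap = (phi[:-2] - 2 * phi[1:-1] + phi[2:]) / dx**2
    new[1:-1] = phi[1:-1] + dt * M * (eps**2 * lap - (phi[1:-1]**3 - phi[1:-1]))
    return new

x = np.linspace(-5.0, 5.0, 101)
phi = np.tanh(x)                 # a diffuse interface centered at x = 0
for _ in range(200):
    phi = allen_cahn_step(phi, M=1.0)
print(phi.min(), phi.max())
```

Interface kinetics scale linearly with `M`, which is why matching simulated to observed boundary motion pins down the reduced mobility.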

2:00-3:00 | Miranda Holmes-Cerfon (New York University), *Numerically simulating sticky particles*. Particles with diameters of nanometres to micrometres form the building blocks of many of the materials around us, and can be designed in a multitude of ways to form new ones. One challenge in simulating such particles is that the range over which they interact attractively is often much shorter than their diameters, so the SDEs describing the particles’ dynamics are stiff, requiring timesteps much smaller than the timescales of interest. I will introduce methods aimed at accelerating these simulations, which instead simulate the limiting equations as the range of the attractive interaction goes to zero. In this limit a system of particles is described by a diffusion process on a collection of manifolds of different dimensions, connected by “sticky” boundary conditions. I will introduce methods that simulate such sticky diffusion processes directly, and discuss some ongoing challenges in extending these methods to high dimensions. |
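A hypothetical sketch of the stiffness problem (not the sticky-limit method itself): overdamped Langevin dynamics for the gap between two particles, with an attractive well of width `delta` much smaller than the particle diameter. Resolving the well forces timesteps of order `delta**2`, which is exactly the cost the limiting sticky-diffusion methods remove. All parameters are my own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def force(x, delta=0.05, depth=5.0):
    # -U'(x) for a narrow Gaussian well U(x) = -depth * exp(-x^2 / (2 delta^2))
    return -depth * x / delta**2 * np.exp(-x**2 / (2 * delta**2))

def euler_maruyama(x0, dt, nsteps):
    """Euler-Maruyama for dX = -U'(X) dt + sqrt(2) dW."""
    x = x0
    for _ in range(nsteps):
        x += force(x) * dt + np.sqrt(2 * dt) * rng.standard_normal()
    return x

# The well width delta = 0.05 sets the stiff scale delta^2 = 2.5e-3, so
# dt = 1e-4 steps are needed even though the timescale of interest is O(1).
x_final = euler_maruyama(x0=0.0, dt=1e-4, nsteps=10_000)
print(x_final)
```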

Morning session chair: Claude Le Bris (CERMICS, Ecole Nationale des Ponts et Chaussées Paris) | |

9:00-10:00 | Ludovic Chamoin (ENS Paris-Saclay) The work aims at developing new numerical tools in order to permit real-time and robust data assimilation that could then be used in various engineering activities. A specific targeted activity is the implementation of applications in which a continuous exchange between simulation tools and experimental measurements is envisioned to the end of creating retroactive control loops and online health monitoring on mechanical systems. In this context, and in order to take various uncertainty sources (modeling error, measurement noise,..) into account, a general stochastic methodology with Bayesian inference is considered. However, a well-known drawback of such an approach is the computational complexity which makes real-time simulation a difficult task. The research work thus proposes to couple Bayesian inference with attractive and advanced numerical techniques so that real-time and sequential assimilation can be envisioned. First, PGD model reduction [1] is introduced to facilitate the computation of the likelihood function, the uncertainty propagation through complex models, and the sampling of the posterior density. PGD builds a multi-parametric solution in an offline phase and leads to cost effective evaluation of the numerical model depending on parameters in the online inversion phase. Second, Transport Map sampling [2] is investigated as a substitute to classical MCMC procedures for posterior sampling. It is shown that this technique leads to deterministic computations, with clear convergence criteria, and that it is particularly suited to sequential data assimilation. Here again, the use of PGD model reduction highly facilitates the process by recovering gradient and Hessian information in a straightforward manner [3]. Third, and to increase robustness, on-the-fly correction of model bias is addressed in a stochastic context using data-based enrichment terms. 
The overall cost-effective methodology is applied and illustrated on a specific test-case dealing with real-time model updating for the control of a mechanical test involving damageable concrete structures with full-field measurements [4]. [1] P-B. Rubio, F. Louf and L. Chamoin, Fast model updating coupling Bayesian inference and PGD model reduction, Computational Mechanics, 62(6):1485-1509, 2018.[2] T.A. El Moselhy and Y.M. Marzouk, Bayesian inference with optimal maps, Journal of Computational Physics, 231(23):7815-7850, 2012. [3] P-B. Rubio, F. Louf and L. Chamoin, Transport Map sampling with PGD model reduction for fast dynamical Bayesian data assimilation, International Journal for Numerical Methods in Engineering, 120(4):447-472, 2019. [4] P-B. Rubio, L. Chamoin and F. Louf, Real-time Bayesian data assimilation with data selection, correction of model bias, and on-the-fly uncertainty propagation, Comptes-Rendus de l’Académie des Sciences, Mécanique, Paris, 347:762-779, 2019.Real-time Bayesian data assimilation with data-based model enrichment for the monitoring of damage in materials and structures |
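A toy stand-in for the sequential Bayesian workflow described above, with a trivial scalar forward model playing the role a PGD surrogate plays in the paper; all names and values here are my own illustrative choices.

```python
import numpy as np

# Sequential Bayesian assimilation on a grid: infer a scalar parameter
# theta from streaming noisy measurements y_k = g(theta) + noise, updating
# the posterior after each observation.
rng = np.random.default_rng(2)

def g(theta):
    return theta**2              # toy forward model (surrogate stand-in)

theta_true, sigma = 1.5, 0.2
grid = np.linspace(0.0, 3.0, 301)
post = np.ones_like(grid)
post /= post.sum()               # flat prior

for _ in range(50):              # stream of measurements
    y = g(theta_true) + sigma * rng.standard_normal()
    like = np.exp(-0.5 * ((y - g(grid)) / sigma) ** 2)
    post = post * like           # sequential Bayes update ...
    post /= post.sum()           # ... with renormalization

theta_map = grid[np.argmax(post)]
print(theta_map)
```

The practical bottleneck the talk addresses is that each likelihood evaluation calls the forward model; a cheap offline-built surrogate makes the online loop real-time.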

10:15-11:15 | Manuel Friedrich (WWU Münster), *Emergence of rigid polycrystals from atomistic systems*. We investigate the emergence of rigid polycrystalline structures from atomistic particle systems. The atomic interaction is governed by a suitably normalized pair interaction energy, where the “sticky disk” interaction potential models the atoms as hard spheres that interact when they are tangential. The discrete energy is frame invariant and no underlying reference lattice on the atomistic configurations is assumed. By means of Gamma-convergence, we characterize the asymptotic behavior of configurations with finite surface energy scaling in the infinite particle limit. The effective continuum theory is described in terms of a piecewise constant field delineating the local orientation and micro-translation of the configuration. The limiting energy is local and concentrated on the grain boundaries, i.e., on the boundaries of the zones where the underlying microscopic configuration has constant parameters. The corresponding surface energy density depends on the relative orientation of the two grains, their microscopic translation misfit, and the normal to the interface. Joint work with Leonard Kreutz (Münster) and Bernd Schmidt (Augsburg). |

11:15-12:45 | Lunch |

Afternoon session chair: Irene Fonseca (Mathematics, Carnegie-Mellon University) | |

12:45-1:45 | Cyrill Muratov (New Jersey Institute of Technology), *Magnetic skyrmions in the conformal limit*. We characterize skyrmions in ultrathin ferromagnetic films as local minimizers of a reduced micromagnetic energy appropriate for quasi two-dimensional materials with perpendicular magnetic anisotropy and interfacial Dzyaloshinskii-Moriya interaction. The minimization is carried out in a suitable class of two-dimensional magnetization configurations that prevents the energy from going to negative infinity, while not imposing any restrictions on the spatial scale of the configuration. We first demonstrate existence of minimizers for an explicit range of the model parameters when the energy is dominated by the exchange energy. We then investigate the conformal limit, in which only the exchange energy survives, and identify the asymptotic profiles of the skyrmions as degree 1 harmonic maps from the plane to the sphere, together with their radii, angles and energies. A byproduct of our analysis is a quantitative rigidity result for degree ±1 harmonic maps from the two-dimensional sphere to itself. |

2:00-3:00 | Chun Liu (Illinois Institute of Technology), *Generalized law of mass action (LMA) with energetic variational approaches (EnVarA) and applications*. We will derive and explore the mass action kinetics of chemical reactions by employing an energetic variational approach. The dynamics of the system is determined through the choice of the free energy, the dissipation (the entropy production), as well as the kinematics (conservation of species). The method enables us to capture the coupling and competition of various mechanisms, including mechanical effects such as diffusion, viscoelasticity in polymeric fluids and muscle contraction, as well as thermal effects. This is joint work with Bob Eisenberg, Pei Liu, Jan-Eric Sulzbach, Yiwei Wang and Tengfei Zhang. |
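A minimal worked instance of the structure described: mass-action kinetics for the reaction A + B ⇌ C, whose trajectories dissipate the free energy $F(c) = \sum_i c_i(\log(c_i/c_i^{\mathrm{eq}}) - 1)$ — the pairing of energy and dissipation that EnVarA-type derivations make systematic. The rates and initial data are my own illustrative choices, not the talk’s model.

```python
import numpy as np

kf, kb = 2.0, 1.0                    # forward/backward rate constants

def rhs(state):
    a, b, c = state
    r = kf * a * b - kb * c          # law-of-mass-action reaction rate
    return np.array([-r, -r, r])

# Equilibrium for initial data a0, b0, c0 = 1, 2, 0: kf*(1-c)*(2-c) = kb*c,
# i.e. 2c^2 - 7c + 4 = 0, whose admissible root is:
c_eq = (7 - np.sqrt(17)) / 4
eq = np.array([1 - c_eq, 2 - c_eq, c_eq])

def free_energy(state):
    # F = sum_i c_i (log(c_i / c_i^eq) - 1), a Lyapunov function here
    return float(np.sum(state * (np.log(state / eq) - 1.0)))

state = np.array([1.0, 2.0, 1e-6])   # tiny c0 avoids log(0)
dt = 1e-3
energies = [free_energy(state)]
for _ in range(5000):                # explicit Euler to t = 5
    state = state + dt * rhs(state)
    energies.append(free_energy(state))
print(state, energies[-1] - energies[0])
```

Along the trajectory the free energy decreases monotonically while the linear invariants (a + c and b + c) are conserved — the two structural ingredients the variational derivation builds in from the start.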

3:15-4:15 | Sylvia Serfaty (New York University), *Vortex lines interactions in the 3D Ginzburg-Landau model of superconductivity*. We report on joint work with Carlos Roman and Etienne Sandier in which we study the onset of vortex lines in the 3D Ginzburg-Landau model of superconductivity with magnetic field and derive an interaction energy for them. |


**Organizers**:

- Pierre Cardaliaguet (Mathematics, Paris-Dauphine)
- René Carmona (ORFE, Princeton)
- Annalisa Cesaroni (Statistics, Padova)
- Takis Souganidis (Mathematics, University of Chicago)
- Daniela Tonon (Mathematics, Paris-Dauphine)

**Description**:

Complex societal problems can be studied and modeled through the mathematical theory of Mean Field Games. Indeed, MFGs are a mathematical modeling approach to stochastically evolving systems which involve a large number of indistinguishable rational agents that have the same optimization criteria. The theory of MFGs is currently very active, and several important results have been achieved that can be applied to engineering, economics, finance, and the social sciences. In this final workshop we present recent analytic, probabilistic, and numerical advances in this theory.


**Organizers**:

- Rene Carmona (ORFE, Princeton)
- Beatrice Acciaio (ETH-Zurich)

**Description**:

Mean field theories, Mean Field Games, and Mean Field Control are theoretical concepts which can naturally be brought to bear in applications to financial engineering. The workshop will examine how they have influenced the development of theoretical work in financial mathematics and the implementation of financial engineering solutions to problems involving large ensembles of individuals or robots optimizing their behaviors in uncertain and complex environments.

Applications will include contract theory, cyber currency mining, high frequency trading, systemic risk, and recent developments in the applications of machine learning techniques to the numerical solutions of some of these problems.


**Organizers**:

- Clemence Allasseur (Finance For Energy Market Research Centre EDF)
- Damien Fessler (University of Paris-Dauphine and PSL)
- Mike Ludkovski (Statistics and Applied Probability, UC Santa Barbara)

**Description**:

The paradigm of Mean Field Games (MFG) has become a major connection between distributed decision-making and stochastic modeling. Starting out in the stochastic control literature, it is gaining rapid adoption across a range of industries. The objective of this workshop is to give a clear vision of how MFG tools are being used in practical settings, both in complement and in contrast to the usual methodologies. The workshop will gather researchers from both industry and universities and will focus on diverse application areas, including (but not exclusively):

- Energy sector, including smart power grids and natural commodity markets
- Control and mitigation of epidemics, including in the context of the Covid-19 pandemic response
- Financial market microstructure for algorithmic and high-frequency trading and cryptocurrencies
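As a minimal illustration of the MFG paradigm behind these applications, the sketch below computes the equilibrium of a static congestion game (far simpler than the dynamic models above): a continuum of agents picks one of two routes, each route’s cost grows with the fraction of agents using it, and a damped logit best response is iterated to a fixed point. All parameters are my own illustrative choices.

```python
import numpy as np

base = np.array([1.0, 1.5])          # intrinsic route costs
congestion = 2.0                     # congestion penalty per unit of mass
beta = 5.0                           # agents' choice sensitivity

def logit_response(m):
    """Population's softmax best response to the cost profile induced by m."""
    cost = base + congestion * m
    w = np.exp(-beta * cost)
    return w / w.sum()

m = np.array([0.5, 0.5])             # initial population distribution
for _ in range(500):
    m = 0.9 * m + 0.1 * logit_response(m)   # damped fixed-point iteration

print(m)
```

At the fixed point no agent gains by deviating given the crowd’s behavior — the defining consistency condition of a mean-field equilibrium, which dynamic MFG models extend over time.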


**Organizers**:

- Annalisa Cesaroni (Padova)
- Qiang Du (Columbia University)
- Benedetto Piccoli (Rutgers University)
- Marie-Therese Wolfram (University of Warwick)

**Description**:

Interacting particle models are a powerful mathematical tool to model the behavior of large groups in economics as well as in the life and social sciences. Here, particles may correspond to agents trading certain goods, fish or birds moving collectively, or individuals exchanging ideas in social networks. Understanding the dynamics of these systems on different levels is of great importance, as it gives insights into the emergence of many complex phenomena. In this workshop we will focus on recent developments and emerging challenges in the derivation and analysis of these micro- and mean-field models. It will feature different perspectives and approaches to these challenges, by bringing together applied mathematicians working at the interfaces between statistics, social sciences and the life sciences.
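A toy interacting-particle system of the kind the workshop studies: N agents relax their velocities toward the global average (a crude all-to-all alignment model in the spirit of flocking dynamics). As the coupling acts, the empirical velocity distribution concentrates — the kind of collective behavior that mean-field limits describe as N grows. Parameters are my own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
N, dt, steps = 200, 0.05, 400
v = rng.standard_normal(N)           # initial 1D velocities
v0_mean = v.mean()

for _ in range(steps):
    v = v + dt * (v.mean() - v)      # each agent relaxes toward the mean

print(v.std(), v.mean())
```

The mean velocity is an invariant of the dynamics, while the spread around it decays; the corresponding mean-field description tracks the evolving velocity distribution instead of the N individual trajectories.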

**Organizer**: Fernando Alvarez (Economics, University of Chicago)

**Description**

This conference invites participants to present and discuss current research on models with the following features. The heterogeneous agents feature refers to agents solving *dynamic problems* subject to *idiosyncratic random shocks*, each agent with *non-trivial interactions* with the remaining agents. The “aggregate dynamics” feature refers to the focus on understanding and characterizing the dynamics of the *entire system*, either itself subject to aggregate shocks or as a deterministic system, using analytical or numerical techniques. Examples of such models are variants of Mean Field Games, but we are considering a broader set of complex societal problems. We expect to have presentations of models with applications in several fields in economics and intersections with other disciplines.

**Organizers**

- Marc Hoffmann (University of Paris-Dauphine)
- Francis Bach (INRIA and Ecole Normale Superieure)

**Description**

The aim of this workshop is to gather specialists, from machine learning and statistics to applied probability and analysis, who share a common interest in mean-field models. Potential applications range from mean-field games to stochastic algorithms and simulations, neural networks, and frequentist or Bayesian statistical inference for interacting systems.

**Organizers**

- Pierre Cardaliaguet (University of Paris-Dauphine and PSL)
- René Carmona (ORFE, Princeton)

**Description**

This conference will consist of three series of lectures, the aim of which is to present the main issues at stake in the analysis of distributed solutions to complex societal problems and to describe some mathematical tools to handle these questions. Applications range from collective behavior in economics and finance to crowd analysis and the spread of diseases, and from machine learning to stochastic optimization and artificial intelligence.