**Introduction to Decision Making and Uncertainty**

### June 28-July 23, 2021

**This program will take place online**

How do we make decisions in the face of risk? The need to make decisions in the presence of uncertainty cuts across a wide range of issues in science and human behavior. The underlying problems require both sophisticated modeling and advanced mathematical and statistical approaches and techniques.

This program will serve as an introduction to the long program on Decision Making and Uncertainty scheduled for Spring 2022. It aims to introduce participants to a variety of modeling questions and methods of current interest in this area. It will be built on “thematic clusters” of emerging areas of application. Each cluster will begin with tutorial lectures on the first day, followed by supporting lectures on mathematical and statistical topics related to the underlying theme. There will also be panel discussions, together with poster sessions and short presentations by the participants.

The intended audience is researchers interested in mathematical modeling and methods applicable to decision making under uncertainty in economics, finance, business, and other areas. Advanced Ph.D. students, postdocs, and junior faculty are especially encouraged to apply.

The program covers a diverse set of topics and each theme will be self-contained. Given the variety of both the applications and the methods, participants are encouraged to attend the entire program. Basic knowledge in probability, stochastics, and statistics is required.

The planned clusters are as follows.

| Dates | Topic | Organizer(s) |
| --- | --- | --- |
| June 28-July 2 | Foundations of stochastic optimization, Dynamic Programming, and Hamilton-Jacobi-Bellman equations | Thaleia Zariphopoulou (Mathematics and McCombs Business School, University of Texas at Austin) |
| July 5-7 | Optimal transport and machine learning | Marcel Nutz (Statistics, Columbia University) |
| July 8-9 | Time-inconsistent and relaxed stochastic optimization, and applications | Xunyu Zhou (IEOR, Columbia University) |
| July 12-16 | Markov decision processes with dynamic risk measures: optimal control and learning | Tomasz Bielecki (IIT) and Andrzej Ruszczynski (Rutgers Business School) |
| July 12-16 | Machine learning and Mean Field Games | Xin Guo (IEOR, Berkeley) |
| July 19-23 | Models for climate change with ambiguity and misspecification concerns | Lars Hansen (Economics, University of Chicago) |
| July 19-23 | Games with ambiguity | Peter Klibanoff (Kellogg School, Northwestern University) |

**Week 1: June 28-July 2**

**Module:** **Foundations of stochastic optimization, BSDE and applications**

**Organizer:** Thaleia Zariphopoulou (McCombs Business School and Mathematics, UT Austin)

Module Summary

This tutorial will provide an introduction to several methodological areas that are central to decision making under uncertainty. Specifically, the lectures will cover foundational material in stochastic optimization and in stochastic analysis and its connection with PDEs. They will also provide an introduction to backward stochastic differential equations (BSDEs) and BSDE systems. In addition, they will offer an introduction to functional Itô calculus and its applications, as well as an introduction to robo-advising as a human-machine stochastic interactive system.

**Monday, June 28**

**8:45-9:00** Welcome and introduction to the spring long program, Takis Souganidis (University of Chicago)

**Title:** Foundations of stochastic optimization

**Speaker:** Thaleia Zariphopoulou (UT-Austin)

Description & Schedule

The lectures will provide an introduction to stochastic optimization problems of controlled diffusion processes. They will cover fundamental results, like the Dynamic Programming principle and the Hamilton-Jacobi-Bellman equation, and will provide various examples coming mainly from optimization problems in decision analysis and optimal asset allocation. They will also provide a short introduction to the theory of viscosity solutions for these non-linear problems.
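For orientation, the Dynamic Programming principle leads to the Hamilton-Jacobi-Bellman equation; in the simplest one-dimensional Markovian setting it takes the following standard form (the notation here is chosen only for illustration and may differ from the lectures):

```latex
% Controlled diffusion: dX_s = b(X_s, a_s)\,ds + \sigma(X_s, a_s)\,dW_s,
% value function V(t,x) = \sup_a \mathbb{E}\big[\int_t^T f(X_s, a_s)\,ds + g(X_T) \,\big|\, X_t = x\big].
% Dynamic programming yields the Hamilton-Jacobi-Bellman equation
\partial_t V(t,x) + \sup_{a \in A} \Big\{ b(x,a)\,\partial_x V(t,x)
  + \tfrac{1}{2}\,\sigma^2(x,a)\,\partial_{xx} V(t,x) + f(x,a) \Big\} = 0,
\qquad V(T,x) = g(x).
```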

9:00-10:15: Lecture 1

10:45-12:00: Lecture 2

**Tuesday, June 29**

**Title:** Probabilistic methods for elliptic and parabolic PDEs: from linear equations to free-boundary problems

**Speaker:** Sergey Nadtochiy (IIT)

Description & Schedule

The lecture will start by presenting the classical results which connect the elliptic and parabolic PDEs with the associated probabilistic (stochastic) systems. This includes the famous Feynman-Kac formula and the representation of the solutions to Hamilton-Jacobi-Bellman equations via the stochastic control problems. In particular, it will be discussed how these connections may help in the analysis of the associated PDEs. Finally, it will be shown how similar results can be extended to the free-boundary problems and, in particular, how they can be used to establish the well-posedness of the latter.
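As a reminder of the classical starting point, the Feynman-Kac formula in one dimension reads (standard form; notation illustrative):

```latex
% If u solves the linear parabolic PDE
% \partial_t u + b(x)\,\partial_x u + \tfrac{1}{2}\sigma^2(x)\,\partial_{xx} u - c(x)\,u = 0,
% \qquad u(T,x) = g(x),
% then u admits the probabilistic representation
u(t,x) = \mathbb{E}\Big[ e^{-\int_t^T c(X_s)\,ds}\, g(X_T) \,\Big|\, X_t = x \Big],
\qquad dX_s = b(X_s)\,ds + \sigma(X_s)\,dW_s .
```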

9:00-10:15: Lecture 1

10:45-12:00: Lecture 2

**Wednesday, June 30**

**Title:** Foundations of Backward Stochastic Differential Equations and their applications

**Speakers:** Gordan Zitkovic (UT-Austin) and Joseph Jackson (UT-Austin)

Description & Schedule

Lectures 1 and 2 are intended to be a soft introduction to Backward Stochastic Differential Equations (BSDEs) and some of their applications. Even though they appear to be formal cousins of classical stochastic differential equations (SDEs), BSDEs exhibit a rich and interesting mathematical life of their own. Their applications are no less impressive – they range from option pricing in finance, over optimal stochastic control and stochastic games to stochastic analysis on manifolds.

Lecture 3 will focus on quadratic BSDE systems, i.e., systems whose driver has quadratic growth in the variable z. Such systems arise in many applications. However, unlike Lipschitz BSDEs, quadratic systems are not always well-posed. Nevertheless, there has been recent progress on identifying structural conditions on quadratic BSDE systems that yield positive results. This talk will offer a survey of some of these developments.
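For orientation, a BSDE in the standard form used throughout the literature asks for an adapted pair $(Y, Z)$ satisfying (notation illustrative):

```latex
Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\,ds - \int_t^T Z_s\,dW_s ,
\qquad 0 \le t \le T,
% with terminal condition \xi and driver f. The quadratic case corresponds
% to drivers with growth |f(s,y,z)| \le C\,(1 + |y| + |z|^2).
```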

9:00-10:15: Lecture 1 (Gordan Zitkovic)

10:45-12:00: Lecture 2 (Gordan Zitkovic)

1:00-2:00: Lecture 3 (Joseph Jackson)

2:00-3:00: Office Hours (Joseph Jackson)

**Thursday, July 1**

**Title:** Introduction to Functional Itô Calculus and applications

**Speaker:** Rama Cont (Oxford University)

9:00-10:15: Lecture 1

10:45-12:00: Lecture 2

**Friday, July 2**

**Title:** Introduction to robo-advising: modeling, learning, and human-machine interactions

**Speakers:** Agostino Capponi (Columbia University) and Sveinn Olafsson (Columbia University)

Description & Schedule

The lectures will begin by discussing institutional details of robo-advising, and how its operations and scale of adoption are affected by factors such as behavioral biases, algorithm aversion, and limited client attentiveness. They will then provide a quantitative analysis of the most prominent forms of robo-advice: utility-based and goal-based. They will discuss quantitative frameworks used to meet investment goals and deadlines specified by clients, how to achieve portfolio personalization by viewing robo-advisors as human-machine interaction systems, and how to infer clients’ risk preferences from trading actions. They will conclude by surveying algorithmic aspects of other services provided by robo-advisors, such as tax-aware portfolio management and construction. Along the way, topics covered will include stochastic optimization frameworks, PDEs, reinforcement learning, and Nash equilibria.

9:00-10:15: Lecture 1 (Agostino Capponi)

10:45-12:00: Lecture 2 (Agostino Capponi)

3:30-4:30: Office Hours (Sveinn Olafsson)

**Week 2: July 5-9**

**Monday, July 5** Holiday

**July 6-7**

**Module:** **Optimal transport and machine learning**

**Organizer: Marcel Nutz** (Mathematics and Statistics, Columbia University)

Module Summary

The four parts of this tutorial will cover the mathematical foundations of optimal transport, modern computational approaches and applications in machine learning, sampling properties of optimal transport, and applications of transport maps in nonparametric statistics.

**Tuesday, July 6**

9:00 **Introduction to optimal transport**

Marcel Nutz (Columbia University)

Description

This tutorial introduces the optimal transport problem and some of its fundamental mathematical results: existence of optimal transports, geometric characterization, dual problem, etc. A second part of the lecture introduces entropic regularization.
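Entropically regularized transport is typically computed with Sinkhorn's algorithm. The following is a minimal NumPy sketch (purely illustrative, not taken from the lecture materials; the measures, cost, and regularization strength are invented for the example):

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.5, n_iter=200):
    """Entropic OT: alternately rescale the Gibbs kernel K = exp(-C/eps)
    until the transport plan P = diag(u) K diag(v) has marginals mu and nu."""
    K = np.exp(-C / eps)
    u = np.ones_like(mu)
    v = np.ones_like(nu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)   # fit the column marginal
        u = mu / (K @ v)     # fit the row marginal
    return u[:, None] * K * v[None, :]

# Two uniform measures on 5 points of [0, 1], squared-distance cost.
x = np.linspace(0.0, 1.0, 5)
mu = np.full(5, 0.2)
nu = np.full(5, 0.2)
C = (x[:, None] - x[None, :]) ** 2
P = sinkhorn(mu, nu, C)
print(P.sum(axis=1))  # row marginals ≈ mu = [0.2, 0.2, 0.2, 0.2, 0.2]
```

Note that the regularization keeps every entry of the plan strictly positive, in contrast to the sparse plans of unregularized optimal transport.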

1:00 **Distribution-free nonparametric inference using optimal transport**

Bodhisattva Sen (Columbia University)

Description

Nonparametric statistics, a subfield of statistics that came into being after the introduction of Wilcoxon’s tests in Wilcoxon (1945), is traditionally associated with distribution-free methods, e.g., hypothesis testing problems where the null distribution of the test statistic does not depend on the unknown data generating distribution. Although enormous progress has been made in this field in the last 100 years, most of the distribution-free methods have been restricted to one-dimensional problems.

Recently, using the theory of optimal transport, many distribution-free procedures have been extended to multi-dimensions. Prominent examples include: (a) two-sample equality of distributions testing, (b) testing for mutual independence, etc. In this lecture, I will summarize these recent developments and: (i) provide a general framework for distribution-free nonparametric testing in multi-dimensions, (ii) propose multivariate analogues of some classical methods (including Wilcoxon’s tests), and (iii) study the (asymptotic) efficiency of the proposed distribution-free methods. I will also compare and contrast these distribution-free methods with kernel-based methods that are very popular in the machine learning literature. In summary, I will illustrate that these distribution-free methods are as powerful/efficient as their traditional counterparts and more robust to heavy-tailed outliers and contamination.

**July 7**

9:00 **Learning with optimal transport**

Aude Genevay (MIT)

Description

In this tutorial, we will start by introducing different notions of distance between probability measures and see how they compare on toy learning problems, both from a theoretical and practical perspective. We will then focus on regularized optimal transport and provide fast algorithms to compute it, along with theoretical guarantees. We will finish with an overview of learning problems that can be tackled using optimal transport.

1:00 **Statistical estimation and optimal transport**

Jonathan Niles-Weed (NYU)

Description

In this tutorial, we will consider a fundamental statistical question of optimal transport: how well can optimal transport distances and maps be estimated from data? We will discuss the pervasive “curse of dimensionality” in the statistical theory of optimal transport and discuss assumptions, such as smoothness, sparsity, and low-dimensionality, which can partially ameliorate this curse. We will also explore minimax lower bounds, which establish that in the absence of such assumptions, it is not possible to avoid the curse of dimensionality entirely. These pessimistic results help to motivate several variants of optimal transport which have recently become popular in machine learning.

**July 8-9 **

**Module: Time-inconsistent and relaxed stochastic optimization, and applications**

**Organizer: Xunyu Zhou** (Columbia University)

Module Summary

This tutorial will provide an introduction to the theory of relaxed controls and their applications to reinforcement learning, including decision making under exploration via randomization in continuous time. The tutorial will also provide an introduction to time-inconsistent stochastic optimization and its applications to behavioral finance under decision making criteria that involve rank-dependent risk preferences, hyperbolic discounting, distorted probabilities and non-linear expectations.

**Thursday, July 8**

**Speakers:** Xunyu Zhou (Columbia University) and Wenpin Tang (Columbia University)

**Title:** Exploration via Randomization: Reinforcement Learning and Beyond

Description and Schedule

The lectures will cover some of the latest developments in the approach of exploration through randomization in reinforcement learning and related topics in continuous time and space. The first lecture will focus on optimal solutions to the exploratory formulation, relating them to classical reinforcement learning solutions such as the Gibbs measure and Boltzmann exploration, and will illustrate the theory with an application to a Langevin temperature control problem. The second lecture will present sampling from Gibbs measures as well as its relation to optimization. It will focus on Langevin-type dynamics, with topics such as replica exchange (swapping), underdamping (Hamiltonian Monte Carlo), and discretization schemes.
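As a concrete illustration of sampling from a Gibbs measure with Langevin-type dynamics, here is a minimal Euler-Maruyama discretization (an unadjusted-Langevin sketch; the potential, step size, and temperature are chosen only for illustration and are not taken from the lectures):

```python
import numpy as np

def langevin_sample(grad_U, beta, x0, step=0.01, n_steps=50_000, seed=0):
    """Unadjusted Langevin algorithm: Euler-Maruyama discretization of
    dX_t = -grad U(X_t) dt + sqrt(2/beta) dW_t, whose stationary law is
    the Gibbs measure proportional to exp(-beta * U(x))."""
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_steps)
    for i in range(n_steps):
        x = x - step * grad_U(x) + np.sqrt(2.0 * step / beta) * rng.standard_normal()
        samples[i] = x
    return samples

# Quadratic potential U(x) = x^2 / 2: the Gibbs measure is N(0, 1/beta).
beta = 2.0
s = langevin_sample(lambda x: x, beta=beta, x0=0.0)
print(s.mean(), s.var())  # ≈ 0 and ≈ 1/beta = 0.5
```

Lowering the temperature (raising beta) concentrates the samples near the minimizer of U, which is the mechanism behind the temperature control and optimization links mentioned above.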

9:00-10:15am: Xunyu Zhou (Columbia University)

10:45-12:00pm: Wenpin Tang (Columbia University)

3:30-4:30 pm: Office hours, Yanwei Jia (Columbia University)

**Friday, July 9**

**Speakers:** Xuedong He (CUHK) and Moris Strub (SUSTech)

**Title:** Foundations of time-inconsistent control, and applications to decision making under non-standard optimality criteria

Description and Schedule

Time inconsistency is prevalent in dynamic choice problems: a plan of actions to be taken in the future that is optimal for an agent today may not be optimal for the same agent in the future. If the agent is aware of this intra-personal conflict but unable to commit her future selves to following today's optimal plan, the rational strategy for her today is to correctly anticipate her actions in the future and then act accordingly today. Such a strategy is called an intra-personal equilibrium. These lectures will first review recent studies on intra-personal equilibria for time-inconsistent problems in continuous time. They will then give an introduction to two approaches for constructing time-consistent preferences, the dynamic utility approach and the forward utility approach, together with a detailed discussion of these approaches under the setting of distorted probabilities.

9:00-10:15am: Xuedong He (CUHK)

10:45-12:00pm: Moris Strub (SUSTech)

**Week 3: July 12-16**

**Module 1: Markov decision processes with dynamic risk measures: optimal control and learning**

**Organizers**: **Tomasz Bielecki** (IIT) and **Andrzej Ruszczyński** (Department of Management Science and Information Systems, Rutgers)

**Module 2: Machine learning and Mean Field Games**

**Organizer**: **Xin Guo** (IEOR, UC Berkeley)

**Monday, July 12**

Schedule & Descriptions

**9:00-11:00** (Module 2) Christa Cuchiero (Statistics and Operations Research, University of Vienna)

**Title**: From neural SDEs and signature methods to affine processes and back

**Description**: Modern universal classes of dynamic processes, based on neural networks or signature methods, have recently entered the field of stochastic modeling, in particular in Mathematical Finance. This has opened the door to more data-driven and thus more robust model selection mechanisms, while first principles like no arbitrage still apply. The underlying model classes are often so-called neural stochastic differential equations (SDEs) or signature SDEs, i.e. SDEs whose characteristics are either neural networks or linear functions of the process’ signature. We present methods for learning these characteristics from available option price and time series data.

From a more theoretical point of view, we show how these new models can be embedded in the framework of affine and polynomial processes, which have been — due to their tractability — the dominating process class prior to the new era of highly over-parametrized dynamic models. Indeed, we prove that generic classes of diffusion models can be viewed as infinite-dimensional affine processes, which in this setup coincide with polynomial processes. A key ingredient in establishing this result is again the signature process. This then allows one to obtain power series expansions for expected values of analytic functions of the process’ marginals, which also apply to neural or signature SDEs. In particular, the expected signature can be computed via polynomial technology.

**11:15-1:00** (Module 1) Andrzej Ruszczyński (Department of Management Science and Information Systems, Rutgers)

**Title**: Foundations of Dynamic Risk Measurement

**Description**: We shall discuss the background of the theory of measures of risk, their main properties, and examples. Special attention will be paid to the dual representation, law-invariance, and the Kusuoka representation. Then we shall discuss issues associated with measuring risk of sequences, in particular: time consistency and the local property. We shall extend the dual representation to dynamic risk measures. Finally, we shall discuss risk measurement in continuous time.
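One common way the time-consistency requirement mentioned above is expressed is through nested compositions of one-step conditional risk measures (a standard formulation, shown here only for orientation):

```latex
\rho_{t,T}(Z_t, \dots, Z_T)
  = Z_t + \rho_t\Big( Z_{t+1} + \rho_{t+1}\big( Z_{t+2} + \cdots + \rho_{T-1}(Z_T) \big) \Big),
% where each \rho_s is a one-step conditional risk measure; time consistency
% of the family \{\rho_{t,T}\} corresponds to this recursive structure.
```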

**2:00-4:00** (Module 2) Dacheng Xiu (Booth, University of Chicago)

**Title**: Predicting returns with text data and (Re-)Imag(in)ing Price Trends

**Description**: The common theme of these two talks is on machine learning applications to investment with alternative data.

**Lecture 1**: We introduce a new text-mining methodology that extracts information from news articles to predict asset returns. Unlike more common sentiment scores used for stock return prediction (e.g., those sold by commercial vendors or built with dictionary-based methods), our supervised learning framework constructs a score that is specifically adapted to the problem of return prediction. Our method proceeds in three steps: 1) isolating a list of terms via predictive screening, 2) assigning prediction weights to these words via topic modeling, and 3) aggregating terms into an article-level predictive score via penalized likelihood. We derive theoretical guarantees on the accuracy of estimates from our model with minimal assumptions. In our empirical analysis, we study one of the most actively monitored streams of news articles in the financial system–the Dow Jones Newswires–and show that our supervised text model excels at extracting return-predictive signals in this context. Information in newswires is assimilated into prices with an inefficient delay that is broadly consistent with limits-to-arbitrage (i.e., more severe for smaller and more volatile firms) yet can be exploited in a real-time trading strategy with reasonable turnover and net of transaction costs.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3389884
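The three-step structure (screening, weighting, aggregation) can be sketched on synthetic data. Everything below, including the screening rule, the weighting, and the data, is a toy stand-in for the actual estimators in the paper:

```python
import numpy as np

# Toy sketch of a three-step supervised text-scoring pipeline: screen
# predictive terms, weight them, aggregate into an article-level score.
rng = np.random.default_rng(1)
n_articles, n_terms = 200, 50
counts = rng.poisson(1.0, size=(n_articles, n_terms))     # term counts per article
signal = counts[:, 0] - counts[:, 1]                      # terms 0 and 1 drive returns
returns = np.sign(signal + rng.normal(0.0, 1.0, n_articles))

# Step 1: predictive screening -- keep the terms whose average frequency
# differs most between positive-return and negative-return articles.
pos = counts[returns > 0].mean(axis=0)
neg = counts[returns <= 0].mean(axis=0)
screened = np.argsort(-np.abs(pos - neg))[:5]

# Step 2: assign each screened term a weight (a simple normalized frequency
# difference here, standing in for the topic-model weights of the paper).
weights = (pos - neg)[screened]
weights = weights / np.abs(weights).sum()

# Step 3: aggregate term counts into an article-level predictive score.
scores = counts[:, screened] @ weights
print(np.corrcoef(scores, returns)[0, 1])  # positive in-sample correlation
```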

**Lecture 2**: We reconsider the idea of trend-based predictability using methods that flexibly learn price patterns that are most predictive of future returns, rather than testing hypothesized or pre-specified patterns (e.g., momentum and reversal). Our raw predictor data are images—stock-level price charts—from which we elicit the price patterns that best predict returns using machine learning image analysis methods. The predictive patterns we identify are largely distinct from trend signals commonly analyzed in the literature, give more accurate return predictions, translate into more profitable investment strategies, and are robust to a battery of specification variations. They also appear context-independent: Predictive patterns estimated at short time scales (e.g., daily data) give similarly strong predictions when applied at longer time scales (e.g., monthly), and patterns learned from US stocks predict equally well in international markets.

**Tuesday, July 13**

Schedule and Descriptions

**9:00-11:00** (Module 2) Huyen Pham (Université de Paris Diderot)

**Lecture 1**: Control of McKean-Vlasov systems and applications

This lecture is concerned with the optimal control of McKean-Vlasov equations, which has seen a surge of interest since the emergence of mean-field game theory. Such a control problem corresponds to the asymptotic formulation of an N-player cooperative game under mean-field interaction, and can also be viewed as an influencer strategy problem over an interacting large population. It finds various applications in economics, finance, and the social sciences for modelling the motion of socially interacting individuals and herd behavior. It is also relevant for dealing with intermittency questions arising typically in risk management.

In the first part, I will focus on the discrete-time case, which extends the theory of Markov decision processes (MDP) to the mean-field interaction context. We give an application with explicit results to a problem of targeted advertising via social networks.

The second part is devoted to the continuous-time framework. We shall first consider the important class of linear-quadratic McKean-Vlasov (LQMKV) control problems, which provides a major source of examples and applications. We show a direct and elementary method for solving LQMKV explicitly, based on a mean version of the well-known martingale optimality principle in optimal control together with the completion-of-squares technique. Next, we present the dynamic programming approach (in other words, the time-consistency approach) for the control of general McKean-Vlasov dynamics. In particular, we introduce the recent mathematical tools that have been developed in this context: differentiability in the Wasserstein space of probability measures, the Itô formula along a flow of probability measures, and the Master Bellman equation.

**Lecture 2**: Deep learning algorithms for mean-field control problems

Machine learning methods for solving nonlinear partial differential equations (PDEs) and control problems are hot topics, and different algorithms proposed in the literature show efficient numerical approximation in high dimension. This lecture will present recent deep learning schemes for solving mean-field control problems and the corresponding PDEs in the Wasserstein space of probability measures. Numerical tests will be presented for examples including mean-field systemic risk and the mean-variance problem.

**11:15-1:00**: (Module 1) Tomasz Bielecki (IIT)

**Title**: Risk Sensitive Markov Decision Processes

**Description**: We shall relate the risk sensitive criterion to the entropic risk measure. Some relevant properties of the risk sensitive criterion will be presented. A discussion of Markov decisions processes will follow, mostly in discrete time. A study of a finite time horizon risk sensitive MDP subject to model uncertainty will be presented as well.
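For reference, the entropic risk measure that the risk-sensitive criterion will be related to has the standard form (for a loss $X$ and risk-aversion parameter $\gamma > 0$):

```latex
\rho_\gamma(X) = \frac{1}{\gamma} \log \mathbb{E}\big[ e^{\gamma X} \big],
% which expands as \mathbb{E}[X] + \tfrac{\gamma}{2}\operatorname{Var}(X) + o(\gamma)
% for small \gamma, making the risk-aversion interpretation explicit.
```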

**2:00-4:00**: (Module 2) Xin Guo (IEOR, UC Berkeley)

**Lecture 1**: Generative Adversarial Networks: An Optimal Transport and Game Perspective

Generative Adversarial Networks (GANs) have enjoyed tremendous success in image generation and processing, and have recently attracted growing interest in financial modeling. In this tutorial we will introduce GANs from the perspective of mean field games (MFGs) and optimal transport (OT). We will first discuss the well-posedness of GANs as a minmax game, then the variational structure of GANs, as well as GANs’ connection with mean-field games. We will next demonstrate how this game perspective enables a GANs-based algorithm (MFGANs) to solve high-dimensional mean-field games efficiently, with the two neural networks trained in an adversarial way.

This new perspective will naturally lead to an analytical connection between GANs and Optimal Transport (OT) problems, for which we will provide sufficient conditions for the minimax games of GANs to be reformulated in the framework of OT.
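For orientation, the minmax game referred to here is the classical GAN objective of Goodfellow et al.:

```latex
\min_G \max_D \;
\mathbb{E}_{x \sim p_{\mathrm{data}}}\big[ \log D(x) \big]
+ \mathbb{E}_{z \sim p_z}\big[ \log\big( 1 - D(G(z)) \big) \big],
% where the generator G maps latent noise z to samples and the discriminator D
% estimates the probability that its input came from the data distribution.
```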

**Lecture 2**: Convergence of GANs Training: A Stochastic Approximation and Control Approach

Despite the popularity of Generative Adversarial Networks (GANs), there are well-recognised and documented issues with GANs training. In this second part of the tutorial, we will first introduce a stochastic differential equation approximation approach for GANs training. We will next demonstrate the connection of this SDE approach with the classical Newton's method, and then show how this approach enables studies of the convergence of GANs training, as well as analysis of hyperparameter training for GANs in a stochastic control framework.

**Wednesday, July 14**

Schedule & Description

**9:00-10:45**: (Module 1) Andrzej Ruszczyński (Department of Management Science and Information Systems, Rutgers)

**Title**: Markov Control Problems with Markov Risk Measures

**Description**: We shall adapt the theory of dynamic risk measurement to Markov systems and we shall introduce the concept of a Markov risk measure. We shall study the properties of such measures and develop optimality conditions and methods for three classes of control problems: finite horizon, infinite horizon with discount, and infinite horizon for transient systems. Finally, we shall mention two continuous-time models: a controlled jump process and a controlled diffusion process.

**Thursday, July 15**

Schedule & Description

**9:00-10:45**: (Module 1) Andrzej Ruszczyński (Department of Management Science and Information Systems, Rutgers)

**Title**: Risk-Averse Reinforcement Learning

**Description**: We shall discuss large-scale risk-averse Markov control problems with the use of value function approximations by a linear function of state features. We construct a projected risk-averse dynamic programming equation and study its properties. Then we shall present risk-averse counterparts of the basic and multi-step methods of temporal differences and discuss their convergence with probability one. Finally, we shall mention a risk-averse SARSA approach with the use of Q-factors.
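The risk-neutral baseline that these risk-averse temporal-difference methods generalize is tabular TD(0). A minimal sketch (the chain, rewards, and step size are invented for illustration):

```python
import numpy as np

# Risk-neutral tabular TD(0) on a deterministic 2-state Markov chain:
# state 0 -> state 1 (reward 1), state 1 -> state 0 (reward 0), gamma = 0.9.
P = {0: 1, 1: 0}
R = {0: 1.0, 1: 0.0}
gamma, alpha = 0.9, 0.1
V = np.zeros(2)

state = 0
for _ in range(20_000):
    nxt = P[state]
    # TD(0) update: move V(s) toward the one-step bootstrapped target.
    V[state] += alpha * (R[state] + gamma * V[nxt] - V[state])
    state = nxt

# Exact solution: V(0) = 1 + 0.9 V(1), V(1) = 0.9 V(0)  =>  V(0) = 1/0.19.
print(V)  # ≈ [5.263, 4.737]
```

The risk-averse variants discussed in the lecture replace the expected one-step target with a one-step conditional risk measure, and the linear function approximation replaces the table V.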

**Friday, July 16**

Workshop Schedule (Module 1)

**9:00-10:00** Alexander Shapiro (Industrial and Systems Engineering, Georgia Tech)

**Title**: Computational Approaches to Solving Multistage Stochastic Programs

**Abstract**: From a generic point of view, even linear Multistage Stochastic Programming (MSP) appears to be computationally intractable. However, this does not mean that some classes of MSP cannot be solved with a reasonable computational effort. In this talk we discuss some recent advances in computational approaches to solving convex MSP problems. In some applications the considered MSP programs have periodical behavior. We demonstrate that in such cases it is possible to drastically reduce the number of stages by introducing a periodical analog of the so-called Bellman equations used in Markov Decision Processes and Stochastic Optimal Control. Furthermore, we describe a primal-dual variant of a cutting-plane-type algorithm applied to the constructed periodical Bellman equations. We discuss the risk-neutral and risk-averse/distributionally-robust settings, and the sample complexity of discretization of the distribution of the corresponding random data process.

**10:00**–**11:00** Darinka Dentcheva (Mathematical Sciences, Stevens Institute of Technology)

**Title**: Subregular Recourse in Multistage Stochastic Optimization

**Abstract**: We discuss nonlinear multistage stochastic optimization problems in spaces of integrable functions. The problems may include nonlinear dynamics and general objective functionals, with dynamic risk measures as a particular case. We present an analysis of the causal operators describing the dynamics of the system and of the Clarke subdifferential for a penalty function involving such operators. We introduce the concept of subregular recourse in nonlinear multistage stochastic optimization and establish subregularity of the resulting systems in two formulations: with built-in nonanticipativity and with explicit nonanticipativity constraints. Optimality conditions for both formulations and their relations will be presented. This is joint work with Andrzej Ruszczynski, Rutgers University.

**11:00-12:00** Igor Cialenco (Department of Mathematics, Illinois Institute of Technology)

**Title**: Adaptive-Robust Stochastic Control

**Abstract**: Motivated by various real-world problems, we will discuss a recently developed methodology, called adaptive-robust control, for solving a discrete-time stochastic control problem subject to model uncertainty, also known as Knightian uncertainty. The uncertainty comes from the fact that the controller does not know the true probability law of the underlying model, only that it belongs to a certain family of probability laws. We develop a learning algorithm that reduces the model uncertainty through progressive learning about the unknown system. One of the key components of the proposed methodology is the recursive construction of confidence sets for the unknown parameters of a general ergodic Markov process. This, in particular, allows us to establish the Bellman system of equations corresponding to the original stochastic control problem.

We will compare this stochastic control framework to some classical stochastic control methods that deal with model uncertainty, such as, robust control, adaptive control, strong robust control and Bayesian adaptive control.

Finally, we will apply the proposed stochastic control framework to the classical finance problem of optimal portfolio allocation. We will discuss both time-consistent and time-inconsistent terminal Markovian control problems. We provide a machine learning algorithm for numerically solving some of these problems, such as the dynamic Markowitz mean-variance portfolio selection problem with the modern twist of model uncertainty.

**12:00-1:00** Mert Gurbuzbalaban (Department of Management Science and Information Systems, Rutgers Business School)

**Title**: Momentum Acceleration Under Random Gradient Noise: From Convex to Non-Convex Optimization

**Abstract**: For many large-scale optimization and machine learning problems, first-order methods and their accelerated variants based on momentum have been a leading approach for computing low-to-medium accuracy solutions because of their cheap iterations and mild dependence on the problem dimension and data size. Even though the momentum-based accelerated gradient (AG) methods proposed by Nesterov for convex optimization converge provably faster than gradient descent (GD) in the absence of noise, the comparison is no longer clear in the presence of gradient noise.

In the first part of the talk, we focus on GD and AG methods for minimizing convex functions when the gradient has random errors in the form of additive i.i.d. noise. We study the trade-offs between convergence rate and robustness to gradient errors in designing a first-order algorithm. Our results show that AG can achieve acceleration while being more robust to random gradient errors. Our framework also leads to practical algorithms that can perform better than other state-of-the-art methods in the presence of random gradient noise.

In the second part of the talk, we focus on non-convex optimization and study the stochastic gradient Hamiltonian Monte Carlo (SGHMC) algorithm, which is a popular variant of the stochastic gradient with momentum where a controlled and properly scaled Gaussian noise is added to the unbiased gradient estimates to steer them towards a global minimum. We obtain first-time non-asymptotic global convergence guarantees for SGHMC for both empirical risk and population risk optimization problems. Our results demonstrate that momentum-based Monte Carlo algorithms can lead to better iteration complexity as well as generalization performance compared to known guarantees for the (reversible) standard stochastic gradient Langevin Monte Carlo methods, justifying the use of momentum in the context of non-convex optimization and non-convex learning further.
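The noiseless GD-versus-AG comparison that the first part of the talk starts from can be sketched on a strongly convex quadratic. The problem, step sizes, and momentum parameter below are illustrative only; setting `sigma > 0` adds the i.i.d. gradient noise whose effect the talk analyzes:

```python
import numpy as np

# Gradient descent vs. Nesterov's accelerated gradient on a strongly convex
# quadratic f(x) = 0.5 x' A x, with optional additive i.i.d. gradient noise.
rng = np.random.default_rng(0)

def noisy_grad(x, A, sigma):
    return A @ x + sigma * rng.standard_normal(x.shape)

d = 20
A = np.diag(np.linspace(1.0, 100.0, d))   # Hessian, condition number 100
L_smooth, mu = 100.0, 1.0
sigma = 0.0                                # raise above 0 to add gradient noise

x_gd = np.ones(d)
x_ag = np.ones(d)
y = x_ag.copy()
kappa = L_smooth / mu
beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)   # momentum parameter

for _ in range(300):
    x_gd = x_gd - (1.0 / L_smooth) * noisy_grad(x_gd, A, sigma)
    x_new = y - (1.0 / L_smooth) * noisy_grad(y, A, sigma)
    y = x_new + beta * (x_new - x_ag)                # Nesterov extrapolation
    x_ag = x_new

f = lambda x: 0.5 * x @ A @ x
print(f(x_gd), f(x_ag))  # AG reaches a much smaller objective value
```

With noise switched on, both methods stall at a noise-dependent floor, which is exactly the rate-versus-robustness trade-off the abstract describes.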

**Week 4: July 19-22**

**Module 1:** **Models for climate change with ambiguity and misspecification concerns**

**Organizer: Lars Hansen** (Economics, University of Chicago)

**Module 2:** **Games with ambiguity**

**Organizer: Peter Klibanoff** (Managerial Economics & Decision Sciences, Kellogg School of Management, Northwestern University)

Module Summary

The tutorial will provide an introduction to some recent approaches to game theory (the theory of strategic interactions) when players may be averse to ambiguity (subjective uncertainty about probabilities). It includes lectures on decision-theoretic models of ambiguity-averse preferences and theories of how to update such preferences in a dynamically consistent manner after observing new information, in addition to lectures on game theory and mechanism design with these ambiguity concerns.

**Monday, July 19**

Schedule

**9:00-10:00, 10:30-11:30** Massimo Marinacci (Bocconi University, Professor in the Department of Decision Sciences)

**Title**: Decision Theory Tools for Uncertainty, including Ambiguity and Misspecification Concerns (Modules 1 & 2)

**1:00-2:00, 2:30-3:30** Lars Peter Hansen (University of Chicago, David Rockefeller Distinguished Service Professor in Economics, Statistics, the Booth School of Business, and the College)

**Title**: Dynamic Decision Theory Under Uncertainty: Tools and Applications (Module 1)

**Tuesday, July 20**

Schedule

**9:00-10:00, 10:30-11:30** Eran Hanany (Tel Aviv University, Professor in School of Industrial Engineering)

**Title**: Dynamically Consistent Updating, including under Ambiguity (Module 2)

**1:00-2:00, 2:30-3:30** William Brock (University of Wisconsin-Madison, Professor of Economics)

**Title**: Economics of Climate Change in the Face of Uncertainty (Module 1)

**Wednesday, July 21**

Schedule

**9:00-10:00, 10:30-11:30** Peter Klibanoff (Northwestern University Kellogg School of Management, Professor of Managerial Economics and Decision Sciences)

**Title**: Games with Ambiguity (Module 2)

**1:00-2:00, 2:30-3:30** Michael Barnett (Arizona State University, Assistant Professor in Finance)

**Title**: Solving and Assessing Economic Models of Climate Change and Accounting for Uncertainty (Module 1)

**Thursday, July 22**

Schedule

**9:00-10:00** Sujoy Mukerji (Queen Mary University of London, Professor in the School of Economics and Finance)

**Title**: Mechanism Design with Ambiguity (Module 2)