
Opening Conference

Vistas in the Applied Mathematical Sciences

October 7-9, 2020

This conference will explore IMSI’s scientific themes: Climate Science, Data and Information, Health Care and Medicine, Materials Science, Quantum Information, and Uncertainty. The following speakers are confirmed:

  • Scott Aaronson (University of Texas, Austin)
  • Tony Cai (University of Pennsylvania)
  • Omar Ghattas (University of Texas, Austin)
  • Kristin Lauter (Microsoft)
  • Claude Le Bris (École des Ponts)
  • Pierre-Louis Lions (Collège de France)
  • Andrew Lo (MIT)
  • José Scheinkman (Columbia)
  • Karen Willcox (University of Texas, Austin)
  • Laure Zanna (NYU)

This conference will take place via Zoom.

October 7

All times CDT
9:00 Kevin Corlette
University of Chicago and Director, IMSI
Welcome
9:10 Robert Zimmer
President, University of Chicago
Opening remarks
11:00-12:00 Omar Ghattas
University of Texas, Austin
Parsimonious structure-exploiting deep neural network surrogates for Bayesian inverse problems and optimal experimental design

In an inverse problem, one seeks to infer unknown parameters or parameter fields from measurements or observations of the state of a natural or engineered system. Such problems are fundamental to many fields of science and engineering: often, available models possess unknown or uncertain input parameters that must be inferred from experimental or observational data. The Bayesian framework for inverse problems accounts for uncertainty in the inferred parameters stemming from uncertainties in the observational data, the model, and any prior knowledge. This leads to the meta-problem of optimal experimental design (OED): how do we optimize the data acquisition so that the uncertainty in the recovered parameters is minimized? In both Bayesian inversion (BI) and OED, the forward model must be solved numerous times—as many as millions—to characterize the uncertainty in the parameters. BI and OED problems governed by large-scale complex models in high parameter dimensions (such as nonlinear PDEs with uncertain infinite-dimensional parameter fields) quickly become prohibitive.
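For orientation, the Bayesian formulation described above can be written as follows (a standard setup with Gaussian observational noise, one common choice; not necessarily the talk's exact formulation). Given the parameter-to-observable map \(F\) and data \(d\),

\[
\pi_{\mathrm{post}}(m \mid d) \;\propto\; \exp\!\Big( -\tfrac{1}{2} \big\| F(m) - d \big\|^{2}_{\Gamma_{\mathrm{noise}}^{-1}} \Big)\, \pi_{\mathrm{prior}}(m),
\]

and characterizing this posterior (for instance by sampling) is what demands the many forward solves; OED then chooses the data acquisition to make the posterior uncertainty as small as possible.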

Efficient evaluation of the parameter-to-observable (p2o) map, defined by solution of the forward model, is the key to making BI and OED tractable. Surrogate approximations of p2o maps have the potential to greatly accelerate BI and OED, provided that the p2o map can be accurately approximated using (far) fewer forward model solves than would be required for solving the BI or OED problem using the full p2o map. Unfortunately, constructing such surrogates presents significant challenges when the parameter dimension is high and the forward model is expensive. Deep neural networks (DNNs) have emerged as leading contenders for overcoming these challenges. We demonstrate that black-box application of DNNs for problems with infinite-dimensional parameter fields leads to poor results, particularly in the common situation when training data are limited due to the expense of the model. However, by constructing a network architecture that is adapted to the geometry and intrinsic low-dimensionality of the p2o map as revealed through adjoint PDEs, one can construct a “parsimonious” DNN surrogate with superior approximation properties using only limited training data. For training the DNN, we introduce the low-rank saddle-free Newton method for stochastic optimization, and show that it outperforms first-order methods such as Adam and stochastic gradient descent. Examples from climate modeling are presented.

This work is joint with Tom O’Leary-Roseberry, Peng Chen, Umberto Villa, and Nick Alger.
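As an illustration of the idea of exploiting intrinsic low-dimensionality, here is a minimal sketch in Python (ours, not the authors' derivative-informed architecture; the synthetic map, its dimensions, and the use of sampled Jacobians in place of adjoint PDE solves are all assumptions for illustration):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for an expensive parameter-to-observable (p2o) map:
# a 1000-dimensional "parameter" influences 10 observables only through a
# 5-dimensional hidden subspace, mimicking intrinsic low-dimensionality.
dim_m, dim_obs, dim_hidden = 1000, 10, 5
W = rng.standard_normal((dim_m, dim_hidden)) / np.sqrt(dim_m)
V = rng.standard_normal((dim_hidden, dim_obs))
p2o = lambda M: np.tanh(M @ W) @ V

M_train = rng.standard_normal((200, dim_m))      # "limited" training data
D_train = p2o(M_train)

# Stand-in for the adjoint-informed reduction: sample Jacobians of the map
# (in the talk's setting these would come from adjoint PDE solves) and keep
# the dominant left singular vectors as a reduced input basis.
def jacobian(m):
    s = 1.0 - np.tanh(m @ W) ** 2                # (dim_hidden,)
    return (W * s) @ V                           # (dim_m, dim_obs)

G = np.hstack([jacobian(m) for m in M_train[:20]])
U, _, _ = np.linalg.svd(G, full_matrices=False)
P = U[:, :10]                                    # reduced input basis

# Train a small network on 10 reduced coordinates instead of 1000 raw inputs.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                         random_state=0).fit(M_train @ P, D_train)

M_test = rng.standard_normal((100, dim_m))
err = np.linalg.norm(surrogate.predict(M_test @ P) - p2o(M_test))
print("relative test error:", err / np.linalg.norm(p2o(M_test)))
```

Training in the reduced coordinates is what makes the small training set sufficient in this toy setting; in the black-box alternative the network would have to learn directly from all 1000 inputs.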

1:30-2:30 Andrew Lo
MIT
How Data Science and Financial Engineering Can Help Cure Cancer

Funding for early-stage biomedical innovation has been declining at the same time that medical breakthroughs seem to be occurring at ever increasing rates. One explanation for this counterintuitive trend is that increasing scientific knowledge can actually lead to greater economic risk for investors in the life sciences. While the Human Genome Project, high-throughput screening, genetic biomarkers, immunotherapies, and gene therapies have had a tremendously positive impact on biomedical research and, consequently, patients, they have also increased the cost and complexity of the drug development process, causing investors to shift their assets to more attractive investment opportunities. In this talk, Prof. Lo will describe how data science and financial engineering are being used to forecast more accurately the financial risks and rewards of drug discovery and development, thereby reducing the risk and increasing the attractiveness of biomedical innovation so as to bring new and better therapies to patients faster.
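A toy calculation, ours rather than the talk's, illustrates the portfolio logic behind this line of work: individual drug programs are very risky, but a fund holding many roughly independent programs has a high probability of at least one success, which is what financial engineering can price and finance. The success probability below is an assumed, purely illustrative number.

```python
# Toy illustration (not from the talk): probability that a portfolio of n
# independent drug development programs, each with success probability p,
# yields at least one approved drug. p = 0.05 is an assumed figure.
p = 0.05
for n in (1, 10, 50, 150):
    print(f"{n:3d} programs -> P(at least one success) = {1 - (1 - p) ** n:.3f}")
```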
3:00-4:00 Kristin Lauter
Microsoft Research
Private AI: Machine Learning on Encrypted Data

As the world adopts Artificial Intelligence, the privacy risks are many. AI can improve our lives, but may leak or misuse our private data. Private AI is based on Homomorphic Encryption (HE), a new encryption paradigm which allows the cloud to operate on private data in encrypted form, without ever decrypting it, enabling private training and private prediction. Our 2016 ICML CryptoNets paper showed for the first time that it was possible to evaluate neural nets on homomorphically encrypted data, and opened new research directions combining machine learning and cryptography. The security of Homomorphic Encryption is based on hard problems in mathematics involving lattices, a candidate for post-quantum cryptography. Cyclotomic number rings are a good source of the lattices used in practice, which leads to interesting new problems in number theory. This talk will explain Homomorphic Encryption and Private AI, and show demos of HE in action.
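To make the homomorphic property concrete, here is a classic toy example in Python (our illustration; it uses textbook RSA's multiplicative homomorphism, not the lattice-based schemes the talk is about, and the tiny key is completely insecure):

```python
# Toy homomorphism demo: with unpadded "textbook" RSA, multiplying two
# ciphertexts yields an encryption of the product of the plaintexts.
# Requires Python 3.8+ for pow(e, -1, m). Illustrative only.
p, q, e = 61, 53, 17                  # tiny, insecure parameters
n = p * q                             # public modulus
d = pow(e, -1, (p - 1) * (q - 1))     # private exponent

def enc(m): return pow(m, e, n)
def dec(c): return pow(c, d, n)

m1, m2 = 7, 9
c = (enc(m1) * enc(m2)) % n           # operate on encrypted data...
assert dec(c) == m1 * m2              # ...and decrypt the product: 63
print(dec(c))
```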

October 8

9:00 Takis Souganidis
University of Chicago and Scientific Advisor, IMSI
Welcome
9:05 Ka Yee Lee
Provost, University of Chicago
Opening remarks
9:15-10:15 Claude Le Bris
Ecole des Ponts ParisTech
Multiscale Finite Element Methods: some recent contributions

The talk will first review various discretization approaches dedicated to problems in the engineering sciences where different length scales play a role. It will next proceed with some recent contributions, in particular regarding the simulation of materials with defects.
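For orientation, a standard formulation of the multiscale finite element idea (stated here as background; the talk's precise setting may differ): for a diffusion problem with a rapidly oscillating coefficient,

\[
-\nabla \cdot \big( a(x/\varepsilon)\, \nabla u \big) = f \quad \text{in } \Omega,
\]

the method replaces polynomial basis functions by functions \(\phi_i\) that solve the homogeneous equation locally on each coarse element \(K\),

\[
-\nabla \cdot \big( a(x/\varepsilon)\, \nabla \phi_i \big) = 0 \quad \text{in } K, \qquad \phi_i = \text{standard P1 basis function on } \partial K,
\]

so that the fine-scale oscillations are encoded in the basis while the global solve remains coarse.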
10:45-11:45 Tony Cai
University of Pennsylvania
When Statistics Meets Computing

In the conventional statistical framework, the goal is to develop optimal inference procedures, where optimality is understood with respect to the sample size and parameter space. When the dimensionality of the data becomes large, as in many contemporary applications, the computational concerns associated with the statistical procedures come to the forefront. A fundamental question is: Is there a price to pay for statistical performance if one only considers computable (polynomial-time) procedures? After all, statistical methods are useful in practice only if they can be computed within a reasonable amount of time.

In this talk, we discuss the interplay between statistical accuracy and computational efficiency in two specific problems: submatrix localization and sparse matrix detection based on a noisy observation of a large matrix. The results show some interesting phenomena and point to directions that are worthy of further investigation.
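As a point of reference for the first of these problems, the standard submatrix localization model (stated here for orientation, not as a summary of the talk's results) is

\[
X_{ij} = \lambda \,\mathbf{1}\{\, i \in I^*, \ j \in J^* \,\} + Z_{ij}, \qquad Z_{ij} \overset{\text{iid}}{\sim} N(0,1),
\]

where \(\lambda > 0\) is the signal strength and \((I^*, J^*)\) index a hidden submatrix with elevated mean. The statistical-versus-computational question is how large \(\lambda\) must be for the submatrix to be recoverable by any procedure versus by polynomial-time procedures.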

1:15-2:15 Scott Aaronson
University of Texas at Austin
Quantum Computational Supremacy and Its Applications

Last fall, a team at Google announced the first-ever demonstration of “quantum computational supremacy”—that is, a clear quantum speedup over a classical computer for some task—using a 53-qubit programmable superconducting chip called Sycamore. Google’s accomplishment drew on a decade of research in my field of quantum complexity theory. This talk will discuss questions like: what exactly was the (contrived) problem that Sycamore solved? How does one verify the outputs using a classical computer? And how confident are we that the problem is classically hard—especially in light of subsequent counterclaims by IBM and others? I’ll end with a possible application that I’ve been developing for Google’s experiment: namely, the generation of trusted public random bits, for use (for example) in cryptocurrencies.
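For context on the verification question raised above: the Google experiment scored its samples with the linear cross-entropy benchmark, a standard quantity in the random circuit sampling literature (given here as background, not as a summary of the talk),

\[
\mathcal{F}_{\mathrm{XEB}} \;=\; \frac{2^{n}}{k} \sum_{i=1}^{k} p_{\mathrm{ideal}}(x_i) \;-\; 1,
\]

where \(x_1, \dots, x_k\) are the samples produced by the device and \(p_{\mathrm{ideal}}\) is the output distribution of the ideal \(n\)-qubit circuit, computed classically (itself feasible only for moderate \(n\)). A faithful device gives \(\mathcal{F}_{\mathrm{XEB}} \approx 1\), while uniformly random samples give \(\mathcal{F}_{\mathrm{XEB}} \approx 0\).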
2:45-3:45 Karen Willcox
University of Texas at Austin
Toward predictive digital twins: From physics-based modeling to scientific machine learning

A digital twin is an evolving virtual model that mirrors an individual physical asset throughout its lifecycle. Key to the digital twin concept is the ability to sense, collect, analyze, and learn from the asset’s data. This talk will highlight the foundational mathematical, statistical, and computational challenges that must be overcome to achieve predictive digital twins for societally critical applications across science and engineering. Digital twins hold the promise to underpin intelligent automation by supporting data-driven decision making and enabling asset-specific analysis, but this can only be achieved through a synergistic combination of predictive physics-based modeling, data-driven machine learning, and uncertainty quantification.

October 9

9:00 Doug Simpson
University of Illinois at Urbana-Champaign and Associate Director, IMSI
Welcome
9:05 Angela Olinto
Dean of the Physical Sciences Division, University of Chicago
Opening remarks
9:15-10:15 José Scheinkman
Columbia University
Bubbles in financial and in art markets

I will discuss a mathematical approach to the question of asset market bubbles and give some empirical illustrations, including one from the art market.
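One standard formalization, given here for orientation (the talk's framework may differ): the bubble component of an asset is the gap between its price and its fundamental value,

\[
b_t \;=\; p_t \;-\; \mathbb{E}_t \left[ \sum_{s > t} \frac{D_s}{(1+r)^{s-t}} \right],
\]

where \(D_s\) denotes future payoffs and \(r\) a discount rate; the mathematical question is when \(b_t > 0\) can persist in equilibrium, for instance under heterogeneous beliefs and constraints on short sales.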
10:45-11:45 Laure Zanna
Courant Institute of Mathematical Sciences, New York University
Blending physics and machine learning to improve climate projections

Numerical simulations used for weather and climate predictions solve approximations of the governing laws of fluid motions. The computational cost of these simulations limits the accuracy of the predictions. Uncertainties in the simulations and predictions ultimately originate from the poor or missing representation of processes, such as turbulence, that are not resolved on the numerical grid of global climate models. I will show that machine learning (ML) algorithms with imposed physical constraints are good candidates for improving the representation of processes that occur below the scales resolved by global models. In this talk, I will propose new representations of ocean turbulence based on two different ML approaches using data from high-resolution simulations. Specifically, I will discuss how to use relevance vector machines to discover equations for the subgrid forcing, and convolutional neural networks to derive a stochastic representation of the subgrid forcing. The new models of turbulent processes are interpretable and/or encapsulate physics, and lead to improved simulations of the ocean. Our results simultaneously open the door to the discovery of new physics from data and the improvement of numerical simulations of oceanic and atmospheric flows.
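As a minimal sketch of the equation-discovery idea (ours, with synthetic data; the talk uses relevance vector machines trained on high-resolution ocean simulations), one can regress the subgrid forcing onto a library of candidate terms built from the resolved fields, using a sparsity-inducing Bayesian regression so that only a few terms survive:

```python
import numpy as np
from sklearn.linear_model import ARDRegression  # sparse Bayesian regression,
                                                # in the same family as RVMs

rng = np.random.default_rng(0)

# Synthetic stand-in for coarse-grained model fields at N grid points.
N = 2000
u, v, vort = rng.standard_normal((3, N))

# Library of candidate terms; in practice these are physically motivated
# combinations of the resolved fields and their derivatives.
library = {"u": u, "v": v, "vort": vort, "u*v": u * v,
           "u^2": u ** 2, "vort^2": vort ** 2, "u*vort": u * vort}
Theta = np.column_stack(list(library.values()))

# Synthetic "true" forcing depending on only two library terms, plus noise.
forcing = 0.8 * vort - 0.3 * (u * v) + 0.05 * rng.standard_normal(N)

# The sparse fit drives coefficients of irrelevant terms toward zero,
# leaving an interpretable closed-form expression for the forcing.
model = ARDRegression().fit(Theta, forcing)
for name, coef in zip(library, model.coef_):
    print(f"{name:7s} {coef:+.3f}")
```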