Uncertainty is ubiquitous in the modern world, and it raises profound challenges for any effort to model massively complex phenomena.
The construction of verifiable models is limited by error and uncertainty that enter through many channels. In the modeling of highly complex systems, there are two potential sources of predictive inaccuracy. The first stems from misestimation of the system’s initial, and possibly stochastic, conditions, or from unavoidable natural variation in the target phenomena. This is random error, which is, at least in principle, detectable and correctable. The second, systematic error, may arise from systematic misspecification of the theoretical characteristics and dynamics of the system itself and is difficult to detect empirically. However, advances in the interrogability of models could lead to stable mathematical moorings for the parametrizations and attendant weightings of the models, and thus to a sound anchoring of the basic science.
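The distinction between the two error sources can be made concrete with a minimal simulation, using hypothetical names (`TRUE_VALUE`, `noisy_measurement`) chosen for illustration: random error averages out as observations accumulate, whereas a systematic bias in the model does not.

```python
import random

random.seed(0)

TRUE_VALUE = 10.0  # the quantity the model is trying to predict

def noisy_measurement(bias=0.0, noise_sd=1.0):
    """One simulated observation: the true value plus random error
    (zero-mean noise) and systematic error (a fixed bias)."""
    return TRUE_VALUE + bias + random.gauss(0.0, noise_sd)

def mean_of(n, bias):
    """Average of n observations under a given systematic bias."""
    return sum(noisy_measurement(bias) for _ in range(n)) / n

# Random error shrinks as data accumulate: the sample mean with no
# bias converges to TRUE_VALUE as n grows.
print(abs(mean_of(10_000, bias=0.0) - TRUE_VALUE))

# Systematic error does not: no amount of averaging removes the bias,
# which is why misspecification is hard to detect from data alone.
print(abs(mean_of(10_000, bias=0.5) - TRUE_VALUE))
```

The second printed gap stays near the bias (0.5) no matter how many observations are drawn, which is the sense in which systematic error is empirically elusive while random error is, in principle, correctable.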
Model uncertainty has considerably more complex effects on human decision making, as it directly affects attitudes and behaviors not only towards uncertainty itself but also towards the lack of accurate models for it. This creates a stochastic interplay between human beings and the environment that can be extremely difficult to analyze, and that appears in all aspects of everyday life, at both personal and collective levels. The actions of a decision maker (parent, patient, voter, consumer, traveler, etc.) also depend crucially on the perceptual horizon of the situation they encounter, which adds further layers of difficulty, since model decay and misspecification themselves change over time. Constructing informed beliefs and forecasts therefore presents serious conceptual and computational challenges to both decision scientists and statisticians.
Dynamic models of single or multiple agents and learning require tractable solutions to learning and filtering problems. Although quasi-analytic solutions are available for only a limited number of such problems, there have been important advances in numerical solutions. The numerical methods involved are of interest to statisticians as a way to learn about parameters and, more generally, about the models themselves. Depending on the sector in which the applications arise, bespoke methods of analysis are needed. For example, in building economic models, these methods are applied to the people (economic agents) who seek to learn from historical data when making decisions. The objective functions of the decision makers should influence how the numerical learning algorithms are designed. For instance, economists care about how people respond to extreme events that occur infrequently but may have important, even detrimental, consequences for their well-being. On the other hand, building models for patients facing a potentially harsh treatment would require a different analysis of preferences towards uncertainty, based on more personalized profiles and a much shorter horizon.
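The simplest tractable instance of such a learning problem is conjugate Bayesian updating: an agent with a Normal prior over an unknown parameter revises its beliefs sequentially as historical observations arrive. The sketch below is illustrative, with hypothetical names (`normal_update`, `theta`); real filtering problems are typically multivariate and solved numerically, e.g. by particle methods.

```python
import random

random.seed(1)

def normal_update(prior_mean, prior_var, obs, obs_var):
    """Conjugate Normal-Normal update: posterior over an unknown
    mean after one observation with known noise variance.
    Precisions (inverse variances) add; the posterior mean is a
    precision-weighted average of prior mean and observation."""
    precision = 1.0 / prior_var + 1.0 / obs_var
    post_var = 1.0 / precision
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    return post_mean, post_var

# An agent starts with a diffuse prior and learns from history.
theta = 2.0             # unknown parameter the agent tries to learn
mean, var = 0.0, 100.0  # diffuse prior belief

for _ in range(200):
    obs = theta + random.gauss(0.0, 1.0)  # one noisy historical datum
    mean, var = normal_update(mean, var, obs, obs_var=1.0)

# The posterior concentrates near theta, and its variance shrinks
# roughly like 1/n -- beliefs become sharper as data accumulate.
print(mean, var)
```

How the agent's objective function enters is exactly the point raised above: an agent averse to rare extreme events would not summarize beliefs by the posterior mean alone but would weight the tails of this posterior, which changes how the learning algorithm should be designed.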
Although identifying and quantifying uncertainty is of paramount importance, it is equally critical to assess, manage and mitigate the emerging risks. This is where mathematics and statistics can provide the necessary, effective, rigorous models and tools to deal with these issues.