This talk was part of the program Computational Challenges and Optimization in Kinetic Plasma Physics.

What it will take to do first principles optimal design of fusion

Andrew Christlieb, Michigan State University

Wednesday, February 21, 2024

Abstract: Enabling first-principles design of fusion energy systems faces many challenges. To start with, these systems span a vast range of spatial and temporal scales. Consider ITER, the joint international fusion energy system being designed around the tokamak concept. From its smallest spatial scale lengths to the device scale, there are 8 orders of magnitude; from the smallest temporal scales to the largest time scales we need to consider, there are 14 orders of magnitude. Similar ranges of scales exist in all fusion concepts, including those at the National Ignition Facility. Complicating the situation is the fact that a fundamental model for describing the dynamics of this non-equilibrium system is the Boltzmann equation, which is six-dimensional plus time. Even ignoring the issues around temporal scales, storing a single copy of the distribution function at a 3 cubic centimeter resolution of the 480 cubic meter volume inside ITER requires 2.4 petabytes of RAM. Even this level of resolution does not resolve the smallest spatial scales within the system, let alone meet the additional requirements of time stepping and of making the solution high order so that we trust the answer. Doubling the resolution requires 2^6 times more pieces of information; hence, the goal of first-principles optimal design of fusion is plagued by the curse of dimensionality. In addition, existing plasma modeling tools based on a reduced magnetohydrodynamic (MHD) representation take weeks to months for a single run, so optimal design with traditional plasma codes is not possible. This grand challenge demands new mathematical representations, starting with accurate sparse representations of kinetic models and ending with structure-preserving surrogate models that can be trusted inside optimizers.
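As a rough check, the petabyte-scale figure above can be reproduced with back-of-envelope arithmetic. This is only a sketch: the 125^3-point velocity grid is an illustrative assumption chosen here (the abstract does not state a velocity-space resolution), and the spatial and volume figures are taken directly from the text.

```python
# Back-of-envelope reproduction of the memory estimate for storing one copy
# of a 6D distribution function inside ITER. The velocity-grid size is an
# assumption for illustration; the volume and spatial resolution are as quoted.

volume_m3 = 480                                  # plasma volume inside ITER (quoted)
cell_cm3 = 3                                     # spatial resolution (quoted)
spatial_cells = volume_m3 * 100**3 / cell_cm3    # 480e6 cm^3 / 3 cm^3 = 1.6e8 cells

velocity_points = 125**3                         # ASSUMED velocity grid per spatial cell
bytes_per_value = 8                              # one double-precision value per point

total_bytes = spatial_cells * velocity_points * bytes_per_value
print(f"{total_bytes / 1e15:.1f} PB")            # ~2.5 PB, same order as the quoted 2.4 PB

# The curse of dimensionality: doubling resolution in all 6 phase-space
# dimensions multiplies the storage by 2^6.
print(f"doubling every dimension costs x{2**6} more storage")
```

The point of the exercise is that even a coarse, fixed-grid kinetic representation lands in the petabyte range before any time stepping or high-order reconstruction is considered.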
We need surrogates that are cheap, efficient, and sufficiently accurate to guide design decisions. In a multiphysics setting this means we must be able to link together collections of surrogates and “trust” the resulting outcome; hence, the surrogates have to preserve key physics to ensure that the coupled approximation remains stable. Surrogates of all forms depend on data. To get there, we need new methods for generating data that capture high-fidelity physics at moderate cost. These could come in the form of greedy reduced basis methods, low-rank representations, mixed-precision computing, asymptotic-preserving methods, blended computing that hybridizes structure-preserving machine learning surrogates with traditional scientific computing, cheap and accurate error indicators, etc. At the heart of this list is the key idea that we need families of methods and models that can bootstrap to a collection of trusted surrogates for optimal design. In addition, we need new approaches to multi-fidelity uncertainty quantification and optimal design that can leverage unordered models in the decision-making process. To tackle a problem of this scale, the key step is the design of intermediate, moderate challenge problems that bring in all aspects of the overarching goal without introducing every aspect of these multiphysics problems at once. To that end, toward the end of my talk, I will discuss a proposed set of radiation transport problems that can serve as a starting point for building tools that enable multiphysics surrogates of the kind we need, along with APIs for interfacing with UQ frameworks. There is also a need for trusted ground-truth data for training surrogates, so that those who want to focus on surrogates can.
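As a minimal illustration of the low-rank representations mentioned above, the sketch below compresses a 1D-in-space, 1D-in-velocity phase-space distribution via truncated SVD. The perturbed-Maxwellian test function and the chosen rank are illustrative assumptions, not from the talk; they simply show how a separable structure collapses the storage cost.

```python
# Sketch: low-rank (truncated SVD) representation of a 1D1V distribution
# function. f(x, v) = (1 + 0.1 cos x) exp(-v^2 / 2) is a sum of two
# separable terms, so it is exactly rank 2 -- an idealized best case.
import numpy as np

nx, nv = 256, 256
x = np.linspace(0, 2 * np.pi, nx)
v = np.linspace(-6, 6, nv)
X, V = np.meshgrid(x, v, indexing="ij")

f = (1 + 0.1 * np.cos(X)) * np.exp(-V**2 / 2)

# Keep only the leading r singular triplets.
U, s, Vt = np.linalg.svd(f, full_matrices=False)
r = 2
f_lr = (U[:, :r] * s[:r]) @ Vt[:r, :]

rel_err = np.linalg.norm(f - f_lr) / np.linalg.norm(f)
full_storage = nx * nv                  # values in the dense grid
lr_storage = r * (nx + nv + 1)          # values in the rank-r factors
print(f"rank {r}: relative error {rel_err:.1e}, "
      f"storage reduction ~{full_storage / lr_storage:.0f}x")
```

For this exactly rank-2 function the error sits at machine precision while storage drops by roughly 64x; real kinetic solutions are not exactly low rank, which is why adaptive-rank methods and error indicators belong on the list above.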
Doing the hard work of designing these intermediate challenge problems and providing ground-truth data will lay the foundation for the broader community to contribute to this effort.