Conditional neural Whitney forms for structure-preserving data-driven modeling
Nat Trask, University of Pennsylvania
We present a framework that combines expressive transformer architectures from computer vision with classical Whitney-form constructions to identify structure-preserving reduced-order models from data. Much as conditional generative models sample images from a text prompt, our framework samples finite element exterior calculus (FEEC) models from sensor measurements, which we use to construct digital twins that can both predict and assimilate data in real time. We then introduce a recent extension that builds hybridizable neural integrators: while significant effort has focused on using vision transformers as auto-regressive forecasters of physical systems, our framework recasts the problem in terms of a trainable mixed FEEC space in time. Analysis establishes geometric conservation structure, energy stability, and uniform boundedness of parameter sensitivities independent of the number of rollouts, resolving a major open issue in the long-term stability of AI-driven forecasting tools.