This was part of
Statistical and Computational Challenges in Probabilistic Scientific Machine Learning (SciML)
Data Manifolds as Priors for Inverse Problems: From Regularization to Representation
Jiequn Han, Flatiron Institute
Tuesday, June 10, 2025
Abstract:
Classical approaches to inverse problems often rely on explicit regularization, such as Tikhonov penalties or sparsity, which imposes rigid assumptions on the solution. While simple and effective in many cases, these structured priors can become inadequate when the problem landscape is complex. Recent advances in machine learning offer more flexible priors by directly modeling the solution manifold. This talk presents two complementary perspectives on this theme. The first revisits supervised learning with handcrafted data manifolds, where instance-wise adaptive sampling significantly improves data efficiency over brute-force strategies. The second explores score-based diffusion models that represent real data distributions through dynamic transport. We show how they enable provable posterior sampling in inverse problems via tilted transport. Together, these works underscore the growing importance of understanding data geometry and learning effective representations in real-world inverse problems.
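To make the classical baseline mentioned above concrete, here is a minimal sketch (an editorial illustration, not material from the talk) of Tikhonov regularization for a linear inverse problem: minimize ||Ax - b||^2 + lam ||x||^2, which has the closed-form solution x = (A^T A + lam I)^{-1} A^T b. The forward model and data below are made-up toy values.

```python
import numpy as np

def tikhonov_solve(A: np.ndarray, b: np.ndarray, lam: float) -> np.ndarray:
    """Closed-form Tikhonov (ridge) solution x = (A^T A + lam I)^{-1} A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Toy ill-conditioned forward model: rapidly decaying singular values,
# the setting where an explicit penalty stabilizes the recovery.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10)) @ np.diag(10.0 ** -np.arange(10))
x_true = rng.standard_normal(10)
b = A @ x_true + 1e-3 * rng.standard_normal(20)  # noisy observations

x_hat = tikhonov_solve(A, b, lam=1e-6)
```

The rigidity the abstract points to is visible here: the penalty ||x||^2 shrinks every coordinate toward zero regardless of the true solution's structure, which is exactly what learned, manifold-aware priors aim to replace.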