Probabilistic Learning and Constrained Generative Models

Sanjay Govindjee, University of California, Berkeley
Roger Ghanem, University of Southern California
Johann Guilleminot, Duke University
Cosmin Safta, Sandia National Laboratories
Michael Shields, Johns Hopkins University
Christian Soize, Université Gustave Eiffel
Charbel Farhat, Stanford University
Probabilistic models stand at the juncture of physics and data science. While the semantics of these models can encode logical and physical constraints, their mathematical analysis is steeped in probability theory, statistics, and data analysis. Recent technological advances in sensing and computing underlie the exponential growth of scholarship at this intersection, yet challenges remain in making predictions and decisions that are sufficiently constrained by both physics and data. A central challenge is learning probabilistic models from limited data and extracting meaningful statistical information in a mathematically rigorous and computationally efficient manner, one that generalizes well while remaining domain- and problem-specific.
We invite submissions that address theoretical as well as practical and applied aspects of these challenges. A partial, non-exclusive list of topics of interest includes:
  1. Nonlinear manifold identification methods from sparse data
  2. Sampling methods in high stochastic dimensions
  3. Effective methods for constrained sampling
  4. Novel generative models
  5. Design of experiments for probabilistic learning
  6. Application of learned generative models in science and engineering
  7. Probabilistic learning on manifolds
  8. Probabilistic models and reasoning for digital twins and AI
  9. Physics-informed probabilistic models