Aishwarya Pawar, Iowa State University
Aditya Balu, Iowa State University
Baskar Ganapathysubramanian, Iowa State University
Ming-Chen Hsu, Iowa State University
Adarsh Krishnamurthy, Iowa State University
Machine-learning-based approaches in engineering have seen rapid progress in recent years. Modern ML/AI approaches have transformed a host of application areas that involve assimilating large data streams to make useful predictions, and the time is ripe to leverage these advances for the analysis, optimization, design, and control of complex engineered systems. Researchers have been applying novel machine learning tools, such as deep generative models and deep reinforcement learning, as a computationally efficient paradigm for modeling and simulating complex engineered systems. However, despite their apparent utility, current AI systems suffer from three key drawbacks:
Reliance on abundant data: Current AI systems tend to let data dictate the narrative entirely. As a result, the amount of data required to train such systems is very large, which becomes a major bottleneck when the data must come from complex simulations or expensive experiments.
Lack of generalizability: These approaches have a narrow scope, i.e., they typically succeed only on the task they are trained on. Additionally, contextual constraints and domain knowledge from the underlying physical system are left unused.
Unsatisfactory parsimony and explainability: The representations produced are non-parsimonious and uninterpretable. This is especially damaging when the end goal, such as identifying functional relationships in complex systems or performing constrained explorations of the design space, requires generating insights into the engineered system.
Recent efforts following the advent of physics-informed neural networks and generative neural network models have had a tremendous impact on numerous tasks such as prediction, visualization, and design. This is a significant departure from the typical, data-hungry approach of traditional ML training methods, because the encoded invariances allow for physically meaningful predictions using far less training data. In our view, these approaches are well suited to several problems in computational mechanics. We have developed novel methods and have exciting preliminary results in which we have trained models, in both data-free and data-driven settings, for solving partial differential equations (PDEs). Several other researchers in the community are also increasingly active in this area. However, many research questions remain unanswered. Some of the key ones include:
- Principled approaches for incorporating physics-based constraints into computational mechanics models
- Quantitative guarantees that link model architecture, predictive performance, and generalization
- Constructing new model architectures for complex geometries
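To make the data-free idea above concrete, the sketch below solves a 1D Poisson problem by minimizing only the PDE residual at collocation points, with no solution data. For illustration it uses a small sine basis (whose second derivatives are known in closed form) in place of a neural network, so the residual minimization reduces to a least-squares solve; the function names and problem setup are hypothetical choices for this example, not the authors' actual method.

```python
import math

def physics_informed_fit(f, n_basis=3, n_colloc=20):
    """Fit u(x) = sum_k c_k sin(k*pi*x) so that u'' matches f at interior
    collocation points. Only the PDE residual is used (data-free); the sine
    basis enforces the boundary conditions u(0) = u(1) = 0 by construction."""
    xs = [i / (n_colloc + 1) for i in range(1, n_colloc + 1)]
    # Design matrix: second derivative of each basis function at each point.
    A = [[-(k * math.pi) ** 2 * math.sin(k * math.pi * x)
          for k in range(1, n_basis + 1)] for x in xs]
    b = [f(x) for x in xs]
    # Normal equations M c = v for the least-squares residual minimization.
    M = [[sum(A[i][p] * A[i][q] for i in range(n_colloc))
          for q in range(n_basis)] for p in range(n_basis)]
    v = [sum(A[i][p] * b[i] for i in range(n_colloc)) for p in range(n_basis)]
    # Solve the small system by Gaussian elimination with partial pivoting.
    for col in range(n_basis):
        piv = max(range(col, n_basis), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, n_basis):
            factor = M[r][col] / M[col][col]
            for c2 in range(col, n_basis):
                M[r][c2] -= factor * M[col][c2]
            v[r] -= factor * v[col]
    c = [0.0] * n_basis
    for r in range(n_basis - 1, -1, -1):
        s = sum(M[r][q] * c[q] for q in range(r + 1, n_basis))
        c[r] = (v[r] - s) / M[r][r]
    return lambda x: sum(c[k] * math.sin((k + 1) * math.pi * x)
                         for k in range(n_basis))

# Solve u'' = -pi^2 sin(pi x) with u(0) = u(1) = 0; exact solution sin(pi x).
u = physics_informed_fit(lambda x: -math.pi ** 2 * math.sin(math.pi * x))
```

A physics-informed neural network replaces the fixed basis with a trainable network and automatic differentiation, but the principle is the same: the loss is built from the governing equations rather than labeled data.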
Addressing these research questions requires fundamental advances in AI, physics-based modeling and simulation, optimization, and computational science. Furthermore, these ideas can be extended to complex engineering systems such as turbulence, fluid-structure interaction, and complex material dynamics.