SC17-015: A Short Course on Hierarchical Deep Learning Neural Network (HiDeNN) and Its Applications: Finite Elements, Isogeometric Analysis, Tensor Decomposition, and Beyond

Wing Kam Liu, Northwestern University, Co-Founder of HIDENN-AI, LLC
Dong Qian, University of Texas at Dallas, Co-Founder of HIDENN-AI, LLC
 

In engineering and physical science, we routinely deal with discrete data. Depending on how well the system is understood (its behavior may be completely, partially, or hardly understood), these data must be curated and analyzed to obtain mechanistic insight. Numerical methods such as finite differences and finite elements have long been used to process such discrete data[i]. However, handling high-resolution data (e.g., image or mesh data) for extreme-scale engineering problems is computationally challenging with traditional numerical methods. Researchers have therefore turned to data science techniques, especially deep learning, to use these data for system characterization and analysis. However, purely data-driven approaches yield little mechanistic insight[i]. Hierarchical Deep Learning Neural Network (HiDeNN) and Deep Learning Discrete Calculus (DLDC)[ii] research aims to bridge numerical techniques, mechanistic knowledge, and data science methods, offering computationally efficient tools designed for extreme-scale science and engineering problems. DLDC has also been developed as a series of lectures for STEM education and frontier research, as it offers a new perspective on numerical methods and deep learning.

The term “deep learning” refers to a subset of artificial intelligence (AI) that exploits universal approximation: a network of neurons, activation functions, and trainable parameters can approximate any linear or nonlinear relationship. Calculus is the branch of mathematics that deals with change in a system. DLDC[iii] is a computational method that uses deep learning constructs to mimic numerical methods based on discrete calculus. DLDC draws on the elements of data science (data generation and collection, feature extraction, dimension reduction, reduced-order modeling, regression and classification, and system and design) to solve science and engineering problems[iv]. The DLDC method offers enhanced computational efficiency and accuracy compared to traditional numerical methods, especially when dealing with high-resolution experimental/computational datasets.
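The idea of a deep learning construct mimicking discrete calculus can be illustrated with a minimal sketch (this is an illustration of the concept, not the DLDC formulation from the cited paper): a central finite-difference stencil is exactly a fixed-weight, one-dimensional convolution layer, so a "network" with frozen weights differentiates a sampled signal.

```python
import numpy as np

# Central-difference stencil acting as a fixed-weight 1-D convolution layer:
# the "network" computes du/dx at interior grid points.
h = 0.01
x = np.arange(0.0, 1.0, h)
u = np.sin(2 * np.pi * x)                        # sampled field u(x)

kernel = np.array([1.0, 0.0, -1.0]) / (2 * h)    # convolution flips the stencil
du = np.convolve(u, kernel, mode="valid")        # (u[i+1] - u[i-1]) / (2h)

exact = 2 * np.pi * np.cos(2 * np.pi * x[1:-1])
print(np.max(np.abs(du - exact)))                # O(h^2) truncation error
```

In DLDC the weights are not hand-fixed like this; the point of the sketch is only that stencil-based numerical operators fit naturally into convolutional network layers.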

Let us consider designing a drone body frame at extremely high resolution (e.g., 10 billion degrees of freedom, DoFs). Most of the computing resources are devoted to solving for the physical response of the system, which is excessively demanding. To solve such problems, DLDC is extended to the Hierarchical Deep Learning Neural Network (HiDeNN) and the Convolution Hierarchical Deep-learning Neural Network (C-HiDeNN)[v], along with its reduced-order variant, C-HiDeNN tensor decomposition (C-HiDeNN-TD)[vi], which enable highly accurate solutions with lower computational overhead. Convolution is a mathematical operation well known from signal processing[vii] and convolutional neural networks (CNNs)[viii]. C-HiDeNN builds adaptive convolution filters (like a CNN filter, they can vary in size and values) that cover a domain beyond a single element (a.k.a. the patch domain), so that a larger domain can be interpolated with higher smoothness and completeness without increasing the global degrees of freedom as higher-order FEM would. C-HiDeNN-TD[ix] is a reduced-order version of C-HiDeNN that uses only 1-dimensional convolution filters. Based on the concept of separation of variables, the tensor decomposition (TD) method converts an "n"-dimensional problem into "n" 1-dimensional problems for some number of decomposition modes. Since solving the full-dimensional matrix equation dominates the computation time for extremely large-scale problems, splitting it into multiple small equations greatly reduces the computational burden; C-HiDeNN-TD can deliver orders-of-magnitude speedup together with improved solution accuracy. Another extension of C-HiDeNN is convolution isogeometric analysis (C-IGA), which recovers exact geometry while maintaining higher-order continuity and retaining the Kronecker delta property of the underlying shape functions.

This short course will introduce and demonstrate how to apply a) HiDeNN and C-HiDeNN, b) C-IGA, and c) C-HiDeNN-TD. In the later part of the course, Graphics Processing Unit (GPU) acceleration of DLDC[x] will be demonstrated via Google Colab with the JAX[xi] library in Python. Participants are welcome to bring their laptops; no installation or registration is needed for this demonstration session. The application examples will focus on using the DLDC technique for topology optimization and multiscale materials design. After finishing the course, attendees will be able to understand and apply the DLDC methods to engineering problems that require very accurate solutions given high-resolution data.

Outline of the Course:

8:30-10:00    History of FEM and mechanistic data science leading to HiDeNN
  • History of FEM
  • Basics of FEM (weak form, shape functions)
  • From FEM to meshfree/IGA
  • Emergence of MDS and machine learning
  • Basics of DNN and machine learning; programming in Python

10:00-10:30   Coffee break

10:30-12:00   Background: Hierarchical Deep-learning Neural Network (HiDeNN)
  • Three building blocks of HiDeNN
  • HiDeNN-Finite Element Method (HiDeNN-FEM)
  • r-adaptivity
  • Introduction to tensor decomposition
  • HiDeNN-tensor decomposition

12:00-1:00 PM  Lunch

1:00-2:15 PM   Extension of HiDeNN-FEM to meshfree, enrichment, and IGA approximations
  • HiDeNN-meshfree approximation
  • HiDeNN enrichments
  • HiDeNN-IGA
  • Examples

2:15-3:15 PM   Convolution Hierarchical Deep-learning Neural Network (C-HiDeNN)
  • Convolution patch functions
  • Graph theory for nodal connectivity
  • Demonstration of C-HiDeNN
  • Preliminary results of C-HiDeNN

3:15-3:45 PM   Coffee break

3:45-4:30 PM   HiDeNN-FEM for nonlinear problems
  • Basics of nonlinear FEM (total Lagrangian formulation)
  • Building blocks for nonlinear HiDeNN-FEM
  • Linearization and Newton's method
  • Application to nonlinear elasticity and plasticity

4:30-5:30 PM   Applications
  • C-HiDeNN-TD for topology optimization
  • C-HiDeNN for multiscale materials design


[i] W.K. Liu, S. Li, H.S. Park, Eighty Years of the Finite Element Method: Birth, Evolution, and Future, Archives of Computational Methods in Engineering pp.1–23 (2022).

[ii] Saha S, Park C, Knapik S, Guo J, Huang O and Liu WK (2023), Deep Learning Discrete Calculus (DLDC): A Family of Discrete Numerical Methods by Universal Approximation for STEM Education to Frontier Research. Computational Mechanics.

[iii] Saha S, Park C, Knapik S, Guo J, Huang O and Liu WK (2023), Deep Learning Discrete Calculus (DLDC): A Family of Discrete Numerical Methods by Universal Approximation for STEM Education to Frontier Research. Computational Mechanics.

[iv] W.K. Liu, Z. Gan, M. Fleming, Mechanistic Data Science for STEM Education and Applications, Springer (2021).

[v] Lu Y, Li H, Zhang L, Park C, Mojumder S, Knapik S, Sang Z, Tang S, Wagner G and Liu WK (2023) Convolution Hierarchical Deep-learning Neural Networks (C-HiDeNN): Finite Elements, Isogeometric Analysis, Tensor Decomposition, and Beyond. Computational Mechanics.

[vi] Li H, Knapik S, Li Y, Guo J, Lu Y, Apley DW and Liu WK (2023) Convolution-Hierarchical Deep Learning Neural Network-Tensor Decomposition (C-HiDeNN-TD) for high resolution topology optimization. Computational Mechanics.

[vii] Liu WK, Jun S and Zhang YF (1995) Reproducing kernel particle methods. International journal for numerical methods in fluids 20: 1081-1106 DOI. Liu WK, Han W, Lu H, Li S and Cao J (2004) Reproducing kernel element method. Part I: Theoretical formulation. Computer Methods in Applied Mechanics and Engineering 193: 933-951 DOI.

[viii] LeCun, Y., & Bengio, Y. (1995). Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks, 3361(10), 1995.

[ix] Zhang, L., Lu, Y., Tang, S., & Liu, W. K. (2022). HiDeNN-TD: Reduced-order hierarchical deep learning neural networks. Computer Methods in Applied Mechanics and Engineering, 389, 114414.

[x] C. Park, Y. Lu, S. Saha, T. Xue, J. Guo, S. Mojumder, G. Wagner, W. Liu, Convolution Hierarchical Deep-learning Neural Network (C-HiDeNN) with Graphics Processing Unit (GPU) Acceleration, Computational Mechanics (2023).

[xi] J. Bradbury, R. Frostig, P. Hawkins, M.J. Johnson, C. Leary, D. Maclaurin, G. Necula, A. Paszke, J. VanderPlas, S. Wanderman-Milne, Q. Zhang, JAX: composable transformations of Python+NumPy programs (2018). github.com/google/jax