In engineering and physical science, we routinely deal with discrete data. Depending on how well the system is understood (its behavior may be completely, partially, or hardly understood), these data must be curated and analyzed to obtain mechanistic insight. Numerical methods such as the finite difference and finite element methods have been widely used to deal with such discrete data[i]. However, handling high-resolution data (e.g., image or mesh data) for extreme-scale engineering problems is computationally challenging with traditional numerical methods. In recent times, researchers have turned to data science techniques, especially deep learning, to use the data for system characterization and analysis. However, little mechanistic insight can be obtained from purely data-driven approaches[i]. Hierarchical Deep Learning Neural Network (HiDeNN) and Deep Learning Discrete Calculus (DLDC)[ii] research aims to bridge numerical techniques, mechanistic knowledge, and data science methods, and proposes computationally efficient tools designed for extreme-scale science and engineering problems. DLDC has also been developed as a series of lectures for STEM education and frontier research, as it offers a new perspective on numerical methods and deep learning.
The term “deep learning” refers to a subset of artificial intelligence (AI) that exploits the universal approximation theorem: a network of neurons, activation functions, and trainable parameters can approximate any linear or nonlinear relationship. Calculus is the branch of mathematics that deals with changes in a system. DLDC[iii] is a computational method that uses deep learning constructs to mimic numerical methods based on discrete calculus. DLDC draws on the elements of data science (data generation and collection, feature extraction, dimension reduction, reduced-order modeling, regression and classification, and systems and design) to solve science and engineering problems[iv]. The DLDC method offers enhanced computational efficiency and accuracy compared to traditional numerical methods, especially when dealing with high-resolution experimental/computational datasets.
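To make the idea of deep learning constructs mimicking numerical methods concrete, here is a minimal sketch (our own illustration, not code from the cited references): a 1D linear finite element shape function is exactly a three-neuron ReLU network with fixed weights, so a nodal finite element interpolant can be written as a shallow neural network.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hat_shape(x, xi, h):
    """1D linear FE shape function centered at node xi on a uniform mesh of
    spacing h, built from three ReLU neurons with fixed weights [1, -2, 1]/h.
    This is the neural-network view of a piecewise-linear interpolant."""
    return (relu(x - (xi - h)) - 2.0 * relu(x - xi) + relu(x - (xi + h))) / h

# Interpolate u(x) = x**2 on [0, 1] with 11 equally spaced nodes.
nodes = np.linspace(0.0, 1.0, 11)
h = nodes[1] - nodes[0]
u_nodal = nodes**2  # nodal values play the role of the output-layer weights

x = np.linspace(0.0, 1.0, 101)
u_h = sum(u_nodal[i] * hat_shape(x, nodes[i], h) for i in range(len(nodes)))

# Interpolation error of the "network" vs. the exact function, O(h^2).
print(np.max(np.abs(u_h - x**2)))
```

Because the shape functions satisfy the Kronecker delta property, the network reproduces the nodal values exactly; making the weights and node positions trainable is what turns this fixed interpolant into a HiDeNN-style adaptive approximation.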
Let us consider designing a drone body frame at extremely high resolution (e.g., 10 billion degrees of freedom, DoFs). Most of the computing resources are devoted to solving the physical response of the system, which is excessively demanding. To solve such problems, DLDC is extended to the Hierarchical Deep Learning Neural Network (HiDeNN) and the Convolution Hierarchical Deep-learning Neural Network (C-HiDeNN)[v], along with its reduced-order variant, C-HiDeNN tensor decomposition (C-HiDeNN-TD)[vi], which enable highly accurate solutions with less computational overhead. Convolution is a mathematical operation well known in applications such as signal processing[vii] and convolutional neural networks (CNNs)[viii]. C-HiDeNN builds adaptive convolution filters (the filters can have varying sizes and values, as in a CNN) that cover a domain beyond a single element (a.k.a. the patch domain), so that a larger domain can be interpolated with higher smoothness and completeness without increasing the global degrees of freedom as higher-order FEM does. C-HiDeNN-TD[ix] is a reduced-order version of C-HiDeNN that uses only 1-dimensional convolution filters. Based on the concept of separation of variables, the tensor decomposition (TD) method converts an n-dimensional problem into n 1-dimensional problems for a chosen number of decomposition modes. Since solving the full-dimensional matrix equation takes most of the computation time in extremely large-scale problems, splitting it into multiple small equations greatly reduces the computational burden: C-HiDeNN-TD can deliver orders-of-magnitude speedup together with improved solution accuracy. A further extension of C-HiDeNN is convolution isogeometric analysis (C-IGA), which can recover exact geometry while maintaining higher-order continuity and retaining the Kronecker delta property of the underlying shape functions.
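The separation-of-variables idea behind TD can be illustrated with a short sketch (our own illustration using a truncated SVD, not the C-HiDeNN-TD solver itself): a 2D field is replaced by a sum of products of 1-dimensional modes, and the storage drops from the full grid size to a few 1D vectors.

```python
import numpy as np

nx, ny = 200, 200
x = np.linspace(0.0, 1.0, nx)
y = np.linspace(0.0, 1.0, ny)

# A smooth 2D field sampled on the full nx*ny grid.
U = np.sin(np.pi * x)[:, None] * np.cos(np.pi * y)[None, :] \
    + 0.1 * np.outer(x**2, y)

# Separated representation U ≈ sum_m s_m X_m(x) Y_m(y) via truncated SVD.
Xm, s, Ymt = np.linalg.svd(U, full_matrices=False)
R = 2  # number of decomposition modes
U_td = (Xm[:, :R] * s[:R]) @ Ymt[:R, :]

storage_full = nx * ny          # full-grid unknowns
storage_td = R * (nx + ny)      # only R pairs of 1D modes
print(storage_full, storage_td, np.max(np.abs(U - U_td)))
```

Here the field is exactly rank 2, so two modes reproduce it to machine precision with a fraction of the storage; C-HiDeNN-TD applies the same separated structure to the unknowns of the discretized equations rather than to a precomputed field.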
This short course will introduce and demonstrate how to apply (a) HiDeNN and C-HiDeNN, (b) C-IGA, and (c) C-HiDeNN-TD. In the later part of the course, Graphics Processing Unit (GPU) acceleration of DLDC[x] will be demonstrated via Google Colab with the JAX[xi] library in Python. Participants are welcome to bring their laptops; no installation or registration is needed for this demonstration session. The application examples will focus on using the DLDC technique for topology optimization and multiscale materials design. After finishing the course, attendees will be able to understand and apply DLDC methods to engineering problems that require very accurate solutions given high-resolution data.
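As a taste of the hands-on session, the following minimal JAX snippet (our own illustrative example, not course material) jit-compiles the application of a 1-dimensional convolution filter to a nodal field; the same code runs unmodified on CPU or GPU, since JAX compiles it through XLA for whichever accelerator is available.

```python
import jax
import jax.numpy as jnp

@jax.jit
def smooth(u, w):
    # Apply a 1D convolution filter w to the nodal field u.
    # The filter here is a simple symmetric smoother, not the C-HiDeNN kernel.
    return jnp.convolve(u, w, mode="same")

u = jnp.sin(jnp.linspace(0.0, 2.0 * jnp.pi, 1024))
w = jnp.array([0.25, 0.5, 0.25])  # weights sum to 1 (partition of unity)

v = smooth(u, w)  # first call compiles; later calls reuse the compiled kernel
print(v.shape, float(jnp.max(jnp.abs(v - u))))
```

In Colab, switching the runtime to a GPU is all that is needed; `jax.jit` dispatches the compiled convolution to the device without any code change.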
Outline of the Course:
| Timeline | Lecture | Contents | Instructor |
|---|---|---|---|
| 8:30-10:00 | History of FEM and Mechanistic Data Science that leads to HiDeNN | | |
| 10:00-10:30 | Coffee break | | |
| 10:30-12:00 | Background: Hierarchical Deep-learning Neural Network (HiDeNN) | | |
| 12:00-1:00 PM | Lunch | | |
| 1:00-2:15 PM | Extension of HiDeNN-FEM to meshfree, enrichment, and IGA approximations | | |
| 2:15-3:15 PM | Convolution Hierarchical Deep-learning Neural Network (C-HiDeNN) | | |
| 3:15-3:45 PM | Coffee break | | |
| 3:45-4:30 PM | HiDeNN-FEM for nonlinear problems | | |
| 4:30-5:30 PM | Applications | | |
[i] Liu WK, Li S and Park HS (2022) Eighty years of the finite element method: birth, evolution, and future. Archives of Computational Methods in Engineering, pp. 1-23.
[ii] Saha S, Park C, Knapik S, Guo J, Huang O and Liu WK (2023) Deep Learning Discrete Calculus (DLDC): a family of discrete numerical methods by universal approximation for STEM education to frontier research. Computational Mechanics.
[iii] Saha S, Park C, Knapik S, Guo J, Huang O and Liu WK (2023) Deep Learning Discrete Calculus (DLDC): a family of discrete numerical methods by universal approximation for STEM education to frontier research. Computational Mechanics.
[iv] Liu WK, Gan and Fleming (2021) Mechanistic Data Science for STEM Education and Applications. Springer.
[v] Lu Y, Li H, Zhang L, Park C, Mojumder S, Knapik S, Sang Z, Tang S, Wagner G and Liu WK (2023) Convolution Hierarchical Deep-learning Neural Networks (C-HiDeNN): finite elements, isogeometric analysis, tensor decomposition, and beyond. Computational Mechanics.
[vi] Li H, Knapik S, Li Y, Guo J, Lu Y, Apley DW and Liu WK (2023) Convolution Hierarchical Deep-learning Neural Network Tensor Decomposition (C-HiDeNN-TD) for high-resolution topology optimization. Computational Mechanics.
[vii] Liu WK, Jun S and Zhang YF (1995) Reproducing kernel particle methods. International Journal for Numerical Methods in Fluids 20: 1081-1106. Liu WK, Han W, Lu H, Li S and Cao J (2004) Reproducing kernel element method. Part I: theoretical formulation. Computer Methods in Applied Mechanics and Engineering 193: 933-951.
[viii] LeCun Y and Bengio Y (1995) Convolutional networks for images, speech, and time series. The Handbook of Brain Theory and Neural Networks, 3361(10).
[ix] Zhang L, Lu Y, Tang S and Liu WK (2022) HiDeNN-TD: reduced-order hierarchical deep learning neural networks. Computer Methods in Applied Mechanics and Engineering 389: 114414.
[x] Park C, Lu Y, Saha S, Xue T, Guo J, Mojumder S, Wagner G and Liu WK (2023) Convolution Hierarchical Deep-learning Neural Network (C-HiDeNN) with graphics processing unit (GPU) acceleration. Computational Mechanics.