Hybrid Twins. Part II. Real-time, data-driven modeling

Online since 12 April 2021

Abstract

We saw in Part I of this paper that model order reduction allows physics-based models to be used not only in design, as in the past, but also in online decision-making, without requiring unreasonable computing resources. Machine learning techniques, on the other hand, were not ready to cope with the required processing speed or with the scarcity of data. It was therefore necessary to adapt a number of techniques, and to create others, capable of operating online and even in the presence of very small amounts of data: the so-called "physics-informed artificial intelligence" techniques. For that purpose, we have adapted and proposed a number of techniques, which have demonstrated, and continue to demonstrate, their capabilities and performance in many industrial applications. Major uses of AI in engineering include: (i) visualization of multidimensional data; (ii) classification and clustering, supervised and unsupervised, where it is assumed that members of the same cluster have similar behaviors; (iii) model extraction, that is, discovering the quantitative relationship between inputs (actions) and outputs (reactions) in a manner consistent with the physical laws; (iv) knowledge extraction; and (v) explanation, needed for certification. For items (iv) and (v), advances are much more limited and major progress is still needed, for instance to discard useless parameters, to discover latent variables whose consideration becomes compulsory for explaining experimental findings, or to group parameters that only act in combination. Discovering equations is a very timely topic because it finally enables transforming data into knowledge.

Keywords

Scientific Machine Learning, Physics-informed Neural Networks, Thermodynamics

1 Introduction

With the advent of the so-called fourth paradigm of science, we face a new way of doing research: one in which experiments, theory, computation and data analysis, through the intensive use of machine learning, are employed simultaneously [1]. Science has been mainly experimental since its inception, nearly two millennia ago. Several centuries ago it became theoretical and, with the help of the mathematical language, humans have been able to express the laws of the universe in a beautiful, universal form. Only some decades ago, science also became computational, helped by the advent of personal computers as well as of supercomputing facilities. Finally, the latest artificial intelligence summer, only a few years ago, has helped scientists extract knowledge from data. Attempts have even been made to create artificial intelligence physicists, see [2,3].

However, machine learning and, more generally, artificial intelligence techniques often behave as black boxes: there is no guarantee on the accuracy of the result and, at the same time, this result most often lacks a suitable interpretation. In recent times there has been a growing interest in the development of scientific machine learning techniques able to unveil scientific laws from data. The interest is two-fold. On the one hand, the possibility of leveraging big data to unveil scientific laws is interesting in itself. On the other, the first results suggest that the more prior knowledge we add to the process, the less data we need [4,5].

In this work we review our latest results on the development of self-learning digital and hybrid twins. In the first part, we review the development of hybrid twins based on machine learning techniques arising from a posteriori Proper Generalized Decomposition (PGD) techniques. In the second part, we focus on the development of deep learning techniques that guarantee the fulfillment of the principles of thermodynamics. Finally, we show some implementations that employ Augmented Reality to seamlessly communicate the results to the user for fast decision-making.

2 Self-learning digital twins

In our previous work, see [6,7], we developed a digital twin that is able to correct itself when the data stream does not provide results in accordance with the model implemented in it. This is made possible by resorting to sparse-PGD techniques, which find a rank-1 tensor approximation to the discrepancy between the model and the observed results.

The developed twin, see Fig. 1, is able to provide information to the user via Augmented Reality while locating the position of the load, thus offering an explanation of the physics taking place. If the results provided by the twin do not fit the built-in model well, the twin is able to compute rank-1 corrections to the model via sparse-PGD methods.

Fig. 1. Hybrid twin for a foam beam subjected to a point load. The developed system is able to locate the position of the load while providing the user with useful, otherwise hidden information such as stresses or strains [6]

In general, predictions will take the form

$$ u = A + B \qquad (1) $$

where u denotes the predicted response, A represents the built-in model and B the self-learnt corrections. In the example in Fig. 1, the beam is modelled via a classical, linear Euler-Bernoulli-Navier beam model, which does not faithfully represent the behavior of the beam and thus presents systematic biases. Once these biases are detected, our technique corrects the model, as explained before, by adding as many rank-1 corrections as needed and storing them in the B-term of Eq. (1).
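To make the idea concrete, the following minimal Python sketch fits one rank-1 term of the B-correction to a matrix of discrepancies between measurements and the built-in model by alternating least squares. It is not the actual sparse-PGD implementation of [6]; the array names and shapes are illustrative assumptions.

```python
import numpy as np

def rank1_correction(residual, n_iter=50, tol=1e-10):
    """Greedy rank-1 fit, u (outer) v, to a residual matrix containing the
    discrepancy between measured and model-predicted responses
    (rows: measurement points, columns: load cases)."""
    u = residual[:, 0].copy()           # initial guess for the spatial mode
    v = np.ones(residual.shape[1])      # initial guess for the parametric mode
    for _ in range(n_iter):
        u_new = residual @ v / (v @ v)                 # best u for the current v
        v_new = residual.T @ u_new / (u_new @ u_new)   # best v for the current u
        if np.linalg.norm(np.outer(u_new, v_new) - np.outer(u, v)) < tol:
            u, v = u_new, v_new
            break
        u, v = u_new, v_new
    return u, v

# Hypothetical usage: 'measured' and 'model' hold beam deflections at the
# sensor locations for several load positions.
# discrepancy = measured - model
# u, v = rank1_correction(discrepancy)
# B = np.outer(u, v)   # one enrichment term of the B-block in Eq. (1)
```

Subtracting the fitted term from the residual and repeating the procedure yields further rank-1 modes, so that the correction B is enriched greedily until the discrepancy falls below a prescribed tolerance.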

Fig. 2. Comparison between experiments (+), theoretical predictions given by the Euler-Bernoulli-Navier model (*) and the final, corrected predictions made by the twin after the learning procedure (o). The legend gives the absolute error in mm. Results taken from [6]

3 Thermodynamics-aware deep learning

In the last few years there has been a growing interest in the development of deep learning techniques that can take into account the accumulated scientific knowledge. This results in techniques that need less data and whose results comply by construction with the known laws of physics.

Within this rationale, the authors have developed a family of deep neural networks that are able to comply by construction with the first and second laws of thermodynamics [4,8,9,10].

Assume that the variables governing the behavior of the system at a particular level of description are stored in a vector

$$ z = z(t) \in \mathbb{R}^n, \qquad \dot{z} = \frac{dz}{dt} = f(z, t), \qquad (2) $$

From this standpoint, machine learning is equivalent to finding 𝑓 by regression, provided that sufficient data are available. How this regression is accomplished is of little importance: neural networks and classical (piecewise) regression are equivalent in this respect, provided both work well.
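As a minimal illustration of this point (an assumed toy setting, not the approach described below), one can estimate the rates by finite differences from a sampled trajectory and fit any off-the-shelf regressor to the pairs (z, dz/dt):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_dynamics(z_traj, dt):
    """Learn f in dz/dt = f(z) from a trajectory sampled every dt seconds.
    z_traj has shape (n_steps, n_dims); rates are estimated by forward
    finite differences, which is only reasonable for small dt."""
    z_dot = (z_traj[1:] - z_traj[:-1]) / dt
    f = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
    f.fit(z_traj[:-1], z_dot)       # regression of the rate on the state
    return f                        # f.predict(z) approximates dz/dt
```

Nothing in such a plain regression prevents the learnt 𝑓 from violating the laws of thermodynamics, which is precisely what motivates the structured form introduced next.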

To guarantee the thermodynamic admissibility of the resulting approximation, we impose that Eq. (2) takes the GENERIC form [4]:

$$ \dot{z} = L(z)\,\nabla E(z) + M(z)\,\nabla S(z), \qquad (3) $$

where 𝑳 represents the classical Poisson matrix of Hamiltonian mechanics (and is, therefore, skew-symmetric), 𝐸 and 𝑆 denote the energy and entropy potentials, and 𝑴 represents the so-called friction matrix, which must be symmetric and positive semi-definite in order to guarantee thermodynamic consistency.
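In the GENERIC framework, two additional degeneracy conditions, $L\nabla S = 0$ and $M\nabla E = 0$, make the energy an exact invariant and the entropy a non-decreasing function of time. A minimal numpy sketch of the evolution (3), using a classical textbook toy example (a damped oscillator equipped with an internal, entropy-like variable s) rather than the networks of [4,10], reads:

```python
import numpy as np

def generic_step(z, dt, grad_E, grad_S, L, M_of):
    """One explicit Euler step of dz/dt = L grad_E(z) + M(z) grad_S(z)."""
    return z + dt * (L @ grad_E(z) + M_of(z) @ grad_S(z))

# Toy GENERIC system (assumed for illustration): state z = (q, p, s) with
# E = q**2/2 + p**2/2 + s and S = s, friction coefficient nu.
nu = 0.1
L = np.array([[0., 1., 0.],
              [-1., 0., 0.],
              [0., 0., 0.]])            # Poisson matrix, skew-symmetric

def grad_E(z):
    q, p, s = z
    return np.array([q, p, 1.0])

def grad_S(z):
    return np.array([0.0, 0.0, 1.0])

def M_of(z):
    """Friction matrix nu * w w^T with w = (0, 1, -p): symmetric, PSD, and
    satisfying M grad_E = 0, so that dE/dt = 0 and dS/dt = nu*p**2 >= 0."""
    q, p, s = z
    w = np.array([0.0, 1.0, -p])
    return nu * np.outer(w, w)

z = np.array([1.0, 0.0, 0.0])
for _ in range(1000):
    z = generic_step(z, 1e-3, grad_E, grad_S, L, M_of)
```

With the degeneracy conditions satisfied, energy is conserved and entropy grows along the continuous-time flow; the time discretization then introduces only the usual integration error.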

In our previous works, see [2,3], we performed regression analysis from data so as to unveil the particular form of the energy, 𝐸, and entropy, 𝑆, potentials. The resulting formulation guarantees thermodynamic admissibility by construction and provides excellent results in the data-driven identification of complex behaviors.

The sketch of such a network is depicted in Fig. 3. Essentially, a sparse autoencoder first determines the intrinsic dimensionality of the data set. Then, for the variables just determined (whose precise physical meaning is very often not known), a time integrator is learnt that conserves energy exactly and produces the right amount of entropy.
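The following PyTorch sketch conveys the spirit of such an architecture; it is not the actual network of [10], and the layer sizes, parametrizations and names are illustrative assumptions. Sparsity of the latent code is promoted with an L1 penalty, while the skew-symmetry of L and the positive semi-definiteness of M are obtained by construction from unconstrained parameters:

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Autoencoder whose latent code is driven towards sparsity by an L1
    penalty; the surviving latent units estimate the intrinsic dimension."""
    def __init__(self, n_full, n_latent):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(n_full, 64), nn.ReLU(),
                                    nn.Linear(64, n_latent))
        self.decode = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(),
                                    nn.Linear(64, n_full))

    def forward(self, x):
        z = self.encode(x)
        return self.decode(z), z


class GenericIntegrator(nn.Module):
    """Latent time stepper z -> z + dt*(L grad_E + M grad_S) with L built
    skew-symmetric and M built symmetric positive semi-definite."""
    def __init__(self, n_latent, dt):
        super().__init__()
        self.dt = dt
        self.A = nn.Parameter(0.1 * torch.randn(n_latent, n_latent))
        self.B = nn.Parameter(0.1 * torch.randn(n_latent, n_latent))
        self.E = nn.Sequential(nn.Linear(n_latent, 32), nn.Tanh(), nn.Linear(32, 1))
        self.S = nn.Sequential(nn.Linear(n_latent, 32), nn.Tanh(), nn.Linear(32, 1))

    def forward(self, z):
        if not z.requires_grad:                  # e.g. at inference time
            z = z.detach().requires_grad_(True)
        gE = torch.autograd.grad(self.E(z).sum(), z, create_graph=True)[0]
        gS = torch.autograd.grad(self.S(z).sum(), z, create_graph=True)[0]
        L = self.A - self.A.T                    # skew-symmetric by construction
        M = self.B @ self.B.T                    # symmetric PSD by construction
        z_next = z + self.dt * (gE @ L.T + gS @ M.T)
        # The degeneracy residuals below are returned so that the training loss
        # can penalize them, pushing M*grad_E and L*grad_S towards zero.
        return z_next, gE @ M.T, gS @ L.T
```

During training, the total loss would combine the reconstruction error of the autoencoder, the prediction error of the integrator on the next latent state, the L1 penalty on the latent code and the two degeneracy residuals.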

As an example, consider a Couette flow of an Oldroyd-B polymer suspension, see Fig. 4, right.

Fig. 3. Sketch of a structure-preserving neural network. In it, a sparse autoencoder first determines the intrinsic dimensionality of data. Then, a time integrator is learnt for these variables. Taken from [10]

Fig. 4. Couette flow of an Oldroyd-B polymer. Left, finite element discretization of the problem. Right, predictions made by the network for a flow whose characteristics had not been seen during training: position, velocity, energy and extra-stress tensor at different positions in the flow. Taken from [10]

4 Conclusions

In this paper we have reviewed some of the developments made by the authors for the construction of hybrid twins. We first reviewed an approach based on machine learning of rank-1 corrections that account for the experimental deviations from established models of the physical asset at hand. In the second part, we reviewed an approach based on the use of deep learning techniques. As a salient feature, the proposed techniques guarantee by construction the fulfillment of the first and second principles of thermodynamics.

Acknowledgements

The authors acknowledge the work of their colleagues Beatriz Moya, Icíar Alfaro and Quercus Hernández, and thank them for their useful comments during endless discussions on the topic.

This project has been partially funded by the ESI Group through the ESI Chair at ENSAM Arts et Metiers Institute of Technology, and through the project “Simulated Reality” at the University of Zaragoza. The support of the Spanish Ministry of Economy and Competitiveness through grant number CICYT-DPI2017-85139-C2-1-R, and of the Regional Government of Aragon and the European Social Fund, is also gratefully acknowledged.

Bibliography

[1] Tansley, S., & Tolle, K. (2009). The fourth paradigm: data-intensive scientific discovery (Vol. 1). T. Hey (Ed.). Redmond, WA: Microsoft research.

[2] Wu, T., & Tegmark, M. (2019). Toward an artificial intelligence physicist for unsupervised learning. Physical Review E, 100(3), 033311.

[3] Brunton, S. L., Proctor, J. L., & Kutz, J. N. (2016). Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proceedings of the National Academy of Sciences, 113(15), 3932-3937.

[4] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., & Cueto, E. (2020). Structure-preserving neural networks. arXiv preprint arXiv:2004.04653.

[5] Jin, P., Zhang, Z., Kevrekidis, I. G., & Karniadakis, G. E. (2020). Learning Poisson systems and trajectories of autonomous systems via Poisson neural networks. arXiv preprint arXiv:2012.03133.

[6] Moya, B., Badías, A., Alfaro, I., Chinesta, F., & Cueto, E. (2020). Digital twins that learn and correct themselves. International Journal for Numerical Methods in Engineering, 1-11. https://doi.org/10.1002/nme.6535

[7] Moya, B., Alfaro, I., Gonzalez, D., Chinesta, F., & Cueto, E. (2020). Physically sound, self-learning digital twins for sloshing fluids. PLoS One, 15(6), e0234569.

[8] Chinesta, F., Cueto, E., Abisset-Chavanne, E., Duval, J. L., & El Khaldi, F. (2020). Virtual, digital and hybrid twins: a new paradigm in data-based engineering and engineered data. Archives of Computational Methods in Engineering, 27(1), 105-134.

[9] González, D., Chinesta, F., & Cueto, E. (2020). Learning non-Markovian physics from data. Journal of Computational Physics, 109982.

[10] Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., & Cueto, E. (2020). Deep learning of thermodynamics-aware reduced-order models from data. arXiv preprint arXiv:2007.03758.

Electronic reference

Elías Cueto, David González, Alberto Badías, Francisco Chinesta, Nicolas Hascoet and Jean-Louis Duval, « Hybrid Twins. Part II. Real-time, data-driven modeling », ESAFORM 2021 [Online], Online since 12 April 2021. URL: https://popups.uliege.be/esaform21/index.php?id=2050

Authors

Elías Cueto

Aragon Institute of Engineering Research, Universidad de Zaragoza. Zaragoza, Spain

Corresponding author: ecueto@unizar.es

David González

Aragon Institute of Engineering Research, Universidad de Zaragoza. Zaragoza, Spain

Alberto Badías

Aragon Institute of Engineering Research, Universidad de Zaragoza. Zaragoza, Spain

Francisco Chinesta

ESI Group Chair, Arts et Metiers Institute of Technology. Paris, France

Nicolas Hascoet

ESI Group Chair, Arts et Metiers Institute of Technology. Paris, France

Jean-Louis Duval

ESI Group, Bâtiment Seville, 3bis. Rue Saarinen. Rungis, France

Details

Title: Hybrid Twins. Part II. Real-time, data-driven modeling
Language: en
Authors: Elías Cueto, David González, Alberto Badías, Francisco Chinesta, Nicolas Hascoet and Jean-Louis Duval
Publication date: 14 April 2021
Journal title: ESAFORM 2021
Copyright: CC-BY
DOI: 10.25518/esaform21.2050
Permanent URL: https://popups.uliege.be/esaform21/index.php?id=2050

Cite

Cueto, E., González, D., Badías, A., Chinesta, F., Hascoet, N., & Duval, J. (2021). Hybrid Twins. Part II. Real-time, data-driven modeling. Paper presented at ESAFORM 2021, 24th International Conference on Material Forming, Liège, Belgium. doi: 10.25518/esaform21.2050