A non-cooperative meta-modeling game for automated third-party calibrating, validating and falsifying constitutive laws with parallelized adversarial attacks
- Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
- Columbia Univ., New York, NY (United States)
The evaluation of constitutive models, especially for high-risk and high-regret engineering applications, requires efficient and rigorous third-party calibration, validation and falsification. While there are numerous efforts to develop paradigms and standard procedures to validate models, difficulties may arise due to the sequential, manual, and often biased nature of the commonly adopted calibration and validation processes, thus slowing down data collection, hampering progress towards discovering new physics, increasing expenses and possibly leading to misinterpretations of the credibility and application ranges of proposed models. This work introduces concepts from game theory and machine learning techniques to overcome many of these existing difficulties. Here, we introduce an automated meta-modeling game where two competing AI agents systematically generate experimental data to calibrate a given constitutive model and to expose its weaknesses, such that the experiment design and model robustness can be improved through competition. The two agents automatically search for the Nash equilibrium of the meta-modeling game in an adversarial reinforcement learning framework without human intervention. In particular, a protagonist agent seeks the most effective ways to generate data for model calibration, while an adversary agent tries to find the most devastating test scenarios that expose the weaknesses of the constitutive model calibrated by the protagonist. By capturing all possible design options of the laboratory experiments in a single decision tree, we recast the design of experiments as a game of combinatorial moves that can be resolved through deep reinforcement learning by the two competing players.
Our adversarial framework emulates idealized scientific collaborations and competitions among researchers to achieve a better understanding of the application range of the learned material laws and prevent misinterpretations caused by conventional AI-based third-party validation. Numerical examples are given to demonstrate the wide applicability of the proposed meta-modeling game with adversarial attacks on both human-crafted constitutive models and machine learning models.
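The protagonist/adversary competition described above can be illustrated with a deliberately simplified sketch (not the paper's implementation): exhaustive minimax search stands in for deep reinforcement learning, a linear surrogate stands in for the constitutive model, and the hypothetical `true_law` plays the role of the hidden material response. The protagonist chooses the pair of calibration experiments whose fitted model minimizes the worst-case error an adversary can expose over the candidate test scenarios.

```python
import itertools

# Toy stand-in for the hidden material response (illustrative only).
def true_law(x):
    return 1.5 * x + 0.3 * x ** 2

# Least-squares fit of a one-parameter surrogate y = k*x on the chosen experiments.
def calibrate(experiments):
    num = sum(x * true_law(x) for x in experiments)
    den = sum(x * x for x in experiments)
    return num / den

# Prediction error of the calibrated surrogate at a test scenario.
def error(k, x_test):
    return abs(k * x_test - true_law(x_test))

candidate_experiments = [0.5, 1.0, 2.0, 4.0]  # protagonist's design options
test_scenarios = [0.5, 1.0, 2.0, 4.0]         # adversary's attack options

# Minimax: protagonist minimizes the adversary's maximum achievable error.
best_design, best_worst_case = None, float("inf")
for design in itertools.combinations(candidate_experiments, 2):
    k = calibrate(design)
    worst = max(error(k, x) for x in test_scenarios)  # adversary's best response
    if worst < best_worst_case:
        best_design, best_worst_case = design, worst

print(best_design, round(best_worst_case, 3))  # → (2.0, 4.0) 0.96
```

In the paper's setting, the enumeration over designs is replaced by agents that traverse the experiment decision tree via deep reinforcement learning, but the equilibrium logic (each player best-responding to the other) is the same.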
- Research Organization:
- Los Alamos National Laboratory (LANL), Los Alamos, NM (United States)
- Sponsoring Organization:
- USDOE National Nuclear Security Administration (NNSA); US Army Research Office (ARO); US Air Force Office of Scientific Research (AFOSR); National Science Foundation (NSF)
- Grant/Contract Number:
- 89233218CNA000001; W911NF-18-2-0306; W911NF-15-1-0562; FA9550-19-1-0318; FA9550-17-1-0169; CMMI-1846875; CMMI-1940203; CCF-1704833; DMS-1719699; DMR-1534910
- OSTI ID:
- 1756796
- Report Number(s):
- LA-UR-20-22154
- Journal Information:
- Computer Methods in Applied Mechanics and Engineering, Vol. 373; ISSN 0045-7825
- Publisher:
- Elsevier
- Country of Publication:
- United States
- Language:
- English