Hyperparameter Studies for Vision Transformers Trained on High-Fidelity Simulations
This library is a collection of Python modules that define, train, and analyze vision-transformer (ViT) machine learning models. The code implements, with mild modifications, ViT architectures that have been made publicly available through publication and GitHub. The training data are hydrodynamic simulation outputs stored as NumPy arrays. The library provides code to train the ViT models on these outputs under a variety of hyperparameters and to compare the resulting models. It also defines simple convolutional neural network (CNN) architectures that can be trained on the same simulation data; these are included as a reference point against which to compare the ViT models. Trained ViT and CNN models and example input data are included for demonstration purposes. The code is based on the PyTorch Python library.
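As context for how a ViT consumes simulation data of this kind, the sketch below shows the standard first step of any vision transformer: splitting a 2D field (here a random stand-in for a hydrodynamic snapshot) into non-overlapping patches that become the token sequence. This is a generic, hedged illustration in plain NumPy, not code from the library itself; the function name `patchify` and all shapes are assumptions for the example.

```python
import numpy as np

def patchify(field, patch=8):
    """Split a 2D array into non-overlapping (patch x patch) tiles,
    flattened into rows -- the token sequence a ViT would embed."""
    H, W = field.shape
    assert H % patch == 0 and W % patch == 0, "field must tile evenly"
    tiles = field.reshape(H // patch, patch, W // patch, patch)
    # Reorder so each patch's pixels are contiguous, then flatten per patch.
    tiles = tiles.transpose(0, 2, 1, 3).reshape(-1, patch * patch)
    return tiles  # shape: (num_patches, patch * patch)

# Stand-in for one hydrodynamic simulation snapshot (64 x 64 grid).
rng = np.random.default_rng(0)
field = rng.standard_normal((64, 64))
tokens = patchify(field)
print(tokens.shape)  # -> (64, 64): 8x8 grid of patches, 64 values each
```

In the full model these flattened patches would be linearly projected to the transformer's embedding dimension and combined with positional encodings before entering the attention layers.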
- Site Accession Number:
- O4749
- Software Type:
- Scientific
- License(s):
- BSD 3-clause "New" or "Revised" License
- Research Organization:
- Los Alamos National Laboratory (LANL), Los Alamos, NM (United States)
- Sponsoring Organization:
- USDOE National Nuclear Security Administration (NNSA)
- Primary Award/Contract Number:
- AC52-06NA25396
- DOE Contract Number:
- AC52-06NA25396
- Code ID:
- 134596
- OSTI ID:
- code-134596
- Country of Origin:
- United States