Abstract
This library is a collection of Python modules that define, train, and analyze vision transformer (ViT) machine learning models. The code implements, with mild modifications, ViT models that have been made publicly available through publication and GitHub code. The training data for these models is hydrodynamic simulation output in the form of NumPy arrays. This library contains code to train these ViT models on the hydrodynamic simulation output with a variety of hyperparameters and to compare the results of such models. Furthermore, the library contains definitions of simple convolutional neural network (CNN) architectures that can be trained on the same hydrodynamic simulation output; these are included as a reference point against which to compare the ViT models. Additionally, the library includes trained ViT and CNN models and example input data for demonstration purposes. The code is based on the PyTorch library.
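The sketch below illustrates the kind of workflow the abstract describes: hydrodynamic simulation output stored as NumPy arrays is wrapped in a PyTorch dataset and used to train a small CNN baseline. It is a minimal illustration only; the class names, file names, array shapes, and training loop are assumptions for demonstration and are not the library's actual API.

```python
# Illustrative sketch (not the library's actual API): wrap simulation output
# stored as NumPy arrays in a PyTorch Dataset and train a simple CNN baseline.
# File names, shapes, and hyperparameters below are hypothetical.
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

class SimulationDataset(Dataset):
    """Hypothetical dataset: one array of 2D fields, one array of scalar targets."""
    def __init__(self, image_path, target_path):
        self.images = np.load(image_path).astype(np.float32)    # shape (N, H, W)
        self.targets = np.load(target_path).astype(np.float32)  # shape (N,)

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        x = torch.from_numpy(self.images[idx]).unsqueeze(0)  # add channel dim -> (1, H, W)
        y = torch.tensor(self.targets[idx])
        return x, y

class SimpleCNN(nn.Module):
    """A small CNN regression model, analogous in spirit to the CNN reference models."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(-1)

def train(image_path="images.npy", target_path="targets.npy", epochs=5, lr=1e-3):
    dataset = SimulationDataset(image_path, target_path)
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    model = SimpleCNN()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
```

In the hyperparameter studies the abstract describes, a ViT model would take the place of the CNN in the same kind of training loop, with the two architectures compared on identical simulation data.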
- Developers:
- Callis, Skylar
- Release Date:
- 2024-05-29
- Project Type:
- Open Source, Publicly Available Repository
- Software Type:
- Scientific
- Licenses:
- BSD 3-clause "New" or "Revised" License
- Sponsoring Org.:
- USDOE National Nuclear Security Administration (NNSA)
- Primary Award/Contract Number:
- AC52-06NA25396
- Code ID:
- 134596
- Site Accession Number:
- O4749
- Research Org.:
- Los Alamos National Laboratory (LANL), Los Alamos, NM (United States)
- Country of Origin:
- United States
Citation Formats
Callis, Skylar. Hyperparameter Studies for Vision Transformers Trained on High-Fidelity Simulations. Computer Software. https://github.com/lanl/CNN-T2TViT-comparison. USDOE National Nuclear Security Administration (NNSA). 29 May 2024. Web. doi:10.11578/dc.20240712.6.
Callis, Skylar. (2024, May 29). Hyperparameter Studies for Vision Transformers Trained on High-Fidelity Simulations. [Computer software]. https://github.com/lanl/CNN-T2TViT-comparison. https://doi.org/10.11578/dc.20240712.6.
Callis, Skylar. "Hyperparameter Studies for Vision Transformers Trained on High-Fidelity Simulations." Computer software. May 29, 2024. https://github.com/lanl/CNN-T2TViT-comparison. https://doi.org/10.11578/dc.20240712.6.
@misc{doecode_134596,
title = {Hyperparameter Studies for Vision Transformers Trained on High-Fidelity Simulations},
author = {Callis, Skylar},
abstractNote = {This library is a collection of Python modules that define, train, and analyze vision transformer (ViT) machine learning models. The code implements, with mild modifications, ViT models that have been made publicly available through publication and GitHub code. The training data for these models is hydrodynamic simulation output in the form of NumPy arrays. This library contains code to train these ViT models on the hydrodynamic simulation output with a variety of hyperparameters and to compare the results of such models. Furthermore, the library contains definitions of simple convolutional neural network (CNN) architectures that can be trained on the same hydrodynamic simulation output; these are included as a reference point against which to compare the ViT models. Additionally, the library includes trained ViT and CNN models and example input data for demonstration purposes. The code is based on the PyTorch library.},
doi = {10.11578/dc.20240712.6},
url = {https://doi.org/10.11578/dc.20240712.6},
howpublished = {[Computer Software] \url{https://doi.org/10.11578/dc.20240712.6}},
year = {2024},
month = {may}
}