Abstract
The Vistransformers Explained library is a collection of Python notebooks that demonstrate the internal mechanics and uses of vision transformer (ViT) machine learning models. The code implements, with mild modifications, ViT models that have been made publicly available through publications and GitHub code. The value this code adds is its in-depth explanations of the mathematics behind the sub-modules of the ViT models, accompanied by original figures. Additionally, the library contains the code necessary to implement and train the ViT models. The library does not include example training data for the models; instead, users must supply their own datasets. The code is built on the PyTorch Python library, and the repository contains only Python scripts, modules, and notebooks.
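As a minimal sketch of the kind of ViT sub-module the notebooks walk through, the PyTorch snippet below implements the standard patch-embedding step from the original ViT formulation. It is illustrative only: the class and parameter names here are assumptions made for this example, not the library's actual API.

```python
# Minimal sketch of the standard ViT patch-embedding step (illustrative;
# names are NOT taken from the Vistransformers Explained library).
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Split an image into fixed-size patches and project each to a token."""
    def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # A strided convolution applies one kernel per non-overlapping patch,
        # producing that patch's embedding in a single pass.
        self.proj = nn.Conv2d(in_chans, embed_dim,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, x):
        # (B, C, H, W) -> (B, embed_dim, H/P, W/P) -> (B, num_patches, embed_dim)
        return self.proj(x).flatten(2).transpose(1, 2)

tokens = PatchEmbedding()(torch.randn(1, 3, 224, 224))
print(tokens.shape)  # torch.Size([1, 196, 768])
```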
- Developers:
- Callis, Skylar
- Release Date:
- 2024-01-24
- Project Type:
- Open Source, Publicly Available Repository
- Software Type:
- Scientific
- Licenses:
- BSD 3-Clause "New" or "Revised" License
- Sponsoring Org.:
- USDOE National Nuclear Security Administration (NNSA)
- Primary Award/Contract Number: AC52-06NA25396
- Code ID:
- 125432
- Site Accession Number:
- O4693
- Research Org.:
- Los Alamos National Laboratory (LANL), Los Alamos, NM (United States)
- Country of Origin:
- United States
Citation Formats
MLA: Callis, Skylar. Vistransformers Explained. Computer Software. https://github.com/lanl/vision_transformers_explained. USDOE National Nuclear Security Administration (NNSA). 24 Jan. 2024. Web. doi:10.11578/dc.20240322.8.

APA: Callis, Skylar. (2024, January 24). Vistransformers Explained. [Computer software]. https://github.com/lanl/vision_transformers_explained. https://doi.org/10.11578/dc.20240322.8

Chicago: Callis, Skylar. "Vistransformers Explained." Computer software. January 24, 2024. https://github.com/lanl/vision_transformers_explained. https://doi.org/10.11578/dc.20240322.8.
BibTeX:

@misc{doecode_125432,
  title = {Vistransformers Explained},
  author = {Callis, Skylar},
  abstractNote = {The Vistransformers Explained library is a collection of Python notebooks that demonstrate the internal mechanics and uses of vision transformer (ViT) machine learning models. The code implements, with mild modifications, ViT models that have been made publicly available through publications and GitHub code. The value this code adds is its in-depth explanations of the mathematics behind the sub-modules of the ViT models, accompanied by original figures. Additionally, the library contains the code necessary to implement and train the ViT models. The library does not include example training data for the models; instead, users must supply their own datasets. The code is built on the PyTorch Python library, and the repository contains only Python scripts, modules, and notebooks.},
  doi = {10.11578/dc.20240322.8},
  url = {https://doi.org/10.11578/dc.20240322.8},
  howpublished = {[Computer Software] \url{https://doi.org/10.11578/dc.20240322.8}},
  year = {2024},
  month = {jan}
}