Abstract
A new vector parallel supercomputer, the Fujitsu VPP500, was installed at RIKEN earlier this year. It consists of 30 vector computers, each with 1.6 GFLOPS peak speed and 256 MB of memory, connected by a crossbar switch with a 400 MB/s peak data transfer rate each way between any pair of nodes. The authors developed a Fortran lattice QCD simulation code for it. It runs at about 1.1 GFLOPS sustained per node for the Metropolis pure-gauge update, and about 0.8 GFLOPS sustained per node for conjugate gradient inversion of the staggered fermion matrix.
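The two kernels quoted above are standard lattice QCD workloads: a Metropolis accept/reject sweep over the gauge links, and a conjugate gradient (CG) solve against the staggered fermion matrix. For scale, 30 nodes at 1.6 GFLOPS give 48 GFLOPS aggregate peak, so 1.1 GFLOPS sustained per node is roughly 69% of per-node peak. As a minimal sketch of the CG kernel only, in Fortran since that is the language of the authors' code, the following solves a small generic symmetric positive-definite system; the matrix, its size, and the tolerance are illustrative stand-ins and not taken from the paper:

program cg_sketch
  implicit none
  integer, parameter :: n = 4
  real(8) :: a(n,n), b(n), x(n), r(n), p(n), ap(n)
  real(8) :: alpha, beta, rsold, rsnew
  integer :: i, iter

  ! Illustrative SPD system (stand-in for the staggered fermion matrix):
  ! a diagonal matrix with entries 1..n, right-hand side of ones.
  a = 0.0d0
  do i = 1, n
     a(i,i) = dble(i)
     b(i) = 1.0d0
  end do

  x = 0.0d0                        ! initial guess
  r = b                            ! residual r = b - A*x with x = 0
  p = r                            ! initial search direction
  rsold = dot_product(r, r)

  do iter = 1, 1000
     ap = matmul(a, p)             ! matrix-vector product: the dominant cost
     alpha = rsold / dot_product(p, ap)
     x = x + alpha * p             ! update solution along search direction
     r = r - alpha * ap            ! update residual
     rsnew = dot_product(r, r)
     if (sqrt(rsnew) < 1.0d-12) exit
     beta = rsnew / rsold
     p = r + beta * p              ! new conjugate search direction
     rsold = rsnew
  end do

  print *, 'iterations:', iter
  print *, 'solution  :', x
end program cg_sketch

In general, CG time is dominated by the repeated matrix-vector product and the dot products, both of which vectorize well, which is consistent with the healthy sustained rates the abstract reports for this kernel.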
Citation Formats
MLA: Kim, S., and Ohta, S. Lattice QCD calculation using VPP500. Netherlands: N. p., 1995. Web. doi:10.1016/0920-5632(95)00422-6.
APA: Kim, S., & Ohta, S. (1995). Lattice QCD calculation using VPP500. Netherlands. https://doi.org/10.1016/0920-5632(95)00422-6
Chicago: Kim, S., and Ohta, S. 1995. "Lattice QCD calculation using VPP500." Netherlands. https://doi.org/10.1016/0920-5632(95)00422-6.
@misc{etde_101226,
  title = {Lattice QCD calculation using VPP500},
  author = {Kim, S. and Ohta, S.},
  abstractNote = {A new vector parallel supercomputer, the Fujitsu VPP500, was installed at RIKEN earlier this year. It consists of 30 vector computers, each with 1.6 GFLOPS peak speed and 256 MB of memory, connected by a crossbar switch with a 400 MB/s peak data transfer rate each way between any pair of nodes. The authors developed a Fortran lattice QCD simulation code for it. It runs at about 1.1 GFLOPS sustained per node for the Metropolis pure-gauge update, and about 0.8 GFLOPS sustained per node for conjugate gradient inversion of the staggered fermion matrix.},
  doi = {10.1016/0920-5632(95)00422-6},
  journal = {Nuclear Physics B - Proceedings Supplements},
  volume = {42},
  place = {Netherlands},
  year = {1995},
  month = {Apr}
}