OSTI.GOV — U.S. Department of Energy, Office of Scientific and Technical Information

Title: Massively Parallel Simulations of Hemodynamics in the Primary Large Arteries of the Human Vasculature

Authors: Randles, A; Draeger, E; Bailey, P
Publication Date: 2015
Research Org.: Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
Resource Type: Conference
Conference: International Conference on Computational Science, Reykjavik, Iceland, Jun 01 - Jun 03, 2015
Country of Publication: United States

Citation Formats

MLA: Randles, A, Draeger, E, and Bailey, P. Massively Parallel Simulations of Hemodynamics in the Primary Large Arteries of the Human Vasculature. United States: N. p., 2015. Web.
APA: Randles, A, Draeger, E, & Bailey, P. Massively Parallel Simulations of Hemodynamics in the Primary Large Arteries of the Human Vasculature. United States.
Chicago: Randles, A, Draeger, E, and Bailey, P. 2015. "Massively Parallel Simulations of Hemodynamics in the Primary Large Arteries of the Human Vasculature". United States.
BibTeX (entry key is a placeholder):

@misc{randles2015hemodynamics,
  title = {Massively Parallel Simulations of Hemodynamics in the Primary Large Arteries of the Human Vasculature},
  author = {Randles, A and Draeger, E and Bailey, P},
  place = {United States},
  year = {2015},
  month = {1}
}

Other availability
Please see Document Availability for additional information on obtaining the full-text document. Library patrons may search WorldCat to identify libraries that hold this conference proceeding.

Cited by 4:
  • Successful molecular dynamics (MD) simulation of large systems (> million atoms) for long times (> nanoseconds) requires the integration of constrained equations of motion (CEOM). Constraints are used to eliminate high-frequency degrees of freedom (DOF) and to allow the use of rigid bodies. Solving the CEOM allows for larger integration time-steps and helps focus the simulation on the important collective dynamics of chemical, biological, and materials systems. We explore advances in multibody dynamics which have resulted in O(N) algorithms for propagating the CEOM. However, because of their strictly sequential nature, the computational time required by these algorithms does not scale down with increased numbers of processors. We then present the new constraint force algorithm for solving the CEOM and show that this algorithm is fully parallelizable, leading to a computational cost of O(N/P + log P) for N DOF on P processors. (A toy cost model illustrating this scaling appears after this list.)
  • A fully connected feedforward neural network is simulated on a number of parallel computers (MasPar-1, Connection Machine CM5, Intel iPSC-2 and iPSC-860) and the performance is compared to that obtained on sequential vector computers (Cray YMP, Cray C90, IBM 3090) and to a scalar workstation (IBM RISC-6000). Peak performances of up to 342 million connections per second (MCPS) could be obtained on the Cray C90 using a single processor, while the optimum performance obtained on the parallel computers was 90 MCPS using 4096 processors. Efficiencies such as these have enabled neural network computations to be carried out for a number of chemical physics problems. Several examples are discussed: multi-dimensional function/surface fitting, coordinate transformations, and prediction of physical properties from chemical structure. (A short sketch of the MCPS arithmetic appears after this list.)
  • Large-scale molecular dynamics simulations on a massively parallel computer are performed to investigate the mechanical behavior of two-dimensional materials. A pair potential and a model embedded-atom many-body potential are examined, corresponding to "brittle" and "ductile" materials, respectively. A parallel MD algorithm is developed to exploit the architecture of the Connection Machine, enabling simulations of > 10^6 atoms. A model spallation experiment is performed on a 2-D triangular crystal with a well-defined nanocrystalline defect on the spall plane. The process of spallation is modelled as a uniform adiabatic expansion. The spall strength is shown to be proportional to the logarithm of the applied strain rate, and a dislocation dynamics model is used to explain the results. The simple model gives good predictions for the onset of spallation in the computer experiments. The nanocrystal defect affects the propagation of the shock front, and failure is enhanced along the grain boundary. (A sketch of the logarithmic rate dependence appears after this list.)
  • The implementation of explicit finite element methods with contact-impact on massively parallel SIMD computers is described. The basic parallel finite element algorithm employs an exchange process which minimizes interprocessor communication at the expense of redundant computations and storage. The contact-impact algorithm is based on the pinball method, in which compatibility is enforced by preventing interpenetration of spheres embedded in elements adjacent to surfaces. The enhancements to the pinball algorithm include a parallel assembled surface normal algorithm and parallel detection of interpenetrating pairs. Some timings with and without contact-impact are given. (A sketch of the sphere-overlap test appears after this list.)
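
The O(N/P + log P) claim in the first abstract can be made concrete with a toy cost model; this is not the authors' constraint force algorithm itself, just an illustration of its scaling: each of P processors sweeps its N/P local degrees of freedom, then partial results are combined in a binary reduction tree of depth log2(P). All names and numbers below are illustrative.

    import math

    def constraint_solve_cost(n_dof, n_proc):
        # Toy cost model for the O(N/P + log P) scaling claimed above:
        # O(N/P) local work per processor plus an O(log P) tree combine.
        local_work = n_dof / n_proc
        combine_steps = math.ceil(math.log2(n_proc))
        return local_work + combine_steps

    # The dominant local term halves each time the processor count doubles.
    for p in (1, 64, 4096):
        print(p, constraint_solve_cost(10**6, p))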
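The MCPS figures in the second abstract follow from simple arithmetic: a fully connected feedforward network evaluates one multiply-accumulate per weight per forward pass, so throughput is connections per pass times passes per second. A minimal sketch, with a made-up network shape and evaluation rate:

    def connections(layers):
        # Weighted connections in a fully connected feedforward net,
        # e.g. layers = (256, 512, 256) has 256*512 + 512*256 weights.
        return sum(a * b for a, b in zip(layers, layers[1:]))

    def mcps(layers, passes_per_second):
        # Million connections per second, the benchmark metric above.
        return connections(layers) * passes_per_second / 1e6

    # Hypothetical example: 262,144 connections at 2000 passes/s -> ~524 MCPS.
    print(mcps((256, 512, 256), 2000.0))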
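The third abstract reports spall strength proportional to the logarithm of the applied strain rate. A minimal sketch of that relationship; the parameter values are invented for illustration and would in practice come from fitting the simulation data:

    import math

    def spall_strength(strain_rate, sigma0=2.0, k=0.15, rate0=1e8):
        # Logarithmic strain-rate dependence reported above; sigma0, k,
        # and rate0 are made-up fit parameters, not values from the paper.
        return sigma0 + k * math.log(strain_rate / rate0)

    # Strength rises by k per e-fold increase in strain rate.
    for rate in (1e8, 1e9, 1e10):  # 1/s
        print(f"{rate:.0e} 1/s -> {spall_strength(rate):.2f} (arb. units)")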
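The pinball method in the fourth abstract reduces contact detection to a sphere-overlap test: two pinballs interpenetrate when their centers are closer than the sum of their radii. A minimal sketch of the detection step; the naive O(n^2) pair loop is exactly the step the paper parallelizes, and a production code would prune it with spatial sorting:

    import math

    def interpenetrating(c1, r1, c2, r2):
        # Pinball compatibility test: spheres embedded in surface
        # elements are in contact when center distance < r1 + r2.
        return math.dist(c1, c2) < r1 + r2

    def detect_pairs(pinballs):
        # pinballs: list of (center, radius); returns index pairs in contact.
        pairs = []
        for i in range(len(pinballs)):
            for j in range(i + 1, len(pinballs)):
                (ci, ri), (cj, rj) = pinballs[i], pinballs[j]
                if interpenetrating(ci, ri, cj, rj):
                    pairs.append((i, j))
        return pairs

    # Tiny example: two overlapping pinballs and one far away.
    print(detect_pairs([((0, 0, 0), 1.0), ((1.5, 0, 0), 1.0), ((9, 9, 9), 1.0)]))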