OSTI.GOV
U.S. Department of Energy, Office of Scientific and Technical Information

Title: Parallel Statistical Computing with R: An Illustration on Two Architectures

Authors:
Ostrouchov, George [1]; Chen, Wei-chen [2]; Schmidt, Drew [1]
  1. ORNL
  2. United States Food and Drug Administration (FDA)
Publication Date:
July 1, 2017
Research Org.:
Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
Sponsoring Org.:
USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR) (SC-21)
OSTI Identifier:
1399456
DOE Contract Number:
AC05-00OR22725
Resource Type:
Conference
Resource Relation:
Conference: ISI 61st World Statistics Congress Proceedings, Marrakech, Morocco, 16-21 July 2017
Country of Publication:
United States
Language:
English

Citation Formats

Ostrouchov, George, Chen, Wei-chen, and Schmidt, Drew. Parallel Statistical Computing with R: An Illustration on Two Architectures. United States: N. p., 2017. Web.
Ostrouchov, George, Chen, Wei-chen, & Schmidt, Drew. Parallel Statistical Computing with R: An Illustration on Two Architectures. United States.
Ostrouchov, George, Chen, Wei-chen, and Schmidt, Drew. 2017. "Parallel Statistical Computing with R: An Illustration on Two Architectures". United States. https://www.osti.gov/servlets/purl/1399456.
@article{osti_1399456,
  title = {Parallel Statistical Computing with R: An Illustration on Two Architectures},
  author = {Ostrouchov, George and Chen, Wei-chen and Schmidt, Drew},
  place = {United States},
  year = {2017},
  month = {jul}
}

Other availability
Please see Document Availability for additional information on obtaining the full-text document. Library patrons may search WorldCat to identify libraries that hold this conference proceeding.
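
No abstract is included in this record, so the following is only a hypothetical sketch, not code from the paper, of what parallel statistical computing with R can look like on two common kinds of architecture: a shared-memory multicore machine using the base parallel package, and a distributed-memory cluster using the pbdMPI package from the authors' pbdR project. The Monte Carlo statistic, the core and rank counts, and the choice of packages are illustrative assumptions.

# Hypothetical illustration only; details are assumptions, not taken from the paper.

# (1) Shared-memory multicore: fork-based parallelism with the base 'parallel' package.
library(parallel)
sim_mean <- function(i, n = 1e6) mean(rnorm(n))   # one replicate of a Monte Carlo statistic
res <- mclapply(1:8, sim_mean, mc.cores = 4)      # forked workers; Unix-alike systems only
print(mean(unlist(res)))

# (2) Distributed-memory cluster: SPMD-style parallelism with pbdMPI,
#     launched as, e.g., mpirun -np 4 Rscript example.R
library(pbdMPI)
init()
local_mean  <- mean(rnorm(1e6 %/% comm.size()))   # each MPI rank simulates its share
global_mean <- allreduce(local_mean, op = "sum") / comm.size()
comm.print(global_mean)                           # printed once, from rank 0
finalize()

In both cases the statistical computation itself is unchanged; only the mechanism for distributing replicates differs, which is the usual contrast between shared-memory and distributed-memory programming in R.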

Similar Records:
  • The interior point methods for linear programming in general share the common characteristic that most of the computational effort is spent factorizing and solving the symmetric positive semi-definite matrix that results from the sparse least squares equations. The number of iterations of most interior point methods is fairly small and is almost invariant to changes in problem dimensions. For numerical reasons, it is unlikely that the average number of iterations can be greatly reduced. Research into speeding up the computation of the IPM therefore concentrates mostly on improving solution methods for the sparse least squares equations. The symmetric matrix resulting from these equations can be fairly dense and slow down computation considerably. Alternative approaches using the augmented system, column splitting, and the Schur complement have been reported and implemented within commercial systems. These approaches are mostly suitable for matrices with a small number of highly dense columns. Alternatively, iterative or hybrid (direct-iterative) solution methods can also be employed. Whatever solution approach is used, the key factor in speeding up the computation is efficient exploitation of the static density structure of the matrices involved. Supernodes, decomposition, dense windows, and elimination-tree techniques for processor scheduling have been successfully employed to speed up the computation. Some of these techniques naturally reveal the parallel nature of the underlying computational model. In this paper, we present a parallel interior point algorithm based on some of the above-mentioned methods. The algorithm is designed for the PVM environment, which can exploit dedicated computers as well as clusters of workstations. We show that although the main computational effort of the algorithm concentrates in a few steps, a proper speed-up can be achieved only if the whole algorithm is designed to run in parallel. (A hypothetical R sketch of the central factor-and-solve step appears after this list.)
  • An intensive R&D and programming effort is required to meet the new challenges posed by future experimental high-energy particle physics (HEP) programs. The GeantV project aims to narrow the gap between the performance of the existing HEP detector simulation software and the ideal performance achievable by exploiting the latest advances in computing technology. The project has developed a particle detector simulation prototype capable of transporting particles in parallel through complex geometries, exploiting instruction-level microparallelism (SIMD and SIMT), task-level parallelism (multithreading), and high-level parallelism (MPI), leveraging both multi-core and many-core hardware. We present preliminary verification results concerning the electromagnetic (EM) physics models developed for parallel computing architectures within the GeantV project. In order to exploit the potential of vectorization and accelerators and to make the physics models effectively parallelizable, advanced sampling techniques have been implemented and tested. In this paper we introduce a set of automated statistical tests to verify the vectorized models by checking their consistency with the corresponding Geant4 models and to validate them against experimental data. (A hypothetical R sketch of such an automated consistency test appears after this list.)
  • Wavelets are the mathematical equivalent of a microscope, a means of looking at more or less detail in data. By applying wavelet transforms to remote sensing data (satellite images, atmospheric profiles, etc.), we can discover symmetries in Nature's ways of changing in time and displaying a highly variable environment at any given time. These symmetries are not exact but statistical. The most intriguing one is 'scale-invariance', which describes how spatial statistics collected over a wide range of scales (using wavelets) follow simple power laws with respect to the scale parameter. The geometrical counterparts of statistical scale-invariance are the random fractals so often observed in Nature. This wavelet-based exploration of natural symmetry will be illustrated with clouds, where asymmetries and broken symmetries are also uncovered. Both symmetry and symmetry-breaking have deep physical meanings. (A hypothetical R sketch of a power-law fit across dyadic scales appears after this list.)
  • In this paper the implementation of a parallel O(log N) algorithm for computation of rigid multibody dynamics on a Hypercube MIMD parallel architecture is presented. To our knowledge, this is the first algorithm that achieves the time lower bound of O(log N) by using an optimal number of O(N) processors. However, in addition to its theoretical significance, the algorithm is also highly efficient for practical implementation on commercially available MIMD parallel architectures due to its coarse grain size and simple communication and synchronization requirements. We present a multilevel parallel computation strategy for implementation of the algorithm on a Hypercube. This strategy allows the exploitation of parallelism at several computational levels as well as maximum overlapping of computation and communication to increase the performance of parallel computation. 24 refs. (A hypothetical R sketch of the recursive-doubling pattern that underlies such O(log N) bounds appears after this list.)
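
As a companion to the interior point abstract above, here is a hypothetical, deliberately dense R sketch of the step it refers to: forming the symmetric positive (semi-)definite matrix A D A^T from the least squares equations of one Newton step and solving it by a Cholesky factorization. The sizes and random data are invented; production codes instead use sparse, parallel factorizations that exploit the static structure of A, which is exactly the point made above.

# Hypothetical dense illustration of the factor-and-solve step in one interior point iteration.
set.seed(1)
m <- 50; n <- 200
A <- matrix(rnorm(m * n), m, n)            # constraint matrix (dense here for simplicity)
d <- runif(n, 0.1, 10)                     # positive diagonal scaling from the current iterate
r <- rnorm(m)                              # right-hand side of the reduced Newton system
M <- A %*% (d * t(A))                      # M = A D A^T, symmetric positive definite
R <- chol(M)                               # Cholesky factor: the dominant cost per iteration
dy <- backsolve(R, forwardsolve(t(R), r))  # two triangular solves give dy with M dy = r
print(max(abs(M %*% dy - r)))              # residual check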
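
As a companion to the GeantV abstract above, here is a hypothetical R sketch of an automated statistical consistency check between a reference model and a reimplementation. Both samples below are drawn from the same stand-in distribution; in the setting described above they would come from Geant4 and from the vectorized GeantV physics models, and the choice of test and threshold is an illustrative assumption.

# Hypothetical automated consistency test between two implementations of a sampling model.
set.seed(42)
reference <- rexp(1e5, rate = 2)     # stand-in for samples from the reference model
candidate <- rexp(1e5, rate = 2)     # stand-in for samples from the vectorized model under test
ks <- ks.test(reference, candidate)  # two-sample Kolmogorov-Smirnov test
print(ks)
stopifnot(ks$p.value > 0.01)         # an automated check fails on strong disagreement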
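
As a companion to the wavelet abstract above, here is a hypothetical R sketch of a scale-invariance check: a Haar-style detail variance is computed at dyadic scales on a synthetic signal, and the power law is fitted as a straight line in log-log coordinates. The signal, scales, and Haar-style detail are invented for illustration; real inputs would be remote sensing transects or profiles.

# Hypothetical power-law (scale-invariance) check using Haar-style detail variances.
set.seed(7)
x <- cumsum(rnorm(2^14))                            # synthetic signal with power-law statistics
scales <- 2^(1:8)
detail_var <- sapply(scales, function(s) {
  blocks <- matrix(x[1:(s * (length(x) %/% s))], nrow = s)  # consecutive blocks of length s
  half <- s %/% 2
  var(colMeans(blocks[1:half, , drop = FALSE]) -            # Haar-style detail: difference of
      colMeans(blocks[(half + 1):s, , drop = FALSE]))       # the two half-block means
})
fit <- lm(log(detail_var) ~ log(scales))            # slope = fitted power-law exponent
print(coef(fit))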
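
The multibody-dynamics abstract above does not reproduce its algorithm here, but a standard way such O(log N) bounds on O(N) processors are reached is a parallel prefix (recursive doubling) computation over the chain of bodies. The sketch below shows only the recursive-doubling pattern on a plain numeric vector, not the paper's actual recursion: each round's updates are mutually independent, so N processors could apply them simultaneously, and about log2(N) rounds suffice.

# Hypothetical recursive-doubling (parallel prefix) sketch; runs serially here,
# but every update within a round is independent and could run on its own processor.
prefix_scan <- function(v) {
  n <- length(v)
  d <- 1
  while (d < n) {                      # ceiling(log2(n)) rounds in total
    src <- seq_len(n - d)              # element src + d receives from element src
    v[src + d] <- v[src + d] + v[src]  # whole round computed at once from pre-round values
    d <- 2 * d
  }
  v
}
print(prefix_scan(rep(1, 8)))                    # 1 2 3 4 5 6 7 8, computed in 3 rounds
stopifnot(all(prefix_scan(1:8) == cumsum(1:8)))  # matches a sequential running sum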