OSTI.GOV, U.S. Department of Energy
Office of Scientific and Technical Information

Title: Implementing sparse matrix techniques in the ERATO code

Abstract

The ERATO code computes the stability of an equilibrium with respect to the linearized MHD equations. The code is divided into five programs (ERATO1 through ERATO5). This report documents some minor changes made to ERATO3 and a major reorganization of ERATO4. The changes drastically reduce the amount of secondary storage needed by the codes while improving the efficiency of ERATO4. The ultimate goal is to allow the successful completion of runs with finer meshes than are currently possible. ERATO4 takes given matrices A and B and a shift parameter ω₀, and computes the Choleski factorization (UᵀDU) of the matrix A - ω₀B. It then uses the factorization in inverse iteration to find the eigenvalue of the generalized eigenvalue problem Ax = λBx closest to ω₀, along with the corresponding eigenvector. The matrices A and B are highly structured and quite sparse: both are symmetric, and B is positive definite. Each block consists of 16 square subblocks. 9 figures. (RWR)
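The shift-and-invert inverse iteration described in the abstract can be sketched in a few lines. The sketch below is illustrative only: it uses a dense solve (`numpy.linalg.solve`) in place of ERATO4's banded UᵀDU Choleski factorization, and the function name, starting vector, and tolerances are invented for the example.

```python
import numpy as np

def shift_invert_eig(A, B, omega0, tol=1e-10, max_iter=200):
    """Find the eigenvalue of A x = lambda B x closest to omega0.

    Illustrative stand-in for ERATO4's approach: form the shifted matrix
    A - omega0*B once, then run inverse iteration against it. ERATO4
    factors this matrix once as U^T D U; here a dense solve plays that role.
    """
    M = A - omega0 * B               # shifted matrix, factored once in ERATO4
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)      # arbitrary starting vector
    lam = omega0
    for _ in range(max_iter):
        y = np.linalg.solve(M, B @ x)            # one inverse-iteration step
        x = y / np.linalg.norm(y)
        lam_new = (x @ (A @ x)) / (x @ (B @ x))  # Rayleigh quotient estimate
        if abs(lam_new - lam) < tol:
            return lam_new, x
        lam = lam_new
    return lam, x
```

For symmetric A and symmetric positive definite B, the iterates converge to the eigenpair of the pencil (A, B) whose eigenvalue is nearest the shift ω₀, which is why a single factorization of A - ω₀B suffices for the whole iteration.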

Authors:
Scott, D.S.
Publication Date:
April 1, 1980
Research Org.:
Union Carbide Corp., Oak Ridge, TN (USA). Computer Sciences Div.
OSTI Identifier:
5404400
Report Number(s):
ORNL/CSD/TM-117
TRN: 80-007674
DOE Contract Number:
W-7405-ENG-26
Resource Type:
Technical Report
Country of Publication:
United States
Language:
English
Subject:
71 CLASSICAL AND QUANTUM MECHANICS, GENERAL PHYSICS; 99 GENERAL AND MISCELLANEOUS//MATHEMATICS, COMPUTING, AND INFORMATION SCIENCE; 75 CONDENSED MATTER PHYSICS, SUPERCONDUCTIVITY AND SUPERFLUIDITY; COMPUTER CODES; E CODES; MATRICES; EIGENVALUES; FACTORIZATION; CALCULATION METHODS; EIGENVECTORS; EQUILIBRIUM; ITERATIVE METHODS; MAGNETOHYDRODYNAMICS; STABILITY; FLUID MECHANICS; HYDRODYNAMICS; MECHANICS; 658000* - Mathematical Physics- (-1987); 990200 - Mathematics & Computers; 640430 - Fluid Physics- Magnetohydrodynamics

Citation Formats

Scott, D.S. Implementing sparse matrix techniques in the ERATO code. United States: N. p., 1980. Web. doi:10.2172/5404400.
Scott, D.S. Implementing sparse matrix techniques in the ERATO code. United States. doi:10.2172/5404400.
Scott, D.S. 1980. "Implementing sparse matrix techniques in the ERATO code". United States. doi:10.2172/5404400. https://www.osti.gov/servlets/purl/5404400.
@article{osti_5404400,
title = {Implementing sparse matrix techniques in the ERATO code},
author = {Scott, D.S.},
abstractNote = {The ERATO code computes the stability of an equilibrium with respect to the linearized MHD equations. The code is divided into five programs (ERATO1 through ERATO5). This report documents some minor changes made to ERATO3 and a major reorganization of ERATO4. The changes drastically reduce the amount of secondary storage needed by the codes while improving the efficiency of ERATO4. The ultimate goal is to allow the successful completion of runs with finer meshes than are currently possible. ERATO4 takes given matrices A and B and a shift parameter ω₀, and computes the Choleski factorization (UᵀDU) of the matrix A - ω₀B. It then uses the factorization in inverse iteration to find the eigenvalue of the generalized eigenvalue problem Ax = λBx closest to ω₀, along with the corresponding eigenvector. The matrices A and B are highly structured and quite sparse: both are symmetric, and B is positive definite. Each block consists of 16 square subblocks. 9 figures. (RWR)},
doi = {10.2172/5404400},
place = {United States},
year = {1980},
month = {apr}
}

Technical Report: https://www.osti.gov/servlets/purl/5404400

Similar Records:
  • The purpose of this paper is to show how sparse Gaussian elimination is applied to the numerical simulation of petroleum reservoirs. In particular, it presents the work and computing-time requirements of sparse Gaussian elimination for some typical problems of reservoir simulation.
  • A unified theory of finite sparse matrix techniques based on a literature search and new results is presented. It is intended to aid in computational work and symbolic manipulation of large sparse systems of linear equations. The theory relies on the bijection between bipartite graphs and rectangular Boolean matrix representations. The concept of perfect elimination matrices is extended from the classification under similarity transformations to that under equivalence transformations with permutation matrices. The reducibility problem is treated with a new and simpler proof than found in the literature. A number of useful algorithms are described. The minimum deficiency algorithms are extended to the new classification, where the latter required a different technique of proof. 13 figures, 7 tables.
  • Sparse matrix-matrix multiplication is a key kernel with applications in several domains, such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrix-matrix multiplication with a focus on performance portability across different high performance computing architectures. The performance of these algorithms depends on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance difference between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data structure based on the characteristics of the problem. We show performance comparisons on three architectures and demonstrate the need for the community to develop two-phase sparse matrix-matrix multiplication implementations for efficient reuse of the data structures involved.
  • Gary Kumfert and Alex Pothen have improved the quality and run time of two ordering algorithms for minimizing the wavefront and envelope size of sparse matrices and graphs. These algorithms compute orderings for irregular data structures (e.g., unstructured meshes) that reduce the number of cache misses on modern workstation architectures. They have completed the implementation of a parallel solver for sparse, symmetric indefinite systems for distributed memory computers such as the IBM SP-2. The indefiniteness requires one to incorporate block pivoting (2 by 2 blocks) in the algorithm, thus demanding dynamic, parallel data structures. This is the first reported parallel solver for the indefinite problem. Direct methods for solving systems of linear equations employ sophisticated combinatorial and algebraic algorithms that contribute to software complexity, and hence it is natural to consider object-oriented design (OOD) in this context. The authors have continued to create software for solving sparse systems of linear equations by direct methods employing OOD. Fast computation of robust preconditioners is a priority for solving large systems of equations on unstructured grids and in other applications. They have developed new algorithms and software that can compute incomplete factorization preconditioners for high levels of fill in time proportional to the number of floating point operations and memory accesses.