Implementing sparse matrix techniques in the ERATO code
Abstract
The ERATO code computes the stability of an equilibrium with respect to the linear MHD equations. The ERATO code is divided into five programs (ERATO1 through ERATO5). This report documents some minor changes made in ERATO3 and a major reorganization of ERATO4. The changes were made to drastically reduce the amount of secondary storage needed by the codes and, at the same time, improve the efficiency of ERATO4. The ultimate goal is to allow the successful completion of runs with finer meshes than are currently possible. ERATO4 takes given matrices A and B and a shift parameter ω₀, and computes the Choleski factorization (U^T D U) of the matrix A − ω₀B. It then uses the factorization to implement inverse iteration to find the eigenvalue of the generalized eigenvalue problem Ax = λBx closest to ω₀, together with its corresponding eigenvector. The matrices A and B are highly structured and quite sparse. Both are symmetric and B is positive definite. Each block consists of 16 square subblocks. 9 figures. (RWR)
- Authors:
- Scott, D. S.
- Publication Date:
- 1980-04-01
- Research Org.:
- Union Carbide Corp., Oak Ridge, TN (USA). Computer Sciences Div.
- OSTI Identifier:
- 5404400
- Report Number(s):
- ORNL/CSD/TM-117
TRN: 80-007674
- DOE Contract Number:
- W-7405-ENG-26
- Resource Type:
- Technical Report
- Country of Publication:
- United States
- Language:
- English
- Subject:
- 71 CLASSICAL AND QUANTUM MECHANICS, GENERAL PHYSICS; 99 GENERAL AND MISCELLANEOUS//MATHEMATICS, COMPUTING, AND INFORMATION SCIENCE; 75 CONDENSED MATTER PHYSICS, SUPERCONDUCTIVITY AND SUPERFLUIDITY; COMPUTER CODES; E CODES; MATRICES; EIGENVALUES; FACTORIZATION; CALCULATION METHODS; EIGENVECTORS; EQUILIBRIUM; ITERATIVE METHODS; MAGNETOHYDRODYNAMICS; STABILITY; FLUID MECHANICS; HYDRODYNAMICS; MECHANICS; 658000* - Mathematical Physics- (-1987); 990200 - Mathematics & Computers; 640430 - Fluid Physics- Magnetohydrodynamics
Citation Formats
Scott, D. S. Implementing sparse matrix techniques in the ERATO code. United States: N. p., 1980. Web. doi:10.2172/5404400.
Scott, D. S. Implementing sparse matrix techniques in the ERATO code. United States. doi:10.2172/5404400.
Scott, D. S. "Implementing sparse matrix techniques in the ERATO code." United States, 1980. doi:10.2172/5404400. https://www.osti.gov/servlets/purl/5404400.
@article{osti_5404400,
  title = {Implementing sparse matrix techniques in the ERATO code},
  author = {Scott, D. S.},
  abstractNote = {The ERATO code computes the stability of an equilibrium with respect to the linear MHD equations. The ERATO code is divided into five programs (ERATO1 through ERATO5). This report documents some minor changes made in ERATO3 and a major reorganization of ERATO4. The changes were made to drastically reduce the amount of secondary storage needed by the codes and, at the same time, improve the efficiency of ERATO4. The ultimate goal is to allow the successful completion of runs with finer meshes than are currently possible. ERATO4 takes given matrices A and B and a shift parameter ω₀, and computes the Choleski factorization (U^T D U) of the matrix A − ω₀B. It then uses the factorization to implement inverse iteration to find the eigenvalue of the generalized eigenvalue problem Ax = λBx closest to ω₀, together with its corresponding eigenvector. The matrices A and B are highly structured and quite sparse. Both are symmetric and B is positive definite. Each block consists of 16 square subblocks. 9 figures. (RWR)},
  doi = {10.2172/5404400},
  place = {United States},
  year = {1980},
  month = {4}
}
Similar Records
The purpose of this paper is to show how sparse Gaussian elimination is applied to the numerical simulation of petroleum reservoirs. In particular, it presents the work and computing-time requirements of sparse Gaussian elimination for some typical problems of reservoir simulation.
Finite sparse matrix techniques. [Solution of linear systems Ax = b]
A unified theory of finite sparse matrix techniques based on a literature search and new results is presented. It is intended to aid in computational work and symbolic manipulation of large sparse systems of linear equations. The theory relies on the bijection between bipartite graphs and rectangular Boolean matrix representations. The concept of perfect elimination matrices is extended from the classification under similarity transformations to that under equivalence transformations with permutation matrices. The reducibility problem is treated with a new and simpler proof than found in the literature. A number of useful algorithms are described. The minimum deficiency algorithms …
The 2005 International Conference on Preconditioning Techniques for Large Sparse Matrix Problems in Scientific and Industrial Applications (Final Report)
The document is a short report on the conference.
Multi-threaded Sparse Matrix Sparse Matrix Multiplication for Many-Core and GPU Architectures.
Sparse matrix-matrix multiplication is a key kernel that has applications in several domains such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrix-matrix multiplication with a focus on performance portability across different high performance computing architectures. The performance of these algorithms depends on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance difference between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data …
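To make the accumulator question concrete, here is a minimal row-by-row (Gustavson-style) sparse matrix-matrix product in Python using a hash-map accumulator per row. The function name and the raw-CSR-array calling convention are invented for this sketch; they are not from the paper (kkSpGEMM belongs to Kokkos Kernels), and a dense scratch array or a sorted heap are the usual alternatives to the dict used here.

```python
def spgemm_csr(a_ptr, a_idx, a_val, b_ptr, b_idx, b_val):
    """Row-by-row C = A * B for matrices in CSR form (Gustavson's algorithm).

    For each row i of A, the nonzeros of the matching rows of B are
    scaled and merged in a hash-map accumulator, then flushed into the
    CSR arrays of C in sorted column order.
    """
    c_ptr, c_idx, c_val = [0], [], []
    for i in range(len(a_ptr) - 1):
        acc = {}                                  # column -> partial sum for row i
        for k in range(a_ptr[i], a_ptr[i + 1]):
            j, v = a_idx[k], a_val[k]
            for t in range(b_ptr[j], b_ptr[j + 1]):
                acc[b_idx[t]] = acc.get(b_idx[t], 0.0) + v * b_val[t]
        for col in sorted(acc):                   # flush the accumulated row
            c_idx.append(col)
            c_val.append(acc[col])
        c_ptr.append(len(c_idx))
    return c_ptr, c_idx, c_val
```

The dict lookup per flop is exactly the cost the paper's dense- and hash-accumulator comparison targets; on GPUs the choice of this data structure dominates performance.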
Parallel sparse matrix computations: Wavefront minimization of sparse matrices. Final report for the period ending June 14, 1998
Gary Kumfert and Alex Pothen have improved the quality and run time of two ordering algorithms for minimizing the wavefront and envelope size of sparse matrices and graphs. These algorithms compute orderings for irregular data structures (e.g., unstructured meshes) that reduce the number of cache misses on modern workstation architectures. They have completed the implementation of a parallel solver for sparse, symmetric indefinite systems for distributed memory computers such as the IBM SP-2. The indefiniteness requires one to incorporate block pivoting (2 by 2 blocks) in the algorithm, thus demanding dynamic, parallel data structures. This is the first reported parallel …
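Envelope and wavefront reduction of the kind discussed above can be demonstrated with the classic reverse Cuthill-McKee ordering, which SciPy exposes directly. This is a hypothetical mini-example (not Kumfert and Pothen's code): a tridiagonal path-graph matrix is deliberately scrambled with an interleaving permutation, and RCM recovers a low-bandwidth ordering.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidth(A):
    """Largest |i - j| over the nonzeros; the envelope shrinks with it."""
    coo = A.tocoo()
    return int(np.abs(coo.row - coo.col).max()) if coo.nnz else 0

# A 12-point path graph (tridiagonal matrix) whose rows and columns have
# been interleaved, destroying the natural low-bandwidth ordering.
n = 12
T = np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
p = np.array([0, 6, 1, 7, 2, 8, 3, 9, 4, 10, 5, 11])
A = csr_matrix(T[np.ix_(p, p)])

perm = reverse_cuthill_mckee(A, symmetric_mode=True)
A_rcm = A[perm][:, perm]          # symmetric permutation P A P^T
print(bandwidth(A), bandwidth(A_rcm))
```

A smaller bandwidth directly reduces both the fill incurred by an envelope factorization and the working set touched per row, which is the cache-miss effect the report's orderings target.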