
Title: Large-scale seismic waveform quality metric calculation using Hadoop

In this work, we investigated the suitability of Hadoop MapReduce and Apache Spark for large-scale computation of seismic waveform quality metrics by comparing their performance with that of a traditional distributed implementation. The Incorporated Research Institutions for Seismology (IRIS) Data Management Center (DMC) provided 43 TB of broadband waveform data, of which 5.1 TB were processed with the traditional architecture, and the full 43 TB were processed using MapReduce and Spark. Maximum throughput of ~0.56 TB per hour was achieved using all 5 nodes of the traditional implementation. We noted that I/O dominated processing, and that I/O performance deteriorated with the addition of the 5th node. Data collected from this experiment provided the baseline against which the Hadoop results were compared. Next, we processed the full 43 TB dataset using both MapReduce and Apache Spark on our 18-node Hadoop cluster. We ran these experiments multiple times with various subsets of the data so that we could build models to predict performance as a function of dataset size. We found that both MapReduce and Spark significantly outperformed the traditional reference implementation. At a dataset size of 5.1 TB, both Spark and MapReduce were about 15 times faster than the reference implementation. Furthermore, our performance models predict that for a dataset of 350 TB, Spark running on a 100-node cluster would be about 265 times faster than the reference implementation. We do not expect that the reference implementation deployed on a 100-node cluster would perform significantly better than on the 5-node cluster, because its I/O performance cannot be made to scale. Finally, we note that although Big Data technologies clearly provide a way to process seismic waveform datasets in a high-performance and scalable manner, the technology is still rapidly changing, requires a high degree of investment in personnel, and will likely require significant changes in other parts of our infrastructure. Nevertheless, we anticipate that as the technology matures and third-party tool vendors make it easier to manage and operate clusters, Hadoop (or a successor) will play a large role in our seismic data processing.
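For readers unfamiliar with how such a job is structured, the following is a minimal PySpark sketch of computing one simple per-channel quality metric (RMS amplitude) over a list of waveform files. It is an illustration under assumed conventions, not the authors' implementation: the file paths, the use of ObsPy to read miniSEED, and the choice of RMS as the metric are all assumptions made for this example.

```python
# Minimal PySpark sketch (illustrative only): compute mean RMS amplitude
# per channel over a collection of waveform files. Assumes obspy and numpy
# are installed on every worker node; all paths are hypothetical.
from pyspark import SparkContext

import numpy as np
import obspy


def rms_amplitude(path):
    """Read one waveform file; yield (trace id, RMS amplitude) per trace."""
    # Paths are assumed to point at a filesystem visible to every worker
    # (ObsPy cannot read hdfs:// URLs directly).
    stream = obspy.read(path)
    for trace in stream:
        data = trace.data.astype(np.float64)
        yield (trace.id, float(np.sqrt(np.mean(data ** 2))))


if __name__ == "__main__":
    sc = SparkContext(appName="WaveformQualityMetrics")
    # "waveform_paths.txt" (hypothetical) lists one waveform file per line.
    metrics = (sc.textFile("hdfs:///data/waveform_paths.txt")
                 .flatMap(rms_amplitude)
                 .mapValues(lambda rms: (rms, 1))
                 .reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1]))
                 .mapValues(lambda s: s[0] / s[1]))  # mean RMS per channel
    metrics.saveAsTextFile("hdfs:///data/rms_metrics")
    sc.stop()
```

The sum-and-count pair passed through reduceByKey keeps the aggregation associative, which is what lets Spark parallelize the reduction across partitions; the same metric function could be reused unchanged in a Hadoop MapReduce mapper.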
Authors:
 [1] ;  [1] ;  [1] ;  [1] ;  [1]
  1. Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
Publication Date:
OSTI Identifier:
1262167
Report Number(s):
LLNL-JRNL-683307
Journal ID: ISSN 0098-3004
Grant/Contract Number:
AC52-07NA27344
Type:
Accepted Manuscript
Journal Name:
Computers and Geosciences
Additional Journal Information:
Journal Volume: 94; Journal Issue: C; Journal ID: ISSN 0098-3004
Publisher:
Elsevier
Research Org:
Lawrence Livermore National Laboratory (LLNL), Livermore, CA
Sponsoring Org:
USDOE
Country of Publication:
United States
Language:
English
Subject:
58 GEOSCIENCES; 97 MATHEMATICS, COMPUTING, AND INFORMATION SCIENCE