Big Data Meets HPC Log Analytics: Scalable Approach to Understanding Systems at Extreme Scale
- ORNL
Today's high-performance computing (HPC) systems are heavily instrumented, generating logs that record abnormal events, such as critical conditions, faults, errors, and failures, as well as system resource utilization and the resource usage of user applications. Once fully analyzed and correlated, these logs can yield detailed information about system health, the root causes of failures, and an application's interactions with the system, providing valuable insights to domain scientists and system administrators. However, processing HPC logs requires a deep understanding of hardware and software components at multiple layers of the system stack. Moreover, most log data is unstructured and voluminous, making it difficult for system users and administrators to inspect manually. With rapid increases in the scale and complexity of HPC systems, log data processing is becoming a big data challenge. This paper introduces an HPC log data analytics framework based on a distributed NoSQL database technology, which provides scalability and high availability, and the Apache Spark framework for rapid in-memory processing of the log data. The analytics framework enables the extraction of a range of information about the system so that system administrators and end users alike can obtain the insights they need. We describe our experience using this framework to glean insights about system behavior from the log data of the Titan supercomputer at the Oak Ridge National Laboratory.
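The abstract's core pipeline, turning unstructured log lines into structured records and then aggregating them, can be sketched in plain Python. This is an illustrative sketch only: the log format, node names, and severity labels below are hypothetical, and at Titan's scale the aggregation step would be expressed as a distributed Spark job over log data pulled from the NoSQL store rather than an in-process loop.

```python
import re
from collections import Counter

# Hypothetical console-log format (timestamp, node id, severity, message);
# the actual Titan log formats are not shown in this record.
LOG_RE = re.compile(
    r"^(?P<timestamp>\S+) (?P<node>\S+) (?P<severity>[A-Z]+) (?P<message>.*)$"
)

def parse_line(line):
    """Turn one unstructured log line into a structured record, or None."""
    m = LOG_RE.match(line)
    return m.groupdict() if m else None

def failures_per_node(lines):
    """Count ERROR/FATAL events per node -- the kind of aggregation that,
    at scale, would be a Spark map/filter/reduce over in-memory log data."""
    counts = Counter()
    for line in lines:
        rec = parse_line(line)
        if rec and rec["severity"] in ("ERROR", "FATAL"):
            counts[rec["node"]] += 1
    return dict(counts)

# Illustrative input: mostly well-formed lines plus one unparseable line,
# which the parser drops rather than propagating downstream.
sample = [
    "2014-03-01T12:00:01 c0-0c1s2n3 INFO job 12345 started",
    "2014-03-01T12:00:05 c0-0c1s2n3 ERROR MCE: DRAM uncorrectable error",
    "2014-03-01T12:00:07 c1-0c0s0n1 FATAL kernel panic",
    "malformed line with no structure",
]
print(failures_per_node(sample))  # -> {'c0-0c1s2n3': 1, 'c1-0c0s0n1': 1}
```

The same two-phase shape (parse into named fields, then group and count) underlies most of the correlation analyses the abstract describes; only the execution engine changes when moving from a single process to Spark.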
- Research Organization: Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States)
- Sponsoring Organization: USDOE; USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR) (SC-21)
- DOE Contract Number: AC05-00OR22725
- OSTI ID: 1460236
- Country of Publication: United States
- Language: English