U.S. Department of Energy
Office of Scientific and Technical Information

Pseudonymization at Scale: OLCF’s Summit Usage Data Case Study

Conference

The analysis of vast amounts of data and the processing of complex computational jobs have traditionally relied upon high performance computing (HPC) systems, which offer reliable and efficient management of large-scale computational and data resources. Understanding the needs of these analyses is paramount for designing solutions that can lead to better science, and similarly, understanding the characteristics of user behavior on those systems is important for improving user experiences on HPC systems. A common approach to gathering data about user behavior is to extract workload characteristics from system log data available only to system administrators. Recently at the Oak Ridge Leadership Computing Facility (OLCF), however, we unveiled user behavior on the Summit supercomputer by collecting data from a user's point of view with ordinary Unix commands.

In this paper, we discuss the process, challenges, and lessons learned while preparing this dataset for publication and submission to an open data challenge. The original dataset contains personally identifiable information (PII) about the users of OLCF, which needed to be masked prior to publication, and we determined that anonymization, which scrubs PII completely, destroyed too much of the structure of the data to be interesting for the data challenge. We instead chose to pseudonymize the dataset, which reduced the linkability of the dataset to the users' identities. Pseudonymization is significantly more computationally expensive than anonymization, and the size of our dataset, approximately 175 million lines of raw text, necessitated the development of a parallelized workflow that could be reused on different HPC machines. We demonstrate the scaling behavior of the workflow on two leadership-class HPC systems at OLCF, and we show that we were able to bring the overall makespan time from an impractical 20+ hours on a single node down to around 2 hours.

As a result of this work, we release the entire pseudonymized dataset and make the workflows and source code publicly available.
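A key distinction in the abstract is that pseudonymization, unlike anonymization, preserves linkable structure: the same user maps to the same pseudonym every time, so activity patterns survive while direct identities do not. A common way to achieve this is keyed hashing. The sketch below is illustrative only and assumes a hypothetical key and naming scheme; it is not the paper's actual implementation.

```python
import hmac
import hashlib

# Hypothetical secret key for illustration. In practice the key would be
# generated once, kept private, and discarded after release so that
# pseudonyms cannot be re-linked to identities.
SECRET_KEY = b"example-key-not-for-real-use"

def pseudonymize(identifier: str) -> str:
    """Map an identifier (e.g. a username) to a stable pseudonym.

    The same input always yields the same pseudonym, preserving the
    structure of the data (who did what, and how often) while removing
    the direct link to the user's identity.
    """
    digest = hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()
    return "user_" + digest[:12]

# Example: repeated activity by the same user keeps a consistent pseudonym,
# so per-user behavior remains analyzable in the released dataset.
log_lines = ["alice ls", "bob vim job.sh", "alice ls"]
masked = [
    pseudonymize(line.split()[0]) + " " + line.split(maxsplit=1)[1]
    for line in log_lines
]
```

Because each line must be hashed and rewritten, this kind of transformation over ~175 million lines is naturally more expensive than simply deleting fields, which motivates the parallelized workflow the paper describes.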

Research Organization:
Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States)
Sponsoring Organization:
USDOE; USDOE Office of Science (SC)
DOE Contract Number:
AC05-00OR22725
OSTI ID:
1928931
Country of Publication:
United States
Language:
English
