Exploring Machine Learning Techniques for Dynamic Modeling on Future Exascale Systems
Future exascale systems must be optimized for both power and performance at scale in order to achieve DOE's goal of a sustained exaflop within 20 megawatts by 2022. The massive parallelism of these future systems, combined with complex memory hierarchies, will form a barrier to efficient application and architecture design. These challenges are exacerbated by emerging complex architectures such as GPGPUs and the Intel Xeon Phi, where parallelism increases by orders of magnitude and system power consumption can easily triple or quadruple. We therefore need techniques that can reduce the search space for optimization, isolate power-performance bottlenecks, identify root causes of software/hardware inefficiency, and effectively direct runtime scheduling.
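As one concrete illustration of the kind of power-performance modeling involved (not the paper's actual method), the sketch below fits a linear power model to hardware-counter samples so that the learned coefficients expose which resource drives node power. The counter names, coefficients, and data are all invented for illustration.

```python
# Hypothetical sketch (not the paper's method): fit a linear model
# power ~ a * IPC + b * mem_bw to per-node hardware-counter samples,
# so the coefficients expose per-resource power sensitivity.
# All feature names and numbers below are invented for illustration.

def fit_linear(samples, power):
    """Ordinary least squares via the normal equations, pure Python."""
    n = len(samples[0])
    # Form X^T X and X^T y.
    xtx = [[sum(r[i] * r[j] for r in samples) for j in range(n)]
           for i in range(n)]
    xty = [sum(r[i] * p for r, p in zip(samples, power)) for i in range(n)]
    # Solve the n x n system with Gauss-Jordan elimination.
    for col in range(n):
        piv = xtx[col][col]
        xtx[col] = [v / piv for v in xtx[col]]
        xty[col] /= piv
        for row in range(n):
            if row != col:
                f = xtx[row][col]
                xtx[row] = [a - f * b for a, b in zip(xtx[row], xtx[col])]
                xty[row] -= f * xty[col]
    return xty

# Invented counter samples: [instructions-per-cycle, memory BW in GB/s].
X = [[1.0, 10.0], [1.5, 20.0], [0.8, 30.0], [1.2, 25.0]]
# Node power in watts, constructed here as 100*IPC + 2*mem_bw.
y = [120.0, 190.0, 140.0, 170.0]

coeffs = fit_linear(X, y)  # recovers roughly [100.0, 2.0]
```

In a real study the samples would come from measured counters and power meters, and the fitted sensitivities could help rank candidate bottlenecks before a costlier search over configurations.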
- Conference: Workshop on Modeling & Simulation of Exascale Systems & Applications, September 18-19, 2013, Seattle, Washington
- Research Org: Pacific Northwest National Laboratory (PNNL), Richland, WA (US)
- Sponsoring Org: US Department of Energy, Office of Advanced Scientific Computing Research, Washington, DC (US)
- Country of Publication: United States