OSTI.GOV | U.S. Department of Energy
Office of Scientific and Technical Information

Title: Large Scale Simulation Platform for NODES Validation Study

Abstract

This report summarizes the Large Scale (LS) simulation platform created for the Eaton NODES project. The simulation environment consists of both a wholesale market simulator and a distribution simulator; it includes the CAISO wholesale market model and a PG&E footprint of 25-75 feeders to validate scalability under a scenario of 33% RPS in California, with an additional 17% of DERs coming from the distribution system and customers. The simulator generates hourly unit commitment, 5-minute economic dispatch, and 4-second AGC regulation signals, and it can simulate more than 10,000 individually controllable devices. Simulated DERs include water heaters, EVs, residential and light-commercial HVAC/buildings, and residential-scale battery storage. Feeder-level voltage regulators and capacitor banks are also simulated for feeder-level real and reactive power management and Volt/VAR control.
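
As a rough illustration of the platform's nested timescales, the Python sketch below shows how hourly unit commitment, 5-minute economic dispatch, and 4-second AGC regulation could be layered in a single co-simulation driver. This is an assumption-laden sketch, not the LLNL implementation; the callback names are hypothetical.

# Illustrative nested-timescale loop (hypothetical structure, not the LLNL
# code): hourly unit commitment (UC), 5-minute economic dispatch (ED), and
# 4-second AGC regulation, matching the signal cadence in the abstract.

HOUR_S, ED_STEP_S, AGC_STEP_S = 3600, 300, 4

def run_day(unit_commitment, economic_dispatch, agc_signal, apply_to_feeders):
    """Drive one simulated day through the three market timescales."""
    for hour in range(24):
        schedule = unit_commitment(hour)                      # hourly UC
        for ed in range(HOUR_S // ED_STEP_S):                 # 12 ED runs/hour
            t_ed = hour * HOUR_S + ed * ED_STEP_S
            dispatch = economic_dispatch(schedule, t_ed)      # 5-minute ED
            for k in range(ED_STEP_S // AGC_STEP_S):          # 75 AGC steps/ED interval
                t = t_ed + k * AGC_STEP_S
                apply_to_feeders(agc_signal(dispatch, t), t)  # 4-second AGC

In a platform of this kind, the wholesale-market simulator would supply the three market callbacks and the distribution simulator would consume the resulting signals at the feeder level.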

Authors:
 Sotorrio, P. [1]; Qin, Y. [1]; Min, L. [1]
  1. Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
Publication Date:
April 27, 2017
Research Org.:
Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
Sponsoring Org.:
USDOE
OSTI Identifier:
1358326
Report Number(s):
LLNL-TR-731436
DOE Contract Number:
AC52-07NA27344
Resource Type:
Technical Report
Country of Publication:
United States
Language:
English
Subject:
24 POWER TRANSMISSION AND DISTRIBUTION; 29 ENERGY PLANNING, POLICY AND ECONOMY

Citation Formats

Sotorrio, P., Qin, Y., and Min, L. Large Scale Simulation Platform for NODES Validation Study. United States: N. p., 2017. Web. doi:10.2172/1358326.
Sotorrio, P., Qin, Y., & Min, L. (2017). Large Scale Simulation Platform for NODES Validation Study. United States. doi:10.2172/1358326.
Sotorrio, P., Qin, Y., and Min, L. 2017. "Large Scale Simulation Platform for NODES Validation Study". United States. doi:10.2172/1358326. https://www.osti.gov/servlets/purl/1358326.
@techreport{osti_1358326,
  title = {Large Scale Simulation Platform for NODES Validation Study},
  author = {Sotorrio, P. and Qin, Y. and Min, L.},
  doi = {10.2172/1358326},
  number = {LLNL-TR-731436},
  institution = {Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)},
  place = {United States},
  year = {2017},
  month = apr
}

Similar Records:
  • Before this LDRD research, no single tool could simulate a very high temperature reactor (VHTR) coupled to a secondary system and the sulfur iodine (SI) thermochemistry. Furthermore, the SI chemistry could only be modeled in steady state, typically via flow sheets. Additionally, the MELCOR nuclear reactor analysis code was suitable only for modeling light water reactors, not gas-cooled reactors. We extended MELCOR to address these deficiencies. In particular, we developed three VHTR input models, added generalized, modular secondary-system components, developed reactor point kinetics, included transient thermochemistry for the most important cycles (SI and the Westinghouse hybrid sulfur), and developed an interactive graphical user interface for full plant visualization. The new tool, called MELCOR-H2, allows users to maximize hydrogen and electrical production as well as enhance overall plant safety. We conducted validation and verification studies on the key models and showed that the MELCOR-H2 results typically agreed to within 5% with experimental data, code-to-code comparisons, and/or analytical solutions.
  • The uncertainty and variability associated with photovoltaic (PV) generation make it very challenging to balance power system generation and load, especially at high penetration levels. Higher reserve requirements and more cycling of conventional generators are generally anticipated for large-scale PV integration. However, whether the existing generation fleet is flexible enough to handle the variations, and how well the system can maintain its control performance, are difficult to predict. The goal of this project is to develop a software program that can perform intra-hour dispatch and automatic generation control (AGC) simulation, by which the balancing operations of a system can be simulated to answer the questions posed above (a minimal sketch of the underlying AGC balancing step appears after this list). The simulator, named Electric System Intra-Hour Operation Simulator (ESIOS), uses the NV Energy southern system as a study case and models the system's generator configurations, AGC functions, and operator actions to balance system generation and load. Actual dispatch of AGC generators and control performance under various PV penetration levels can be predicted by running ESIOS. Given data about the load, generation, and generator characteristics, ESIOS can perform similar simulations and assess variable-generation integration impacts for other systems as well. This report describes the design of the simulator and presents study results showing the PV impacts on NV Energy real-time operations.
  • Through extensive trade-off studies, designs for OTEC plants producing 100 MW and 400 MW of electrical power have been developed. The OTEC platforms represent a total system-integration approach rather than a simple combining of individually optimized subsystems. Factors considered in integrating these systems included subsystem performance, risk, and cost. Final cost estimates for the platform are listed. These costs and platform designs represent configurations favorably suited for the Kahe Point, Hawaii, site and are not necessarily optimized for other OTEC sites. They do, however, represent reasonable data points for large-scale platforms in general and illustrate the economy of scale expected of platforms of increasing power output.
  • First-Principles Molecular Dynamics (FPMD) is an accurate, atomistic simulation approach that is routinely applied to a variety of areas including solid-state physics, chemistry, biochemistry, and nanotechnology. FPMD enables predictive materials simulations, as no empirical or adjustable parameters are used to describe a given system. Instead, a quantum mechanical description of the electrons is obtained by solving the Kohn-Sham equations within a pseudopotential plane-wave formalism. This rigorous first-principles treatment of electronic structure is computationally expensive and limits the size of tractable systems to a few hundred atoms on most currently available parallel computers. Developed specifically for large parallel systems at LLNL's Center for Applied Scientific Computing, the Qbox implementation of the FPMD method shows unprecedented performance and scaling on BlueGene/L.
  • The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over the Internet or dedicated networks and hence exhibit an inherently dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows, as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project aims to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored to a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific workflows with the convenience of a few mouse clicks while hiding the implementation and technical details from end users. In particular, we will consider two types of applications with distinct performance requirements: data-centric and service-centric applications. For data-centric applications, the main workflow task involves large-volume data generation, cataloging, storage, and movement, typically from supercomputers or experimental facilities to a team of geographically distributed users; for service-centric applications, the main focus of the workflow is on data archiving, preprocessing, filtering, synthesis, visualization, and other application-specific analysis. We will conduct a comprehensive comparison of existing workflow systems and choose the best-suited one, with open-source code, a flexible system structure, and a large user base, as the starting point for our development. Based on the chosen system, we will develop and integrate new components including a black-box design of computing modules, performance monitoring and prediction, and workflow optimization and reconfiguration, which are missing from existing workflow systems. A modular design separating specification, execution, and monitoring aspects will be adopted to establish a common generic infrastructure suited to a wide spectrum of science applications.
We will further design and develop efficient workflow mapping and scheduling algorithms to optimize workflow performance in terms of minimum end-to-end delay, maximum frame rate, and highest reliability. We will develop and demonstrate the SWAMP system in a local environment, on the grid network, and on the 100 Gbps Advanced Networking Initiative (ANI) testbed. The demonstration will target scientific applications in climate modeling and high-energy physics, and the functions to be demonstrated include workflow deployment, execution, steering, and reconfiguration. Throughout the project period, we will work closely with the science communities in the fields of climate modeling and high-energy physics, including the Spallation Neutron Source (SNS) and Large Hadron Collider (LHC) projects, to mature the system for production use.
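
To make the AGC balancing step referenced in the ESIOS abstract above concrete, here is a minimal Python sketch of the standard textbook formulation: the area control error (ACE) combines the tie-line interchange error with a frequency-bias term, and the regulation correction is split among AGC units by participation factors. This is a generic illustration, not code from ESIOS; all names and values are hypothetical.

# Standard ACE formulation: ACE = (P_tie - P_tie_sched) + 10*B*(f - f_sched),
# where B is the frequency bias in MW per 0.1 Hz (negative by convention).

def area_control_error(p_tie_mw, p_tie_sched_mw, freq_hz,
                       bias_mw_per_0p1hz, freq_sched_hz=60.0):
    """Return ACE in MW; positive ACE means the area is over-generating."""
    return (p_tie_mw - p_tie_sched_mw) \
        + 10.0 * bias_mw_per_0p1hz * (freq_hz - freq_sched_hz)

def allocate_regulation(ace_mw, participation):
    """Split the corrective regulation among AGC units by participation factor.

    The factors are assumed to sum to 1.0 across the AGC fleet; the sign is
    flipped so a positive ACE produces a downward adjustment.
    """
    return {unit: -ace_mw * pf for unit, pf in participation.items()}

# Example: 20 MW of unscheduled export and frequency 0.02 Hz high
ace = area_control_error(120.0, 100.0, 60.02, -50.0)      # 20 - 10 = 10 MW
print(allocate_regulation(ace, {"G1": 0.6, "G2": 0.4}))   # {'G1': -6.0, 'G2': -4.0}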