Electric-power grids are designed to reliably supply power from electric currents that alternate at a practically constant frequency (60 cycles per second in North America, 50 in Europe), without failing due to everyday changes in power demand as devices are turned on and off, or to less ordinary changes like the occasional loss of a single power generator somewhere. Power grids can still fail, of course, but a well-designed grid will fail only in the face of an extraordinary event, not a commonly occurring disturbance.
Until recently, electrical power was largely produced by generators that ran rather steadily all the time and consumed by appliances that didn’t compensate for local voltage or frequency changes; grids were designed to supply power reliably under those conditions. Nowadays, though, power grids are incorporating more and more intermittent power sources, such as wind turbines and photovoltaic power plants, along with appliances that change their power consumption when the supplied power varies. These disturb the grid dynamics in new ways that can lead to new modes of grid failure unless grids are designed to account for them. Avoiding such failure modes has been the focus of many investigations in the last several years. Reports issued in just the first few months of this year provide a sample of the questions being addressed.
A fact sheet from Los Alamos National Laboratory, “Smart Grid Control and Optimization”[SciTech Connect], briefly describes this challenge as well as some others, along with the accomplishments of a Los Alamos research group that addresses them. Two of this fact sheet’s authors have also co-written a more detailed report, “Getting a grip on the electrical grid”[SciTech Connect], that appeared recently in the American Institute of Physics publication Physics Today. This report describes electrical grids’ physical processes, including voltage collapse, frequency synchronization at different locations, and electromechanical waves in the grid, in terms of grids’ dynamics and power balance. The report also discusses how power grids’ increasingly complex behavior can be mathematically analyzed to work out their new failure modes and how to avoid them.
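The frequency synchronization and power-balance dynamics that the Physics Today report describes are often captured by the classic swing-equation model of coupled generators. The following is a textbook toy model with assumed parameter values, not code from either report: two machines, one with a power surplus and one with a deficit, settle to a common frequency with a fixed angle offset between them.

```python
# Textbook swing-equation toy model of two coupled generators (illustrative
# parameter values; not code from the Los Alamos report or Physics Today).
# Machine i has rotor angle d[i] and frequency deviation w[i], obeying
#   M * dw_i/dt = P_i - D * w_i - K * sin(d_i - d_j)
import math

M, D, K = 1.0, 0.5, 2.0     # inertia, damping, coupling strength (assumed)
P = [0.3, -0.3]             # net power injections: one surplus, one deficit
d = [0.0, 0.0]              # rotor angles (rad)
w = [0.0, 0.0]              # frequency deviations (rad/s)
dt = 0.01

for _ in range(20000):      # integrate 200 s with semi-implicit Euler
    dw0 = (P[0] - D * w[0] - K * math.sin(d[0] - d[1])) / M
    dw1 = (P[1] - D * w[1] - K * math.sin(d[1] - d[0])) / M
    w = [w[0] + dt * dw0, w[1] + dt * dw1]
    d = [d[0] + dt * w[0], d[1] + dt * w[1]]

# Both machines settle back to the nominal frequency, with an angle offset
# that carries power from the surplus machine to the deficit one.
print(w, math.sin(d[0] - d[1]))
```

At steady state both frequency deviations vanish and sin(d[0] − d[1]) = P[0]/K, meaning the line flow exactly carries the imbalance; losing a generator changes the P values and forces the grid toward a new operating point, the kind of transition the report's mathematical analyses examine.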
Although the consumer-interactive “Smart Grid” is mentioned in the fact sheet “Smart Grid Control and Optimization”[SciTech Connect], that report tells more about related matters than about the “Smart Grid” concept itself. The following two quotations serve as example descriptions.
A smart grid is a modernized electrical grid that uses information and communications technology to gather and act on information, such as information about the behaviors of suppliers and consumers, in an automated fashion to improve the efficiency, reliability, economics, and sustainability of the production and distribution of electricity.
—“Smart grid”, Wikipedia
People are often confused by the terms Smart Grid and smart meters. Are they not the same thing? Not exactly. Metering is just one of hundreds of possible applications that constitute the Smart Grid; a smart meter is a good example of an enabling technology that makes it possible to extract value from two-way communication in support of distributed technologies and consumer participation.
As one industry expert explains it, there is no silver bullet when it comes to enabling technologies for a smarter grid; there is instead “silver buckshot,” an array of technological approaches that will make it work.
Further clarification: Devices such as wind turbines, plug-in hybrid electric vehicles and solar arrays are not part of the Smart Grid. Rather, the Smart Grid encompasses the technology that enables us to integrate, interface with and intelligently control these innovations and others.
The ultimate success of the Smart Grid depends on the effectiveness of these devices in attracting and motivating large numbers of consumers.
—pp. 14-15 (pp. 18-19 of 48), “The Smart Grid: An Introduction”[U. S. Department of Energy]
Such mathematical analyses are contrasted with older, previously adequate methods in a slide presentation from Los Alamos National Laboratory, “Taming the Grid: Dynamic Load Composition Quantification at the Distribution-Transformer Level”[SciTech Connect]. The authors note that to understand the behavior of present-day power grids, we need to understand the devices that draw power from them, especially induction motors[Wikipedia], which cause large transient power changes and are thought to be the leading cause of grid instability. The authors propose obtaining the relevant data from load sensors placed on the transformers that provide power at the voltage level supplied to individual customers. Sensors on these transformers are expected to avoid both the insufficient detail of substation sensors and the low data-acquisition rate of smart meters.
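The sampling-rate point can be made concrete with a toy calculation (hypothetical numbers, not data from the Los Alamos study): an induction motor's starting inrush lasts only a couple of seconds, so a fast transformer-level sensor sees it clearly while a 15-minute smart-meter average barely registers it.

```python
# Toy illustration of sensor sampling rates (hypothetical numbers, not data
# from the Los Alamos study). A motor briefly draws several times its
# running load at startup; compare what two sensors would report.
RUNNING_KW, INRUSH_KW, INRUSH_S = 5.0, 30.0, 2.0

SENSOR_DT = 0.1          # transformer sensor sampling period, s (10 Hz)
WINDOW_S = 900           # smart-meter reporting window: 15 minutes

# Fast sensor: one sample every 0.1 s over the 15-minute window.
samples = [INRUSH_KW if k * SENSOR_DT < INRUSH_S else RUNNING_KW
           for k in range(int(WINDOW_S / SENSOR_DT))]

# Smart meter: a single average over the whole window.
meter_average_kw = sum(samples) * SENSOR_DT / WINDOW_S

print(max(samples), round(meter_average_kw, 2))
```

The fast sensor records the full 30 kW transient, while the 15-minute average comes out barely above the 5 kW running load; the transient that matters for stability analysis is invisible at the meter's data-acquisition rate.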
Homes and commercial buildings use 71% of the electricity in the US and will use 75% by 2025, according to the Department of Energy’s Energy Information Administration. How to model and simulate the integration of energy management systems for buildings and power grids, in order to significantly improve buildings’ energy efficiency and accelerate renewable-energy use in the next decade, was the subject of a pair of workshops held by the National Renewable Energy Laboratory (May 1-2, 2012) and University College Dublin (June 6-7, 2012). Results appear in the January 2013 report “From the Building to the Grid: An Energy Revolution and Modeling Challenge; Workshop Proceedings”[SciTech Connect]. Among the conclusions:
The growing number of electric vehicles means that the equipment used to charge them places an increasing load on power grids. Ways to shave the peak loads of car-charging equipment are the subject of the Oak Ridge National Laboratory report “Minimization of Impact from Electric Vehicle Supply Equipment to the Electric Grid Using a Dynamically Controlled Battery Bank for Peak Load Shaving”[SciTech Connect]. This report compares load minimization by two different control systems for a vehicle-charging station that is powered by the sun and stores energy in a battery bank. One control system charges and discharges to the electric power grid at fixed times; the other forecasts the charging station’s load on the grid from analysis of collected data. The fixed-time system shaved the peak load by only 14.6% on a cloudy day and 12.7% on a sunny day, while the more dynamic forecasting system shaved the peak load much more: up to 34% on a cloudy day and 38% on a sunny day. Data-based simulations show that the latter system can negate up to 89% of the total load demand on sunny days.
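The difference between the two control philosophies can be sketched with a toy hourly load profile (all numbers hypothetical, not the ORNL data or algorithms): a fixed discharge window can miss part of the actual peak, while a forecast-guided controller spends the same stored energy clipping the highest hours directly.

```python
# Toy comparison of the two peak-shaving strategies (hypothetical hourly
# loads and battery size; not the ORNL measurements or algorithms).
base_load = [2, 2, 2, 3, 5, 9, 12, 14, 13, 10, 6, 3]  # kW, one entry per hour
battery_kwh = 20.0                                    # usable stored energy

# Strategy 1: discharge at a fixed rate during a preset "peak" window.
fixed = list(base_load)
for h in (6, 7, 8, 9):                 # preset 4-hour window
    fixed[h] -= battery_kwh / 4        # 5 kW shaved in each window hour

# Strategy 2: use a load forecast (here assumed perfect) to clip the
# highest hours, spending the same stored energy where it matters most.
target = 8.0   # clip level chosen so the shaved energy fits the battery
dynamic = [min(load, target) for load in base_load]
assert sum(base_load) - sum(dynamic) <= battery_kwh

print(max(base_load), max(fixed), max(dynamic))
```

In this toy case the fixed schedule lowers the 14 kW peak only to 9 kW, because an hour just outside its preset window goes unshaved, while forecast-based clipping reaches 8 kW with the same battery, echoing the report's finding that the dynamic controller shaves substantially more.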
The value of energy storage beyond its use in vehicle-charging stations, i.e. for electrical grids in general, is the subject of “Value of Energy Storage for Grid Applications”[SciTech Connect] from the National Renewable Energy Laboratory (NREL). The analysis described in this report uses a commercial tool for grid simulation to compare an electric utility system’s operational costs with and without energy storage devices. Approximating continuous change with changes at one-hour time intervals, the authors find that stored energy saves the most cost when used to maintain output frequency and power that would otherwise drop after a loss of supply, when the remaining generators slow down under the increased load each must carry. Lesser but still significant savings come from storage devices’ ability to increase the power output of available generators. Compared with these two uses, load-leveling saves relatively little operational cost and also consumes more stored energy, but it has a greater market potential because load-leveling is needed more.
However, the revenue that storage would earn in a market setting appears to be much less than the benefit it provides, both because storage itself suppresses the difference between on-peak and off-peak prices and because markets do not compensate storage for reducing thermal-plant starts. Storage devices also lower the marginal price of energy, which reduces the utility’s own compensation, so the utility does not benefit from the reduction in consumers’ energy costs. The report’s conclusion notes remaining questions about the effects of increased use of renewable-energy sources, the effects of storage-plant operation at shorter timescales, the additional values provided by distributed storage, and how distributed storage can effectively be integrated into the bulk power system.
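The kind of hour-by-hour production-cost comparison the NREL analysis performs can be illustrated with a two-generator toy system (all loads, capacities, and prices hypothetical, and storage round-trip losses ignored): shifting load out of peak hours replaces megawatt-hours from an expensive peaking unit with megawatt-hours from a cheap baseload unit.

```python
# Toy hourly production-cost comparison with and without storage (structure
# inspired by the NREL approach; all numbers hypothetical, and storage
# round-trip losses are ignored for simplicity).
load = [50, 45, 40, 60, 90, 100, 95, 70]   # MW in each one-hour step

CHEAP_CAP, CHEAP_COST = 70, 20             # baseload unit: MW capacity, $/MWh
PEAK_COST = 80                             # peaking unit: $/MWh

def production_cost(hourly_load):
    """Dispatch the cheap unit first, the peaking unit for the remainder."""
    cost = 0.0
    for mw in hourly_load:
        cheap = min(mw, CHEAP_CAP)
        cost += cheap * CHEAP_COST + max(mw - CHEAP_CAP, 0) * PEAK_COST
    return cost

# Simple load-leveling schedule: charge 10 MW in the three lightest hours,
# discharge 10 MW in the three heaviest hours.
shifted = list(load)
by_load = sorted(range(len(load)), key=lambda h: load[h])
for h in by_load[:3]:
    shifted[h] += 10
for h in by_load[-3:]:
    shifted[h] -= 10

savings = production_cost(load) - production_cost(shifted)
print(savings)
```

Here 30 MWh of peaking output at $80/MWh is displaced by baseload output at $20/MWh, saving $1,800 in the toy case; the report's finding is that such load-leveling savings, while real, are smaller than those from frequency- and reserve-related services.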
Simulation is conjoined with actual hardware to determine the proper interconnection of the many new renewable-energy sources with existing power grids—a matter addressed in a poster[SciTech Connect], a conference paper[SciTech Connect], and a more extensive technical report[SciTech Connect] from the National Renewable Energy Laboratory, all of which are entitled “Advanced Platform for Development and Evaluation of Grid Interconnection Systems Using Hardware-in-the-Loop: Part III -- Grid Interconnection System Evaluator”. The standard for such interconnections, IEEE Std 1547, was designed around issues such as voltage regulation, synchronization, and isolation; response to abnormal grid conditions; power quality; and islanding[Wikipedia]. Testing whether a given interconnection system meets the standard, using procedures listed in the present version of that standard, is very time-consuming. NREL has automated portions of the test procedures to reduce the time. A real-time simulator runs a software model together with a communication interface to a hardware system, forming a single closed-loop (“hardware-in-the-loop”) simulation. When the software is a control model of an IEEE Std 1547 conformance test, and the hardware is a grid interconnection system with the related electrical test equipment, the test can be completed much more quickly. Hardware-in-the-loop simulation also adds further testing capabilities beyond those called for in IEEE Std 1547.
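The closed-loop idea can be sketched in a few lines (hypothetical interfaces and limits; NREL's real-time simulator exchanges signals with actual power hardware): a scripted disturbance is applied step by step to a device under test, and the device's response feeds back into the simulation at each step.

```python
# Minimal sketch of a hardware-in-the-loop test loop (hypothetical
# interfaces and threshold values; not NREL's actual platform). A software
# stand-in plays the role of the hardware device under test.

class DeviceUnderTest:
    """Stand-in for real hardware: an interconnection device that trips
    (disconnects) when voltage leaves an assumed continuous-operation range."""
    def __init__(self):
        self.connected = True

    def apply_voltage(self, v_pu):
        # Illustrative abnormal-voltage limits, in per-unit voltage.
        if not 0.88 <= v_pu <= 1.10:
            self.connected = False
        return self.connected

def run_sag_test(device, steps=100, dt=0.01):
    """Closed loop: apply a scripted voltage sag and log when the device
    trips; each step sends a stimulus and reads back the response."""
    trip_time = None
    for k in range(steps):
        t = k * dt
        v_pu = 0.5 if 0.2 <= t < 0.5 else 1.0   # scripted sag at t = 0.2 s
        if not device.apply_voltage(v_pu) and trip_time is None:
            trip_time = t
    return trip_time

print(run_sag_test(DeviceUnderTest()))   # time at which the device tripped
```

Automating a conformance test then amounts to scripting many such disturbance profiles and checking each logged response against the standard's required behavior, rather than setting up each condition manually.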
Reducing the time consumed by further, non-IEEE tests of one type of renewable-energy device is one purpose of the system described in “NREL Controllable Grid Interface for Testing MW-Scale Wind Turbine Generators (Poster)”[SciTech Connect]. Understanding how wind turbines are affected by disturbances in the power grid requires tests and accurate transient[Wikipedia] simulations. Conducting the tests on-site according to the methods of standard IEC 61400-21 can be both time-consuming and costly: the tests must be run under particular wind speeds and grid conditions, which can require transporting test equipment to remote locations and keeping personnel there for considerable time. Changing the turbine design, or even just its control software, can require repeating the tests. The NREL poster describes an initiative to design and build a 7-megavolt-ampere[Wikipedia] grid simulator to operate with NREL’s 2.5-megawatt and 5.8-megawatt dynamometer facilities. With this system, grid disturbances can be reproduced and wind-turbine/power-system interactions simulated, the ultimate goal being to reduce the costs of integrating wind energy into the power grid.
Last year, Science Showcase reported on the Department of Energy’s contributions to the Mars Science Laboratory (or MSL) “Curiosity”, including the ChemCam developed at Los Alamos National Laboratory to chemically analyze Martian rocks and other materials from the light they emit when vaporized by a laser beam, a process known as Laser-Induced Breakdown Spectroscopy (LIBS). Since then, we have received three reports from Los Alamos of things that ChemCam has revealed about Mars.
The October 2012 document “ChemCam on Mars”[SciTech Connect] is a slide presentation of photos and other images illustrating LIBS and ChemCam, the Curiosity rover and its mission, its landing in Mars’ Gale Crater, preliminary results from its first 60 Martian days (or “sols”)[Wikipedia], and its intended future. From this presentation, we can learn that Curiosity’s mission includes characterization of the hazards to humans on Mars, as well as investigating Mars’ past and potential habitability. The geological aspect of its mission involves characterizing the area near its landing site, with early examination of a place (named “Glenelg”[Wikipedia]) where three distinct terrain types meet and ultimate exploration of the lower reaches of the 5-km-high mountain (Mount Sharp) in the middle of Gale Crater. We also see that LIBS, ChemCam’s chemical analysis technique, was originally developed at Los Alamos for the detection of metals contaminating soils on Earth. LIBS’ first use on Mars revealed the presence of hydrogen, lithium, carbon, oxygen, sodium, magnesium, aluminum, silicon, potassium, calcium, titanium, manganese, and iron in a single spot of its first Martian rock sample.
“ChemCam contributions to the Lunar and Planetary Science Conference”[SciTech Connect] is a set of 26 two-page reports presented at the 44th Lunar and Planetary Science Conference, held in March 2013. These brief reports describe a variety of findings from the first few months of Curiosity’s time on Mars, as indicated by the following sample of report titles:
One of these 26 reports, “Possible Alteration of Rocks Observed by ChemCam along the Traverse to Glenelg in Gale Crater on Mars”, was prepared in a shorter version[SciTech Connect], with a slightly different list of authors, for the European Geosciences Union and was issued in April 2013. The authors’ analysis of 359 ChemCam observations and numerical simulation of geochemical processes suggest, among other things, that sporadic evaporations of a calcium-enriched fluid occurred in the soils of the region examined, rather than the intensive soil alterations suspected elsewhere on Mars.
Prepared by Dr. William N. Watson, Physicist
DoE Office of Scientific and Technical Information