U.S. Department of Energy
Office of Scientific and Technical Information
  1. Convex Decreasing Algorithms: Distributed Synthesis and Finite-Time Termination in Higher Dimension

    Here we establish finite-time termination algorithms for consensus, based on geometric properties that yield finite-time guarantees and are suited for use in high dimension and in the absence of a central authority. These pursuits motivate a new peer-to-peer convex hull algorithm, which is utilized for one of the stopping algorithms. Further, an alternative lightweight norm-based stopping criterion is also developed. The practical utility of the algorithms is illustrated through MATLAB simulations.
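A norm-based stopping criterion for consensus can be sketched in a few lines. This is an illustrative toy, not the paper's algorithm: the mixing matrix, threshold, and the use of a global disagreement norm (a truly distributed version would compare only neighbor values) are all our own assumptions.

```python
import numpy as np

def consensus_with_norm_stop(x0, W, eps=1e-6, max_iters=10_000):
    """Average consensus x <- W x with a norm-based stopping test.

    Stops once every agent's value is within eps of the current mean;
    with a doubly stochastic W, that mean is the average of x0.
    """
    x = np.asarray(x0, dtype=float)
    for k in range(max_iters):
        x = W @ x
        # Norm-based termination: all agents close to the mean.
        if np.max(np.abs(x - x.mean(axis=0))) < eps:
            return x, k + 1
    return x, max_iters

# Doubly stochastic mixing matrix for 3 fully connected agents.
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
x0 = np.array([[0.0], [3.0], [6.0]])  # one scalar state per agent
x, iters = consensus_with_norm_stop(x0, W)
```

Because `W` is doubly stochastic, the iterates converge to the initial average (here 3.0), and the disagreement shrinks geometrically, so the criterion fires after a small, finite number of rounds.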

  2. Voltage regulation in distribution grids: A survey

    Environmental and sustainability concerns have caused a recent surge in the penetration of distributed energy resources into the power grid. This may lead to voltage violations in distribution systems, making voltage regulation more relevant than ever. Owing to this, and to rapid advancements in sensing, communication, and computation technologies, the literature on voltage control techniques in distribution networks is growing at a rapid pace. In particular, there is a paradigm shift from traditional offline centralized approaches to distributed ones leveraging increased and varied types of actuators, real-time sensing, fast and efficient computations, and an overall distributed situational awareness. This paper reviews state-of-the-art voltage control algorithms, summarizes the underlying methods, and classifies their coordination mechanisms as local, centralized, distributed, or decentralized. The underlying solution methodologies are further classified into two categories, open-loop and feedback-based. Two specific example workflows are provided to illustrate these solutions for voltage regulation.
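As a concrete instance of the "local" end of the survey's coordination taxonomy, a piecewise-linear Volt-VAR droop controller can be sketched as follows. The function name, slope, deadband, and limits are illustrative assumptions, not taken from any particular standard curve.

```python
def local_volt_var(v_meas, v_ref=1.0, slope=2.0, q_max=0.3, deadband=0.01):
    """Local Volt-VAR droop: absorb reactive power (q < 0) when the
    measured per-unit voltage is high, inject (q > 0) when it is low,
    with a deadband around the reference and saturation at +/- q_max.
    """
    dv = v_meas - v_ref
    if abs(dv) <= deadband:
        return 0.0  # inside the deadband: no reactive response
    # Linear droop outside the deadband, shifted so q is continuous.
    q = -slope * (dv - deadband if dv > 0 else dv + deadband)
    return max(-q_max, min(q_max, q))
```

Such a controller needs only a local voltage measurement, which is exactly why local schemes scale trivially but cannot by themselves guarantee network-wide optimality, motivating the distributed and centralized schemes the survey covers.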

  3. A Sparse Distributed Gigascale Resolution Material Point Method

    In this paper, we present a four-layer distributed simulation system and its adaptation to the Material Point Method (MPM). The system is built upon a performance-portable C++ programming model targeting major High-Performance Computing (HPC) platforms. A key ingredient of our system is a hierarchical block-tile-cell sparse grid data structure that is distributable to an arbitrary number of Message Passing Interface (MPI) ranks. We additionally propose strategies for efficient dynamic load balancing to maximize the efficiency of MPI tasks. Our simulation pipeline can easily switch among backend programming models, including OpenMP and CUDA, and can be effortlessly dispatched onto supercomputers and the cloud. Finally, we construct benchmark experiments and ablation studies on supercomputers and on consumer workstations in a local network to evaluate scalability and load balancing. We demonstrate massively parallel, highly scalable, gigascale-resolution MPM simulations of up to 1.01 billion particles at less than 323.25 seconds per frame with 8 OpenSSH-connected workstations.
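To make the block-tile-cell hierarchy concrete, here is a minimal sketch of how such a structure typically addresses a cell: a global integer coordinate is split by bit masking into block, tile, and cell coordinates. The level sizes (4 cells per tile axis, 4 tiles per block axis) are our own assumptions for illustration, not the paper's actual layout.

```python
CELL_BITS = 2   # assumed: 4 cells per axis within a tile
TILE_BITS = 2   # assumed: 4 tiles per axis within a block

def locate(ix, iy, iz):
    """Split a global cell coordinate into (block, tile, cell) coords.

    Each axis index is decomposed with masks/shifts, so locating a cell
    is O(1) and blocks can be stored sparsely (e.g., in a hash map keyed
    by block coordinate) and assigned to different MPI ranks.
    """
    def split(i):
        cell = i & ((1 << CELL_BITS) - 1)
        tile = (i >> CELL_BITS) & ((1 << TILE_BITS) - 1)
        block = i >> (CELL_BITS + TILE_BITS)
        return block, tile, cell

    bx, tx, cx = split(ix)
    by, ty, cy = split(iy)
    bz, tz, cz = split(iz)
    return (bx, by, bz), (tx, ty, tz), (cx, cy, cz)
```

Only blocks that actually contain particles need to exist, which is what makes the grid sparse and distributable: the top (block) level is the natural unit for MPI partitioning and load balancing.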

  4. Fast Coordination of Distributed Energy Resources Over Time-Varying Communication Networks

    In this article, we consider the problem of optimally coordinating the response of a group of distributed energy resources (DERs) controlled by distributed agents so they collectively meet the electric power demanded by a collection of loads while minimizing the total generation cost and respecting the DER capacity limits. This problem can be cast as a convex optimization problem, where the global objective is to minimize a sum of convex functions corresponding to the individual DER generation costs while satisfying 1) linear inequality constraints corresponding to the DER capacity limits and 2) a linear equality constraint corresponding to the total power generated by the DERs being equal to the total power demand. We develop distributed algorithms to solve the DER coordination problem over time-varying communication networks with either bidirectional or unidirectional communication links. The proposed algorithms can be seen as distributed versions of a centralized primal–dual algorithm. One of the algorithms proposed for directed communication graphs has a geometric convergence rate even when communication out-degrees are unknown to agents. We showcase the proposed algorithms using the standard IEEE 39-bus test system and compare their performance against others proposed in the literature.
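The structure of this problem can be illustrated with a toy quadratic-cost instance and the centralized primal-dual iteration the article's algorithms distribute. This is only a sketch under assumed costs (a_i x_i^2) and a single dual variable; the article's contribution, running the dual update over time-varying networks, is not reproduced here.

```python
import numpy as np

def primal_dual_der(a, cap, demand, step=0.1, iters=5000):
    """Centralized primal-dual sketch for
        min sum_i a_i * x_i**2
        s.t. sum_i x_i = demand,  0 <= x_i <= cap_i.

    lam is the (single) dual variable on the power-balance constraint.
    """
    a = np.asarray(a, dtype=float)
    cap = np.asarray(cap, dtype=float)
    lam = 0.0
    x = np.zeros_like(a)
    for _ in range(iters):
        # Primal step: each DER minimizes a_i x^2 - lam x over its box,
        # which has the closed form below (clip = projection onto limits).
        x = np.clip(lam / (2.0 * a), 0.0, cap)
        # Dual step: raise the "price" lam if generation falls short of
        # demand, lower it if generation exceeds demand.
        lam += step * (demand - x.sum())
    return x, lam
```

For costs a = (1, 2), caps of 10, and demand 3, the optimum equalizes marginal costs 2*a_i*x_i = lam, giving lam = 4 and x = (2, 1); in a distributed version each agent would hold a local copy of lam and reconcile it with neighbors over the (possibly directed, time-varying) graph.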

  5. Virtual Time III, Part 2: Combining Conservative and Optimistic Synchronization

    This is Part 2 of a trio of works intended to provide a unifying framework in which conservative and optimistic synchronization for parallel discrete event simulations can be freely and transparently combined in the same logical process on an event-by-event basis. Here, in this article, we continue the outline of an approach called Unified Virtual Time (UVT) that was introduced in Part 1, showing in detail via two extended examples how conservative synchronization can be refactored and combined with optimistic synchronization in the UVT framework. We describe UVT versions of both a basic time windowing algorithm called Unified Simple Time Windows and a refactored version of the Chandy-Misra-Bryant Null Message algorithm called Unified CMB.
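The conservative side of this combination rests on the null-message rule, which is compact enough to sketch. This is the classic Chandy-Misra-Bryant idea in a few illustrative lines, not the refactored Unified CMB of the article; the function names are our own.

```python
def null_message(local_clock, lookahead):
    """A null message is a promise: no future message on this output
    channel will carry a timestamp earlier than clock + lookahead."""
    return local_clock + lookahead

def safe_time(channel_clocks):
    """A logical process may conservatively execute every pending event
    with a timestamp up to the minimum promise over its input channels."""
    return min(channel_clocks)
```

For example, a process at clock 10 with lookahead 2 promises time 12 to its downstream neighbors; a process whose input channels promise 5, 8, and 12 may safely process events up to time 5. Circulating such promises is what lets conservative synchronization avoid rollback, at the cost of blocking on the slowest channel.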

  6. Transient Data Library of Solar Grid Integrated Distributed System

    This submission contains an open-source library of transient events in distribution systems with high solar PV penetration. The library includes the collected data, related documents, and scripts for loading the data. The data library is built for transient event detection and for the development of machine-learning-based analysis algorithms. The data were collected via both field tests and software simulation. The units for the data are included in the data file headers for each data series. A text editor, spreadsheet software such as Excel, or MATLAB is required to view the data.

  7. Sample IEEE123 Bus system for OEDI SI

    Time-series load and PV data from an IEEE 123-bus system. An example electrical system, named the OEDI SI feeder, is used to test the workflow in a co-simulation. The system used is the well-studied IEEE 123-bus test system (see the link below to the IEEE PES Test Feeder), with some modifications to add solar power modules and measurements. The aim of this project is to create an easy-to-use platform where various types of analytics can be performed on a wide range of electrical grid datasets, and to establish an open-source library of algorithms that universities, national labs, and other developers can contribute to, usable on both open-source and proprietary grid data to improve the analysis of electrical distribution systems for the grid modeling community. OEDI Systems Integration (SI) is a grid algorithms and data analytics API created to standardize how data are sent between the different modules that run as part of a co-simulation. The readme file included in the S3 bucket provides information about the directory structure and how to use the algorithms. The sensors.json file defines the measurement locations.

  8. Scalable Pattern Matching in Metadata Graphs via Constraint Checking

    Pattern matching is a fundamental tool for answering complex graph queries. Unfortunately, existing solutions have limited capabilities: They do not scale to process large graphs and/or support only a restricted set of search templates or usage scenarios. Moreover, the algorithms at the core of the existing techniques are not suitable for today’s graph processing infrastructures relying on horizontal scalability and shared-nothing clusters, as most of these algorithms are inherently sequential and difficult to parallelize. In this article we present an algorithmic pipeline that bases pattern matching on constraint checking. The key intuition is that each vertex and edge participating in a match has to meet a set of constraints implicitly specified by the search template. These constraints can be verified independently and typically are less expensive to compute than searching the full template. The pipeline we propose generates these constraints and iterates over them to eliminate all the vertices and edges that do not participate in any match, thus reducing the background graph to a subgraph that is the union of all template matches—the complete set of all vertices and edges that participate in at least one match. Additional analysis can be performed on this annotated, reduced graph, such as full match enumeration, match counting, or computing vertex/edge centrality. Furthermore, a vertex-centric formulation for constraint checking algorithms exists, and this makes it possible to harness existing high-performance, vertex-centric graph processing frameworks. This technique (i) enables highly scalable pattern matching in metadata (labeled) graphs; (ii) supports arbitrary patterns with 100% precision; (iii) enables tradeoffs between precision and time-to-solution, while always selecting all vertices and edges that participate in matches, thus offering 100% recall; and (iv) supports a set of popular data analytics scenarios.
We implement our approach on top of HavoqGT, an open-source asynchronous graph processing framework, and demonstrate its advantages through strong and weak scaling experiments on massive-scale real-world (up to 257 billion edges) and synthetic (up to 4.4 trillion edges) labeled graphs, respectively, at scales (1,024 nodes / 36,864 cores) orders of magnitude larger than those used in the past for similar problems. This article serves two purposes: First, it synthesizes the knowledge accumulated during a long-term project. Second, it presents new system features, usage scenarios, optimizations, and comparisons with related work that strengthen the confidence that pattern matching based on iterative pruning via constraint checking is an effective and scalable approach in practice. The new contributions include the following: (i) We demonstrate the ability of the constraint checking approach to efficiently support two additional search scenarios that often emerge in practice, interactive incremental search and exploratory search. (ii) We empirically compare our solution with two additional state-of-the-art systems, Arabesque and TriAD. (iii) We show the ability of our solution to accommodate a more diverse range of datasets with varying properties, e.g., scale, skewness, label distribution, and match frequency. (iv) We introduce or extend a number of system features (e.g., work aggregation, load balancing, and the ability to cap the generated traffic) and design optimizations and demonstrate their advantages with respect to improving performance and scalability. (v) We present bottleneck analysis and insights into artifacts that influence performance. (vi) We present a theoretical complexity argument that motivates the performance gains we observe.
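The pruning intuition behind constraint checking can be sketched serially for the simplest class of constraints: a vertex survives only if, for every label the template requires it to neighbor, it still has at least one surviving neighbor with that label. This is an illustrative toy, not the vertex-centric distributed pipeline of the article (which also generates richer constraints, e.g., for cycles in the template); all names are our own.

```python
def prune_by_constraints(adj, labels, template_edges):
    """Iteratively remove vertices that violate the template's local
    (label-adjacency) constraints, until a fixed point is reached.

    adj: undirected adjacency dict {vertex: [neighbors]}
    labels: {vertex: label}
    template_edges: list of (label_a, label_b) pairs in the template
    Returns the set of vertices that may still participate in a match.
    """
    # required[l] = labels that a vertex labeled l must neighbor
    required = {}
    for la, lb in template_edges:
        required.setdefault(la, set()).add(lb)
        required.setdefault(lb, set()).add(la)

    # Vertices whose label does not even occur in the template are out.
    alive = {v for v in adj if labels[v] in required}
    changed = True
    while changed:  # iterate: one removal can invalidate its neighbors
        changed = False
        for v in list(alive):
            neigh_labels = {labels[u] for u in adj[v] if u in alive}
            if not required[labels[v]] <= neigh_labels:
                alive.discard(v)
                changed = True
    return alive
```

Each check touches only a vertex and its neighborhood, which is why the article can cast it in a vertex-centric form and run it on frameworks like HavoqGT; the surviving subgraph over-approximates, then (with stronger constraints) converges to, the union of all matches.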

  9. Designing a parallel Feel-the-Way clustering algorithm on HPC systems

    This work introduces a new parallel clustering algorithm, named the Feel-the-Way clustering algorithm, that provides a convergence rate better than or equivalent to that of traditional clustering methods by optimizing the synchronization and communication costs. Our algorithm design centers on optimizing three factors simultaneously: reducing synchronizations, improving the convergence rate, and retaining the same or a comparable optimization cost. To compare optimization costs, we use the Sum of Square Error (SSE) as the metric, i.e., the sum of the squared distances between each data point and its assigned cluster. Compared with the traditional MPI k-means algorithm, the new Feel-the-Way algorithm requires less communication among participating processes. As for the convergence rate, the new algorithm requires fewer iterations to converge. As for the optimization cost, it obtains SSE costs that are close to those of the k-means algorithm. In the paper, we first design the full-step Feel-the-Way k-means clustering algorithm, which can significantly reduce the number of iterations required by the original k-means clustering method. Next, we improve the performance of the full-step algorithm by adopting an optimized sampling-based approach, named reassignment-history-aware sampling. Our experimental results show that the optimized sampling-based Feel-the-Way method is significantly faster than the widely used k-means clustering method and provides comparable optimization costs. More extensive experiments with several synthetic datasets and real-world datasets (e.g., MNIST, CIFAR-10, ENRON, and PLACES-2) show that the new parallel algorithm can outperform the open-source MPI k-means library by up to 110% on a high-performance computing system using 4,096 CPU cores. In addition, the new algorithm can take up to 51% fewer iterations to converge than the k-means clustering algorithm.
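The SSE metric used above as the quality yardstick is simple to state in code. A minimal NumPy sketch (function and argument names are ours):

```python
import numpy as np

def sse(points, centroids, assign):
    """Sum of Square Error: total squared distance between each point
    and the centroid of the cluster it is assigned to.

    points: (n, d) array, centroids: (k, d) array,
    assign: length-n sequence of cluster indices in [0, k).
    """
    points = np.asarray(points, dtype=float)
    centroids = np.asarray(centroids, dtype=float)
    diffs = points - centroids[list(assign)]  # per-point displacement
    return float(np.sum(diffs * diffs))
```

Because SSE is a sum over points, each process in an MPI k-means (or Feel-the-Way) run can compute its local partial sum and a single reduction yields the global cost, which is what makes it a cheap metric for comparing clustering methods at scale.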

  10. An Optimal Kalman-Consensus Filter for Distributed Implementation Over a Dynamic Communication Network

    With the rising number of applications for sensor networks comes a need for more accurate cooperative fusion algorithms. In this paper, a distributed and optimal state estimator is presented for implementation over a dynamically switching, yet strongly connected, directed communication network to cooperatively estimate the state of a dynamic system. The Kalman-Consensus filter approach is used to incorporate a consensus protocol over neighboring state estimates into the traditional Kalman filter. The main difficulty associated with implementing such an optimal solution is known to be its fully coupled covariance matrix. A distributed computation of the covariance matrix at every node is presented, achieved by taking advantage of its independence from the state estimates. Reductions in the distributed covariance computations are achieved through shared processing made available by the strongly connected digraph. Should the digraph change over time, a distributed topology estimation algorithm is included to facilitate implementation of the proposed Kalman-Consensus filters. Together, these advances yield a distributed and optimal solution to the consensus-based cooperative Kalman filter design problem. Convergence and stability of the proposed algorithms are analyzed and established analytically, with performance verified through simulation of an illustrative example.
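To illustrate how a consensus term enters a Kalman update, here is a toy scalar sketch: each node runs a standard predict/update on its own measurement and then nudges its estimate toward its neighbors' predictions. This is not the optimal filter derived in the paper (in particular, it ignores the coupled covariance structure the paper addresses); all names, the scalar model, and the gain gamma are our own assumptions.

```python
import numpy as np

def kalman_consensus_step(xh, P, z, A=1.0, Q=0.01, R=1.0,
                          gamma=0.1, adj=None):
    """One toy Kalman-Consensus step for n scalar sensors.

    xh: length-n prior estimates, P: shared scalar prior variance,
    z: length-n measurements, adj[i]: neighbor indices of node i.
    Returns updated estimates and the updated (shared) variance.
    """
    xh = np.asarray(xh, dtype=float)
    z = np.asarray(z, dtype=float)
    xp = A * xh                    # time update (predict)
    Pp = A * P * A + Q             # predicted variance
    K = Pp / (Pp + R)              # scalar Kalman gain
    xn = xp + K * (z - xp)         # measurement update at each node
    for i in range(len(xn)):       # consensus term: pull toward neighbors
        xn[i] += gamma * sum(xp[j] - xp[i] for j in adj[i])
    Pn = (1.0 - K) * Pp            # posterior variance
    return xn, Pn
```

After one step the estimates both move toward the measurements and toward each other; the paper's contribution is making this optimal, and implementable, when the covariance is fully coupled across nodes and the digraph switches over time.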


Search: All Records, Subject: distributed algorithms
