U.S. Department of Energy
Office of Scientific and Technical Information
  1. VizBrick: A GUI-based Interactive Tool for Authoring Semantic Metadata for Building Datasets

    The Brick ontology is a unified semantic metadata schema that addresses the standardization problem of buildings' physical, logical, and virtual assets and the relationships between them. Creating a Brick model for a building dataset means that the dataset's contents are semantically described using the standard terms defined in the Brick ontology. This provides the benefits of data standardization without recollecting or reorganizing the data, and it opens the possibility of automation by leveraging the machine readability of the semantic metadata. The problem is that authoring Brick models for building datasets often requires knowledge of semantic technology (e.g., ontology declarations and RDF syntax) and leads to repeated manual trial-and-error, which is time-consuming and difficult without an interactive visual representation of the data. We developed VizBrick, a tool with a graphical user interface that assists users in creating Brick models visually and interactively without requiring knowledge of Resource Description Framework (RDF) syntax. VizBrick provides handy capabilities such as keyword search for finding relevant Brick concepts and relating them to data columns, and automatic suggestions of concept mappings. In this demonstration, we present a use case of VizBrick that showcases how a Brick model can be created for a real-world building dataset.
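    A minimal sketch of the kind of triples a Brick model contains, built with rdflib (not VizBrick's own code); the point and building names below are hypothetical, while the Brick classes and relations are standard Brick terms.

```python
# Minimal sketch (not VizBrick's code): declaring one sensor column as a Brick
# concept with rdflib. Point and building names are hypothetical.
from rdflib import Graph, Namespace, RDF

BRICK = Namespace("https://brickschema.org/schema/Brick#")
BLDG = Namespace("urn:example-building#")   # hypothetical building namespace

g = Graph()
g.bind("brick", BRICK)
g.bind("bldg", BLDG)

# Map a dataset column "zone1_temp" to the Brick class Zone_Air_Temperature_Sensor
g.add((BLDG["zone1_temp"], RDF.type, BRICK["Zone_Air_Temperature_Sensor"]))
# Relate the sensor to the zone it measures
g.add((BLDG["zone1_temp"], BRICK["isPointOf"], BLDG["hvac_zone_1"]))
g.add((BLDG["hvac_zone_1"], RDF.type, BRICK["HVAC_Zone"]))

print(g.serialize(format="turtle"))
```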

  2. Sensor Incipient Fault Impacts on Building Energy Performance: A Case Study on a Multi-Zone Commercial Building

    Existing studies show that sensor faults/errors could double building energy consumption and carbon emissions compared with the baseline. Those studies assume that the sensor error is fixed or constant. In real conditions, however, sensor faults are incipient (they develop over time), and very few studies have systematically investigated the impacts of incipient sensor faults. This study fills that gap by studying the impacts of time-developing sensor faults on rule-based controls in a 10-zone office building. The control sequences for variable air volume (VAV) boxes served by an air handling unit (AHU) were selected based on ASHRAE Guideline 36-2018: High-Performance Sequences of Operation for HVAC Systems. Large-scale cloud simulations (3,600 cases) were conducted using a stochastic approach. Results show that, compared with the baseline, (1) site energy differences ranged from –3.3% to +18.1%; (2) heating energy differences ranged from –66.5% to +314.4%; (3) cooling energy differences ranged from –11.5% to +65.0%; and (4) fan energy differences ranged from –0.15% to +6.9%.

  3. A machine learning approach to predict thermal expansion of complex oxides

    Although it is of scientific and practical importance, the state of the art in predicting the thermal expansion of oxides over broad temperature and composition ranges by physics-based atomistic simulations is currently limited to qualitative agreement. We present an emerging machine learning (ML) approach to accurately predict the thermal expansion of cubic oxides, trained on a dataset of experimentally measured lattice parameters and using the metal cation polyhedron and temperature as descriptors. High-fidelity ML models that accurately predict the temperature- and composition-dependent lattice parameters of cubic oxides with isotropic thermal expansion have been successfully trained. ML-predicted thermal expansions of oxides not included in the training dataset show good agreement with available experiments. The limitations of the current approach and the challenges in going beyond cubic oxides with isotropic thermal expansion are also briefly discussed.
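    A hedged sketch of the general workflow the abstract describes, regressing the lattice parameter on temperature plus composition descriptors; the descriptor names and the synthetic data are placeholders, not the paper's dataset or model.

```python
# Hedged sketch of the workflow described in the abstract: regress the lattice
# parameter on temperature plus composition descriptors. The feature names and
# the synthetic data below are placeholders, not the paper's dataset.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(300, 1500, n),     # temperature (K)
    rng.uniform(0.5, 1.1, n),      # cation radius descriptor (placeholder)
    rng.uniform(2, 12, n),         # cation coordination / polyhedron descriptor (placeholder)
])
# Synthetic target: lattice parameter (angstrom) with a simple linear thermal expansion
a = 4.0 + 0.3 * X[:, 1] + 1e-5 * X[:, 0] + rng.normal(0, 0.002, n)

X_tr, X_te, a_tr, a_te = train_test_split(X, a, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, a_tr)
print("R^2 on held-out data:", model.score(X_te, a_te))

# Thermal expansion can then be estimated from predicted a(T) by finite differences.
```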

  4. Identification of Critical Infrastructure via PageRank

    Assessing critical infrastructure vulnerabilities is paramount to arranging efficient plans for their protection. Critical infrastructures are cyber-physical systems that can be represented as networks consisting of nodes and edges and are highly interdependent in nature. Given this interdependence, failure in one node may cause failure in many others, resulting in a cascade of failures. In this paper, we propose a node criticality metric that uses Google's PageRank algorithm to identify nodes that are likely to fail (are vulnerable), nodes whose failure may cascade to many other sites in the network (are important), and nodes that are both vulnerable and important (are critical). We then present a series of experiments to understand how protecting certain critical nodes can help mitigate massive cascading failures. Simulating failures in a real-world network with and without critical node protections demonstrates the importance of identifying critical nodes in an infrastructure network.
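    A hedged sketch of a PageRank-based criticality score on a toy dependency network using networkx; the way the "vulnerable" and "important" scores are combined here is illustrative and may differ from the paper's exact formulation.

```python
# Hedged sketch of a PageRank-based node criticality score on a dependency
# network. The combination of "vulnerable" (inherits risk from dependencies)
# and "important" (many nodes depend on it) is illustrative only.
import networkx as nx

# Toy dependency graph: an edge u -> v means "v depends on u"
G = nx.DiGraph([("substation", "pump"), ("substation", "hospital"),
                ("pump", "water_plant"), ("water_plant", "hospital")])

importance = nx.pagerank(G.reverse(copy=True))  # how much of the network depends on a node
vulnerability = nx.pagerank(G)                  # how much upstream failure risk a node inherits

criticality = {n: importance[n] * vulnerability[n] for n in G}
for n, score in sorted(criticality.items(), key=lambda kv: -kv[1]):
    print(f"{n:12s} importance={importance[n]:.3f} "
          f"vulnerability={vulnerability[n]:.3f} criticality={score:.3f}")
```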

  5. Efficient Contingency Analysis in Power Systems via Network Trigger Nodes

    Modeling failure dynamics within a power system is a complex and challenging process due to multiple interdependencies and convoluted inter-domain relationships. Subject matter experts (SMEs) are interested in understanding these failure dynamics to reduce the impact of future disasters (i.e., losses or failures of power system components, such as transmission lines). Contingency analysis (CA) tools enable such 'what-if' scenario analyses to evaluate impacts on the power system. Analyzing all possible contingencies among N system components can be computationally expensive. An important step in performing CA is identifying a set of k 'trigger' components whose initial failure can significantly impact the overall system by causing multiple further failures. Currently, SMEs identify these trigger components by running expensive simulations on all possible subsets, which quickly becomes infeasible. Hence, rapidly finding a relevant set of trigger components (contingencies) to enable efficient and useful CA is crucial. In a collaboration between computer scientists and power system experts, we propose an efficient method for performing CA by exploiting network interdependencies among power system components. First, we construct a network with multiple electric grid infrastructure components as nodes and dependencies as connections among them. We reformulate the problem of finding a set of trigger components as a problem of identifying critical nodes in the network, which can cascade power failures through connected nodes and cause significant damage. To guide practical CA tools, we develop a network-based model with probabilistic edge weights derived from intricate domain rules. We then conduct an empirical study on real US power system data at both regional and national levels. First, we use US power system datasets to create a national-scale domain-driven model. Second, we demonstrate that the network-based model outperforms the outputs of a real CA tool, showing on average a 25× improvement in the selection of contingencies and thereby demonstrating practical benefits to power experts.
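    A hedged sketch of screening trigger components by the size of a simple threshold cascade each candidate starts on a weighted dependency network; the toy weights and threshold stand in for the paper's probabilistic edge-weight rules.

```python
# Hedged sketch: rank candidate "trigger" components by the size of a simple
# threshold cascade they start on a dependency network. The probabilistic
# edge-weight rules from the paper are replaced here by fixed toy weights.
import networkx as nx

def cascade_size(G, seed, threshold=0.5):
    """Fraction of nodes that fail if `seed` fails first.
    A node fails when the summed weight of its failed in-neighbors >= threshold."""
    failed = {seed}
    changed = True
    while changed:
        changed = False
        for v in G.nodes:
            if v in failed:
                continue
            load = sum(G[u][v].get("weight", 0.0) for u in G.predecessors(v) if u in failed)
            if load >= threshold:
                failed.add(v)
                changed = True
    return len(failed) / G.number_of_nodes()

# Toy grid dependency network (edge u -> v: failure of u stresses v)
G = nx.DiGraph()
G.add_weighted_edges_from([("line_A", "bus_1", 0.6), ("line_B", "bus_1", 0.3),
                           ("bus_1", "line_C", 0.7), ("line_C", "bus_2", 0.6)])

ranking = sorted(G.nodes, key=lambda n: cascade_size(G, n), reverse=True)
print("candidate trigger ranking:", ranking)
```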

  6. Development of an Open-source Alloy Selection and Lifetime Assessment Tool for Structural Components in CSP

    Lack of sufficient data on the high-temperature mechanical and corrosion behavior of structural materials is a major barrier to the technological maturity of current and future Concentrating Solar Power (CSP) technologies. Rapid development and selection of materials cannot be achieved through expensive and time-consuming acquisition of experimental data alone. The goal of the proposed work is the development of an open-source alloy selection and lifetime prediction tool that integrates validated physics-based models to describe the influence of temperature, alloy composition, environment, and component geometry (thickness) on the mechanical and corrosion behavior of Ni- and Fe-based alloys employed in molten salt/sCO2 heat exchangers. This one-year project leveraged the extensive dataset on the creep/corrosion behavior of candidate materials generated at ORNL through past projects and input from current collaborations with industrial partners. Based on previous experience and feedback provided by industry (Brayton Energy and Echogen), three candidate materials of interest, the Ni-based alloys 740H, 282, and 625, and application-specific operating conditions (max. temperature of 730 °C and stress of 150 MPa) were identified for the heat exchanger. An extensive corrosion and creep dataset was assimilated for the relevant operating conditions and supported by detailed characterization of about 100 metallographic cross-sections. The corrosion dataset consisted of scanning electron microscopy images (secondary electron and backscattered electron), concentration profiles of alloying elements measured using energy-dispersive X-ray spectroscopy (EDS), widths of denuded zones (dissolution of strengthening phases), and depths of attack in molten KCl-MgCl2 mixtures obtained from image analyses. The creep dataset comprised creep rupture data and creep strain curves (for 740H and 282). Coupled thermodynamic-kinetic, microstructure-based models were employed to predict the stress-corrosion-induced compositional and phase evolutions in the alloy during operation under the identified conditions. Reduced-order models were developed from the advanced physics-based models and integrated into a user-friendly alloy selection tool. The corrosion model was able to predict the time to a critical Cr concentration at the oxide/alloy interface (chemical lifetime) within ±10% (1 standard deviation) of typical statistical variation in corrosion tests and EDS measurement errors (±0.5 wt%). The initial scope of the project was limited to predicting creep rupture times (Larson-Miller parameter). Based on input provided by industry, the mechanical lifetime of the heat exchanger is governed by accumulated creep strain (2%) rather than creep rupture. To predict the times to specific creep strains, a more extensive creep model development was undertaken, largely beyond the initial scope of the project. The continuum damage mechanics creep model was able to predict times to 2% creep strain (t2%) with an accuracy of ±500 h. Ultimately, a screening protocol for SiC was generated to demonstrate a pathway for integrating one of the currently immature materials, from a commercial adoption standpoint, into the current material evaluation tool.
    The modeling tool developed here is accessible to the science community and stakeholders and lays the foundation for methods that will enable rapid evaluation of optimum materials for CSP applications and reliable prediction of material degradation, thereby considerably reducing operational costs, improving reliability, and increasing overhaul intervals. However, the complete potential of such a tool to include a wider range of materials and test conditions can only be realized with a more concentrated, combined experimental-characterization-computation effort.
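    Since the abstract mentions creep rupture prediction via the Larson-Miller parameter, here is a hedged sketch of that standard relation; the constant C = 20 and the example LMP value are conventional placeholders, not the project's fitted reduced-order model.

```python
# Hedged sketch of a Larson-Miller screening calculation, the creep-rupture
# formulation mentioned in the abstract. The constant C = 20 and the example
# LMP value are conventional/illustrative, not the project's fitted model.
import math

def larson_miller(T_kelvin, hours_to_rupture, C=20.0):
    """LMP = T * (C + log10(t_r)), with T in kelvin and t_r in hours."""
    return T_kelvin * (C + math.log10(hours_to_rupture))

def rupture_time_hours(T_kelvin, lmp, C=20.0):
    """Invert the LMP relation to estimate rupture life at a given temperature."""
    return 10 ** (lmp / T_kelvin - C)

T = 730 + 273.15                 # max operating temperature from the abstract (K)
lmp_at_150MPa = 25_000           # placeholder value; would come from alloy creep data
print(f"estimated rupture life at {T:.0f} K: {rupture_time_hours(T, lmp_at_150MPa):,.0f} h")
```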

  7. Impacts of New Sensor Types for Selected Advanced Controls

    Sensors are critical components for controls in buildings. They collect the information that controls need to complete subsequent control actions. When sensors operate in unhealthy or faulty conditions, the benefits of a control will be compromised regardless of the control's quality. For buildings, multiple factors directly influence sensor placement and deployment, such as sensor errors, sensor locations, sensor types, and sensor costs.

  8. Exploiting user activeness for data retention in HPC systems

    HPC systems typically rely on a fixed-lifetime (FLT) data retention strategy, which considers only the temporal locality of data accesses to parallel file systems. However, our extensive analysis of leadership-class HPC system traces suggests that the FLT approach often fails to capture the dynamics of users' behavior and leads to undesired data purges. In this study, we propose an activeness-based data retention (ActiveDR) solution, which advocates approaching data retention from a holistic, activeness-based perspective. By evaluating the frequency and impact of users' activities, ActiveDR prioritizes the file purge process for inactive users and rewards active users with extended file lifetimes on parallel storage. Our extensive evaluations based on traces from the prior Titan supercomputer show that, when reaching the same purge target, ActiveDR achieves up to a 37% reduction in file misses compared with the current FLT retention methodology.
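    A hedged sketch of an activeness-based purge ordering in the spirit of ActiveDR: files owned by inactive users are purged first, and active users' files are kept longer. The activeness score below is a placeholder, since the abstract does not define the paper's exact metric.

```python
# Hedged sketch of an activeness-based purge ordering (ActiveDR-style idea):
# inactive users' files are purged first; active users keep files longer.
# The activeness score is a placeholder, not the paper's metric.
from dataclasses import dataclass

@dataclass
class FileRecord:
    path: str
    owner: str
    age_days: float

def activeness(user_events: dict, owner: str) -> float:
    """Placeholder score: recent job submissions plus file accesses per day."""
    ev = user_events.get(owner, {"jobs_per_day": 0.0, "accesses_per_day": 0.0})
    return ev["jobs_per_day"] + ev["accesses_per_day"]

def purge_order(files, user_events):
    # Purge the oldest files of the least active users first.
    return sorted(files, key=lambda f: (activeness(user_events, f.owner), -f.age_days))

files = [FileRecord("/lustre/u1/run1.h5", "u1", 80),
         FileRecord("/lustre/u2/run9.h5", "u2", 95)]
events = {"u1": {"jobs_per_day": 4.0, "accesses_per_day": 20.0},
          "u2": {"jobs_per_day": 0.1, "accesses_per_day": 0.5}}
print([f.path for f in purge_order(files, events)])
```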

  9. Advanced Health Information Technology Analytic Framework and Application to Hazard Detection

    Health Information Technology (HIT) aims to improve healthcare outcomes by organizing and analyzing various health-related data. With data accumulating at a staggering rate, the importance of real-time analytics has been increasing dramatically, shifting the focus of informatics from batch processing to streaming analytics. HIT also faces unprecedented challenges in adapting to this new requirement and leveraging advanced information technologies. This paper introduces an HIT data and compute platform that supports multi-granularity real-time analytics over heterogeneous data sources. The paper first identifies functional requirements and then proposes a framework that satisfies them using state-of-the-art big data technologies, including Apache Kafka, the Spark Structured Streaming engine, and Delta Lake. To demonstrate the platform's capability to support analytics at multiple time granularities, a statistical process control-based hazard detection algorithm was implemented on top of the framework to detect unexpected hazards in near real time from order cancellation data of the US Department of Veterans Affairs (VA).
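    A hedged sketch of a statistical-process-control check on daily order-cancellation rates (a p-chart with 3-sigma limits); it illustrates the kind of detector the abstract describes, not the VA system's actual algorithm.

```python
# Hedged sketch of a statistical-process-control hazard check on daily order
# cancellation rates (p-chart style, 3-sigma limits). Toy data only; this is
# not the VA system's implementation.
import math

def p_chart_alarms(cancelled, totals):
    """Flag days whose cancellation rate exceeds the 3-sigma upper control limit."""
    p_bar = sum(cancelled) / sum(totals)          # overall cancellation rate
    alarms = []
    for day, (c, n) in enumerate(zip(cancelled, totals)):
        ucl = p_bar + 3 * math.sqrt(p_bar * (1 - p_bar) / n)
        if c / n > ucl:
            alarms.append(day)
    return alarms

cancelled = [12, 9, 14, 11, 40, 10]   # cancellations per day (toy data)
totals    = [400, 380, 410, 395, 405, 390]
print("hazard alarm on days:", p_chart_alarms(cancelled, totals))
```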

  10. Toward Quantifying Vulnerabilities in Critical Infrastructure Systems

    Modern society is increasingly dependent on the stability of a complex system of interdependent infrastructure sectors. Vulnerability in critical infrastructures (CIs) is defined as a measure of a system's susceptibility to threat scenarios. Quantifying vulnerability in CIs has not been adequately addressed in the literature. This paper presents ongoing research in which the authors represent CIs as network-based models and propose a set of metrics to quantify vulnerability in CI systems. The size and complexity of CIs make this a challenging task. These metrics could be used for planning and efficient decision-making during extreme events.


Search results: All Records, Author / Contributor "Lee, Sangkeun (Matt)"
