OSTI.GOV — U.S. Department of Energy, Office of Scientific and Technical Information

Title: Milestone Completion Report STCO04-1 AAPS: engagements with code teams, vendors, collaborators, developers

Abstract

The Advanced Architecture and Portability Specialists (AAPS) team worked with a select set of LLNL application teams to develop and/or implement a portability strategy for next-generation architectures. The team also investigated new and updated programming models and helped develop programming abstractions targeting maintainability and performance portability. Significant progress was made on both fronts in FY17, leaving multiple applications significantly better prepared for the next-generation machines than before.
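
To make the goal concrete: a performance-portability abstraction of this kind typically lets application loops be written once against a generic interface, with an execution policy selecting how (and where) the loop runs. The sketch below is a minimal, self-contained C++ illustration of that pattern; the names (forall, seq_exec) are hypothetical and do not represent the actual AAPS or LLNL interface.

    // Minimal sketch of a loop-level portability abstraction (hypothetical;
    // not the actual AAPS/LLNL interface). Application code targets forall();
    // the policy tag selects the backend.
    #include <cstddef>
    #include <iostream>
    #include <vector>

    struct seq_exec {};  // serial policy; an OpenMP or GPU backend would add its own tag

    // Generic loop interface: porting to a new architecture means adding a
    // policy overload here, not rewriting every loop in the application.
    template <typename Body>
    void forall(seq_exec, std::size_t n, Body&& body) {
        for (std::size_t i = 0; i < n; ++i) body(i);
    }

    int main() {
        std::size_t n = 8;
        std::vector<double> a(n, 1.0), b(n, 2.0), c(n, 0.0);

        // A vector add written once against the abstraction.
        forall(seq_exec{}, n, [&](std::size_t i) { c[i] = a[i] + b[i]; });

        std::cout << "c[0] = " << c[0] << "\n";  // prints 3
        return 0;
    }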

Authors:
Draeger, E. W. [1]
  1. Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
Publication Date:
September 2017
Research Org.:
Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
Sponsoring Org.:
USDOE
OSTI Identifier:
1396237
Report Number(s):
LLNL-TR-739012
DOE Contract Number:
AC52-07NA27344
Resource Type:
Technical Report
Country of Publication:
United States
Language:
English
Subject:
97 MATHEMATICS AND COMPUTING

Citation Formats

Draeger, E. W. Milestone Completion Report STCO04-1 AAPS: engagements with code teams, vendors, collaborators, developers. United States: N. p., 2017. Web. doi:10.2172/1396237.
Draeger, E. W. Milestone Completion Report STCO04-1 AAPS: engagements with code teams, vendors, collaborators, developers. United States. doi:10.2172/1396237.
Draeger, E. W. 2017. "Milestone Completion Report STCO04-1 AAPS: engagements with code teams, vendors, collaborators, developers". United States. doi:10.2172/1396237. https://www.osti.gov/servlets/purl/1396237.
@article{osti_1396237,
title = {Milestone Completion Report STCO04-1 AAPS: engagements with code teams, vendors, collaborators, developers},
author = {Draeger, E. W.},
abstractNote = {The Advanced Architecture and Portability Specialists (AAPS) team worked with a select set of LLNL application teams to develop and/or implement a portability strategy for next-generation architectures. The team also investigated new and updated programming models and helped develop programming abstractions targeting maintainability and performance portability. Significant progress was made on both fronts in FY17, leaving multiple applications significantly better prepared for the next-generation machines than before.},
doi = {10.2172/1396237},
place = {United States},
year = 2017,
month = 9
}

Technical Report:
https://www.osti.gov/servlets/purl/1396237

Similar Records:
  • This report contains process descriptions of 189 flue gas desulfurization (FGD) systems. Many of these processes have previously been evaluated in public documents; however, a fair number of processes were identified through computer searches of various databases. The report is designed to be an initial reference guide for persons involved with FGD systems in the utility industry. It is organized by the type of process and reagents used for each process. Processes have been assigned to 16 categories: wet systems (calcium, sodium, ammonia, magnesium, potassium, organic, others) and dry systems (wet reagent, dry reagent, carbon sorption, metal oxide sorption, other sorption, catalytic oxidation, SO2 reduction, combustion, others). Subsystems required by some of the FGD systems are also discussed. This final report under RP982-28 provides descriptions and evaluations of all known FGD processes and should provide information useful for evaluating current commercial processes as well as those under development. Although numerous summary FGD process evaluations have been completed in recent years by EPRI, EPA, DOE, and other agencies, this report constitutes the most comprehensive related effort since an interagency group - including all the above organizations - produced a similar review in 1977 (Interagency Flue Gas Desulfurization Evaluation, February 1981, NTIS No. PB81-152043).
  • The subject milestone was completed on March 1st. This milestone signifies the completion of the mechanical installation and assembly of PuCTF in room 1345 in the LLNL Plutonium Facility. This installation included equipment both in room 1345 and in the loft. As reported in the last milestone, "LLNL Pu Facility space prepared for installation of PuCTF", milestone 6.2.2/FY00/c, steel plates had been installed on the floor to support the PuCTF glovebox and equipment. The steel plate system was a substantial help in completing the mechanical installation reported here. The glovebox sections were brought into the room and attached together. Temporary seismic tie-down straps were used to brace the assembly. This temporary tie-down also provided flexibility for alignment and adjustment. The internal equipment (attritors, granulator, press feed shoe and die set, furnace, robot, and powder transport system) was subsequently installed. The glovebox was then welded to the steel plates for permanent seismic anchoring. The control racks were attached to the floor and are ready for wiring, and the press hydraulic power unit has been installed in the loft.
  • The HTI subsurface characterization task will use the Hanford Cone Penetrometer Platform (CPP) to deploy soil sensor and sampling probes into the vadose zone/soils around AX-104 during FY-99. This document provides copies of the first data collected from the HTI sensor probes during vendor field development tests performed at a cold test site in the Hanford 200 East area. Conduct of the initial test also established completion of a major contractor milestone of the HTI characterization task (MS T04-98-523: Complete preparation of the HTICP probes and transfer to Hanford/HTI. Conduct an initial MSP push using the CPP).
  • This report describes the deployment and demonstration of the first phase of the I/O infrastructure for Purple. The report and the references herein are intended to certify the completion of the following Level 2 milestone from the ASC FY04-05 Implementation Plan, due at the end of Quarter 4 in FY05. The milestone is defined as follows: "External networking infrastructure installation and performance analysis will be completed for the initial delivery of Purple. The external networking infrastructure includes incorporation of a new 10 Gigabit Ethernet fabric linking the platform to the LLNL High Performance Storage System (HPSS) and other center equipment. The LLNL archive will be upgraded to HPSS Release 5.1 to support the requirements of the machine, and performance analysis will be completed using the newly deployed I/O infrastructure. Demonstrated throughput to the archive for this infrastructure will be a minimum of 1.5 GB/s with a target of 3 GB/s. Since Purple delivery is not scheduled until late Q3, demonstration of these performance goals will use parts of Purple and/or an aggregate of other existing resources."
  • There has been substantial development of the Lustre parallel filesystem prior to the configuration described below for this milestone. The initial Lustre filesystems that were deployed were directly connected to the cluster interconnect, i.e. Quadrics Elan3. That is, the clients, the Object Storage Servers (OSSes), and the Meta-Data Servers (MDS) were all directly connected to the cluster's internal high-speed interconnect. This configuration serves a single cluster very well, but does not provide sharing of the filesystem among clusters. LLNL funded the development of high-efficiency "portals router" code by CFS (the company that develops Lustre) to enable the Lustre servers to be moved to a GigE-connected network configuration, thus making it possible to connect to the servers from several clusters. With portals routing available, here is what changes: (1) another storage-only cluster is deployed to front the Lustre storage devices (these become the Lustre OSSes and MDS), (2) this "Lustre cluster" is attached via GigE connections to a large GigE switch/router cloud, (3) a small number of compute-cluster nodes are designated as "gateway" or "portals router" nodes, and (4) the portals router nodes are GigE-connected to the switch/router cloud. The Lustre configuration is then changed to reflect the new network paths. A typical example of this is a compute cluster and a related visualization cluster: the compute cluster produces the data (writes it to the Lustre filesystem), and the visualization cluster consumes some of the data (reads it from the Lustre filesystem). This process can be expanded by aggregating several collections of Lustre backend storage resources into one or more "centralized" Lustre filesystems, and then arranging to have several "client" clusters mount these centralized filesystems. The "client clusters" can be any combination of compute, visualization, archiving, or other types of cluster. This milestone demonstrates the operation and performance of a scaled-down version of such a large, centralized, shared Lustre filesystem concept.