OSTI.GOV title logo U.S. Department of Energy
Office of Scientific and Technical Information

Title: Exploring ROOT Framework for Scientific Simulations.


No abstract provided.

Author: Pakki, Aditya [1]
  1. Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
Publication Date: Thu Aug 17, 2017
Research Org.: Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
Sponsoring Org.:
OSTI Identifier: 1375860
Report Number(s):
DOE Contract Number:
Resource Type: Technical Report
Country of Publication: United States

Citation Formats

Pakki, Aditya. Exploring ROOT Framework for Scientific Simulations. United States: N. p., 2017. Web. doi:10.2172/1375860.
Pakki, Aditya. Exploring ROOT Framework for Scientific Simulations. United States. doi:10.2172/1375860.
Pakki, Aditya. Thu, Aug 17, 2017. "Exploring ROOT Framework for Scientific Simulations." United States. doi:10.2172/1375860.
@techreport{osti_1375860,
  title = {Exploring ROOT Framework for Scientific Simulations},
  author = {Pakki, Aditya},
  abstractNote = {No abstract provided.},
  doi = {10.2172/1375860},
  place = {United States},
  year = {2017},
  month = {8}
}

Technical Report:

  • This project initiated the development of TGYRO, a steady-state gyrokinetic transport code (SSGKT) that integrates micro-scale GYRO turbulence simulations into a framework for practical multi-scale simulation of conventional tokamaks as well as future reactors. Using a lightweight master transport code, multiple independent (each massively parallel) gyrokinetic simulations are coordinated. The capability to evolve profiles using the TGLF model was also added to TGYRO and represents a more typical use case. The goal of the project was to integrate micro-scale gyrokinetic turbulence simulations into a framework for practical multi-scale simulation of a burning plasma core, the International Thermonuclear Experimental Reactor (ITER) in particular. This multi-scale simulation capability will be used to predict the performance (the fusion energy gain, Q) given the H-mode pedestal temperature and density. At present, projections of this type rely on transport models like GLF23, which are based on rather approximate fits to the results of linear and nonlinear simulations. Our goal is to make these performance projections with precise nonlinear gyrokinetic simulations. The approach is to use a lightweight master transport code to coordinate multiple independent (each massively parallel) gyrokinetic simulations using the GYRO code, predicting the core temperature and density profiles given the H-mode pedestal temperature and density. The master transport code provides feedback to O(16) independent gyrokinetic simulations. A successful feedback scheme offers a novel approach to predictive modeling of an important national and international problem.
Success in this area of fusion simulations will allow US scientists to direct the research path of ITER over the next two decades. The design of an efficient feedback algorithm is a serious numerical challenge. Although the power source and transport balance coding in the master code are standard, it is nontrivial to design a feedback loop that can cope with outputs that are both intermittent and extremely expensive. A prototypical feedback scheme has already been demonstrated for a single global GYRO simulation, although its robustness and efficiency are likely far from optimal. Once the transport feedback scheme is perfected, it could, in principle, be embedded into any of the more elaborate transport codes (ONETWO, TRANSP, and CORSICA) or adopted by other FSP-related multi-scale projects.
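The master-code feedback idea described above can be sketched in miniature. The snippet below is a toy illustration, not TGYRO's actual algorithm: a cheap linear diffusive flux model stands in for an expensive GYRO run, and a simple relaxation loop adjusts local temperature gradients at 16 independent radii until the modeled flux matches a prescribed power-balance target.

```python
import numpy as np

def mock_gyro_flux(grad_T, chi=1.5):
    """Stand-in for one expensive gyrokinetic flux evaluation at one radius.
    A real SSGKT scheme would launch an independent, massively parallel
    GYRO run here; this linear diffusive model is purely illustrative."""
    return chi * grad_T

def transport_feedback(target_flux, n_iter=50, relax=0.3):
    """Relax local temperature gradients until the modeled turbulent flux
    at each radius matches the prescribed power-balance target flux."""
    grad_T = np.ones_like(target_flux)  # initial guess for -dT/dr at each radius
    for _ in range(n_iter):
        # Each radius is an independent evaluation; in a TGYRO-like setting
        # these would be separate parallel simulations run concurrently.
        flux = np.array([mock_gyro_flux(g) for g in grad_T])
        grad_T = grad_T + relax * (target_flux - flux)  # feedback update
    return grad_T, flux

# 16 radii with a prescribed (hypothetical) target flux profile
target = np.linspace(2.0, 0.5, 16)
grad_T, flux = transport_feedback(target)
```

The toy loop converges because the flux response here is smooth and cheap; real turbulent fluxes are intermittent and extremely expensive to evaluate, which is exactly the feedback-design difficulty the abstract describes.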
  • A broad range of scientific computation involves the use of difference stencils. In a parallel computing environment, this computation is typically implemented by decomposing the spatial domain, inducing a 'halo exchange' of process-owned boundary data. This approach adheres to the Bulk Synchronous Parallel (BSP) model. Because commonly available architectures provide strong inter-node bandwidth relative to latency costs, many codes 'bulk up' these exchanges by aggregating data into fewer, larger messages. A renewed focus on non-traditional architectures and architecture features provides new opportunities for exploring alternatives to this programming approach. In this report we describe miniGhost, a 'miniapp' designed for exploring the capabilities of current as well as emerging and future architectures within the context of these sorts of applications. MiniGhost joins the suite of miniapps developed as part of the Mantevo project.
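The halo-exchange pattern can be sketched as follows. This is a serial emulation only, unrelated to miniGhost's implementation: a 1D field is split into per-"rank" chunks, each chunk fills one ghost cell per side from its neighbor, and a 3-point stencil is applied; the result matches the same stencil applied to the undecomposed field.

```python
import numpy as np

def stencil_with_halo(field, n_ranks=4):
    """Serial emulation of a BSP halo exchange for a 3-point stencil.

    The 1D domain is split among n_ranks; each chunk receives one ghost
    cell per side from its neighbor (edge values are replicated at the
    physical boundaries), then applies the stencil to its own points only.
    In a real code, each chunk lives on a separate process and the ghost
    cells arrive as messages.
    """
    chunks = np.array_split(field, n_ranks)
    out = []
    for i, c in enumerate(chunks):
        left = chunks[i - 1][-1] if i > 0 else c[0]              # halo from left neighbor
        right = chunks[i + 1][0] if i < n_ranks - 1 else c[-1]   # halo from right neighbor
        p = np.concatenate(([left], c, [right]))                 # padded local array
        out.append((p[:-2] + p[1:-1] + p[2:]) / 3.0)             # 3-point average
    return np.concatenate(out)

def stencil_global(field):
    """Reference: same stencil applied without decomposition."""
    p = np.pad(field, 1, mode="edge")
    return (p[:-2] + p[1:-1] + p[2:]) / 3.0
```

Because only one ghost cell per side is needed per sweep, a deeper stencil or multiple sweeps per exchange is what motivates the message "bulking up" the abstract mentions.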
  • The goal of this Exploratory Express project was to expand the understanding of the physical properties of our recently discovered class of materials consisting of metal-organic frameworks with electroactive ‘guest’ molecules that together form an electrically conducting charge-transfer complex (molecule@MOF). Thin films of Cu3(BTC)2 were grown on fused silica using solution step-by-step growth and were infiltrated with the molecule tetracyanoquinodimethane (TCNQ). The infiltrated MOF films were extensively characterized using optical microscopy, scanning electron microscopy, Raman spectroscopy, electrical conductivity measurements, and thermoelectric measurements. Thermopower measurements on TCNQ@Cu3(BTC)2 revealed a positive Seebeck coefficient of ~400 μV/K, indicating that holes are the primary carriers in this material. The high value of the Seebeck coefficient and the expected low thermal conductivity suggest that molecule@MOF materials may be attractive for thermoelectric power conversion applications requiring low-cost, solution-processable, and non-toxic active materials.
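For context, a standard figure of merit built from a reported Seebeck coefficient is the thermoelectric power factor S²σ. The conductivity value below is a made-up placeholder for illustration, not a measurement from the report:

```python
def power_factor(seebeck_uV_per_K, sigma_S_per_m):
    """Thermoelectric power factor S^2 * sigma, in W/(m*K^2).

    seebeck_uV_per_K : Seebeck coefficient in microvolts per kelvin
    sigma_S_per_m    : electrical conductivity in siemens per meter
    """
    S = seebeck_uV_per_K * 1e-6  # convert uV/K to V/K
    return S * S * sigma_S_per_m

# ~400 uV/K as reported; 0.01 S/m is a hypothetical placeholder conductivity
pf = power_factor(400.0, 1e-2)
```

A positive S indicates hole-dominated transport, consistent with the abstract's conclusion.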
  • At a high level, my research interests center on designing, programming, and evaluating computer systems that use new approaches to solve interesting problems. The rapid change of technology allows a variety of different architectural approaches to computationally difficult problems, and a constantly shifting set of constraints and trends makes the solutions to these problems both challenging and interesting. One of the most important recent trends in computing has been the move to commodity parallel architectures. This sea change is motivated by the industry’s inability to continue to profitably increase performance on a single processor, and the consequent move to multiple parallel processors. In the period of review, my most significant work has been leading a research group looking at the use of the graphics processing unit (GPU) as a general-purpose processor. GPUs can potentially deliver superior performance to their CPU counterparts on a broad range of problems, but effectively mapping complex applications to a parallel programming model with an emerging programming environment is a significant and important research problem.
  • We develop scalable algorithms and object-oriented code frameworks for terascale scientific simulations on massively parallel processors (MPPs). Our research in multigrid-based linear solvers and adaptive mesh refinement enables Laboratory programs to use MPPs to explore important physical phenomena. For example, our research aids stockpile stewardship by making detailed 3D simulations of radiation transport practical. The need to solve large linear systems arises in many applications, including radiation transport, structural dynamics, combustion, and flow in porous media. These systems result from discretizations of partial differential equations on computational meshes. Our first research objective is to develop multigrid preconditioned iterative methods for such problems and to demonstrate their scalability on MPPs. Scalability describes how total computational work grows with problem size; it measures how effectively additional resources can help solve increasingly larger problems. Many factors contribute to scalability: computer architecture, parallel implementation, and choice of algorithm. Scalable algorithms have been shown to decrease simulation times by several orders of magnitude.
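The mesh-independent convergence that makes multigrid solvers scalable can be illustrated with a minimal two-grid cycle for the 1D Poisson model problem. This toy sketch (weighted-Jacobi smoothing, full-weighting restriction, linear interpolation, direct coarse solve) is illustrative only and unrelated to the Laboratory's actual solver codes.

```python
import numpy as np

def two_grid_poisson(n=31, n_cycles=25):
    """Two-grid correction scheme for -u'' = 1 on (0,1), u(0)=u(1)=0,
    discretized with n interior points (n odd so the coarse grid nests).
    Each cycle reduces the error by a roughly mesh-independent factor,
    which is the essence of multigrid scalability."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    f = np.ones(n)
    u = np.zeros(n)

    def residual(u):
        Au = 2.0 * u
        Au[:-1] -= u[1:]
        Au[1:] -= u[:-1]
        return f - Au / h**2

    def smooth(u, sweeps=3, w=2.0 / 3.0):
        for _ in range(sweeps):
            u = u + w * (h**2 / 2.0) * residual(u)  # weighted Jacobi
        return u

    nc = (n - 1) // 2                    # coarse interior points
    H = 2.0 * h
    Ac = (np.diag(2.0 * np.ones(nc))
          - np.diag(np.ones(nc - 1), 1)
          - np.diag(np.ones(nc - 1), -1)) / H**2

    for _ in range(n_cycles):
        u = smooth(u)                    # pre-smooth
        r = residual(u)
        rc = 0.25 * (r[:-2:2] + 2.0 * r[1:-1:2] + r[2::2])  # full weighting
        ec = np.linalg.solve(Ac, rc)     # direct coarse-grid solve
        e = np.zeros(n)                  # interpolate correction back
        e[1::2] = ec
        e[2:-1:2] = 0.5 * (ec[:-1] + ec[1:])
        e[0] = 0.5 * ec[0]
        e[-1] = 0.5 * ec[-1]
        u = smooth(u + e)                # correct and post-smooth
    return x, u
```

For this constant-source problem the discrete solution coincides with the exact solution u(x) = x(1-x)/2 at the grid points, so convergence of the cycle is easy to check directly.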