OSTI.GOV
U.S. Department of Energy, Office of Scientific and Technical Information
  1. Gyroaveraging operations using adaptive matrix operators

    A new adaptive scheme for carrying out gyroaveraging operations with matrices in particle-in-cell codes is presented. The scheme uses an intermediate velocity grid whose resolution is adapted to the local thermal Larmor radius. The charge density is computed by projecting marker weights in a field-line-following manner while preserving the adiabatic magnetic moment μ. These choices improve the accuracy of gyroaveraging operations performed with matrices even in the presence of strong spatial variation of temperature and magnetic field. The accuracy of the scheme has been studied in different geometries, from simple 2D slab geometry to a realistic 3D toroidal equilibrium. As a result, a successful implementation in the gyrokinetic code XGC in the delta-f limit is presented.
    Cited by 1
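    A minimal sketch of the adaptive-matrix gyroaveraging idea described above, assuming a uniform doubly periodic 2D grid and bilinear interpolation onto the gyro-ring; the number of gyro-points per grid point is chosen from the local thermal Larmor radius. The function name and the adaptivity heuristic are illustrative assumptions, not the XGC implementation.

```python
# Hedged sketch: build a sparse gyroaveraging operator whose per-point
# resolution adapts to the local thermal Larmor radius rho_th.
import numpy as np
import scipy.sparse as sp

def build_gyroavg_matrix(nx, ny, dx, dy, rho_th):
    """Return a sparse matrix G so that (G @ phi.ravel()).reshape(nx, ny)
    approximates the gyroaverage of phi on a uniform periodic (nx, ny) grid.

    rho_th : (nx, ny) array of local thermal Larmor radii (same units as dx, dy).
    """
    rows, cols, vals = [], [], []
    for i in range(nx):
        for j in range(ny):
            rho = rho_th[i, j]
            # Adaptive choice (assumed heuristic): more gyro-points when the
            # gyro-ring spans more grid cells, with a minimum of 4 points.
            npts = max(4, int(np.ceil(2.0 * np.pi * rho / min(dx, dy))))
            for th in 2.0 * np.pi * np.arange(npts) / npts:
                # Position on the gyro-ring, in (fractional) grid coordinates.
                gx = i + rho * np.cos(th) / dx
                gy = j + rho * np.sin(th) / dy
                i0, j0 = int(np.floor(gx)) % nx, int(np.floor(gy)) % ny
                fx, fy = gx - np.floor(gx), gy - np.floor(gy)
                # Bilinear interpolation weights onto the periodic grid.
                for di, dj, w in ((0, 0, (1 - fx) * (1 - fy)),
                                  (1, 0, fx * (1 - fy)),
                                  (0, 1, (1 - fx) * fy),
                                  (1, 1, fx * fy)):
                    rows.append(i * ny + j)
                    cols.append(((i0 + di) % nx) * ny + (j0 + dj) % ny)
                    vals.append(w / npts)
    # Duplicate (row, col) entries are summed by the CSR constructor.
    return sp.csr_matrix((vals, (rows, cols)), shape=(nx * ny, nx * ny))

# Usage: G = build_gyroavg_matrix(nx, ny, dx, dy, rho_th)
#        phi_bar = (G @ phi.ravel()).reshape(nx, ny)
```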
  2. COLLABORATIVE: FUSION SIMULATION PROGRAM

    New York University, Courant Institute of Mathematical Sciences, participated in the Fusion Simulation Program (FSP) Planning Activities [http://www.pppl.gov/fsp], with C.S. Chang as the institutional PI. FSP's mission was to enable scientific discovery of important new plasma phenomena, with associated understanding that emerges only upon integration. This requires developing a predictive integrated simulation capability for magnetically confined fusion plasmas, properly validated against experiments in regimes relevant for producing practical fusion energy. The specific institutional goal of New York University was to participate in the planning of the edge integrated simulation, with emphasis on the use of large-scale HPCs, in connection with the SciDAC CPES project which the PI was leading. New York University successfully completed its mission by participating in the various planning activities, including the edge physics integration, the edge science drivers, and the mathematical verification. The activity resulted in the combined report that can be found at http://www.pppl.gov/fsp/Overview.html. Participation and presentations as part of this project are listed in a separate file.
  3. SciDAC-Center for Plasma Edge Simulation

    The SciDAC ProtoFSP Center for Plasma Edge Simulation (CPES) [http://www.cims.nyu.edu/cpes/] was awarded to New York University, Courant Institute of Mathematical Sciences, in FY 2006. C.S. Chang was the institutional and national project PI. Its mission was 1) to build a kinetic simulation code applicable to the tokamak edge region, including the magnetic divertor geometry, 2) to build a computer-science framework that can integrate the kinetic code with MHD/fluid codes in a multiscale manner, and 3) to conduct scientific research using the developed tools. CPES built two such edge kinetic codes, XGC0 and XGC1, which are still the only working kinetic edge plasma codes capable of including the diverted magnetic field geometry. CPES also built the code coupling framework EFFIS (End-to-end Framework for Fusion Integrated Simulation), which incubated and used the Adios (www.olcf.ornl.gov/center-projects/adios/) and eSiMon (http://www.olcf.ornl.gov/center-projects/esimmon/) technologies, together with the Kepler technology.
  4. Outcomes from the DOE Workshop on Turbulent Flow Simulation at the Exascale

    This paper summarizes the outcomes from the Turbulent Flow Simulation at the Exascale: Opportunities and Challenges Workshop, which was held 4-5 August 2015 and was sponsored by the U.S. Department of Energy Office of Advanced Scientific Computing Research. The workshop objective was to define and describe the challenges and opportunities that computing at the exascale will bring to turbulent-flow simulations in applied science and technology. The need for accurate simulation of turbulent flows is evident across the U.S. Department of Energy applied-science and engineering portfolios, including combustion, plasma physics, nuclear-reactor physics, wind energy, and atmospheric science. The workshop brought together experts in turbulent-flow simulation, computational mathematics, and high-performance computing. Building upon previous ASCR workshops on exascale computing, participants defined a research agenda and path forward that will enable scientists and engineers to continually leverage, engage, and direct advances in computational systems on the path to exascale computing.
  5. Gyrokinetic projection of the divertor heat-flux width from present tokamaks to ITER

    Here, the XGC1 edge gyrokinetic code is used to study the width of the heat-flux to the divertor plates in the attached plasma condition. The flux-driven simulation is performed until an approximate power balance is achieved between the heat-flux across the steep pedestal pressure gradient and the heat-flux on the divertor plates.
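    A minimal sketch of the "run until approximate power balance" criterion described above; step(), pedestal_heat_flux(), and divertor_heat_flux() are hypothetical stand-ins for the actual simulation interfaces, not XGC1 routines.

```python
# Hedged sketch: advance a flux-driven simulation until the heat flux across
# the pedestal roughly matches the heat flux arriving at the divertor plates.

def run_until_power_balance(state, step, pedestal_heat_flux,
                            divertor_heat_flux, tol=0.1, max_steps=100000):
    """Advance until |Q_ped - Q_div| / Q_ped < tol (approximate power balance)."""
    for n in range(max_steps):
        state = step(state)
        q_ped = pedestal_heat_flux(state)   # power crossing the pedestal
        q_div = divertor_heat_flux(state)   # power deposited on the plates
        if q_ped > 0.0 and abs(q_ped - q_div) / q_ped < tol:
            return state, n                 # balance reached after n steps
    raise RuntimeError("power balance not reached within max_steps")
```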
  6. The fusion code XGC: Enabling kinetic study of multi-scale edge turbulent transport in ITER [Book Chapter]

    The XGC fusion gyrokinetic code combines state-of-the-art, portable computational and algorithmic technologies to enable complicated multiscale simulations of turbulence and transport dynamics in ITER edge plasma on the largest US open-science computer, the CRAY XK7 Titan, at its maximal heterogeneous capability. Such simulations were not possible before because the achievable time-to-solution was more than a factor of 10 too slow to complete one physics case in less than 5 days of wall-clock time. Frontier techniques such as nested OpenMP parallelism; adaptive parallel I/O; staging I/O and data reduction using dynamic and asynchronous application interactions; dynamic repartitioning for balancing computational work in pushing particles and in grid-related work; scalable and accurate discretization algorithms for nonlinear Coulomb collisions; and communication-avoiding subcycling technology for pushing particles on both CPUs and GPUs are also utilized to dramatically improve the scalability and time-to-solution, enabling this difficult kinetic ITER edge simulation on a present-day leadership-class computer.
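    A minimal sketch of the particle-subcycling idea named above, under the assumption that particles take several cheap, local push steps while the field (and its communication-heavy solve) is updated only once per outer step; push_particles-style routines and the field solver here are hypothetical placeholders, not XGC routines.

```python
# Hedged sketch: subcycled particle push between field solves.

def subcycled_push(x, v, efield, dt, nsub):
    """Advance positions/velocities over one outer step dt using nsub
    smaller substeps, with the field held fixed between solves."""
    dts = dt / nsub
    for _ in range(nsub):
        v = v + dts * efield(x)   # acceleration from the frozen field
        x = x + dts * v           # move with the updated velocity
    return x, v

def time_loop(x, v, solve_fields, dt, nsteps, nsub=10):
    """Outer loop: one expensive, communication-heavy field solve per step,
    many cheap local particle substeps in between."""
    for _ in range(nsteps):
        efield = solve_fields(x)
        x, v = subcycled_push(x, v, efield, dt, nsub)
    return x, v
```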
  7. Towards real-time detection and tracking of spatio-temporal features: Blob-filaments in fusion plasma

    A novel algorithm and implementation for real-time identification and tracking of blob-filaments in fusion reactor data are presented. Similar spatio-temporal features are important in many other applications, for example, ignition kernels in combustion and tumor cells in medical images. This work presents an approach for extracting these features by dividing the overall task into three steps: local identification of feature cells, grouping feature cells into extended features, and tracking the movement of features through spatial overlap. Through our extensive work on parallelization, we demonstrate that this approach can effectively make use of a large number of compute nodes to detect and track blob-filaments in real time in fusion plasma. Here, on a 30 GB set of fusion simulation data, we observed linear speedup on 1024 processes and completed blob detection in less than three milliseconds using Edison, a Cray XC30 system at NERSC.
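    A minimal sketch of the three-step pipeline described above: thresholding to identify feature cells, connected-component labeling to group them into extended features, and overlap-based matching to track features across frames. The threshold heuristic and function names are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of blob-filament detection and tracking on 2D frames.
import numpy as np
from scipy import ndimage

def identify_cells(frame, nsigma=2.5):
    """Step 1: mark cells whose value exceeds mean + nsigma * std (assumed heuristic)."""
    return frame > frame.mean() + nsigma * frame.std()

def group_features(mask):
    """Step 2: connected-component labeling of the feature cells."""
    labels, nfeat = ndimage.label(mask)
    return labels, nfeat

def track_by_overlap(labels_prev, labels_curr):
    """Step 3: match each current feature to the previous feature it overlaps most."""
    matches = {}
    for lab in range(1, int(labels_curr.max()) + 1):
        overlap = labels_prev[labels_curr == lab]
        overlap = overlap[overlap > 0]
        if overlap.size:
            matches[lab] = int(np.bincount(overlap).argmax())
    return matches

# Example over a sequence of 2D frames (e.g. normalized density fluctuation):
# prev_labels = None
# for frame in frames:
#     labels, _ = group_features(identify_cells(frame))
#     if prev_labels is not None:
#         links = track_by_overlap(prev_labels, labels)
#     prev_labels = labels
```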
  8. The fusion code XGC: Enabling kinetic study of multi-scale edge turbulent transport in ITER

    The XGC fusion gyrokinetic code combines state-of-the-art, portable computational and algorithmic technologies to enable complicated multiscale simulations of turbulence and transport dynamics in ITER edge plasma on the largest US open-science computer, the CRAY XK7 Titan, at its maximal heterogeneous capability. Such simulations were not possible before because the achievable time-to-solution was more than a factor of 10 too slow to complete one physics case in less than 5 days of wall-clock time. Frontier techniques used include nested OpenMP parallelism, adaptive parallel I/O, staging I/O and data reduction using dynamic and asynchronous application interactions, and dynamic repartitioning.
  9. Fusion Energy Sciences Exascale Requirements Review. An Office of Science review sponsored jointly by Advanced Scientific Computing Research and Fusion Energy Sciences, January 27-29, 2016, Gaithersburg, Maryland

    The additional computing power offered by the planned exascale facilities could be transformational across the spectrum of plasma and fusion research, provided that the new architectures can be efficiently applied to our problem space. The collaboration that will be required to succeed should be viewed as an opportunity to identify and exploit cross-disciplinary synergies. To assess the opportunities and requirements as part of the development of an overall strategy for computing in the exascale era, the Exascale Requirements Review meeting of the Fusion Energy Sciences (FES) community was convened January 27–29, 2016, with participation from a broad range of fusion and plasma scientists, specialists in applied mathematics and computer science, and representatives from the U.S. Department of Energy (DOE) and its major computing facilities. This report is a summary of that meeting and the preparatory activities for it, and includes a wealth of detail to support the findings. Technical opportunities, requirements, and challenges are detailed in this report (and in the recent report on the Workshop on Integrated Simulation). Science applications are described, along with mathematical and computational enabling technologies. Also see http://exascaleage.org/fes/ for more information.
...

Search for: All Records, Creator / Author: "Chang, Choong-Seock"