U.S. Department of Energy
Office of Scientific and Technical Information

Distributed deep reinforcement learning for simulation control

Journal Article · Machine Learning: Science and Technology
Abstract

Several applications in the scientific simulation of physical systems can be formulated as control/optimization problems. The computational models for such systems generally contain hyperparameters, which control solution fidelity and computational expense. The tuning of these parameters is non-trivial, and the general approach is to manually 'spot-check' for good combinations. This is because optimal hyperparameter configuration search becomes intractable when the parameter space is large and when the parameters may vary dynamically. To address this issue, we present a framework based on deep reinforcement learning (RL) to train a deep neural network agent that controls a model solve by varying parameters dynamically. First, we validate our RL framework for the problem of controlling chaos in chaotic systems by dynamically changing the parameters of the system. Subsequently, we illustrate the capabilities of our framework for accelerating the convergence of a steady-state computational fluid dynamics solver by automatically adjusting the relaxation factors of the discretized Navier–Stokes equations during run-time. The results indicate that the run-time control of the relaxation factors by the learned policy leads to a significant reduction in the number of iterations for convergence compared to the random selection of the relaxation factors. Our results point to potential benefits for learning adaptive hyperparameter strategies across different geometries and boundary conditions, with implications for reduced computational campaign expenses.

Data and codes are available at https://github.com/Romit-Maulik/PAR-RL .
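The core idea described in the abstract can be illustrated with a toy example. The sketch below is not the paper's PAR-RL code; it is a minimal, hypothetical analogue in which a relaxation factor `omega` damps a fixed-point iteration `x ← x + omega * (g(x) - x)`, the quantity an RL agent would tune at run-time. The iteration count plays the role of the solver cost the learned policy seeks to minimize, and the negative iteration count is a natural episode reward.

```python
import math

def solve(omega, x0=10.0, tol=1e-8, max_iters=10_000):
    """Damped fixed-point iteration for g(x) = cos(x).

    Returns the number of iterations needed to drive the residual
    |g(x) - x| below tol. The relaxation factor omega controls how
    aggressively each update is applied, mirroring the relaxation
    factors of an iterative CFD solver.
    """
    x = x0
    for k in range(1, max_iters + 1):
        residual = math.cos(x) - x
        if abs(residual) < tol:
            return k
        x = x + omega * residual  # under-relaxed (omega < 1) update
    return max_iters

def reward(iterations):
    """A per-episode reward an RL agent could maximize: fewer iterations, higher reward."""
    return -iterations

if __name__ == "__main__":
    # A poor choice of omega converges slowly; a good one converges quickly.
    for omega in (0.1, 0.5, 1.0):
        print(f"omega={omega}: converged in {solve(omega)} iterations")
```

Because the optimal relaxation factor depends on the local contraction rate of the iteration (and, in a real solver, on geometry and boundary conditions), a fixed hand-tuned value is rarely best everywhere, which is the gap the paper's learned run-time policy targets.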

Research Organization:
Argonne National Laboratory (ANL), Argonne, IL (United States)
Sponsoring Organization:
USDOE; USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR); USDOE Office of Science (SC), Basic Energy Sciences (BES), Scientific User Facilities Division
Grant/Contract Number:
AC02-06CH11357
OSTI ID:
1835483
Alternate ID(s):
OSTI ID: 1837076
Journal Information:
Machine Learning: Science and Technology, Vol. 2, Issue 2; ISSN 2632-2153
Publisher:
IOP Publishing
Country of Publication:
United Kingdom
Language:
English

