Nonlinearly-constrained optimization using asynchronous parallel generating set search.
Abstract
Many optimization problems in computational science and engineering (CS&E) are characterized by expensive objective and/or constraint function evaluations paired with a lack of derivative information. Direct search methods such as generating set search (GSS) are well understood and efficient for derivative-free optimization of unconstrained and linearly-constrained problems. This paper addresses the more difficult problem of general nonlinear programming where derivatives for objective or constraint functions are unavailable, which is the case for many CS&E applications. We focus on penalty methods that use GSS to solve the linearly-constrained problems, comparing different penalty functions. A classical choice for penalizing constraint violations is ℓ₂², the squared ℓ₂ norm, which has advantages for derivative-based optimization methods. In our numerical tests, however, we show that exact penalty functions based on the ℓ₁, ℓ₂, and ℓ∞ norms converge to good approximate solutions more quickly and thus are attractive alternatives. Unfortunately, exact penalty functions are nondifferentiable and consequently introduce theoretical problems that degrade the final solution accuracy, so we also consider smoothed variants. Smoothed-exact penalty functions are theoretically attractive because they retain the differentiability of the original problem. Numerically, they are a compromise between exact and ℓ₂², i.e., they converge to a good solution somewhat quickly without sacrificing much solution accuracy. Moreover, the smoothing is parameterized and can potentially be adjusted to balance the two considerations. Since many CS&E optimization problems are characterized by expensive function evaluations, reducing the number of function evaluations is paramount, and the results of this paper show that exact and smoothed-exact penalty functions are well-suited to this task.
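The penalty functions compared in the abstract can be sketched in a few lines. The illustrative Python below (not code from the report) evaluates the classical ℓ₂² penalty, the exact ℓ₁/ℓ₂/ℓ∞ penalties, and a smoothed ℓ₁ variant for inequality constraints c(x) ≤ 0. The softplus-style smoothing used here is an assumption chosen for illustration; it is one common smoothing, not necessarily the one studied in the paper.

```python
import numpy as np

def constraint_violation(c_vals):
    """Elementwise violation for inequality constraints c(x) <= 0."""
    return np.maximum(c_vals, 0.0)

def penalty_l22(c_vals, rho):
    # Classical smooth quadratic penalty: rho * ||max(c, 0)||_2^2.
    v = constraint_violation(c_vals)
    return rho * np.dot(v, v)

def penalty_exact(c_vals, rho, p=1):
    # Exact penalties: rho * ||max(c, 0)||_p for p in {1, 2, inf}.
    v = constraint_violation(c_vals)
    return rho * np.linalg.norm(v, p)

def penalty_smoothed_l1(c_vals, rho, alpha):
    # Smoothed l1 penalty (hypothetical smoothing for illustration):
    # each max(c_i, 0) is replaced by the differentiable surrogate
    # alpha * log(1 + exp(c_i / alpha)), which tends to max(c_i, 0)
    # as alpha -> 0.
    return rho * np.sum(alpha * np.log1p(np.exp(c_vals / alpha)))
```

The qualitative trade-off the abstract describes is visible even in this toy form: the exact penalties grow linearly in the violation (good for fast convergence, but with a kink at zero), while ℓ₂² is smooth but flat near feasibility.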
 Authors:
 Publication Date:
 Research Org.:
 Sandia National Laboratories
 Sponsoring Org.:
 USDOE
 OSTI Identifier:
 909393
 Report Number(s):
 SAND2007-3257
TRN: US200722%%1088
 DOE Contract Number:
 AC04-94AL85000
 Resource Type:
 Technical Report
 Country of Publication:
 United States
 Language:
 English
 Subject:
 99 GENERAL AND MISCELLANEOUS//MATHEMATICS, COMPUTING, AND INFORMATION SCIENCE; NONLINEAR PROGRAMMING; OPTIMIZATION; CALCULATION METHODS; NONLINEAR PROBLEMS; CONVERGENCE; Optimization methods; Mathematical optimization
Citation Formats
Griffin, Joshua D., and Kolda, Tamara Gibson. Nonlinearly-constrained optimization using asynchronous parallel generating set search. United States: N. p., 2007.
Web. doi:10.2172/909393.
Griffin, Joshua D., & Kolda, Tamara Gibson. Nonlinearly-constrained optimization using asynchronous parallel generating set search. United States. doi:10.2172/909393.
Griffin, Joshua D., and Kolda, Tamara Gibson. 2007.
"Nonlinearly-constrained optimization using asynchronous parallel generating set search." United States.
doi:10.2172/909393. https://www.osti.gov/servlets/purl/909393.
@article{osti_909393,
title = {Nonlinearly-constrained optimization using asynchronous parallel generating set search},
author = {Griffin, Joshua D. and Kolda, Tamara Gibson},
abstractNote = {Many optimization problems in computational science and engineering (CS&E) are characterized by expensive objective and/or constraint function evaluations paired with a lack of derivative information. Direct search methods such as generating set search (GSS) are well understood and efficient for derivative-free optimization of unconstrained and linearly-constrained problems. This paper addresses the more difficult problem of general nonlinear programming where derivatives for objective or constraint functions are unavailable, which is the case for many CS&E applications. We focus on penalty methods that use GSS to solve the linearly-constrained problems, comparing different penalty functions. A classical choice for penalizing constraint violations is $\ell_2^2$, the squared $\ell_2$ norm, which has advantages for derivative-based optimization methods. In our numerical tests, however, we show that exact penalty functions based on the $\ell_1$, $\ell_2$, and $\ell_\infty$ norms converge to good approximate solutions more quickly and thus are attractive alternatives. Unfortunately, exact penalty functions are nondifferentiable and consequently introduce theoretical problems that degrade the final solution accuracy, so we also consider smoothed variants. Smoothed-exact penalty functions are theoretically attractive because they retain the differentiability of the original problem. Numerically, they are a compromise between exact and $\ell_2^2$, i.e., they converge to a good solution somewhat quickly without sacrificing much solution accuracy. Moreover, the smoothing is parameterized and can potentially be adjusted to balance the two considerations. Since many CS&E optimization problems are characterized by expensive function evaluations, reducing the number of function evaluations is paramount, and the results of this paper show that exact and smoothed-exact penalty functions are well-suited to this task.},
doi = {10.2172/909393},
place = {United States},
year = {2007},
month = {5}
}

Generating set search (GSS) is a family of direct search methods that encompasses generalized pattern search and related methods. We describe an algorithm for asynchronous linearly-constrained GSS, which has some complexities that make it different from both the asynchronous bound-constrained case as well as the synchronous linearly-constrained case. The algorithm has been implemented in the APPSPACK software framework and we present results from an extensive numerical study using CUTEr test problems. We discuss the results, both positive and negative, and conclude that GSS is a reliable method for solving small-to-medium-sized linearly-constrained optimization problems without derivatives.
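A minimal synchronous GSS iteration, in the spirit of the family described above, can be written as a compass search over the coordinate directions. This is an illustrative sketch only (simple decrease, step halving, no linear constraints), not the APPSPACK algorithm.

```python
import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-6, max_evals=10000):
    """Toy synchronous GSS ("compass search") for unconstrained minimization:
    poll along +/- coordinate directions, contract the step on an unsuccessful
    poll, and stop when the step length falls below tol."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    evals = 1
    n = x.size
    directions = np.vstack([np.eye(n), -np.eye(n)])  # generating set: +/- e_i
    while step > tol and evals < max_evals:
        improved = False
        for d in directions:
            trial = x + step * d
            ft = f(trial)
            evals += 1
            if ft < fx:  # simple decrease; sufficient decrease is also common
                x, fx = trial, ft
                improved = True
                break
        if not improved:
            step *= 0.5  # unsuccessful poll: contract the step length
    return x, fx
```

The ±eᵢ directions form a generating set for Rⁿ, which is what guarantees that some poll direction is a descent direction whenever the current point is not stationary; handling linear constraints requires conforming the generating set to the nearby constraint boundary, which is the complexity the paragraph above alludes to.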

Asynchronous parallel generating set search for linearly-constrained optimization.
We describe an asynchronous parallel derivative-free algorithm for linearly-constrained optimization. Generating set search (GSS) is the basis of our method. At each iteration, a GSS algorithm computes a set of search directions and corresponding trial points and then evaluates the objective function value at each trial point. Asynchronous versions of the algorithm have been developed in the unconstrained and bound-constrained cases which allow the iterations to continue (and new trial points to be generated and evaluated) as soon as any other trial point completes. This enables better utilization of parallel resources and a reduction in overall runtime, especially for problems where the objective function takes minutes or …
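The asynchronous idea described above, acting on evaluations as they complete rather than waiting for the whole batch, can be sketched with a thread pool. This toy version (not the APPSPACK algorithm) handles a single poll step and omits the bookkeeping for pending and stale evaluations that a real asynchronous GSS must maintain.

```python
import concurrent.futures as cf
import numpy as np

def async_poll(f, x, step, n_workers=4):
    """One asynchronous poll step: submit all trial points, then accept the
    first completed evaluation that improves on f(x) instead of waiting for
    the full batch to finish."""
    n = x.size
    directions = np.vstack([np.eye(n), -np.eye(n)])  # generating set: +/- e_i
    fx = f(x)
    with cf.ThreadPoolExecutor(max_workers=n_workers) as pool:
        futures = {pool.submit(f, x + step * d): d for d in directions}
        for fut in cf.as_completed(futures):
            ft = fut.result()
            if ft < fx:
                # Accept immediately; in a real implementation the remaining
                # in-flight evaluations would seed the next iteration rather
                # than being discarded.
                return x + step * futures[fut], ft
    return x, fx  # unsuccessful poll: the caller would contract the step
```

The payoff is exactly the one the abstract claims: when evaluation times vary (or workers fail), no processor idles waiting on the slowest trial point of a synchronous batch.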
Stationarity results for generating set search for linearly constrained optimization.
We derive new stationarity results for derivative-free, generating set search methods for linearly constrained optimization. We show that a particular measure of stationarity is of the same order as the step length at an identifiable subset of the iterations. Thus, even in the absence of explicit knowledge of the derivatives of the objective function, we still have information about stationarity. These results help both unify the convergence analysis of several classes of direct search algorithms and clarify the fundamental geometrical ideas that underlie them. In addition, these results validate a practical stopping criterion for such algorithms.
Asynchronous parallel pattern search for nonlinear optimization
Parallel pattern search (PPS) can be quite useful for engineering optimization problems characterized by a small number of variables (say 10-50) and by expensive objective function evaluations such as complex simulations that take from minutes to hours to run. However, PPS, which was originally designed for execution on homogeneous and tightly-coupled parallel machines, is not well suited to the more heterogeneous, loosely-coupled, and even fault-prone parallel systems available today. Specifically, PPS is hindered by synchronization penalties and cannot recover in the event of a failure. The authors introduce a new asynchronous and fault-tolerant parallel pattern search (AAPS) method and …
APPSPACK 4.0: asynchronous parallel pattern search for derivative-free optimization.
APPSPACK is software for solving unconstrained and bound constrained optimization problems. It implements an asynchronous parallel pattern search method that has been specifically designed for problems characterized by expensive function evaluations. Using APPSPACK to solve optimization problems has several advantages: No derivative information is needed; the procedure for evaluating the objective function can be executed via a separate program or script; the code can be run in serial or parallel, regardless of whether or not the function evaluation itself is parallel; and the software is freely available. We describe the underlying algorithm, data structures, and features of APPSPACK version 4.0 …
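The external-evaluation model described above decouples the optimizer from the simulation: the optimizer writes a trial point, runs a user-supplied program, and reads the objective value back. The sketch below shows that general file-based pattern with hypothetical file names and formats; APPSPACK's actual input/output conventions differ, and the evaluator here is assumed to be a Python script for simplicity (APPSPACK runs arbitrary executables).

```python
import os
import subprocess
import sys
import tempfile

def external_objective(evaluator_script, x):
    """Evaluate f(x) by invoking an external program (illustrative protocol):
    write the point to input.txt as a count followed by one coordinate per
    line, run the evaluator with the input and output paths as arguments,
    and read the objective value back from output.txt."""
    with tempfile.TemporaryDirectory() as tmp:
        in_path = os.path.join(tmp, "input.txt")
        out_path = os.path.join(tmp, "output.txt")
        with open(in_path, "w") as fh:
            fh.write(f"{len(x)}\n" + "\n".join(str(v) for v in x) + "\n")
        # The evaluator is run as a Python script here; a real driver would
        # launch any executable or shell script the user supplies.
        subprocess.run([sys.executable, evaluator_script, in_path, out_path],
                       check=True)
        with open(out_path) as fh:
            return float(fh.read().strip())
```

Because the optimizer only sees a number coming back from a black box, the same driver works whether the evaluation is a closed-form function, a serial simulation, or itself a parallel job.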