Large-scale sequential quadratic programming algorithms
Abstract
The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having the full Hessian approximation available is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
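The reduced-gradient form of the null-space basis mentioned in feature 3 can be illustrated with a small dense sketch (illustrative only, not the report's code): if the working-set Jacobian is partitioned as A = [B S] with B square and nonsingular, then Z = [-B^{-1}S; I] satisfies AZ = 0, and the reduced Hessian is Z'HZ. A large-scale implementation such as the one described would use a sparse factorization of B rather than a dense solve.

```python
import numpy as np

def reduced_gradient_basis(A, basic_cols):
    """Null-space basis Z for A in reduced-gradient form.

    A is m x n; the m columns listed in `basic_cols` form a
    nonsingular basis B, and the remaining columns form S.  Then
    Z = [ -B^{-1} S ; I ] (rows ordered to match A's columns)
    satisfies A @ Z = 0.  Dense sketch only -- a large-scale code
    would factorize a sparse B instead of calling a dense solver.
    """
    m, n = A.shape
    nonbasic = [j for j in range(n) if j not in basic_cols]
    B = A[:, basic_cols]
    S = A[:, nonbasic]
    Z = np.zeros((n, n - m))
    Z[basic_cols, :] = -np.linalg.solve(B, S)   # superbasic part
    Z[nonbasic, :] = np.eye(n - m)              # identity block
    return Z

# Tiny example: one equality constraint in R^3
A = np.array([[1.0, 2.0, 3.0]])
Z = reduced_gradient_basis(A, basic_cols=[0])
# A @ Z vanishes up to rounding: Z spans the null space of A
```

With this basis the reduced gradient Z'g and reduced Hessian Z'HZ are the only curvature quantities the algorithm needs to maintain, which is the storage saving the abstract describes.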
 Authors:
 Publication Date:
 Research Org.:
 Stanford Univ., CA (United States). Systems Optimization Lab.
 Sponsoring Org.:
 USDOE, Washington, DC (United States); Department of Defense, Washington, DC (United States); National Science Foundation, Washington, DC (United States)
 OSTI Identifier:
 10102731
 Report Number(s):
SOL 92-4
ON: DE93002528; CNN: Grant DDM-9204208; Grant N00014-90-J-1242
 DOE Contract Number:
FG03-92ER25117
 Resource Type:
 Technical Report
 Resource Relation:
 Other Information: PBD: Sep 1992
 Country of Publication:
 United States
 Language:
 English
 Subject:
 99 GENERAL AND MISCELLANEOUS//MATHEMATICS, COMPUTING, AND INFORMATION SCIENCE; NONLINEAR PROGRAMMING; ALGORITHMS; FUNCTIONS; NEWTON METHOD; LAGRANGIAN FUNCTION; CONVERGENCE; 990200; MATHEMATICS AND COMPUTERS
Citation Formats
Eldersveld, S.K. Large-scale sequential quadratic programming algorithms. United States: N. p., 1992.
Web. doi:10.2172/10102731.
Eldersveld, S.K. Large-scale sequential quadratic programming algorithms. United States. doi:10.2172/10102731.
Eldersveld, S.K. 1992.
"Large-scale sequential quadratic programming algorithms". United States.
doi:10.2172/10102731. https://www.osti.gov/servlets/purl/10102731.
@article{osti_10102731,
title = {Large-scale sequential quadratic programming algorithms},
author = {Eldersveld, S.K.},
abstractNote = {The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having the full Hessian approximation available is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.},
doi = {10.2172/10102731},
journal = {},
number = {},
volume = {},
place = {United States},
year = {1992},
month = {9}
}

Sequential quadratic programming algorithms for optimization
The problem considered is that of finding local minimizers for a function subject to general nonlinear inequality constraints, when first and perhaps second derivatives are available. The methods studied belong to the class of sequential quadratic programming (SQP) algorithms. In particular, the methods are based on the SQP algorithm embodied in the code NPSOL, which was developed at the Systems Optimization Laboratory, Stanford University. The goal of this paper is to develop SQP algorithms that allow some flexibility in their design. Specifically, we are interested in introducing modifications that enable the algorithms to solve large-scale problems efficiently. The following issues…
A sequential quadratic programming algorithm using an incomplete solution of the subproblem
We analyze sequential quadratic programming (SQP) methods for solving nonlinear constrained optimization problems that are more flexible in their definition than standard SQP methods. The type of flexibility introduced is motivated by the necessity of deviating from the standard approach when solving large problems. Specifically, we no longer require a minimizer of the QP subproblem to be determined or particular Lagrange multiplier estimates to be used. Our main focus is on an SQP algorithm that uses a particular augmented Lagrangian merit function. New results are derived for this algorithm under weaker conditions than previously assumed; in particular, it is not…
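The augmented Lagrangian merit function mentioned in this related record can be sketched as follows. This is a generic textbook form for equality constraints, not necessarily the specific variant used in the report: the merit value combines the objective, a multiplier term, and a quadratic penalty on the constraint violation, and the SQP line search accepts a trial step when this value decreases sufficiently.

```python
import numpy as np

def augmented_lagrangian_merit(f, c, lam, rho):
    """Generic augmented Lagrangian merit value.

    L_A = f - lam . c + (rho/2) ||c||^2, where f is the objective
    value at the trial point, c the vector of equality-constraint
    residuals, lam the Lagrange multiplier estimates, and rho > 0
    the penalty parameter.  Illustrative sketch only; the report's
    merit function may differ in detail.
    """
    c = np.asarray(c, dtype=float)
    return f - lam @ c + 0.5 * rho * (c @ c)

# At a feasible point (c = 0) the merit value reduces to the objective:
val = augmented_lagrangian_merit(2.0, [0.0, 0.0], np.array([1.0, -1.0]), 10.0)
```

Increasing rho penalizes infeasible trial points more heavily, which is what lets the merit function arbitrate between reducing the objective and restoring feasibility during the line search.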