A parallel gradient distribution algorithm for large-scale optimization
We present a Parallel Gradient Distribution method for the solution of the unconstrained optimization problem min f(x), x ∈ R^n, where f : R^n → R has continuous first and second partial derivatives and n is typically very large (on the order of thousands). Given p processors of a parallel computing system, the proposed algorithm is characterized by a parallel phase that produces p points, exploiting the portions of the gradient of the objective function assigned to each processor. A coordination phase then follows, which determines a new iterate by solving a minimization problem in a (p + 1)-dimensional space, on the basis of the previous iterate and the p points generated by the parallel phase. The parallel and coordination phases are implemented using a limited-memory BFGS approach for determining the search direction, and a line search procedure based on the Wolfe sufficient-decrease conditions. Global and superlinear convergence results are established in the case of uniformly convex problems. The proposed parallel algorithm is compared, in terms of numerical performance, with the partitioned quasi-Newton method of Griewank and Toint, the block truncated Newton method of Nash and Sofer, and the conjugate gradient method of Shanno and Phua, using a set of large-scale structured optimization problems. Furthermore, the influence of the number of available processors is investigated through extensive computational experiments carried out on distributed-memory systems (NCUBE and Fujitsu) and on a network of workstations (DEC Alpha) using PVM.
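The two-phase structure described above can be illustrated with a minimal sketch, assuming a simple separable quadratic test objective (not a problem from the paper) and simulating the p processors serially. Each "processor" steps along its assigned block of the gradient; the coordination phase then searches over combinations of the current iterate and the p parallel points. The paper's actual phases use limited-memory BFGS directions with a Wolfe line search; here the coordination step is approximated by a crude random search, purely for illustration.

```python
import numpy as np

def f(x):
    # Assumed uniformly convex test objective: a separable quadratic.
    return 0.5 * np.sum(np.arange(1, x.size + 1) * x**2)

def grad(x):
    # Gradient of the quadratic above.
    return np.arange(1, x.size + 1) * x

def pgd_step(x, p, alpha=0.1, rng=None):
    """One simplified Parallel Gradient Distribution iteration.

    Parallel phase: each of the p (simulated) processors moves along the
    block of gradient components assigned to it. Coordination phase: a new
    iterate is chosen in the (p + 1)-dimensional space spanned by the
    current iterate and the p parallel points.
    """
    n = x.size
    g = grad(x)
    blocks = np.array_split(np.arange(n), p)

    # Parallel phase: processor i steps only along its gradient portion.
    points = []
    for idx in blocks:
        d = np.zeros(n)
        d[idx] = -g[idx]
        points.append(x + alpha * d)

    # Coordination phase: minimize f over x + sum_i w_i (y_i - x).
    # (The paper solves this subproblem with L-BFGS and a Wolfe line
    # search; a random search over weights stands in for it here.)
    D = np.stack([y - x for y in points], axis=1)  # n x p direction matrix
    rng = rng or np.random.default_rng(0)
    best_w, best_val = np.zeros(p), f(x)
    for w in rng.uniform(0.0, 2.0, size=(200, p)):
        val = f(x + D @ w)
        if val < best_val:
            best_w, best_val = w, val
    return x + D @ best_w

# Run a few iterations with p = 2 simulated processors.
x = np.ones(8)
for _ in range(20):
    x = pgd_step(x, p=2)
print(f(x))  # far below the starting value f(ones) = 18.0
```

Because the coordination phase never accepts a point worse than the current iterate, the objective value is monotonically non-increasing, mirroring (in a very loose way) the descent property the paper establishes for the full method.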
- OSTI ID: 35916
- Report Number(s): CONF-9408161
- Country of Publication: United States
- Language: English