Convex quadratic optimization on artificial neural networks
We present continuous-valued Hopfield recurrent neural networks onto which we map convex quadratic optimization problems. We consider two different convex quadratic programs, each of which is mapped to a different neural network. Activation functions are shown to play a key role in the mapping under each model. The class of activation functions that can be used in this mapping is characterized in terms of the properties required. It is shown that the first derivatives of penalty functions, as well as of barrier functions, belong to this class. The trajectories of the dynamics under the first model are shown to be closely related to the affine-scaling trajectories of interior-point methods, while the trajectories of the dynamics under the second model correspond to projected steepest-descent pathways.
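To illustrate the kind of interior-point trajectory the abstract associates with the first model, the following is a minimal sketch (not the paper's construction) of an affine-scaling gradient flow for a nonnegatively constrained quadratic program, integrated with forward Euler. The problem data `Q`, `c` and the step size are illustrative choices, not taken from the paper.

```python
import numpy as np

def affine_scaling_flow(Q, c, x0, step=1e-2, iters=5000):
    """Euler-discretized flow dx/dt = -diag(x) (Q x + c) for the QP
    minimize 0.5 x^T Q x + c^T x subject to x >= 0.
    The diag(x) scaling damps motion near the boundary, so iterates
    started in the positive orthant stay (approximately) interior."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        grad = Q @ x + c           # gradient of the quadratic objective
        x = x - step * x * grad    # componentwise affine scaling
    return x

# Toy instance: f(x) = x1^2 - 2 x1 + x2^2 + 2 x2 with x >= 0;
# the constrained minimizer is (1, 0).
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, 2.0])
x = affine_scaling_flow(Q, c, x0=[0.5, 0.5])
```

On this instance the trajectory drives the second coordinate toward the boundary value 0 while the first settles at its unconstrained optimum 1, mimicking how affine-scaling paths approach a face of the feasible region from the interior.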
- OSTI ID: 35751
- Report Number(s): CONF-9408161-; TRN: 94:009753-0006
- Resource Relation: Conference: 15th International Symposium on Mathematical Programming, Ann Arbor, MI (United States), 15-19 Aug 1994; Other Information: PBD: 1994; Related Information: Is Part Of Mathematical Programming: State of the Art 1994; Birge, J.R.; Murty, K.G. [eds.]; PB: 312 p.
- Country of Publication: United States
- Language: English
Similar Records
A Dynamical System Associated with Newton's Method for Parametric Approximations of Convex Minimization Problems
Quadratic based primal-dual algorithms for multicommodity convex and linear cost transportation problems with serial and parallel implementations