The purpose of this study was to investigate how effectively penalty function methods solve constrained optimization problems. These methods transform the constrained problem into an equivalent unconstrained one, which is then solved with an algorithm for unconstrained optimization. Algorithms and MATLAB (matrix laboratory) codes based on Powell's method for unconstrained optimization were developed, and benchmark problems that appear frequently in the optimization literature were solved, with the results compared against those of other algorithms. The study found that the sequential transformation methods converge at least to a local minimum in most cases, without convexity assumptions and without requiring differentiability of the objective and constraint functions. For non-convex problems, it is recommended to solve the problem from several starting points, with different penalty parameters and penalty multipliers, and to take the best solution. Exact penalty methods, by contrast, require convexity assumptions and second-order sufficiency conditions for a local minimum in order for the solution of the unconstrained problem to converge to the solution of the original problem with a finite penalty parameter. These methods use a single application of an unconstrained minimization technique, rather than a sequence of minimizations, to solve the constrained optimization problem.
Key words: Penalty function, penalty parameter, augmented Lagrangian penalty function, exact penalty function, unconstrained representation of the primal problem.
Copyright © 2022 Author(s) retain the copyright of this article.
This article is published under the terms of the Creative Commons Attribution License 4.0
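The sequential (quadratic) penalty transformation described in the abstract can be sketched as below. The toy problem, the penalty schedule, and the gradient-descent inner solver (standing in for a method such as Powell's, which is derivative-free) are all illustrative assumptions and not the test problems or codes used in the study:

```python
# Sequential quadratic penalty method (illustrative sketch).
# Problem: minimize f(x) = x1^2 + x2^2 subject to x1 + x2 = 1.
# Penalized objective: P(x, r) = f(x) + r * (x1 + x2 - 1)^2.
# As the penalty parameter r grows, the unconstrained minimizer of P
# approaches the constrained solution (0.5, 0.5).

def penalty_gradient(x, r):
    """Gradient of P(x, r) for the toy problem above."""
    c = x[0] + x[1] - 1.0          # constraint violation
    return [2.0 * x[0] + 2.0 * r * c,
            2.0 * x[1] + 2.0 * r * c]

def minimize_penalty(x, r, iters=2000):
    """Unconstrained inner solve by plain gradient descent
    (a stand-in for the derivative-free Powell's method)."""
    step = 1.0 / (2.0 + 4.0 * r)   # inverse of the gradient's Lipschitz constant
    for _ in range(iters):
        g = penalty_gradient(x, r)
        x = [x[0] - step * g[0], x[1] - step * g[1]]
    return x

def sequential_penalty(x0, r_schedule):
    """Solve a sequence of penalized subproblems, warm-starting each
    from the previous solution, and return the final iterate."""
    x = list(x0)
    for r in r_schedule:
        x = minimize_penalty(x, r)
    return x

solution = sequential_penalty([0.0, 0.0], [1.0, 10.0, 100.0, 1000.0])
print(solution)  # approaches [0.5, 0.5] as r increases
```

With a finite penalty parameter the quadratic penalty minimizer only approximates the constrained solution (here it lands near 0.49975 rather than exactly 0.5), which is why the sequential methods increase the parameter over several subproblems, whereas an exact penalty function would recover the solution from a single unconstrained minimization.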